Sample records for analytical run time

  1. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible! The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
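
    The co-scheduling pattern described in this abstract can be sketched in a few lines: a watcher polls the simulation's output directory and submits a separate batch job for each new file, so the visualization work never touches the simulation's nodes. This is a minimal illustration, not Bellerophon's actual interface; the directory, file pattern, polling interval, and PBS script name are all hypothetical.

    ```python
    import glob
    import os
    import subprocess
    import time

    WATCH_DIR = "/lustre/climate_run/output"   # hypothetical output directory
    VIZ_SCRIPT = "render_latest.pbs"           # hypothetical batch script

    def watch(poll_seconds=300):
        """Poll for new simulation output; co-schedule one viz job per file."""
        seen = set()
        while True:
            for path in glob.glob(os.path.join(WATCH_DIR, "*.nc")):
                if path not in seen:
                    seen.add(path)
                    # The rendering job runs on separate nodes, so the
                    # simulation itself is never interrupted.
                    subprocess.run(
                        ["qsub", "-v", f"INPUT={path}", VIZ_SCRIPT], check=True)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        watch()
    ```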

  2. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning where vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged where seemingly limitless compute and storage resources are being provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good performance at run-time to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their pattern of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. All along, we use the Map-Reduce paradigm as an illustration of data analytics.
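
    Since the paper uses Map-Reduce as its running illustration, a toy single-process rendering of the paradigm may help fix ideas; in a real deployment such as Hadoop, the map, shuffle, and reduce phases below are distributed across the cluster and the shuffle is performed by the framework.

    ```python
    from collections import defaultdict

    def map_phase(records):
        # map: emit (key, value) pairs; here, word occurrences
        for line in records:
            for word in line.split():
                yield word, 1

    def shuffle(pairs):
        # shuffle: group values by key
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # reduce: aggregate each key's values independently (hence in parallel)
        return {key: sum(values) for key, values in groups.items()}

    data = ["big data in the cloud", "data analytics in the cloud"]
    print(reduce_phase(shuffle(map_phase(data))))
    # {'big': 1, 'data': 2, 'in': 2, 'the': 2, 'cloud': 2, 'analytics': 1}
    ```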

  3. Optimal chemotaxis in intermittent migration of animal cells

    NASA Astrophysics Data System (ADS)

    Romanczuk, P.; Salbreux, G.

    2015-04-01

    Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases, and performing persistent migration during run phases. We show that the chaser's average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time, and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
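
    A Monte Carlo sketch of the two-phase strategy makes the trade-off tangible: short runs waste time tumbling, long runs overshoot a target known only noisily, so the mean catching time is non-monotonic in run duration. All parameter values below are arbitrary stand-ins, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def catching_time(run_time, tumble_time=0.2, speed=1.0, noise=0.5,
                      start=(-10.0, 0.0), capture_radius=0.5, max_phases=10_000):
        """Chaser alternates tumbles (noisy reorientation toward a target at
        the origin) and runs (persistent motion); returns time to capture."""
        pos = np.array(start, dtype=float)
        t = 0.0
        for _ in range(max_phases):
            # tumble: reorient toward the target with imperfect sensing
            heading = np.arctan2(-pos[1], -pos[0]) + noise * rng.normal()
            t += tumble_time
            # run: persistent motion; check the whole segment for capture
            step = speed * run_time * np.array([np.cos(heading), np.sin(heading)])
            frac = np.clip(-pos @ step / (step @ step), 0.0, 1.0)
            if np.linalg.norm(pos + frac * step) < capture_radius:
                return t + frac * run_time
            pos += step
            t += run_time
        return t  # censored: target not caught within max_phases

    # the mean catching time is non-monotonic in run duration -> an optimum
    for tau in (0.1, 0.5, 2.0, 8.0):
        mean_t = np.mean([catching_time(tau) for _ in range(200)])
        print(f"run time {tau:4.1f}: mean catching time {mean_t:7.1f}")
    ```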

  4. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  5. SMC Standard: Evaluation and Test Requirements for Liquid Rocket Engines

    DTIC Science & Technology

    2017-07-26

    Indexed excerpts only: the table of contents entries "Run-Time Trends" and "7.2.4 Steady State Analytical …"; a reference to M. Singh, J. Vargo, D. Schiffer and J. Dello, "Safe Diagram – A Design and Reliability Tool for Turbine Blading," Dresser-Rand; and a fragment on allowed starts and run-time, including ground acceptance testing, on-pad firings/aborts, and flight exposure ("Part: A single piece (or two or more …").

  6. [Automated analyzer of enzyme immunoassay].

    PubMed

    Osawa, S

    1995-09-01

    Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.

  7. Simultaneous quantification of fentanyl, sufentanil, cefazolin, doxapram and keto-doxapram in plasma using liquid chromatography - tandem mass spectrometry.

    PubMed

    Flint, Robert B; Bahmany, Soma; van der Nagel, Bart C H; Koch, Birgit C P

    2018-05-16

    A simple and specific UPLC-MS/MS method was developed and validated for simultaneous quantification of fentanyl, sufentanil, cefazolin, doxapram and its active metabolite keto-doxapram. The internal standard was fentanyl-d5 for all analytes. Chromatographic separation was achieved with a reversed phase Acquity UPLC HSS T3 column with a run-time of only 5.0 minutes per injected sample. Gradient elution was performed with a mobile phase consisting of ammonium acetate, formic acid in Milli-Q ultrapure water or in methanol with a total flow rate of 0.4 mL/minute. A plasma volume of only 50 μL was required to achieve both adequate accuracy and precision. Calibration curves of all 5 analytes were linear. All analytes were stable for at least 48 hours in the autosampler. The method was validated according to US Food and Drug Administration guidelines. This method allows quantification of fentanyl, sufentanil, cefazolin, doxapram and keto-doxapram, which serves purposes for research, as well as therapeutic drug monitoring, if applicable. The strength of this method is the combination of a small sample volume, a short run-time, a deuterated internal standard, an easy sample preparation method and the ability to simultaneously quantify all analytes in one run. This article is protected by copyright. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that, in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of an investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while keeping the error within given bounds.
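
    The baseline the report starts from, maximal-order Taylor coefficients, can be computed with a small linear solve, and the stencil's dispersion error shows the accuracy/bandwidth trade-off that alternative coefficients optimize. This sketch shows only the standard Taylor case, not the report's optimized operators; the stencil width is an arbitrary choice.

    ```python
    import math
    import numpy as np

    def taylor_weights(offsets, deriv):
        """Maximal-order (Taylor) finite-difference weights, unit grid spacing."""
        offsets = np.asarray(offsets, dtype=float)
        n = len(offsets)
        # moment conditions: sum_j w_j * offsets[j]**m = m! * delta(m, deriv)
        A = np.vander(offsets, n, increasing=True).T
        b = np.zeros(n)
        b[deriv] = math.factorial(deriv)
        return np.linalg.solve(A, b)

    # 9-point stencil for the second derivative, as in acoustic wave kernels
    offsets = np.arange(-4, 5)
    w = taylor_weights(offsets, 2)

    # numerical dispersion: effective k^2 of the stencil versus the true k^2
    k = np.linspace(0.01, np.pi, 500)
    k2_eff = -np.real(np.exp(1j * np.outer(k, offsets)) @ w)
    rel_err = np.abs(k2_eff - k**2) / k**2
    print("largest kh with <1% dispersion error:", k[rel_err < 0.01].max())
    ```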

  9. New insight into the comparative power of quality-control rules that use control observations within a single analytical run.

    PubMed

    Parvin, C A

    1993-03-01

    The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
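
    As the abstract notes, within-run rule power follows directly from the normal distribution. The sketch below compares a common individual-value rule against a mean rule at a matched false-rejection rate for n = 2 controls under a systematic shift; the specific rule limits are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    n = 2          # control observations per analytical run
    z_lim = 3.0    # individual-value rule: reject if any control exceeds 3 SD

    # false-rejection rate of the individual-value rule with n observations
    alpha = 1 - (norm.cdf(z_lim) - norm.cdf(-z_lim)) ** n

    # mean-rule limit chosen to give the same false-rejection rate
    c = norm.ppf(1 - alpha / 2) / np.sqrt(n)

    def power_individual(delta):
        """P(run rejected) when a systematic shift of delta SD is present."""
        return 1 - (norm.cdf(z_lim - delta) - norm.cdf(-z_lim - delta)) ** n

    def power_mean(delta):
        s = np.sqrt(n)  # the mean of n controls has SD 1/sqrt(n)
        return 1 - (norm.cdf((c - delta) * s) - norm.cdf((-c - delta) * s))

    # the mean rule detects systematic error better at the same alpha
    for delta in (0.0, 1.0, 2.0, 3.0, 4.0):
        print(f"shift {delta:.0f} SD: individual {power_individual(delta):.3f}, "
              f"mean {power_mean(delta):.3f}")
    ```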

  10. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma, and it entailed a single sample-preparation method, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% at the lower limit of quantification and <14.3% at the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single, simple sample-preparation method followed by an LC-MS method with a short run time. Therefore, this analytical method is useful for both clinical and research purposes.

  11. Real-Time Analytics for the Healthcare Industry: Arrhythmia Detection.

    PubMed

    Agneeswaran, Vijay Srinivas; Mukherjee, Joydeb; Gupta, Ashutosh; Tonpay, Pranay; Tiwari, Jayati; Agarwal, Nitin

    2013-09-01

    It is time for the healthcare industry to move from the era of "analyzing our health history" to the age of "managing the future of our health." In this article, we illustrate the importance of real-time analytics across the healthcare industry by providing a generic mechanism to reengineer traditional analytics expressed in the R programming language into Storm-based real-time analytics code. This is a powerful abstraction, since most data scientists use R to write the analytics and are not clear on how to make them work in real time on high-velocity data. Our paper focuses on the applications necessary to a healthcare analytics scenario, specifically the importance of electrocardiogram (ECG) monitoring. A physician can use our framework to compare ECG reports by categorization and consequently detect arrhythmia. The framework reads the ECG signals and uses a machine learning-based categorizer that runs within a Storm environment to compare different ECG signals. The paper also presents performance studies of the framework to illustrate the throughput and accuracy trade-off in real-time analytics.

  12. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo! Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set relative to allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  13. Streaming data analytics via message passing with application to graph algorithms

    DOE PAGES

    Plimpton, Steven J.; Shead, Tim

    2014-05-06

    The need to process streaming data, which arrives continuously at high volume in real time, arises in a variety of contexts, including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
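
    A single-process analogue of the streaming triangle enumeration mentioned above fits in a few lines: as each edge arrives, the triangles it closes are exactly the common neighbors of its endpoints. In PHISH the adjacency state would be sharded across processes (e.g., by vertex hash) and edge datums routed via MPI or ZMQ; this sequential sketch only shows the core logic.

    ```python
    from collections import defaultdict

    def stream_triangles(edge_stream):
        """Enumerate triangles in a stream of undirected edges.
        Each new edge (u, v) closes one triangle per common neighbor."""
        adj = defaultdict(set)
        for u, v in edge_stream:
            if v in adj[u]:            # ignore duplicate edges in the stream
                continue
            for w in adj[u] & adj[v]:  # common neighbors close triangles
                yield tuple(sorted((u, v, w)))
            adj[u].add(v)
            adj[v].add(u)

    edges = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4), (1, 4)]
    print(sorted(stream_triangles(edges)))
    # all four triangles of the complete graph K4
    ```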

  14. Simultaneous determination of 16 brominated flame retardants in food and feed of animal origin by fast gas chromatography coupled to tandem mass spectrometry using atmospheric pressure chemical ionisation.

    PubMed

    Bichon, E; Guiffard, I; Vénisseau, A; Lesquin, E; Vaccher, V; Brosseaud, A; Marchand, P; Le Bizec, B

    2016-08-12

    A gas chromatography tandem mass spectrometry method using atmospheric pressure chemical ionisation was developed for the monitoring of 16 brominated flame retardants (the 7 usually monitored polybromodiphenylethers (PBDEs) plus BDE #209, and 8 additional emerging and novel BFRs) in food and feed of animal origin. The developed analytical method decreased the run time by a factor of three compared to conventional strategies, using a 2.5 m column (5% phenyl stationary phase, 0.1 mm i.d., 0.1 μm film thickness) and a pulsed split injection (1:5) with a helium carrier gas flow rate of 0.48 mL/min, in one run of 20 min. For most BFRs, analytical data were compared with the current analytical strategy relying on GC/EI/HRMS (double sector, R=10000 at 10% valley). Performances in terms of sensitivity were found to meet the Commission recommendation (118/2014/EC) for nBFRs. GC/APCI/MS/MS represents a promising alternative for multi-BFR analysis in complex matrices, in that it allows the monitoring of a wider list of contaminants in a single injection and a shorter run time. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension

    NASA Astrophysics Data System (ADS)

    Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek

    2018-04-01

    We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
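
    For reference, the calculations in this line of work start from the standard one-dimensional RTP master equations; the form below (with diffusion, as in the paper's "with diffusion" case) uses one common convention for the tumbling rate, which varies by factors of 2 across the literature.

    ```latex
    % Densities P_+(x,t), P_-(x,t) of right- and left-moving particles with
    % speed v, tumbling rate gamma, and diffusion constant D:
    \begin{align}
      \partial_t P_+ &= -v\,\partial_x P_+ + D\,\partial_x^2 P_+
                       - \tfrac{\gamma}{2}\,P_+ + \tfrac{\gamma}{2}\,P_- ,\\
      \partial_t P_- &= +v\,\partial_x P_- + D\,\partial_x^2 P_-
                       - \tfrac{\gamma}{2}\,P_- + \tfrac{\gamma}{2}\,P_+ .
    \end{align}
    % The total density P = P_+ + P_- is ballistic at times << 1/gamma and
    % diffusive, with D_eff = D + v^2/gamma, at late times.
    ```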

  16. A microfluidic paper-based analytical device for the assay of albumin-corrected fructosamine values from whole blood samples.

    PubMed

    Boonyasit, Yuwadee; Laiwattanapaisal, Wanida

    2015-01-01

    A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g/dL and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.

  17. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    PubMed

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
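
    The key mechanism here, a non-convex energy curve making mixtures optimal, can be reproduced numerically with a toy cost model: take the metabolic rate to be the cheaper of a "walk" branch and a "run" branch, then compare steady locomotion at a required average speed against two-speed mixtures. The curve below is invented for illustration, not the paper's measured data.

    ```python
    import numpy as np

    def power(v):
        """Toy metabolic rate (W/kg): min of a 'walk' and a 'run' branch.
        The min of two convex branches is non-convex near the crossover."""
        walk = 2.0 + 3.0 * v**2
        run = 6.0 + 1.0 * v**2
        return np.minimum(walk, run)

    V = 1.6                          # required average speed (m/s)
    steady = power(V)                # cost of steady locomotion at V

    # mixtures: time fraction f at speed v1, (1 - f) at v2, same average speed
    v1s = np.linspace(0.0, V, 161)   # slower phase (v1 = 0 means resting)
    v2s = np.linspace(V, 4.0, 241)   # faster phase
    f = (v2s[None, :] - V) / (v2s[None, :] - v1s[:, None] + 1e-12)
    mix = f * power(v1s)[:, None] + (1 - f) * power(v2s)[None, :]
    i, j = np.unravel_index(np.argmin(mix), mix.shape)

    print(f"steady at {V} m/s: {steady:.2f} W/kg")
    print(f"best mixture: {mix[i, j]:.2f} W/kg "
          f"({f[i, j]:.0%} at {v1s[i]:.2f} m/s, rest at {v2s[j]:.2f} m/s)")
    ```

    With this toy curve, the best walk-run mixture beats steady locomotion at the intermediate speed, which is exactly the non-convexity argument the abstract describes.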

  18. Simple estimation of linear 1+1 D tsunami run-up

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Campos, J. A.; Riquelme, S.

    2016-12-01

    An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result not only supports the invariance of the maximum run-up between the linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than using a theoretical model. It is also shown that the real case studied is consistent with the field observations.

  19. Analyzing large scale genomic data on the cloud with Sparkhit

    PubMed Central

    Huang, Liren; Krüger, Jan

    2018-01-01

    Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge to large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074

  20. Characteristics of process oils from HTI coal/plastics co-liquefaction runs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robbins, G.A.; Brandes, S.D.; Winschel, R.A.

    1995-12-31

    The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from the coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.

  1. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    NASA Astrophysics Data System (ADS)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical given the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequence of the road event backwards. The authors also aim to validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.

  2. Implications on 1+1 D runup modeling due to time features of the earthquake source

    NASA Astrophysics Data System (ADS)

    Fuentes, M.; Riquelme, S.; Campos, J. A.

    2017-12-01

    The time characteristics of the seismic source are usually neglected in tsunami modeling, due to the difference in the time scales of the two processes. Nonetheless, only a few analytical studies have attempted to explain separately the roles of the rise time and the rupture velocity. In this work, we extend an analytical 1+1D solution for the shoreline motion time series from the static case to the dynamic case, by including both rise time and rupture velocity. Results show that the static case corresponds to a limiting case of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but the maximum run-up may be affected by very slow ruptures and long rise times. The analytical solution has been tested for the Nicaraguan tsunami earthquake, suggesting that the rupture was not slow enough to cause the wave amplification needed to explain the high run-up observations.

  4. Bio-analytical method development and validation of Rasagiline by high performance liquid chromatography tandem mass spectrometry detection and its application to pharmacokinetic study

    PubMed Central

    Konda, Ravi Kumar; Chandu, Babu Rao; Challa, B.R.; Kothapalli, Chandrasekhar B.

    2012-01-01

    A suitable bio-analytical method based on liquid–liquid extraction has been developed and validated for the quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. A Zorbax Eclipse Plus C18 (2.1 mm×50 mm, 3.5 μm) column provided chromatographic separation of the analyte, followed by detection with mass spectrometry. The method involved a simple isocratic chromatographic condition and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with a linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2%, respectively, for Rasagiline. The overall recovery for Rasagiline and the Rasagiline-13C3 mesylate analog was 96.9% and 96.7%, respectively. This validated method was successfully applied to a bioequivalence and pharmacokinetic study in human volunteers under fasting conditions. PMID:29403764

  5. Rapid Large Earthquake and Run-up Characterization in Quasi Real Time

    NASA Astrophysics Data System (ADS)

    Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.

    2017-12-01

    Several quasi-real-time tests have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating finite fault models (FFMs). The W-phase FFM inversion, the wavelet-domain FFM and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase point-source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes that occurred in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake and the Mw 7.6 Melinka earthquake. We obtain many solutions as time elapses; for each of them we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.

  6. Running into Trouble with the Time-Dependent Propagation of a Wavepacket

    ERIC Educational Resources Information Center

    Garriz, Abel E.; Sztrajman, Alejandro; Mitnik, Dario

    2010-01-01

    The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations.…

  7. Cross-Layer Modeling Framework for Energy-Efficient Resilience

    DTIC Science & Technology

    2014-04-01

    Indexed excerpts only: a functional block diagram of the software architecture of PEARL, which stands for Power Efficient and Resilient Embedded Processing with Real-Time ... (DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting real-time performance targets. The integrated performance, power and resilience models are the analytical modeling toolkit described in ...

  8. A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes

    PubMed Central

    Ma, Xin; Shen, Jianping

    2017-01-01

    The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094

  9. Very high pressure liquid chromatography using fully porous particles: quantitative analysis of fast gradient separations without post-run times.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois Ann; Guiochon, Georges

    2014-01-10

    Using a column packed with fully porous particles, four methods for controlling the flow rates at which gradient elution runs are conducted in very high pressure liquid chromatography (VHPLC) were tested to determine whether reproducible thermal conditions could be achieved, such that subsequent analyses would proceed at nearly the same initial temperature. In VHPLC high flow rates are achieved, producing fast analyses but requiring high inlet pressures. The combination of high flow rates and high inlet pressures generates local heat, leading to temperature changes in the column. Usually in this case a post-run time is input into the analytical method to allow the return of the column temperature to its initial state. An alternative strategy involves operating the column without a post-run equilibration period and maintaining constant temperature variations for subsequent analysis after conducting one or a few separations to bring the column to a reproducible starting temperature. A liquid chromatography instrument equipped with a pressure controller was used to perform constant pressure and constant flow rate VHPLC separations. Six replicate gradient separations of a nine component mixture consisting of acetophenone, propiophenone, butyrophenone, valerophenone, hexanophenone, heptanophenone, octanophenone, benzophenone, and acetanilide dissolved in water/acetonitrile (65:35, v/v) were performed under various experimental conditions: constant flow rate, two sets of constant pressure, and constant pressure operation with a programmed flow rate. The relative standard deviations of the response factors for all the analytes are lower than 5% across the methods. Programming the flow rate to maintain a fairly constant pressure instead of using instrument controlled constant pressure improves the reproducibility of the retention times by a factor of 5, when plotting the chromatograms in time. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Simultaneous determination of plasma creatinine, uric acid, kynurenine and tryptophan by high-performance liquid chromatography: method validation and its application to the assessment of renal function.

    PubMed

    Zhao, Jianxing

    2015-03-01

    A high-performance liquid chromatography with ultraviolet detection method has been developed for the simultaneous determination of a set of reliable markers of renal function, including creatinine, uric acid, kynurenine and tryptophan in plasma. Separation was achieved on an Agilent HC-C18(2) analytical column. Gradient elution and programmed wavelength detection allowed the method to analyze these compounds in just one injection. The total run time was 25 min with all peaks of interest being eluted within 13 min. Good linear responses were found with correlation coefficient >0.999 for all analytes within the concentration range of the relevant levels. The recovery was: creatinine, 101 ± 1%; uric acid, 94.9 ± 3.7%; kynurenine, 100 ± 2%; and tryptophan, 92.6 ± 2.9%. Coefficients of variation within-run and between-run of all analytes were ≤2.4%. The limit of detection of the method was: creatinine, 0.1 µmol/L; uric acid, 0.05 µmol/L; kynurenine, 0.02 µmol/L; and tryptophan, 1 µmol/L. The developed method could be employed as a useful tool for the detection of chronic kidney disease, even at an early stage. Copyright © 2014 John Wiley & Sons, Ltd.

  11. FAPT: A Mathematica package for calculations in QCD Fractional Analytic Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Bakulev, Alexander P.; Khandramai, Vyacheslav L.

    2013-01-01

    We provide here all the procedures in Mathematica which are needed for the computation of the analytic images of the strong coupling constant powers in the Minkowski (A_ν(s; n_f) and A_ν^glob(s)) and Euclidean (A_ν(Q^2; n_f) and A_ν^glob(Q^2)) domains at arbitrary energy scales (s and Q^2, correspondingly) for both schemes: with a fixed number of active flavours n_f = 3, 4, 5, 6, and the global one taking into account all heavy-quark thresholds. These singularity-free couplings are inevitable elements of Analytic Perturbation Theory (APT) in QCD, proposed in [10,69,70], and of its generalization, Fractional APT, suggested in [42,46,43], needed to apply the APT imperative to renormalization-group improved hadronic observables. Program summary: Program title: FAPT. Catalogue identifier: AENJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1985. No. of bytes in distributed program, including test data, etc.: 1895776. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any workstation or PC where Mathematica is running. Operating system: Windows XP, Mathematica (versions 5 and 7). Classification: 11.5. Nature of problem: The values of the analytic images A_ν(Q^2) and A_ν(s) of the QCD running coupling powers α_s^ν(Q^2) in the Euclidean and Minkowski regions, correspondingly, are determined through the spectral representation in QCD Analytic Perturbation Theory (APT). In the program FAPT we collect all relevant formulas and various procedures which allow for a convenient evaluation of A_ν(Q^2) and A_ν(s) using numerical integration of the relevant spectral densities. Solution method: FAPT uses Mathematica functions to calculate different spectral densities and then performs numerical integration of these spectral integrals to obtain analytic images of different objects. Restrictions: For an unphysical choice of the input parameters the results may be meaningless. Running time: For all operations the run time does not exceed a few seconds. Usually numerical integration is not fast, so we advise the use of arrays of precalculated data and then applying the routine Interpolate (as shown in the supplied example of program usage, in the notebook FAPT_Interp.nb).
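
    The spectral representation the summary refers to is the standard APT form, which FAPT evaluates by numerical integration; written out (notation as commonly used in the APT literature):

    ```latex
    % Spectral densities rho_nu(sigma) = Im [alpha_s(-sigma - i0)]^nu define
    % the singularity-free analytic images of alpha_s^nu:
    \begin{align}
      \bar{\mathcal A}_\nu(Q^2) &= \frac{1}{\pi}\int_0^\infty
          \frac{\rho_\nu(\sigma)}{\sigma + Q^2}\,\mathrm{d}\sigma
          && \text{(Euclidean)} ,\\
      \mathfrak A_\nu(s) &= \frac{1}{\pi}\int_s^\infty
          \frac{\rho_\nu(\sigma)}{\sigma}\,\mathrm{d}\sigma
          && \text{(Minkowski)} .
    \end{align}
    ```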

  12. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster with 14 GB RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.

  13. Streamflow variability and optimal capacity of run-of-river hydropower plants

    NASA Astrophysics Data System (ADS)

    Basso, S.; Botter, G.

    2012-10-01

    The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximizes the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximizes the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
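
    The capacity choice from a flow duration curve can be sketched numerically: integrate the usable flow (above the minimum environmental flow, below plant capacity) over the duration curve, then compare the energy-maximizing capacity with a profit-maximizing one. The exponential duration curve, head, efficiency, and cost term below are toy assumptions, not the paper's analytical expressions or case-study data.

    ```python
    import numpy as np

    RHO_G = 9810.0           # rho * g (N/m^3)
    ETA, HEAD = 0.85, 25.0   # plant efficiency and gross head (m), toy values
    Q_MEF = 0.4              # minimum environmental flow (m^3/s)
    HOURS = 8760.0

    # toy flow duration curve: Q(p) is the flow exceeded a fraction p of the time
    p = np.linspace(0.0, 1.0, 1001)
    Q = 8.0 * np.exp(-3.0 * p)           # m^3/s; replace with an observed FDC

    def annual_energy_mwh(q_cap):
        """Integrate usable flow (above MEF, below capacity) over the FDC."""
        usable = np.clip(Q - Q_MEF, 0.0, q_cap)
        return ETA * RHO_G * HEAD * np.trapz(usable, p) * HOURS / 1e6

    caps = np.linspace(0.1, 8.0, 80)
    energy = np.array([annual_energy_mwh(c) for c in caps])
    profit = energy - 600.0 * caps       # toy annualized cost per m^3/s of capacity
    print(f"energy-optimal capacity: {caps[np.argmax(energy)]:.2f} m^3/s")
    print(f"profit-optimal capacity: {caps[np.argmax(profit)]:.2f} m^3/s")
    ```

    The profit-optimal capacity comes out well below the energy-optimal one, which is the gap between economic and energy optimizations that the paper discusses.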

  15. DETERMINATION OF CHLOROPHENOLS, NITROPHENOLS, AND METHYLPHENOLS IN GROUND-WATER SAMPLES USING HIGH PERFORMANCE LIQUID CHROMATOGRAPHY

    EPA Science Inventory

    A high performance liquid chromatography (HPLC) method was developed to quantitatively determine phenolic compounds and their isomers in aqueous samples. The HPLC method can analyze a mixture of 15 contaminants in the same analytical run with an analysis time of 25 minutes. The...

  16. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time, and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387
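
    A classic example of the "relatively simple analytical equations" such reviews cover is the Plank-type estimate below. It is quoted here in its standard freezing-time form from general food-engineering references, not from this review; thawing adaptations modify the coefficients and use the thermal properties of the thawed layer.

    ```latex
    % Plank-type estimate of the phase-change time for a body of
    % characteristic dimension a (assumed form):
    \begin{equation}
      t = \frac{\rho\,L}{\lvert T_m - T_\infty \rvert}
          \left( \frac{P\,a}{h} + \frac{R\,a^{2}}{k} \right)
    \end{equation}
    % rho: density; L: latent heat; T_m: initial freezing point; T_infty:
    % temperature of the surrounding medium; h: surface heat-transfer
    % coefficient; k: thermal conductivity; P, R: shape factors
    % (slab 1/2, 1/8; infinite cylinder 1/4, 1/16; sphere 1/6, 1/24).
    ```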

  17. Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.

    2017-12-01

    We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python language API for Spark and how to use it to quickly convert existing programs to use Spark to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time averaged map, and correlation map.
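
    The partition-and-aggregate pattern this abstract describes can be illustrated in plain PySpark; this is a generic sketch of a time-averaged map over a bounding box, not NEXUS's actual tiled API, and the inline sample records are stand-ins for data read from tiled storage.

    ```python
    from pyspark import SparkContext

    sc = SparkContext(appName="time-averaged-map")

    # records: (time, lat, lon, sst) tuples; a tiny inline sample stands in
    # for the real data set
    records = sc.parallelize([
        (0, 10.0, 20.0, 26.1), (1, 10.0, 20.0, 26.5),
        (0, 10.5, 20.0, 25.9), (1, 10.5, 20.0, 26.3),
    ])

    lat_min, lat_max, lon_min, lon_max = 5.0, 15.0, 15.0, 25.0

    time_averaged_map = (
        records
        .filter(lambda r: lat_min <= r[1] <= lat_max
                and lon_min <= r[2] <= lon_max)
        .map(lambda r: ((r[1], r[2]), (r[3], 1)))       # key by grid cell
        .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))  # partial sums
        .mapValues(lambda s: s[0] / s[1])               # mean over time per cell
    )
    print(time_averaged_map.collect())
    sc.stop()
    ```

    Keying by spatial cell (or by time slice, for an area-averaged time series) is the partitioning design choice the abstract highlights: it lets the shuffle and reduction run with cloud-scale parallelism.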

  18. Sedimentary Geothermal Feasibility Study: October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad; Zerpa, Luis

    The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step and reservoir areal dimensions; the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.

  19. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.

  20. Preparation and characterization of lysine-immobilized poly(glycidyl methacrylate) nanoparticle-coated capillary for the separation of amino acids by open tubular capillary electrochromatography.

    PubMed

    Xu, Liang; Cui, Pengfei; Wang, Dongmei; Tang, Cheng; Dong, Linyi; Zhang, Can; Duan, Hongquan; Yang, Victor C

    2014-01-03

    In this study, poly(glycidyl methacrylate) (PGMA) nanoparticles (NPs) were prepared and chemically immobilized for the first time onto a capillary inner wall for open tubular capillary electrochromatography (OTCEC). The immobilization of PGMA NPs onto the capillary was attained by a ring-opening reaction between the NPs and an amino-silylated fused capillary inner surface. Scanning electron micrographs clearly demonstrated that the NPs were bound to the capillary inner surface in a dense monolayer. The PGMA NP-coated column was then functionalized by lysine (Lys). After functionalization, the capillary can afford strong anodic electroosmotic flow, especially in acidic running buffers. Separations of three amino acids (including tryptophan, tyrosine and phenylalanine) were performed in NP-modified, monolayer Lys-functionalized and bare uncoated capillaries. Results indicated that the NP-coated column can provide more retention and higher resolution for analytes due to the hydrophobic interaction between analytes and the NP-coating. Run-to-run and column-to-column reproducibilities in the separation of the amino acids using the NP-modified column were also demonstrated. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Experimental evaluation of tool run-out in micro milling

    NASA Astrophysics Data System (ADS)

    Attanasio, Aldo; Ceretti, Elisabetta

    2018-05-01

    This paper deals with the micro milling cutting process, focusing attention on tool run-out measurement. In fact, among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research aims at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving production quality and process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting edge phase measurement is based on force signal analysis. The developed procedure was tested on data from micro milling experiments performed on a Ti6Al4V sample. The results showed that the procedure can be successfully used for tool run-out estimation.

  2. A short note on the mean exit time of the Brownian motion

    NASA Astrophysics Data System (ADS)

    Cadeddu, Lucio; Farina, Maria Antonietta

    We investigate the functional Ω↦ℰ(Ω), where Ω runs through the set of compact domains of fixed volume v in any Riemannian manifold (M,g) and where ℰ(Ω) is the mean exit time from Ω of the Brownian motion. We give an alternative analytical proof of a well-known fact about its critical points proved by McDonald: the critical points of ℰ(Ω) are harmonic domains.
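
    For context, the standard characterization of the mean exit time is the boundary-value problem below; taking ℰ(Ω) as the volume average of the exit time function is one common convention and is assumed here, not quoted from this note.

    ```latex
    % Mean exit time u(x) of Brownian motion (generator (1/2) Delta) started
    % at x in Omega:
    \begin{equation}
      \tfrac{1}{2}\,\Delta u = -1 \ \text{in } \Omega, \qquad
      u = 0 \ \text{on } \partial\Omega, \qquad
      \mathcal{E}(\Omega) = \frac{1}{\operatorname{vol}(\Omega)}
        \int_\Omega u \, \mathrm{d}V .
    \end{equation}
    ```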

  3. A Simplified Shuttle Payload Thermal Analyzer /SSPTA/ program

    NASA Technical Reports Server (NTRS)

    Bartoszek, J. T.; Huckins, B.; Coyle, M.

    1979-01-01

    A simple thermal analysis program for Space Shuttle payloads has been developed to accommodate the user who requires an easily understood but dependable analytical tool. The thermal analysis program includes several thermal subprograms traditionally employed in spacecraft thermal studies, a data management system for data generated by the subprograms, and a master program to coordinate the data files and thermal subprograms. The language and logic used to run the thermal analysis program are designed for the small user. In addition, analytical and storage techniques which conserve computer time and minimize core requirements are incorporated into the program.

  4. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge we face in computing systems research is: "how to solve massive-scale computational problems in the most time/power/energy efficient manner?"

  5. Analysis of phospholipids in bio-oils and fats by hydrophilic interaction liquid chromatography-tandem mass spectrometry.

    PubMed

    Viidanoja, Jyrki

    2015-09-15

    A new, sensitive and selective liquid chromatography-electrospray ionization-tandem mass spectrometric (LC-ESI-MS/MS) method was developed for the analysis of phospholipids (PLs) in bio-oils and fats. This analysis employs hydrophilic interaction liquid chromatography-scheduled multiple reaction monitoring (HILIC-sMRM) with a ZIC-cHILIC column. Eight PL class-selective internal standards (homologs) were used for the semi-quantification of 14 PL classes for the first time. More than 400 scheduled MRMs were used for the measurement of PLs with a run time of 34 min. The method's performance was evaluated for vegetable oil, animal fat, and algae oil. The average within-run and between-run precisions were ≤10% for all of the PL classes that had a direct homologue as an internal standard. The method accuracy was generally within 80-120% for the tested PL analytes in all three sample matrices. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Ideal evolution of magnetohydrodynamic turbulence when imposing Taylor-Green symmetries.

    PubMed

    Brachet, M E; Bustamante, M D; Krstulovic, G; Mininni, P D; Pouquet, A; Rosenberg, D

    2013-01-01

    We investigate the ideal and incompressible magnetohydrodynamic (MHD) equations in three space dimensions for the development of potentially singular structures. The methodology consists of implementing the fourfold symmetries of the Taylor-Green vortex generalized to MHD, leading to substantial computer time and memory savings at a given resolution; we also use a regridding method that allows for lower-resolution runs at early times, with no loss of spectral accuracy. One magnetic configuration is examined at an equivalent resolution of 6144³ points and three different configurations on grids of 4096³ points. At the highest resolution, two different current and vorticity sheet systems are found to collide, producing two successive accelerations in the development of small scales. At the latest time, a convergence of magnetic field lines to the location of maximum current is probably leading locally to a strong bending and directional variability of such lines. A novel analytical method, based on sharp analysis inequalities, is used to assess the validity of the finite-time singularity scenario. This method allows one to rule out spurious singularities by evaluating the rate at which the logarithmic decrement of the analyticity-strip method goes to zero. The result is that the finite-time singularity scenario cannot be ruled out, and the singularity time could be somewhere between t=2.33 and t=2.70. More robust conclusions will require higher-resolution runs and grid-point interpolation measurements of maximum current and vorticity.
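
    For readers unfamiliar with the analyticity-strip diagnostic mentioned above, its standard form (assumed here; the paper's sharp-inequality refinement is not reproduced) fits the energy spectrum at each time and tracks the decay of the fitted decrement:

```latex
% Standard analyticity-strip fit of the energy spectrum:
\[
  E(k,t) \;\sim\; C(t)\, k^{-n(t)}\, e^{-2\delta(t) k},
\]
% where \delta(t) measures the width of the strip of analyticity around
% the real domain. A finite-time singularity at t_* requires
% \delta(t) \to 0 as t \to t_*, so bounding how fast \delta may vanish
% is what allows spurious singularities to be ruled out.
```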

  7. Analysis of Investigational Drugs in Biological Fluids. Method Development and Routine Assay. Appendix A.

    DTIC Science & Technology

    1997-02-13

    [Fragmentary OCR of an analytical method sheet; chemical structure graphics are not recoverable. Recoverable details: an HPLC assay for p-aminopropiophenone (PAPP, WR 000,302) in dog plasma with guaifenesin as internal standard; injection volume 40-80 µL; run time 14 min (PAPP: 10.7 min; guaifenesin: 8.5 min); UV detection at 316 nm.]

  8. Problem-Based Labs and Group Projects in an Introductory University Physics Course

    ERIC Educational Resources Information Center

    Kohnle, Antje; Brown, C. Tom A.; Rae, Cameron F.; Sinclair, Bruce D.

    2012-01-01

    This article describes problem-based labs and analytical and computational project work we have been running at the University of St Andrews in an introductory physics course since 2008/2009. We have found the choice of topics, scaffolding of the process, timing in the year and facilitator guidance decisive for the success of these activities.…

  9. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    PubMed

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution-time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples containing three fluoroquinolones and uncalibrated interferences, was evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10% for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.
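
    As a rough illustration of the augmentation step (a minimal sketch with random stand-in data; the tensorly package, its parafac routine, the array shapes, and the rank choice are assumptions, not the authors' code):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical (elution time, excitation, emission) data cubes
calibration = np.random.rand(50, 20, 30)   # stand-in calibration sample
test_sample = np.random.rand(50, 20, 30)   # stand-in test sample

# Augment along the elution-time mode (mode 0), as in Augmented PARAFAC
augmented = np.concatenate([calibration, test_sample], axis=0)

# Trilinear decomposition; rank = assumed number of fluorescent components
cp = parafac(tl.tensor(augmented), rank=4)
time_profiles, excitation_profiles, emission_profiles = cp.factors

# Interferents separate into their own components (second-order advantage),
# so calibrated analytes can be quantified from their time-mode scores.
print(time_profiles.shape, excitation_profiles.shape, emission_profiles.shape)
```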

  10. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed computing has developed tremendously since cloud computing was proposed in 2006, and it has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were also collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
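
    The TPR specifics are not given in the abstract; the sketch below shows the general shape of a two-phase estimator, with an assumed fixed progress breakpoint and ordinary least-squares fits (both are illustrative choices, not necessarily the paper's):

```python
import numpy as np

def predict_finish_time(progress: np.ndarray, elapsed: np.ndarray,
                        breakpoint: float = 0.5) -> float:
    """Two-phase regression sketch: fit elapsed time against task progress
    in a late phase, then extrapolate the fit to progress = 1.0."""
    late = progress >= breakpoint
    if late.sum() >= 2:                      # enough points in phase two
        slope, intercept = np.polyfit(progress[late], elapsed[late], 1)
    else:                                    # fall back to a single phase
        slope, intercept = np.polyfit(progress, elapsed, 1)
    return slope + intercept                 # predicted time at progress 1.0

# Example: a task whose second half runs slower than its first
progress = np.linspace(0.05, 0.70, 14)
elapsed = np.where(progress < 0.5, 10 * progress, 5 + 20 * (progress - 0.5))
print(round(predict_finish_time(progress, elapsed), 2))   # -> 15.0 s
```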

  11. Design of ProjectRun21: a 14-week prospective cohort study of the influence of running experience and running pace on running-related injury in half-marathoners.

    PubMed

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard

    2017-11-06

    Participation in half-marathons has increased steeply during the past decade, and a vast number of half-marathon running schedules has surfaced in step. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which arise when the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Because load capacity increases with adaptive running training, a runner's running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules that account for level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries than other subgroups. This will enable sport coaches, physiotherapists, and runners themselves to evaluate the injury risk of taking up a 14-week running schedule for a half-marathon.

  12. Determination of phenolic constituents of biological interest in red wine by capillary electrophoresis with electrochemical detection.

    PubMed

    Peng, Youyuan; Chu, Qingcui; Liu, Fanghua; Ye, Jiannong

    2004-01-28

    A simultaneous determination of trans-resveratrol, (-)-epicatechin, and (+)-catechin in red wine by capillary electrophoresis with electrochemical detection (CE-ED) is reported. The effects of the potential of the working electrode, pH and concentration of the running buffer, separation voltage, and injection time on CE-ED were investigated. Under the optimum conditions, the analytes could be separated in a 100 mmol/L borate buffer (pH 9.2) within 20 min. A 300 μm diameter carbon disk electrode gave a good response at +0.85 V (vs SCE) for all analytes. The response was linear over 3 orders of magnitude, with detection limits (S/N = 3) ranging from 2 × 10⁻⁷ to 5 × 10⁻⁷ g/mL for all analytes. This method has been used for the determination of these analytes in red wine without enrichment, and the assay results were satisfactory.

  13. Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroads

    ERIC Educational Resources Information Center

    Jones, Kyle M. L.; Salo, Dorothea

    2018-01-01

    In this paper, the authors address learning analytics and the ways academic libraries are beginning to participate in wider institutional learning analytics initiatives. Since there are moral issues associated with learning analytics, the authors consider how data mining practices run counter to ethical principles in the American Library…

  14. Large volume sample stacking of positively chargeable analytes in capillary zone electrophoresis without polarity switching: use of low reversed electroosmotic flow induced by a cationic surfactant at acidic pH.

    PubMed

    Quirino, J P; Terabe, S

    2000-01-01

    A simple and effective way to improve the detection sensitivity of positively chargeable analytes in capillary zone electrophoresis more than 100-fold is described. Cationic species were made to migrate toward the cathode, even under the reversed electroosmotic flow caused by a cationic surfactant, by using a low-pH run buffer. For the first time with such a configuration, large volume sample stacking of cationic analytes is achieved without a polarity-switching step or loss of efficiency. Samples are prepared in water or aqueous acetonitrile. Aromatic amines and a variety of drugs were concentrated using background solutions containing phosphoric acid and cetyltrimethylammonium bromide. Qualitative and quantitative aspects are also investigated.

  15. EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Song, Shuaiwen

    With the prevalence of the World Wide Web and social networks, there has been a growing interest in high performance analytics for constantly-evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, given the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods that first store the updates and then repeatedly run static graph analytics on a sequence of versions or snapshots are deemed undesirable and computationally infeasible on the GPU. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.

  16. Analytical performance of a bronchial genomic classifier.

    PubMed

    Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean

    2016-02-26

    The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow-up procedures that are invasive and often unnecessary due to the high benign rate of such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015; BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e., potentially interfering substances), tested with blood and genomic DNA; and assay performance, including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer-positive and cancer-negative samples mixed with either blood (up to 10% input mass) or genomic DNA (up to 10% input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a >6 unit scale). Analytical sensitivity, analytical specificity, and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.

  17. Real-time simulation of an automotive gas turbine using the hybrid computer

    NASA Technical Reports Server (NTRS)

    Costakis, W.; Merrill, W. C.

    1984-01-01

    A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with a four-speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated on the digital portion. The simulation runs faster than real time, which makes it a useful tool for a variety of analytical studies.

  18. Tsunami Wave Run-up on a Vertical Wall in Tidal Environment

    NASA Astrophysics Data System (ADS)

    Didenkulova, Ira; Pelinovsky, Efim

    2018-04-01

    We solve analytically a nonlinear problem of shallow water theory for tsunami wave run-up on a vertical wall in a tidal environment. We show that the tide can be considered static during the process of tsunami wave run-up. In this approximation, it is possible to obtain the exact solution for the run-up height as a function of the incident wave height. This allows us to investigate the influence of the tide on the run-up characteristics.

  19. Did the ever dead outnumber the living and when? A birth-and-death approach

    NASA Astrophysics Data System (ADS)

    Avan, Jean; Grosjean, Nicolas; Huillet, Thierry

    2015-02-01

    This paper is an attempt to formalize analytically the question raised in 'World Population Explained: Do Dead People Outnumber Living, Or Vice Versa?', Huffington Post, Howard (2012). We start by developing simple deterministic Malthusian growth models of the problem (with birth and death rates either constant or time-dependent) before moving on to linear birth-and-death Markov chain models and age-structured models.
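
    In the constant-rate case, the deterministic version of the question has a one-line answer; the following worked example is illustrative and not taken from the paper:

```latex
% Constant birth rate b and death rate d, r = b - d: the living obey
% L(t) = L_0 e^{rt}, and the ever-dead accumulate as
\[
  D(t) = \int_0^t d\, L(s)\, ds = \frac{d\, L_0}{r}\left(e^{rt} - 1\right).
\]
% Hence D(t) > L(t) iff d\,(1 - e^{-rt}) > r; for growing populations
% (r > 0) the dead eventually outnumber the living precisely when
% d > b - d, i.e. whenever b < 2d.
```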

  20. A Functional Approach to Reducing Runaway Behavior and Stabilizing Placements for Adolescents in Foster Care

    ERIC Educational Resources Information Center

    Clark, Hewitt B.; Crosland, Kimberly A.; Geller, David; Cripe, Michael; Kenney, Terresa; Neff, Bryon; Dunlap, Glen

    2008-01-01

    Teenagers' running from foster placement is a significant problem in the field of child protection. This article describes a functional, behavior analytic approach to reducing running away through assessing the motivations for running, involving the youth in the assessment process, and implementing interventions to enhance the reinforcing value of…

  1. Experimental and analytical studies on the vibration serviceability of long-span prestressed concrete floor

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Liu, Jiepeng; Li, Jiang; Zhang, Ruizhi

    2018-04-01

    An extensive experimental and theoretical research study was undertaken to investigate the vibration serviceability of a long-span prestressed concrete floor system to be used in the lounge of a major airport. Specifically, jumping impact tests were carried out to obtain the floor's modal parameters, followed by an analysis of the distribution of peak accelerations. Running tests were also performed to capture the acceleration responses. The prestressed concrete floor was found to have a low fundamental natural frequency (≈ 8.86 Hz) with an average modal damping ratio of ≈ 2.17%. A coefficient β_rp is proposed for convenient calculation of the maximum root-mean-square acceleration under running. In the theoretical analysis, the prestressed concrete floor under running excitation is treated as a two-span continuous anisotropic rectangular plate with simply-supported edges. The calculated analytical results (natural frequencies and root-mean-square accelerations) agree well with the experimental ones. The analytical approach is thus validated.

  2. High Resolution Nature Runs and the Big Data Challenge

    NASA Technical Reports Server (NTRS)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The Nature Runs use GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The Nature Runs will be completed at two surface grid resolutions, 7 and 3 kilometers, with 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Retrospective Analysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments tailored to data-intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.

  3. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    NASA Astrophysics Data System (ADS)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  4. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy.

    PubMed

    Wahl, N; Hennig, P; Wieser, H P; Bangert, M

    2017-06-26

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.

  5. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed computing has developed tremendously since cloud computing was proposed in 2006, and it has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates the allocation, processing, and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were also collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs. PMID:27589753

  6. An automated real-time free phenytoin assay to replace the obsolete Abbott TDx method.

    PubMed

    Williams, Christopher; Jones, Richard; Akl, Pascale; Blick, Kenneth

    2014-01-01

    Phenytoin is a commonly used anticonvulsant that is highly protein bound with a narrow therapeutic range. The unbound fraction, free phenytoin (FP), is responsible for pharmacologic effects; therefore, it is essential to measure both FP and total serum phenytoin levels. Historically, the Abbott TDx method has been widely used for the measurement of FP and was the method used in our laboratory. However, the FP TDx assay was recently discontinued by the manufacturer, so we had to develop an alternative methodology. We evaluated the Beckman-Coulter DxC800-based FP method for linearity, analytical sensitivity, and precision. The analytical measurement range of the method was 0.41 to 5.30 μg/mL. Within-run and between-run precision studies yielded CVs of 3.8% and 5.5%, respectively. The method compared favorably with the TDx method, yielding the following regression equation: DxC800 = 0.9 × TDx + 0.10; r² = 0.97 (n = 97). The new FP assay appears to be an acceptable alternative to the TDx method.
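
    A minimal sketch of the kind of method-comparison regression summarized above, using made-up paired results (ordinary least squares shown; dedicated method-comparison techniques such as Deming regression are common in practice):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired free-phenytoin results (ug/mL) on both platforms
tdx = rng.uniform(0.4, 5.3, size=97)                  # comparison method
dxc800 = 0.9 * tdx + 0.10 + rng.normal(0, 0.08, 97)   # candidate method

slope, intercept = np.polyfit(tdx, dxc800, 1)         # OLS comparison line
r2 = np.corrcoef(tdx, dxc800)[0, 1] ** 2
print(f"DxC800 = {slope:.2f}*TDx + {intercept:.2f}; r^2 = {r2:.2f}")

# Within-run precision reported as CV% from replicates of a single pool
replicates = rng.normal(2.0, 2.0 * 0.038, size=20)    # ~3.8% CV target
print(f"within-run CV = {100 * replicates.std(ddof=1) / replicates.mean():.1f}%")
```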

  7. A surrogate analyte-based liquid chromatography-tandem mass spectrometry method for the determination of endogenous cyclic nucleotides in rat brain.

    PubMed

    Chen, Jie; Tabatabaei, Ali; Zook, Doug; Wang, Yan; Danks, Anne; Stauber, Kathe

    2017-11-30

    A robust high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay was developed and qualified for the measurement of cyclic nucleotides (cNTs) in rat brain tissue. Stable isotopically labeled 3',5'-cyclic adenosine-¹³C₅ monophosphate (¹³C₅-cAMP) and 3',5'-cyclic guanosine-¹³C,¹⁵N₂ monophosphate (¹³C¹⁵N₂-cGMP) were used as surrogate analytes to measure endogenous 3',5'-cyclic adenosine monophosphate (cAMP) and 3',5'-cyclic guanosine monophosphate (cGMP). Pre-weighed frozen rat brain samples were rapidly homogenized in 0.4 M perchloric acid at a ratio of 1:4 (w/v). Following internal standard addition and dilution, the resulting extracts were analyzed using negative ion mode electrospray ionization LC-MS/MS. The calibration curves for both analytes ranged from 5 to 2000 ng/g and showed excellent linearity (r² > 0.996). Relative surrogate analyte-to-analyte LC-MS/MS responses were determined to correct concentrations derived from the surrogate curves. The intra-run precision (CV%) for ¹³C₅-cAMP and ¹³C¹⁵N₂-cGMP was below 6.6% and 7.4%, respectively, while the inter-run precision (CV%) was 8.5% and 5.8%, respectively. The intra-run accuracy (Dev%) for ¹³C₅-cAMP and ¹³C¹⁵N₂-cGMP was <11.9% and 10.3%, respectively, and the inter-run Dev% was <6.8% and 5.5%, respectively. Qualification experiments demonstrated high analyte recoveries, minimal matrix effects and low autosampler carryover. Acceptable frozen storage, freeze/thaw, benchtop, processed sample and autosampler stability were shown in brain sample homogenates as well as post-processed samples. The method was found to be suitable for the analysis of rat brain tissue cAMP and cGMP levels in preclinical biomarker development studies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
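
    The sharding rationale can be made concrete with a toy reduce over a time-sharded array (the layout, shapes, and use of local processes in place of cloud nodes are all illustrative assumptions):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_mean(shard: np.ndarray):
    """Per-node work: partial sum and count over this shard's time steps."""
    return shard.sum(axis=0), shard.shape[0]

if __name__ == "__main__":
    # Hypothetical layout: the long time axis split into per-node shards
    shards = [np.random.rand(100, 64, 128) for _ in range(8)]

    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(partial_mean, shards))

    total = sum(s for s, _ in partials)      # combine partial sums
    count = sum(n for _, n in partials)      # combine time-step counts
    print((total / count).shape)             # (64, 128) long-term mean field
```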

  9. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  10. Analytical prediction of the heat transfer from a blood vessel near the skin surface when cooled by a symmetrical cooling strip

    NASA Technical Reports Server (NTRS)

    Chato, J. C.; Shitzer, A.

    1971-01-01

    An analytical method was developed to estimate the amount of heat extracted from an artery running close to the skin surface when the skin is cooled in a symmetrical fashion by a cooling strip. The results indicate that the optimum width of a cooling strip is approximately three times the depth to the centerline of the artery. The heat extracted from an artery with such a strip is about 0.9 W/m·°C, which is too small to significantly affect the temperature of the blood flowing through a main blood vessel such as the carotid artery. The method is applicable to veins as well.

  11. Structural safety of trams in case of misguidance in a switch

    NASA Astrophysics Data System (ADS)

    Schindler, Christian; Schwickert, Martin; Simonis, Andreas

    2010-08-01

    Tram vehicles mainly operate on street tracks where sometimes misguidance in switches occurs due to unfavourable conditions. Generally, in this situation, the first running gear of the vehicle follows the bend track while the next running gears continue straight ahead. This leads to a constraint that can only be solved if the vehicle's articulation is damaged or the wheel derails. The last-mentioned situation is less critical in terms of safety and costs. Five different tram types, one of them high floor, the rest low floor, were examined analytically. Numerical simulation was used to determine which wheel would be the first to derail and what level of force is needed in the articulation area between two carbodies to make a tram derail. It was shown that with pure analytical simulation, only an idea of which tram type behaves better or worse in such a situation can be gained, while a three-dimensional computational simulation gives more realistic values for the forces that arise. Three of the four low-floor tram types need much higher articulation forces to make a wheel derail in a switch misguidance situation. One particular three-car type with two single-axle running gears underneath the centre car must be designed to withstand nearly three times higher articulation forces than a conventional high-floor articulated tram. Tram designers must be aware of that and should design the carbody accordingly.

  12. Qubits and quantum Hamiltonian computing performances for operating a digital Boolean 1/2-adder

    NASA Astrophysics Data System (ADS)

    Dridi, Ghassen; Faizy Namarvar, Omid; Joachim, Christian

    2018-04-01

    Quantum Boolean (1 + 1) digits 1/2-adders are designed with 3 qubits for the quantum computing (Qubits) and 4 quantum states for the quantum Hamiltonian computing (QHC) approaches. Detailed analytical solutions are provided to analyse the time operation of those different 1/2-adder gates. QHC is more robust to noise than Qubits and requires about the same amount of energy for running its 1/2-adder logical operations. QHC is faster in time than Qubits but its logical output measurement takes longer.

  13. Simultaneous UPLC-MS/MS assay for the detection of the traditional antipsychotics haloperidol, fluphenazine, perphenazine, and thiothixene in serum and plasma.

    PubMed

    Juenke, JoEtta M; Brown, Paul I; Urry, Francis M; Johnson-Davis, Kamisha L; McMillin, Gwendolyn A

    2013-08-23

    Most antipsychotic drugs commonly prescribed in the USA are monitored by liquid and gas chromatographic methods. Method performance has been improved using ultra-high-pressure liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). A rapid and simple procedure for monitoring haloperidol, thiothixene, fluphenazine, and perphenazine is described here. Antipsychotic drug concentrations in serum and plasma were determined by LC-MS/MS (Waters Acquity UPLC TQD). The instrument is operated with an ESI interface, in multiple reaction monitoring (MRM) and positive ion mode. The resolution of both quadrupoles was maintained at unit mass with a peak width at half height of 0.7 amu. Data analysis was performed using the Waters QuanLynx software. Serum or plasma samples were thawed at room temperature and a 100 μL aliquot was placed in a tube. Then 300 μL of precipitating reagent (acetonitrile-methanol, 50:50, v/v) containing the internal standard (0.12 ng/μL imipramine-D3) was added to each tube. The samples were vortexed and centrifuged. The supernatant was transferred to an autosampler vial and 8 μL was injected into the UPLC-MS/MS. Utilizing a Waters Acquity UPLC HSS T3 1.8 μm, 2.1 × 50 mm column at 25 °C, the analytes were separated using a timed, linear gradient of acetonitrile and water, each with 0.1% formic acid added. The column is eluted into the LC-MS/MS to detect imipramine-D3 at transition 284.25 > 89.10, haloperidol at 376.18 > 165.06, thiothixene at 444.27 > 139.24, fluphenazine at 438.27 > 171.11, and perphenazine at 404.19 > 143.07. Secondary transitions are also monitored for each analyte: imipramine-D3 at 284.25 > 193.10, haloperidol at 376.18 > 122.97, thiothixene at 444.27 > 97.93, fluphenazine at 438.27 > 143.08, and perphenazine at 404.19 > 171.11. The run time is 1.8 min per injection with baseline-resolved chromatographic separation. The analytical measurement range was 0.2 to 12.0 ng/mL for fluphenazine and perphenazine, and 1 to 60.0 ng/mL for haloperidol and thiothixene. Intra-assay and inter-assay imprecision (CV) was less than 15% at two concentrations for each analyte. By utilizing an LC-MS/MS method we combined two previously established analytical assays into one, yielding a 75% time savings on set-up and a significantly shortened analytical run time. These changes reduced the turn-around time for analysis and eliminated interference issues, resulting in fewer injections and increased column lifetime. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Continuous-time quantum search on balanced trees

    NASA Astrophysics Data System (ADS)

    Philipp, Pascal; Tarrataca, Luís; Boettcher, Stefan

    2016-03-01

    We examine the effect of network heterogeneity on the performance of quantum search algorithms. To this end, we study quantum search on a tree for the oracle Hamiltonian formulation employed by continuous-time quantum walks. We use analytical and numerical arguments to show that the exponent of the asymptotic running time ∼N^β changes uniformly from β = 0.5 to β = 1 as the searched-for site is moved from the root of the tree towards the leaves. These results imply that the time complexity of the quantum search algorithm on a balanced tree is closely correlated with certain path-based centrality measures of the searched-for site.

  15. Design, Control and in Situ Visualization of Gas Nitriding Processes

    PubMed Central

    Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy

    2010-01-01

    The article presents a complex system for the design, in situ visualization, and control of a commonly used surface treatment: the gas nitriding process. The computer-aided design concept uses analytical mathematical models and artificial intelligence methods. As a result, the system enables poly-optimization and poly-parametric simulation of the course of the process, combined with visualization of changes in process parameter values as a function of time, as well as prediction of the properties of nitrided layers. For in situ visualization of nitrided layer growth, computer procedures were developed that correlate the direct and differential voltage-time traces of the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to combine, while the process is running, the recorded voltage-time traces with models of the process. PMID:22315536

  16. Ecological Risk Assessment of Explosive Residues in Rodents, Reptiles, Amphibians, and Fish

    DTIC Science & Technology

    2004-03-01

    [Fragmentary OCR of the FY2002 final report of SERDP Project ER-1235; front matter and tables of contents are not recoverable. Recoverable details: oligonucleotide primers were designed from the pendrin sequence of Mus musculus, and PCR was carried out using a FailSafe kit (Epicentre, WI); in the perchlorate analytical work (Phase V), a calibration curve is run each time a set of samples is analyzed.]

  17. Effects of Physical Training in Military Populations: A Meta-Analytic Summary

    DTIC Science & Technology

    2010-10-25

    [Fragmentary OCR of the meta-analysis. Recoverable details: one modified program introduced ability-group runs, stretching, movement drills, and calisthenics into standard training; a later program for advanced training combined progressive calisthenics with movement exercises, interval running, and ability-group endurance runs. A partially recovered results table for the modified calisthenics program in advanced training reports, for sit-ups, an effect size g = .38 (SE .04, z = 3.45, p = .000) for men and g = .43 (remainder truncated) for women.]

  18. Non-linear structure formation in the `Running FLRW' cosmological model

    NASA Astrophysics Data System (ADS)

    Bibiano, Antonio; Croton, Darren J.

    2016-07-01

    We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving gravitational Newton's coupling, G(z). In this paper, we review the model and introduce the analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.

  19. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
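
    The rescaling idea itself fits in a few lines. The Python sketch below (not the authors' MATLAB/GPU code) assumes the common "white Monte Carlo" form: store detected-photon path lengths from a single absorption-free run, then reweight by Beer-Lambert for each new absorption coefficient; a GPU version would evaluate the same weighting in parallel.

```python
import numpy as np

def rescale_reflectance(path_lengths_cm: np.ndarray, mu_a_per_cm: float,
                        n_launched: int) -> float:
    """Rescale one stored absorption-free Monte Carlo run to a new mu_a.

    Each detected photon with total path length L is reweighted by
    exp(-mu_a * L), so a single simulation yields diffuse reflectance
    for any absorption coefficient without re-running the transport.
    """
    return np.exp(-mu_a_per_cm * path_lengths_cm).sum() / n_launched

# Synthetic stand-in for detected-photon path lengths from one run
rng = np.random.default_rng(0)
paths = rng.exponential(scale=2.0, size=200_000)       # cm, hypothetical
for mu_a in (0.01, 0.1, 1.0):                          # 1/cm
    print(mu_a, rescale_reflectance(paths, mu_a, n_launched=1_000_000))
```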

  20. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variation of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in Hazard Mapping and Early Warning.

  1. Mobile environment for an emission spectrometer

    NASA Astrophysics Data System (ADS)

    Radziak, Kamil; Litwin, Dariusz; Galas, Jacek; Tyburska-Staniewska, Anna; Ramsza, Andrzej

    2017-08-01

    The paper describes a mobile application for use in a chemical analytical laboratory. The program, which runs under the Android operating system, allows preview of measurements recorded by the emission spectrometer. Another part of the application monitors operational and configuration parameters of the device in real time. The first part of this paper gives an overview of atomic spectrometry. The second part describes the application and potential directions for its further development.

  2. A novel in situ strategy for the preparation of a β-cyclodextrin/polydopamine-coated capillary column for capillary electrochromatography enantioseparations.

    PubMed

    Guo, Heying; Niu, Xiaoying; Pan, Congjie; Yi, Tao; Chen, Hongli; Chen, Xingguo

    2017-06-01

    Inspired by the chiral recognition ability of β-cyclodextrin and the natural adhesive properties of polydopamine under alkaline conditions, in this study a rapid in situ modification strategy was developed to fabricate capillary columns coated with a β-cyclodextrin/polydopamine composite for open tubular capillary electrochromatography. The results of scanning electron microscopy, FTIR spectroscopy, streaming potential, and electroosmotic flow studies indicated that β-cyclodextrin/polydopamine was successfully fixed on the inner wall of the capillary column. The coating can be achieved within 1 h, greatly reducing the capillary preparation time. The performance of the β-cyclodextrin/polydopamine-coated capillary was validated by the analysis of seven pairs of chiral analytes, namely epinephrine, norepinephrine, isoprenaline, terbutaline, verapamil, tryptophan, and carvedilol. Good enantioseparation efficiencies were achieved for all of them. For three consecutive runs, the relative standard deviations of the analytes' migration times for intraday, interday, and column-to-column repeatability were in the ranges of 0.41-1.74, 1.03-4.18, and 1.66-8.24%, respectively. Moreover, the separation efficiency of the β-cyclodextrin/polydopamine-coated capillary column did not decrease appreciably over 90 runs. The strategy should also be feasible for introducing and immobilizing other chiral selectors on the inner wall surface of capillary columns. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Solar electric geocentric transfer with attitude constraints: Analysis

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Malchow, H. L.; Delbaum, T. N.

    1975-01-01

    A time-optimal or nearly time-optimal trajectory program was developed for solar electric geocentric transfer with or without attitude constraints and with an optional initial high-thrust stage. The method of averaging reduces computation time. A nonsingular set of orbital elements is used. The constraints, which are those of one of the SERT-C designs, introduce complexities into the analysis, and the solution admits possible discontinuous changes in thrust direction. The power degradation due to Van Allen radiation is modeled analytically. A wide range of solar cell characteristics is assumed. Effects such as oblateness and shadowing are included. The analysis and the results of many example runs are included.

  4. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purpose of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculations would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
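
    The division of labor described above (proxy fills the Jacobian, the original model only screens candidate upgrades) can be sketched as a Gauss-Newton loop; the toy models and function names below are hypothetical, and the production workflow lives in the PEST suite rather than in code like this:

```python
import numpy as np

def calibrate(expensive_model, proxy_model, p, observed, iters=10, eps=1e-6):
    """Gauss-Newton sketch: the cheap proxy populates the Jacobian; the
    expensive model is run only to test each proposed parameter upgrade."""
    for _ in range(iters):
        resid = observed - expensive_model(p)          # one costly run
        J = np.column_stack([                          # proxy-based Jacobian
            (proxy_model(p + eps * e) - proxy_model(p)) / eps
            for e in np.eye(p.size)
        ])
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        if np.sum((observed - expensive_model(p + step)) ** 2) \
                < np.sum(resid ** 2):                  # accept only if better
            p = p + step
        else:
            break
    return p

# Toy stand-ins: a numerically "noisy" model and a smooth analytic proxy
x = np.linspace(0.0, 1.0, 30)
expensive = lambda p: p[0] * x + p[1] * x**2 + 1e-4 * np.sin(1e4 * p[0] * x)
proxy = lambda p: p[0] * x + p[1] * x**2
observed = expensive(np.array([2.0, -1.0]))
print(calibrate(expensive, proxy, np.zeros(2), observed))   # ~ [2, -1]
```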

  5. Short-run and long-run elasticities of electricity demand in the public sector: A case study of the United States Navy bases

    NASA Astrophysics Data System (ADS)

    Choi, Jino

    Numerous studies have examined the elasticities of electricity demand (residential as well as commercial and industrial) in the private sector. However, no one appears to have examined the behavior of public-sector demand. This study aims to fill that gap and to provide insights into electricity demand in the public sector, using U.S. Navy bases as a case study. This study examines electricity demand data for 38 Navy activities within the United States over a 16-year period from 1985 through 2000. The Navy maintains a highly diverse shore infrastructure to conduct its mission and to support the fleet. The types of shore facilities include shipyards, air stations, aviation depots, hospitals, and many others. These Navy activities are analogous to commercial or industrial organizations in the private sector. In this study, I used a number of analytical approaches to estimate short-run and long-run elasticities of electricity demand. Estimation using pooled data was rejected because it failed the test for homogeneity. Estimation using the time series data of each Navy activity produced several wrong signs for coefficients. The Stein-rule estimator did not differ significantly from the separate cross-section estimates because of the strong rejection of the homogeneity assumption. The iterative Bayesian shrinkage estimator provided the most reasonable results. The empirical findings from this study are as follows. First, the Navy's electricity demand is price elastic. Second, the price elasticities appear to be lower than those of the private sector. The short-run price elasticities for the Navy activities ranged from -0.083 to -0.157. The long-run price elasticities ranged from -0.151 to -0.769.
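
    For context on the short-run/long-run distinction, a standard partial-adjustment (Koyck) specification is sketched below; this is the textbook construction, not necessarily the exact model estimated in the dissertation:

```latex
% Dynamic log-linear demand with lagged adjustment:
\[
  \ln E_t = \alpha + \beta \ln P_t + \lambda \ln E_{t-1} + \varepsilon_t ,
\]
% where E_t is electricity consumption and P_t its price. The short-run
% price elasticity is \beta; full adjustment gives the long-run elasticity
\[
  \eta_{LR} = \frac{\beta}{1 - \lambda},
\]
% which exceeds |\beta| in magnitude whenever 0 < \lambda < 1.
```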

  6. Simultaneous determination of phenylethanoid glycosides and aglycones by capillary zone electrophoresis with running buffer modifier.

    PubMed

    Dong, Shuqing; Gao, Ruibin; Yang, Yan; Guo, Mei; Ni, Jingman; Zhao, Liang

    2014-03-15

    Although the separation efficiency of capillary electrophoresis (CE) is much higher than that of other chromatographic methods, it is sometimes difficult to adequately separate the complex ingredients in biological samples. This article describes how one effective and simple way to improve separation efficiency in CE is to add modifiers to the running buffer. The running buffer modifier β-cyclodextrin (β-CD) was explored to rapidly and completely separate four phenylethanoid glycosides and aglycones (homovanillyl alcohol, hydroxytyrosol, 3,4-dimethoxycinnamic acid, and caffeic acid) in Lamiophlomis rotata (Lr) and Cistanche by capillary zone electrophoresis with ultraviolet (UV) detection. It was found that when β-CD was used as the running buffer modifier, a baseline separation of the four analytes could be accomplished in less than 20 min and the detection limits were as low as 10(-3) mg L(-1). Other factors affecting the CE separation, such as working potential, pH value and ionic strength of the running buffer, separation voltage, and sample injection time, were investigated extensively. Under the optimal conditions, a successful practical application to the determination of Lr and Cistanche samples confirmed the validity and practicability of this method. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Total hydrocarbon content (THC) testing in liquid oxygen (LOX) systems

    NASA Astrophysics Data System (ADS)

    Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.

    2015-12-01

    The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations. Examples of these variations include the following: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time at an active SSC test stand in both storage and run vessels. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THCs in the test stand run tank, modified for THC analysis via dip tubes, will be presented.

  8. HPLC detection of soluble carbohydrates involved in mannitol and trehalose metabolism in the edible mushroom Agaricus bisporus.

    PubMed

    Wannet, W J; Hermans, J H; van Der Drift, C; Op Den Camp, H J

    2000-02-01

    A convenient and sensitive method was developed to separate and detect various types of carbohydrates (polyols, mono- and disaccharides, and phosphorylated sugars) simultaneously using high-performance liquid chromatography (HPLC). The method consists of a chromatographic separation on a CarboPac PA1 anion-exchange analytical column followed by pulsed amperometric detection. In a single run (43 min) 13 carbohydrates were readily resolved. Calibration plots were linear over the ranges of 5-25 microM to 1.0-1.5 mM. The reliable and fast analysis technique, avoiding derivatization steps and long run times, was used to determine the levels of carbohydrates involved in mannitol and trehalose metabolism in the edible mushroom Agaricus bisporus. Moreover, the method was used to study the trehalose phosphorylase reaction.

  9. Total Hydrocarbon Content (THC) Testing in Liquid Oxygen (LOX)

    NASA Technical Reports Server (NTRS)

    Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.

    2016-01-01

    The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations. Examples of these variations include the following: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time at an active SSC test stand in both storage and run vessels. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THCs in the test stand run tank, modified for THC analysis via dip tubes, will be presented.

  10. RunJumpCode: An Educational Game for Educating Programming

    ERIC Educational Resources Information Center

    Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon

    2017-01-01

    Programming promotes critical thinking, problem solving and analytic skills through creating solutions to everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…

  11. A rapid estimation of near field tsunami run-up

    USGS Publications Warehouse

    Riquelme, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jaime

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunami a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: the 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003 Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  12. A rapid estimation of tsunami run-up based on finite fault models

    NASA Astrophysics Data System (ADS)

    Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S.

    2014-12-01

    Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunami a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al., 2013, that was especially calculated for zones with a very well defined strike, i.e., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for speed. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases.

  13. Passive Nosetip Technology (PANT) Program. Volume X. Summary of Experimental and Analytical Results

    DTIC Science & Technology

    1975-01-01

    [Abstract not recoverable: the record text consists of figure-list fragments referencing comparisons of scallop and sandgrain-type calorimeter data, geometry and shape-profile history for a 1.5-inch nose radius camphor model, and comparisons of transitional shapes between graphite-sting runs and PANT camphor (low-temperature ablator) runs in the NOL Hypersonic Wind Tunnel.]

  14. A high-throughput, simultaneous analysis of carotenoids, chlorophylls and tocopherol using sub two micron core shell technology columns.

    PubMed

    Chebrolu, Kranthi K; Yousef, Gad G; Park, Ryan; Tanimura, Yoshinori; Brown, Allan F

    2015-09-15

    A high-throughput, robust and reliable method for the simultaneous analysis of five carotenoids, four chlorophylls and one tocopherol was developed for rapid screening of large sample populations to facilitate molecular biology and plant breeding. Separation was achieved for 10 known analytes and four unknown carotenoids in a significantly reduced run time of 10 min. The identity of the 10 analytes was confirmed by their UV-Vis absorption spectra. Quantification of tocopherol, carotenoids and chlorophylls was performed at 290 nm, 460 nm and 650 nm, respectively. In this report, two sub-two-micron-particle core-shell columns, Kinetex from Phenomenex (1.7 μm particle size, 12% carbon load) and Cortecs from Waters (1.6 μm particle size, 6.6% carbon load), were investigated and their separation efficiencies were evaluated. The peak resolutions were >1.5 for all analytes except for chlorophyll-a' with the Cortecs column. The ruggedness of the method was evaluated on two identical but separate instruments, which produced CVs below 2% in peak retention times for nine out of 10 analytes separated. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Internal quality control: best practice.

    PubMed

    Kinns, Helen; Pitkin, Sarah; Housley, David; Freedman, Danielle B

    2013-12-01

    There is a wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios from validation of incorrect patient results to over-investigation of falsely rejected analytical runs. This article will provide a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance varies between analytes as does the definition of a clinically significant error. Unfortunately many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
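
    The assay-grouped rule logic the article recommends can be made concrete with two of the most common control rules. The sketch below is a minimal illustration — not the authors' system — assuming the control target mean and SD are already established.

    ```python
    import numpy as np

    def iqc_check(results, mean, sd):
        """Apply two common Westgard-style IQC rules to a run's controls:
        1-3s: any control beyond +/-3 SD -> reject;
        2-2s: two consecutive controls beyond the same +/-2 SD limit -> reject."""
        z = (np.asarray(results, float) - mean) / sd
        if np.any(np.abs(z) > 3):
            return "reject (1-3s)"
        for i in range(len(z) - 1):
            if (z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2):
                return "reject (2-2s)"
        return "accept"

    # Example: sodium control, target 140 mmol/L, SD 1.5 mmol/L
    print(iqc_check([141.2, 143.4, 143.6], mean=140.0, sd=1.5))  # reject (2-2s)
    ```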

  16. Chromatographic selectivity of poly(alkyl methacrylate-co-divinylbenzene) monolithic columns for polar aromatic compounds by pressure-driven capillary liquid chromatography.

    PubMed

    Lin, Shu-Ling; Wang, Chih-Chieh; Fuh, Ming-Ren

    2016-10-05

    In this study, divinylbenzene (DVB) was used as the cross-linker to prepare alkyl methacrylate (AlMA) monoliths, incorporating π-π interactions between aromatic analytes and the AlMA-DVB monolithic stationary phases in capillary LC analysis. Various AlMA/DVB ratios were investigated to prepare a series of 30% AlMA-DVB monolithic stationary phases in fused-silica capillaries (250-μm i.d.). The physical properties (such as porosity, permeability, and column efficiency) of the synthesized AlMA-DVB monolithic columns were characterized. Isocratic elution of phenol derivatives was first employed to evaluate the suitability of the prepared AlMA-DVB columns for small-molecule separation. The run-to-run (0.16-1.20% RSD; n = 3) and column-to-column (0.26-2.95% RSD; n = 3) repeatabilities of retention times were also examined using the selected AlMA-DVB monolithic columns. The π-π interactions between the aromatic ring and the DVB-based stationary phase offered better recognition of polar analytes with aromatic moieties, which resulted in better separation resolution of aromatic analytes on the AlMA-DVB monolithic columns. To demonstrate the potential for environmental and/or food-safety applications, eight phenylurea herbicides with a single benzene ring and seven sulfonamide antibiotics with polyaromatic moieties were analyzed using the selected AlMA-DVB monolithic columns. Copyright © 2016. Published by Elsevier B.V.

  17. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  18. Determination of residual cell culture media components by MEKC.

    PubMed

    Zhang, Junge; Chakraborty, Utpal; Foley, Joe P

    2009-11-01

    Folic acid, hypoxanthine, mycophenolic acid, nicotinic acid, riboflavin, and xanthine are widely used as cell culture media components in monoclonal antibody manufacturing. These components are subsequently removed during the downstream purification processes. This article describes a single MEKC method that can simultaneously determine all the listed compounds with acceptable LOD and LOQ. All the analytes were successfully separated by MEKC using running buffer containing 40 mM SDS, 20 mM sodium phosphate, and 20 mM sodium borate at pH 9.0. The MEKC method was compared to the corresponding CZE method using the same running buffer containing no SDS. The effect of SDS concentration on separation, the pH of the running buffer, and the detection wavelength were studied and optimal MEKC conditions were established. Good linearity was obtained with correlation coefficients of more than 0.99 for all analytes. Specificity, accuracy, and precision were also evaluated. The recovery was in the range of 89-112%. The precision results were in the range of 1.7-4.8%. The experimentally determined data demonstrated that the MEKC method is applicable to the determination of the six analytes in in-process samples from monoclonal antibody manufacturing processes.

  19. Benzathine penicillin G: a model for long-term pharmacokinetic comparison of parenteral long-acting formulations.

    PubMed

    Shahbazi, M A; Azimi, K; Hamidi, M

    2013-04-01

    Long-acting intramuscular penicillin G injection is an important product for the management of some severe infections. However, testing the bioequivalence of such long-acting formulations is difficult. Our aim was to undertake such a test using a generic formulation containing 1 200 000 IU of benzathine penicillin G powder and an innovator's product (Retarpen(®) 1.2 million units; Sandoz, Switzerland). In an open, double-blind, randomized, two-period, two-group crossover study, 12 healthy male volunteers received both formulations of benzathine penicillin G on two different days, with a 5-month washout period between the doses and a sampling period of over 500 h. A simple, sensitive and rapid high-performance liquid chromatography (HPLC)-UV method was developed and validated for determination of penicillin G plasma concentrations and other pharmacokinetic (PK) parameters. The analytical method produced linear responses within a wide analyte concentration range, with average within-run and between-run variations below 15% and acceptable recovery, accuracy and sensitivity. The primary PK parameters used were the maximum plasma concentration (Cmax), the time to reach the maximal concentration (Tmax) and the area under the plasma concentration vs. time curve from time zero to the last sampling time (AUC0→t), using a standard non-compartmental approach. Based on these parameters, the two formulations were bioequivalent. We illustrate the bioequivalence testing of a very long-acting product. The data indicate that the generic test formulation and the branded reference formulation were bioequivalent in fasting healthy Iranian male volunteers. © 2013 Blackwell Publishing Ltd.
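
    The primary parameters named above — Cmax, Tmax and AUC0→t by the standard non-compartmental approach — reduce to simple arithmetic on the concentration-time profile. A minimal sketch with invented numbers (linear trapezoidal rule):

    ```python
    import numpy as np

    def nca(t, c):
        """Non-compartmental PK parameters from a concentration-time profile:
        Cmax, Tmax, and AUC from time zero to the last sample (trapezoidal)."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        i = int(np.argmax(c))
        auc = float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))
        return c[i], t[i], auc

    # Hypothetical plasma concentrations (ng/mL) at sampling times (h)
    t = [0, 6, 24, 72, 168, 336, 504]
    c = [0.0, 40.0, 95.0, 60.0, 30.0, 12.0, 4.0]
    cmax, tmax, auc = nca(t, c)
    print(f"Cmax = {cmax} ng/mL at Tmax = {tmax} h; AUC0->t = {auc:.0f} ng*h/mL")
    ```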

  20. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
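
    A running diffusion coefficient is conventionally the mean-square displacement divided by twice the elapsed time, averaged over the particle ensemble. The sketch below demonstrates that estimator on ordinary random walks (a stand-in for the test-particle trajectories), for which d_xx(t) should converge to the input diffusion coefficient.

    ```python
    import numpy as np

    # Running diffusion coefficient d_xx(t) = <(x(t) - x(0))^2> / (2 t)
    rng = np.random.default_rng(1)
    n_particles, n_steps, dt, D = 5000, 1000, 0.01, 1.0
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), (n_particles, n_steps))
    x = np.cumsum(steps, axis=1)               # trajectories with x(0) = 0
    t = dt * np.arange(1, n_steps + 1)
    d_xx = np.mean(x ** 2, axis=0) / (2 * t)   # ensemble-averaged estimator

    print(d_xx[-1])   # ~1.0: Markovian diffusion recovers the input D
    ```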

  1. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of selected carbamate pesticides in water by high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, S.L.; Johnson, S.M.

    1994-01-01

    As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
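
    Quantification against external standards, as described, is at bottom a linear fit of detector response to standard concentration, inverted for the unknown. A generic illustration — not the Survey's procedure — with invented peak areas:

    ```python
    import numpy as np

    # External-standard calibration: fit peak area vs. concentration of the
    # standards run in the analytical sequence, then invert for the sample.
    std_conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])               # ug/L
    std_area = np.array([520.0, 1015.0, 5080.0, 10150.0, 20300.0])

    slope, intercept = np.polyfit(std_conc, std_area, 1)
    sample_area = 3650.0                                           # unknown
    print(f"estimated concentration: {(sample_area - intercept) / slope:.2f} ug/L")
    ```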

  2. Allocation of Transaction Cost to Market Participants Using an Analytical Method in Deregulated Market

    NASA Astrophysics Data System (ADS)

    Jeyasankari, S.; Jeslin Drusila Nesamalar, J.; Charles Raja, S.; Venkatesh, P.

    2014-04-01

    Transmission cost allocation is one of the major challenges in transmission open access faced by the electric power sector. The purpose of this work is to provide an analytical method for allocating transmission transaction costs in a deregulated market. This research provides a usage-based transaction cost allocation method built on the line-flow impact factor (LIF), which relates the power flow in each line to the transacted power of a given transaction. The method captures the impact of transactions on line flows without running an iterative power-flow solution and is thus well suited for real-time applications. The proposed method is compared with the Newton-Raphson (NR) method of cost allocation on a sample six-bus system and a practical Indian utility 69-bus system, considering multilateral transactions.
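
    One plausible reading of the allocation rule — each transaction pays for a line in proportion to the flow it impresses on that line — can be sketched as follows. The LIF matrix, line costs and transaction sizes below are invented for illustration; the paper's own factors would come from its sensitivity analysis.

    ```python
    import numpy as np

    # lif[l, k]: change in flow on line l per MW of transaction k (invented)
    lif = np.array([[0.6, 0.1],
                    [0.3, 0.5],
                    [0.1, 0.4]])
    txn_mw = np.array([100.0, 60.0])              # transacted power (MW)
    line_cost = np.array([500.0, 300.0, 200.0])   # cost of each line ($/h)

    usage = np.abs(lif * txn_mw)                  # MW of each line per txn
    share = usage / usage.sum(axis=1, keepdims=True)
    alloc = share * line_cost[:, None]            # line cost split by usage
    print(alloc.sum(axis=0))                      # total charge per transaction
    ```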

  3. Dominant takeover regimes for genetic algorithms

    NASA Technical Reports Server (NTRS)

    Noever, David; Baskaran, Subbiah

    1995-01-01

    The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed form, the result relaxes the previously held requirement for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
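
    Takeover dynamics of this kind can be reproduced by iterating the expected-proportion recurrence for the fittest individual under proportionate selection; the weak growth of convergence time with population size is then visible directly. A hedged sketch (the recurrence below is the textbook two-type form, not the paper's derivation):

    ```python
    def takeover_time(n, r):
        """Generations until the best individual (fitness ratio r) dominates a
        population of size n, iterating p' = r p / (r p + (1 - p))."""
        p, gens = 1.0 / n, 0
        while p < 1.0 - 1.0 / n:
            p = r * p / (r * p + (1.0 - p))
            gens += 1
        return gens

    for n in (10**2, 10**3, 10**4, 10**5):
        print(n, takeover_time(n, r=2.0))   # grows roughly like log n
    ```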

  4. Separation and simultaneous determination of rutin, puerarin, daidzein, esculin and esculetin in medicinal preparations by non-aqueous capillary electrophoresis.

    PubMed

    Li, Cunhong; Chen, Anjia; Chen, Xiaofeng; Chen, Xingguo; Hu, Zhide

    2005-09-01

    A simple method for the simultaneous determination of five bioactive components (rutin, puerarin, daidzein, esculin and esculetin) in traditional medicinal preparations by non-aqueous capillary electrophoresis with UV detection has been developed for the first time. A running buffer composed of 15% acetonitrile, 2.5% acetic acid and 90 mM sodium cholate in methanol was found to be the most suitable for this separation. The limits of detection for the five analytes were in the range of 0.050-1.216 microg ml(-1). The relative standard deviations (RSDs) of the migration times and the peak areas of the analytes were in the range of 1.3-2.9% and 2.2-2.7% (intraday), and 1.7-1.9% and 2.8-3.6% (interday), respectively. In the tested concentration range, linear relationships (correlation coefficients: 0.9974 for rutin, 0.9976 for puerarin, 0.9981 for daidzein, 0.9972 for esculin and 0.9929 for esculetin) between peak areas and concentrations of the analytes were obtained. The method has been successfully applied to the simultaneous determination of the five bioactive components, with recoveries in the range of 89.4-107.4%.

  5. Direct analysis in real time mass spectrometry and multivariate data analysis: a novel approach to rapid identification of analytical markers for quality control of traditional Chinese medicine preparation.

    PubMed

    Zeng, Shanshan; Wang, Lu; Chen, Teng; Wang, Yuefei; Mo, Huanbiao; Qu, Haibin

    2012-07-06

    The paper presents a novel strategy to identify analytical markers of traditional Chinese medicine preparation (TCMP) rapidly via direct analysis in real time mass spectrometry (DART-MS). A commonly used TCMP, Danshen injection, was employed as a model. The optimal analysis conditions were achieved by measuring the contribution of various experimental parameters to the mass spectra. Salvianolic acids and saccharides were simultaneously determined within a single 1-min DART-MS run. Furthermore, spectra of Danshen injections supplied by five manufacturers were processed with principal component analysis (PCA). Obvious clustering was observed in the PCA score plot, and candidate markers were recognized from the contribution plots of PCA. The suitability of potential markers was then confirmed by contrasting with the results of traditional analysis methods. Using this strategy, fructose, glucose, sucrose, protocatechuic aldehyde and salvianolic acid A were rapidly identified as the markers of Danshen injections. The combination of DART-MS with PCA provides a reliable approach to the identification of analytical markers for quality control of TCMP. Copyright © 2012 Elsevier B.V. All rights reserved.
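
    The PCA step generalizes readily: each injection's averaged spectrum becomes a row of a matrix, the score plot reveals clustering by manufacturer, and the loadings (the "contribution plots") point to candidate marker ions. The sketch below uses random stand-in data in place of DART-MS spectra.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Rows: one averaged spectrum per injection; columns: m/z bins (stand-in).
    rng = np.random.default_rng(7)
    spectra = rng.random((25, 500))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(spectra)          # score plot -> clustering
    loadings = pca.components_                   # contributions -> markers

    top_bins = np.argsort(np.abs(loadings[0]))[-5:]
    print("m/z bins contributing most to PC1:", top_bins)
    ```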

  6. Development and validation of a fast and simple multi-analyte procedure for quantification of 40 drugs relevant to emergency toxicology using GC-MS and one-point calibration.

    PubMed

    Meyer, Golo M J; Weber, Armin A; Maurer, Hans H

    2014-05-01

    Diagnosis and prognosis of poisonings should be confirmed by comprehensive screening and reliable quantification of xenobiotics, for example by gas chromatography-mass spectrometry (GC-MS) or liquid chromatography-mass spectrometry (LC-MS). The turnaround time should be short enough to have an impact on clinical decisions. In emergency toxicology, quantification using full-scan acquisition is preferable because it allows screening and quantification of expected and unexpected drugs in one run. Therefore, a multi-analyte full-scan GC-MS approach was developed and validated with liquid-liquid extraction and one-point calibration for quantification of 40 drugs relevant to emergency toxicology. Validation showed that 36 drugs could be determined quickly, accurately, and reliably in the range of upper therapeutic to toxic concentrations. Daily one-point calibration with calibrators stored for up to four weeks reduced workload and turnaround time to less than 1 h. In summary, the multi-analyte approach with simple liquid-liquid extraction, GC-MS identification, and quantification via fast one-point calibration was successfully applied to proficiency tests and real case samples. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Multivariate curve-resolution analysis of pesticides in water samples from liquid chromatographic-diode array data.

    PubMed

    Maggio, Rubén M; Damiani, Patricia C; Olivieri, Alejandro C

    2011-01-30

    Liquid chromatographic-diode array detection data recorded for aqueous mixtures of 11 pesticides show the combined presence of strongly coeluting peaks, distortions in the time dimension between experimental runs, and the presence of potential interferents not modeled in the calibration phase in certain test samples. Due to the complexity of these phenomena, the data were processed by a second-order multivariate algorithm based on multivariate curve resolution and alternating least-squares, which allows one to successfully model both the spectral and retention time behavior of all sample constituents. This led to the accurate quantitation of all analytes in a set of validation samples: aldicarb sulfoxide, oxamyl, aldicarb sulfone, methomyl, 3-hydroxy-carbofuran, aldicarb, propoxur, carbofuran, carbaryl, 1-naphthol and methiocarb. Limits of detection in the range 0.1-2 μg mL(-1) were obtained. Additionally, the second-order advantage was achieved for several analytes in samples containing several uncalibrated interferences. The limits of detection for all analytes were decreased by solid-phase preconcentration to values comparable to those officially recommended, i.e., on the order of 5 ng mL(-1). Copyright © 2010 Elsevier B.V. All rights reserved.
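
    At its core, MCR-ALS factors the data matrix D (elution time × wavelength) into non-negative concentration profiles C and spectra S with D ≈ C Sᵀ, alternating least-squares updates between the two. A bare-bones sketch on synthetic coeluting data (random initialization, non-negativity as the only constraint):

    ```python
    import numpy as np

    def mcr_als(D, n_comp, n_iter=200, seed=0):
        """Minimal MCR-ALS: alternate least-squares solves for C and S,
        clipping to non-negative values after each update."""
        S = np.random.default_rng(seed).random((D.shape[1], n_comp))
        for _ in range(n_iter):
            C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
            S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
        return C, S

    # Two strongly coeluting Gaussian elution profiles times random spectra
    t = np.linspace(0, 1, 100)[:, None]
    C_true = np.hstack([np.exp(-((t - 0.40) / 0.05) ** 2),
                        np.exp(-((t - 0.50) / 0.05) ** 2)])
    S_true = np.random.default_rng(1).random((80, 2))
    D = C_true @ S_true.T

    C, S = mcr_als(D, n_comp=2)
    print(np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))   # small residual
    ```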

  8. A self-powered biosensing device with an integrated hybrid biofuel cell for intermittent monitoring of analytes.

    PubMed

    Majdecka, Dominika; Draminska, Sylwia; Janusek, Dariusz; Krysinski, Paweł; Bilewicz, Renata

    2018-04-15

    In this work, we propose an integrated self-powered sensing system driven by a hybrid biofuel cell (HBFC) with carbon paper discs coated with multiwalled carbon nanotubes. The sensing system has a biocathode made from laccase or bilirubin oxidase, and the anode is a zinc plate. The system includes a dedicated custom-built electronic control unit for the detection of oxygen and catechol analytes, which are central to medical and environmental applications. Both the HBFC and the sensors operate in a mediatorless direct electron transfer mode. The HBFC was characterized under externally applied resistance, including power-time dependencies under flow-cell conditions, and the sensors' performance was evaluated by cyclic voltammetry and chronoamperometry. The HBFC was integrated with the analytical devices and operated in a pulse mode for long-run monitoring experiments. The HBFC generated sufficient power for wireless data transmission to a local computer. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. An analytical procedure for the determination of aluminum used in antiperspirants on human skin in Franz™ diffusion cell.

    PubMed

    Guillard, Olivier; Fauconneau, Bernard; Favreau, Frédéric; Marrauld, Annie; Pineau, Alain

    2012-04-01

    A local case report of hyperaluminemia (aluminum concentration: 3.88 µmol/L) in a woman using an aluminum-containing antiperspirant for 4 years raises the question of possible transdermal uptake of aluminum salts as a future public health problem. Prior to studying the transdermal uptake of three commercialized cosmetic formulas, an analytical assay of aluminum (Al) in chlorohydrate form (ACH) by Zeeman electrothermal atomic absorption spectrophotometry (ZEAAS) in a clean room was optimized and validated. The analysis was performed with different media on human skin using a Franz(™) diffusion cell. The detection and quantification limits were set at ≤ 3 µg/L. Precision analysis, within-run (n = 12) and between-run (n = 15-68 days), yielded CVs ≤ 6%. The high analytical sensitivity (2-3 µg/L) and low variability should allow an in vitro study of the transdermal uptake of ACH.

  10. Quantitative structure-retention relationships applied to development of liquid chromatography gradient-elution method for the separation of sartans.

    PubMed

    Golubović, Jelena; Protić, Ana; Otašević, Biljana; Zečević, Mira

    2016-04-01

    QSRRs are mathematically derived relationships between the chromatographic parameters determined for a representative series of analytes in given separation systems and the molecular descriptors accounting for the structural differences among the investigated analytes. An artificial neural network (ANN) is a data-analysis technique that sets out to emulate the human brain's way of working. The aim of the present work was to optimize the separation of six angiotensin receptor antagonists, the so-called sartans: losartan, valsartan, irbesartan, telmisartan, candesartan cilexetil and eprosartan, in a gradient-elution HPLC method. For this purpose, an ANN was used as a mathematical tool for establishing a QSRR model based on molecular descriptors of the sartans and varied instrumental conditions. The optimized model can be further used for prediction of an external congener of the sartans and for analysis of the influence of analyte structure, represented through molecular descriptors, on retention behaviour. The molecular descriptors included in the modelling were electrostatic, geometrical and quantum-chemical: Connolly solvent-excluded volume, non-1,4 van der Waals energy, octanol/water distribution coefficient, polarizability, number of proton-donor sites and number of proton-acceptor sites. The varied instrumental conditions were gradient time, buffer pH and buffer molarity. The high prediction ability of the optimized network enabled complete separation of the analytes within a run time of 15.5 min under the following conditions: gradient time of 12.5 min, buffer pH of 3.95 and buffer molarity of 25 mM. The applied methodology showed the potential to predict the retention behaviour of an external analyte with properties within the training space. Connolly solvent-excluded volume, polarizability and number of proton-acceptor sites appeared to be the most influential parameters for the retention behaviour of the sartans. Copyright © 2015 Elsevier B.V. All rights reserved.
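
    A QSRR model of this shape maps nine inputs (the six descriptors plus gradient time, buffer pH and buffer molarity) to retention time through a small feed-forward network. The sketch below is generic and uses random stand-in data; the real design matrix would hold the sartans' descriptor values and the varied instrumental conditions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    X = rng.random((120, 9))      # 6 descriptors + 3 instrumental factors
    y = X @ rng.random(9) + 0.1 * rng.normal(size=120)   # stand-in retention

    scaler = StandardScaler().fit(X)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    ann.fit(scaler.transform(X), y)

    x_new = rng.random((1, 9))    # external analyte under given conditions
    print("predicted retention time:", ann.predict(scaler.transform(x_new))[0])
    ```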

  11. Liquid microjunction surface sampling probe fluid dynamics: Characterization and application of an analyte plug formation operational mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ElNaggar, Mariam S.; Van Berkel, Gary J.

    2011-08-10

    The recently discovered sample plug formation and injection operational mode of a continuous flow, coaxial tube geometry, liquid microjunction surface sampling probe (LMJ-SSP) (J. Am. Soc. Mass Spectrom., 2011) was further characterized and applied for concentration and mixing of analyte extracted from multiple areas on a surface and for nanoliter-scale chemical reactions of sampled material. A transparent LMJ-SSP was constructed and colored analytes were used so that the surface sampling process, plug formation, and the chemical reactions could be visually monitored at the sampling end of the probe before being analyzed by mass spectrometry of the injected sample plug. Injection plug peak widths were consistent for plug hold times as long as the 8 minute maximum attempted (RSD below 1.5%). Furthermore, integrated injection peak signals were not significantly different for the range of hold times investigated. The ability to extract and completely mix individual samples within a fixed volume at the sampling end of the probe was demonstrated and a linear mass spectral response to the number of equivalent analyte spots sampled was observed. Lastly, using the color- and mass-changing chemical reduction of the redox dye 2,6-dichlorophenol-indophenol with ascorbic acid, the ability to sample, concentrate, and efficiently run reactions within the same plug volume within the probe was demonstrated.

  12. Application of Semi-analytical Satellite Theory orbit propagator to orbit determination for space object catalog maintenance

    NASA Astrophysics Data System (ADS)

    Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke

    2016-05-01

    Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.

  13. What Do We Teach and How Do We Teach It?

    ERIC Educational Resources Information Center

    Shapiro, Marilyn

    Considering that some feminist critics have recently been approaching composition theory from a preconceived feminist perspective, the issue of maintaining an analytical bias while conducting research is once more emerging. By imposing an analytical model on a body of data, scholars run the risk of ignoring conclusions or focusing on those which…

  14. Assays for therapeutic drug monitoring of β-lactam antibiotics: A structured review.

    PubMed

    Carlier, Mieke; Stove, Veronique; Wallis, Steven C; De Waele, Jan J; Verstraete, Alain G; Lipman, Jeffrey; Roberts, Jason A

    2015-10-01

    In some patient groups, including critically ill patients, the pharmacokinetics of β-lactam antibiotics may be profoundly disturbed due to pathophysiological changes in distribution and elimination. Therapeutic drug monitoring (TDM) is a strategy that may help to optimise dosing. The aim of this review was to identify and analyse the published literature on the methods used for β-lactam quantification in TDM programmes. Sixteen reports described methods for the simultaneous determination of three or more β-lactam antibiotics in plasma/serum. Measurement of these antibiotics, due to low frequency of usage relative to some other tests, is generally limited to in-house chromatographic methods coupled to ultraviolet or mass spectrometric detection. Although many published methods state they are fit for TDM, they are inconvenient because of intensive sample preparation and/or long run times. Ideally, methods used for routine TDM should have a short turnaround time (fast run-time and fast sample preparation), a low limit of quantification and a sufficiently high upper limit of quantification. The published assays included a median of 6 analytes [interquartile range (IQR) 4-10], with meropenem and piperacillin being the most frequently measured β-lactam antibiotics. The median run time was 8 min (IQR 5.9-21.3 min). There is also a growing number of methods measuring free concentrations. An assay that measures antibiotics without any sample preparation would be the next step towards real-time monitoring; no such method is currently available. Copyright © 2015 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.

  15. Sequential Injection Chromatography with an Ultra-short Monolithic Column for the Low-Pressure Separation of α-Tocopherol and γ-Oryzanol in Vegetable Oils and Nutrition Supplements.

    PubMed

    Thaithet, Sujitra; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai

    2017-01-01

    A low-pressure separation procedure for α-tocopherol and γ-oryzanol was developed based on a sequential injection chromatography (SIC) system coupled with an ultra-short (5 mm) C-18 monolithic column, as a lower-cost and more compact alternative to the HPLC system. A green sample preparation, dilution with a small amount of hexane followed by liquid-liquid extraction with 80% ethanol, was proposed. Very good separation resolution (Rs = 3.26), a satisfactory separation time (10 min) and a total run time including column equilibration (16 min) were achieved. The linear working range was found to be 0.4-40 μg with R2 greater than 0.99. The detection limits of both analytes were 0.28 μg, with repeatability within 5% RSD (n = 7). Quantitative analyses of the two analytes in vegetable oil and nutrition supplement samples using the proposed SIC method agree well with the results from HPLC.

  16. Health care cost disease as a threat to Iranian aging society.

    PubMed

    Basakha, Mehdi; Yavari, Kazem; Sadeghi, Hosein; Naseri, Alireza

    2014-01-01

    Because of the rapid aging rate, the share of health expenditure in gross domestic product rises irreversibly, increasing concern among politicians and the general public. The aim of this study was to examine the accuracy of Baumol's model of unbalanced growth in Iran over the period 1981-2010. This theoretical-analytical study was conducted in 2012 to investigate the various determinants of the ongoing rise in health expenditures. To this end, an Error Correction Model was derived from the long-run cointegrating equation to examine the veracity of Baumol's theory. Estimating the short-run and long-run equations using time series data shows that the rate of increase in health expenditure is aligned with the difference between wage growth and productivity growth in the health sector. Besides, results show that both per capita income and the inflation rate of health care had significant effects on raising the share of the health sector in the domestic economy. Given rapid population aging and the existence of Baumol's cost disease in the Iranian health sector, we predict a much greater rise in health expenditure over the coming decades.
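
    The two-step logic behind an ECM derived from a long-run cointegrating equation can be sketched with the Engle-Granger procedure: fit the levels relationship, then regress first differences on the lagged residual (the error-correction term). Synthetic series stand in for the Iranian data below.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T = 300
    x = np.cumsum(rng.normal(size=T))              # common stochastic trend
    y = 0.8 * x + rng.normal(scale=0.5, size=T)    # cointegrated with x

    # Step 1: long-run cointegrating equation  y = a + b x
    a, b = np.linalg.lstsq(np.column_stack([np.ones(T), x]), y, rcond=None)[0]
    ect = y - (a + b * x)                          # error-correction term

    # Step 2: short-run dynamics  dy = c0 + c1 dx + gamma * ect_{t-1}
    dy, dx = np.diff(y), np.diff(x)
    B = np.column_stack([np.ones(T - 1), dx, ect[:-1]])
    c0, c1, gamma = np.linalg.lstsq(B, dy, rcond=None)[0]
    print(f"long-run slope {b:.2f}; adjustment speed {gamma:.2f} (expected < 0)")
    ```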

  17. Simultaneous dispersive liquid-liquid microextraction derivatisation and gas chromatography mass spectrometry analysis of subcritical water extracts of sweet and sour cherry stems.

    PubMed

    Švarc-Gajić, Jaroslava; Clavijo, Sabrina; Suárez, Ruth; Cvetanović, Aleksandra; Cerdà, Víctor

    2018-03-01

    Cherry stems have been used in traditional medicine, mostly for the treatment of urinary tract infections. Extraction with subcritical water differs substantially from conventional extraction techniques in its selectivity, efficiency and other aspects. The complexity of plant subcritical water extracts is due to the ability of subcritical water to extract different chemical classes of different physico-chemical properties and polarities in a single run. In this paper, dispersive liquid-liquid microextraction (DLLME) with simultaneous derivatisation was optimised for the analysis of complex subcritical water extracts of cherry stems, to allow simple and rapid preparation prior to gas chromatography-mass spectrometry (GC-MS). After defining the optimal extracting and dispersive solvents, the optimised method was used for the identification of compounds belonging to different chemical classes in a single analytical run. The developed sample preparation protocol enabled simultaneous extraction and derivatisation, as well as convenient coupling with GC-MS analysis, reducing the analysis time and number of steps. The applied analytical protocol allowed simple and rapid chemical screening of subcritical water extracts and was used for the comparison of subcritical water extracts of sweet and sour cherry stems. Graphical abstract: DLLME GC-MS analysis of cherry stem extracts obtained by subcritical water.

  18. Automated Deployment of Advanced Controls and Analytics in Buildings

    NASA Astrophysics Data System (ADS)

    Pritoni, Marco

    Buildings use 40% of primary energy in the US. Recent studies show that developing energy analytics and enhancing control strategies can significantly improve their energy performance. However, the deployment of advanced control software applications has been mostly limited to academic studies. Larger-scale implementations are prevented by the significant engineering time and customization required, due to significant differences among buildings. This study demonstrates how physics-inspired data-driven models can be used to develop portable analytics and control applications for buildings. Specifically, I demonstrate application of these models in all phases of the deployment of advanced controls and analytics in buildings: in the first phase, "Site Preparation and Interface with Legacy Systems" I used models to discover or map relationships among building components, automatically gathering metadata (information about data points) necessary to run the applications. During the second phase: "Application Deployment and Commissioning", models automatically learn system parameters, used for advanced controls and analytics. In the third phase: "Continuous Monitoring and Verification" I utilized models to automatically measure the energy performance of a building that has implemented advanced control strategies. In the conclusions, I discuss future challenges and suggest potential strategies for these innovative control systems to be widely deployed in the market. This dissertation provides useful new tools in terms of procedures, algorithms, and models to facilitate the automation of deployment of advanced controls and analytics and accelerate their wide adoption in buildings.

  19. LC-MS-sMRM Method Development and Validation of Different Classes of Pain Panel Drugs and Analysis of Clinical Urine Samples.

    PubMed

    Athar Masood, M; Veenstra, Timothy D

    2017-08-26

    Urine Drug Testing (UDT) is an important analytical/bio-analytical technique that has become an integral and vital part of testing programs for diagnostic purposes. This manuscript presents a tailor-made LC-MS/MS quantitative assay, developed and validated for a custom group of 33 pain panel drugs and their metabolites belonging to different classes (opiates, opioids, benzodiazepines, illicit drugs, amphetamines, etc.) that are prescribed in pain management and depressant therapies. The LC-MS/MS method incorporates two experiments to enhance the sensitivity of the assay and has a run time of about 7 min, with no prior purification of the samples required and a flow rate of 0.7 mL/min. The method also includes the second-stage metabolites for some drugs that belong to different classes but share similar first-stage metabolic pathways, which makes it possible to correctly identify the right drug or to flag a result that might be due to specimen tampering. Some real case examples and peak-picking difficulties are presented for some of the analytes in subject samples. Finally, the method was evaluated with randomly selected de-identified clinical subject samples, and the data from "direct dilute and shoot analysis" and after "glucuronide hydrolysis" were compared. This method is now used routinely to run more than 100 clinical subject samples daily. This article is protected by copyright. All rights reserved.

  20. Development of an electrothermal vaporization ICP-MS method and assessment of its applicability to studies of the homogeneity of reference materials.

    PubMed

    Friese, K C; Grobecker, K H; Wätjen, U

    2001-07-01

    A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 degrees C s(-1). A temperature ramp to 550 degrees C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 degrees C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored up to 20 is possible with not more than 1% additional relative standard deviation of results caused by limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift of the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.

  1. Multivariate analysis of chromatographic retention data as a supplementary means for grouping structurally related compounds.

    PubMed

    Fasoula, S; Zisi, Ch; Sampsonidis, I; Virgiliou, Ch; Theodoridis, G; Gika, H; Nikitas, P; Pappa-Louisi, A

    2015-03-27

    In the present study a series of 45 metabolite standards belonging to four chemically similar metabolite classes (sugars, amino acids, nucleosides and nucleobases, and amines) was subjected to LC analysis on three HILIC columns under 21 different gradient conditions, with the aim of exploring whether the retention properties of these analytes are determined by the chemical group to which they belong. Two multivariate techniques, principal component analysis (PCA) and discriminant analysis (DA), were used for statistical evaluation of the chromatographic data and extraction of similarities between chemically related compounds. The total variance explained by the first two principal components of PCA was found to be about 98%, and both statistical analyses indicated that all analytes are successfully grouped into four clusters of chemical structure based on the retention obtained in four, or at least three, chromatographic runs, which, however, should be performed on two different HILIC columns. Moreover, leave-one-out cross-validation of the above retention data set showed that the chemical group to which an analyte belongs can be correctly predicted 95.6% of the time when the analyte is subjected to LC analysis under the same four or three experimental conditions under which the whole set of analytes was run beforehand. That, in turn, may assist with disambiguation of analyte identification in complex biological extracts. Copyright © 2015 Elsevier B.V. All rights reserved.
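
    Leave-one-out cross-validation of such retention data amounts to holding out one analyte at a time and predicting its chemical class from the rest. A generic sketch with stand-in retention values (four class-shifted clusters in place of the HILIC measurements):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(5)
    classes = np.repeat([0, 1, 2, 3], 12)          # four metabolite classes
    X = rng.normal(size=(48, 4)) + classes[:, None] * 1.5   # 4 runs/analyte

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, classes,
                          cv=LeaveOneOut()).mean()
    print(f"leave-one-out classification accuracy: {acc:.1%}")
    ```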

  2. Simulating the effects of ground-water withdrawals on streamflow in a precipitation-runoff model

    USGS Publications Warehouse

    Zarriello, Philip J.; Barlow, P.M.; Duda, P.B.

    2004-01-01

    Precipitation-runoff models are used to assess the effects of water use and management alternatives on streamflow. Often, ground-water withdrawals are a major water-use component that affects streamflow, but the ability of surface-water models to simulate ground-water withdrawals is limited. As part of a Hydrologic Simulation Program-FORTRAN (HSPF) precipitation-runoff model developed to analyze the effect of ground-water and surface-water withdrawals on streamflow in the Ipswich River in northeastern Massachusetts, an analytical technique (STRMDEPL) was developed for calculating the effects of pumped wells on streamflow. STRMDEPL is a FORTRAN program based on two analytical solutions that solve equations for ground-water flow to a well completed in a semi-infinite, homogeneous, and isotropic aquifer in direct hydraulic connection to a fully penetrating stream. One analytical method calculates unimpeded flow at the stream-aquifer boundary and the other method calculates the resistance to flow caused by semipervious streambed and streambank material. The principle of superposition is used with these analytical equations to calculate time-varying streamflow depletions due to daily pumping. The HSPF model can readily incorporate streamflow depletions caused by a well or surface-water withdrawal, or by multiple wells or surface-water withdrawals, or both, as a combined time-varying outflow demand from affected channel reaches. These demands are stored as a time series in the Watershed Data Management (WDM) file. These time-series data are read into the model as an external source used to specify flow from the first outflow gate in the reach where the withdrawals are located. Although the STRMDEPL program can be run independently of the HSPF model, an extension was developed to run this program within GenScn, a scenario generator and graphical user interface developed for use with the HSPF model. This extension requires that actual pumping rates for each well be stored in a unique WDM dataset identified by an attribute that associates each well with the model reach from which water is withdrawn. Other attributes identify the type and characteristics of the data. The interface allows users to easily add new pumping wells, delete existing pumping wells, or change properties of the simulated aquifer or well. Development of this application enhanced the ability of the HSPF model to simulate complex water-use conditions in the Ipswich River Basin. The STRMDEPL program and the GenScn extension provide a valuable tool for water managers to evaluate the effects of pumped wells on streamflow and to test alternative water-use scenarios. Copyright ASCE 2004.
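
    One widely used analytical solution for the unimpeded stream-aquifer case (a Glover-type formula; whether STRMDEPL uses exactly this form is an assumption here) gives the depletion fraction as erfc(sqrt(d²S / 4Tt)), and daily pumping is handled by superposing step changes, as the abstract describes. A sketch with invented aquifer parameters:

    ```python
    import numpy as np
    from math import erfc, sqrt

    def depletion_fraction(t_days, d, S, T):
        """Fraction of pumping drawn from a fully penetrating stream at time t
        (d: well-stream distance, S: storativity, T: transmissivity)."""
        return erfc(sqrt(d * d * S / (4.0 * T * t_days))) if t_days > 0 else 0.0

    def streamflow_depletion(q, d=150.0, S=0.2, T=1000.0):
        """Superpose daily pumping changes (q, m3/d) into a depletion series."""
        q = np.asarray(q, float)
        dq = np.diff(np.concatenate([[0.0], q]))   # daily pumping changes
        out = np.zeros_like(q)
        for i, step in enumerate(dq):
            if step:
                for t in range(i, len(q)):
                    out[t] += step * depletion_fraction(t - i + 1, d, S, T)
        return out

    q = [500.0] * 7 + [0.0] * 7                    # one week on, one week off
    print(np.round(streamflow_depletion(q), 1))
    ```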

  3. An interview with Murray Jackson by Jan Wiener.

    PubMed

    Jackson, Murray

    2011-04-01

    Murray Jackson was among the early trainees at the Society of Analytical Psychology (SAP) drawn to Jungian ideas during the 1950s when the training was still relatively informal. He was born in Australia where he became a doctor and came to London to study psychiatry with a particular interest in psychosis. He was influenced by Michael Fordham with whom he had an analysis and his four papers, published in the Journal of Analytical Psychology in the early 1960s, contributed significantly to the growing interest in clinical technique, particularly transference, that developed in the Society at that time. Later, he retrained at the British Institute of Psychoanalysis in the Kleinian tradition and was the first consultant at the Maudsley Hospital to run a 10-bed unit for severely mentally ill patients applying psychoanalytic principles. In April 2010, Jan Wiener interviewed Murray Jackson in France, where he now lives in retirement, about his interest and subsequent disappointment in Jungian ideas as well as his involvement with the Society of Analytical Psychology at a particular point in its history. After a brief introduction, the interview is reproduced in full. © 2011, The Society of Analytical Psychology.

  4. Clinical laboratory urine analysis: comparison of the UriSed automated microscopic analyzer and the manual microscopy.

    PubMed

    Ma, Junlong; Wang, Chengbin; Yue, Jiaxin; Li, Mianyang; Zhang, Hongrui; Ma, Xiaojing; Li, Xincui; Xue, Dandan; Qing, Xiaoyan; Wang, Shengjiang; Xiang, Daijun; Cong, Yulong

    2013-01-01

    Several automated urine sediment analyzers have been introduced to clinical laboratories. Automated microscopic pattern recognition is a new technique for urine particle analysis. We evaluated the analytical and diagnostic performance of the UriSed automated microscopic analyzer and compared it with manual microscopy for urine sediment analysis. Precision, linearity, carry-over, and method comparison were carried out. A total of 600 urine samples sent for urinalysis were assessed using the UriSed automated microscopic analyzer and manual microscopy. Within-run and between-run precision of the UriSed for red blood cells (RBC) and white blood cells (WBC) were acceptable at all levels (CV < 20%). Within-run and between-run imprecision of the UriSed for casts, squamous epithelial cells (EPI), and bacteria (BAC) was good at the middle and high levels (CV < 20%). The linearity analysis revealed substantial agreement between the measured value and the theoretical value of the UriSed for RBC, WBC, casts, EPI, and BAC (r > 0.95). There was no carry-over. Sensitivities and specificities for RBC, WBC, and squamous epithelial cells were more than 80% in this study. There is substantial agreement between the UriSed automated microscopic analyzer and the manual microscopy methods. The UriSed provides a rapid turnaround time.
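
    For readers unfamiliar with how within-run and between-run imprecision are computed in evaluations like this one, the sketch below shows one common approach on made-up replicate counts; the acceptance threshold mirrors the CV < 20% criterion quoted above.

      import statistics

      def cv_percent(values):
          return 100.0 * statistics.stdev(values) / statistics.mean(values)

      # e.g. RBC counts: 5 runs x 4 replicates per run (synthetic numbers)
      runs = [[52, 55, 50, 53], [51, 49, 54, 52], [48, 50, 53, 51],
              [55, 52, 50, 54], [49, 53, 51, 50]]

      within_run = statistics.mean(cv_percent(r) for r in runs)
      between_run = cv_percent([statistics.mean(r) for r in runs])
      print(f"within-run CV ~ {within_run:.1f}% (acceptance: < 20%)")
      print(f"between-run CV ~ {between_run:.1f}% (acceptance: < 20%)")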

  5. A green method for the quantification of plastics-derived endocrine disruptors in beverages by chemometrics-assisted liquid chromatography with simultaneous diode array and fluorescent detection.

    PubMed

    Vidal, Rocío B Pellegrino; Ibañez, Gabriela A; Escandar, Graciela M

    2016-10-01

    The aim of this study was to develop a novel analytical method for the determination of bisphenol A, nonylphenol, octylphenol, diethyl phthalate, dibutyl phthalate and diethylhexyl phthalate, compounds known for their endocrine-disruptor properties, based on liquid chromatography with simultaneous diode array and fluorescent detection. Following the principles of green analytical chemistry, solvent consumption and chromatographic run time were minimized. To deal with the resulting incomplete resolution in the chromatograms, a second-order calibration was proposed. Second-order data (elution time-absorbance wavelength and elution time-fluorescence emission wavelength matrices) were obtained and processed by multivariate curve resolution-alternating least-squares (MCR-ALS). Applying MCR-ALS allowed quantification of the analytes even in the presence of partially overlapped chromatographic and spectral bands among these compounds and the potential interferents. Results obtained from the analysis of beer, wine, soda, juice, water and distilled beverage samples were compared with gas chromatography-mass spectrometry (GC-MS). Limits of detection (LODs) in the range 0.04-0.38 ng mL(-1) were estimated in real samples after a very simple solid-phase extraction. All the samples were found to contain at least three of the endocrine disruptors, in concentrations as high as 334 ng mL(-1). Copyright © 2016 Elsevier B.V. All rights reserved.
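
    The core of MCR-ALS is an alternating least-squares fit of the bilinear model D = C S^T. The sketch below shows a bare-bones version on synthetic overlapped peaks, with random initialization and only a nonnegativity constraint; real applications add constraints such as unimodality and augment matrices across calibration samples.

      import numpy as np

      def mcr_als(D, n_components, n_iter=200, seed=0):
          # Alternate nonnegative least-squares updates of C (elution profiles)
          # and S (spectral profiles) until D ~ C @ S.T
          rng = np.random.default_rng(seed)
          S = rng.random((D.shape[1], n_components))
          for _ in range(n_iter):
              C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
              S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
          return C, S

      # Synthetic two-component data with overlapped elution and spectral bands
      t = np.linspace(0.0, 10.0, 120)[:, None]
      w = np.linspace(200.0, 400.0, 80)[None, :]
      C_true = np.hstack([np.exp(-(t - 4.0) ** 2), np.exp(-(t - 5.0) ** 2)])
      S_true = np.vstack([np.exp(-((w - 250.0) / 30.0) ** 2),
                          np.exp(-((w - 300.0) / 30.0) ** 2)]).T
      D = C_true @ S_true.T + 0.01 * np.random.default_rng(1).random((120, 80))
      C, S = mcr_als(D, 2)
      print("relative reconstruction error:",
            np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))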

  6. Characterization of the evolution of the volume fraction of precipitates in aged AlMgSiCu alloys using DSC technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaeili, Shahrzad; Lloyd, David J.

    2005-11-15

    Differential scanning calorimetry is used to quantify the evolution of the volume fraction of precipitates during age hardening in AlMgSiCu alloys. The calorimetry tests are run on alloy samples after aging for various times at 180 deg. C, and the change in the collective heat effects from the major precipitation and dissolution processes in each run is used to determine the precipitation state of the samples. The method is implemented on alloys with various thermal histories prior to artificial aging, including commercial pre-aging histories. The estimated values for the relative volume fraction of precipitates are compared with the results from a newly developed analytical method using isothermal calorimetry and a related quantitative transmission electron microscopy work. Excellent agreement is obtained between the results from the various methods.

  7. Angular Momentum Transport in Convectively Unstable Shear Flows

    NASA Astrophysics Data System (ADS)

    Käpylä, Petri J.; Brandenburg, Axel; Korpi, Maarit J.; Snellman, Jan E.; Narayan, Ramesh

    2010-08-01

    Angular momentum transport due to hydrodynamic turbulent convection is studied using local three-dimensional numerical simulations employing the shearing box approximation. We determine the turbulent viscosity from non-rotating runs over a range of values of the shear parameter and use a simple analytical model in order to extract the non-diffusive contribution (Λ-effect) to the stress in runs where rotation is included. Our results suggest that the turbulent viscosity is on the order of the mixing length estimate and weakly affected by rotation. The Λ-effect is non-zero and a factor of 2-4 smaller than the turbulent viscosity in the slow rotation regime. We demonstrate that for Keplerian shear, the angular momentum transport can change sign and be outward when the rotation period is greater than the turnover time, i.e., when the Coriolis number is below unity. This result seems to be relatively independent of the value of the Rayleigh number.

  8. NASA CF6 jet engine diagnostics program: Long-term CF6-6D low-pressure turbine deterioration

    NASA Technical Reports Server (NTRS)

    Smith, J. J.

    1979-01-01

    Back-to-back performance tests were run on seven airline low pressure turbine (LPT) modules and four new CF6-6D modules. Back-to-back test cell runs, in which an airline LPT module was directly compared to a new production module, were included. The resulting change, measured in fuel burn, equaled the level of LPT module deterioration. Three of the LPT modules were analytically inspected followed by a back-to-back test cell run to evaluate current refurbishment techniques.

  9. Auditing of chromatographic data.

    PubMed

    Mabie, J T

    1998-01-01

    During a data audit, it is important to ensure that there is clear documentation and an audit trail. The Quality Assurance Unit should review all areas, including the laboratory, during the conduct of the sample analyses. The analytical methodology that is developed should be documented prior to sample analyses. This is an important document for the auditor, as it is the instrumental piece used by the laboratory personnel to maintain integrity throughout the process. It is expected that this document will give insight into the sample analysis, run controls, run sequencing, instrument parameters, and acceptance criteria for the samples. The sample analysis and all supporting documentation should be audited in conjunction with this written analytical method and any supporting Standard Operating Procedures to ensure the quality and integrity of the data.

  10. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
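
    One classic incomplete heuristic studied in this context is leaf removal, which covers the neighbor of each degree-1 vertex until only a "core" remains. The sketch below runs it on an Erdős-Rényi random graph with mean degree c = 2; the code and parameter values are illustrative, not taken from the review.

      import random

      def leaf_removal_cover(adj):
          # Greedy leaf removal: the neighbor of a degree-1 vertex is always a
          # safe cover choice. Returns (cover, residual core subgraph).
          adj = {v: set(ns) for v, ns in adj.items()}
          cover = set()
          leaves = [v for v, ns in adj.items() if len(ns) == 1]
          while leaves:
              v = leaves.pop()
              if v not in adj or len(adj[v]) != 1:
                  continue                      # stale entry, degree changed
              (u,) = adj[v]
              cover.add(u)
              for w in list(adj[u]):            # delete u and its edges
                  adj[w].discard(u)
                  if len(adj[w]) == 1:
                      leaves.append(w)
              del adj[u]
              del adj[v]
          core = {v: ns for v, ns in adj.items() if ns}
          return cover, core

      # Random graph G(n, c/n) with mean degree c = 2 (< e, so the core is small)
      n, c = 2000, 2.0
      rng = random.Random(0)
      adj = {v: set() for v in range(n)}
      for i in range(n):
          for j in range(i + 1, n):
              if rng.random() < c / n:
                  adj[i].add(j); adj[j].add(i)
      cover, core = leaf_removal_cover(adj)
      print(len(cover), "cover vertices;", len(core), "vertices left in the core")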

  11. Closed cycle electric discharge laser design investigation

    NASA Technical Reports Server (NTRS)

    Baily, P. K.; Smith, R. C.

    1978-01-01

    Closed cycle CO2 and CO electric discharge lasers were studied. An analytical investigation assessed scale-up parameters and design features for CO2, closed cycle, continuous wave, unstable resonator, electric discharge lasing systems operating in space and airborne environments. A space based CO system was also examined. The program objectives were the conceptual designs of six CO2 systems and one CO system. Three airborne CO2 designs, with one, five, and ten megawatt outputs, were produced. These designs were based upon five-minute run times. Three space based CO2 designs, with the same output levels, were also produced, but based upon one-year run times. In addition, a conceptual design for a one megawatt space based CO laser system was also produced. These designs include the flow loop, compressor, and heat exchanger, as well as the laser cavity itself. The designs resulted in a laser loop weight for the space based five megawatt system that is within the space shuttle capacity. For the one megawatt systems, the estimated weight of the entire system, including laser loop, solar power generator, and heat radiator, is less than the shuttle capacity.

  12. Infinite horizon optimal impulsive control with applications to Internet congestion control

    NASA Astrophysics Data System (ADS)

    Avrachenkov, Konstantin; Habachi, Oussama; Piunovskiy, Alexey; Zhang, Yi

    2015-04-01

    We investigate infinite-horizon deterministic optimal control problems with both gradual and impulsive controls, where finitely many impulses are allowed simultaneously. Both discounted and long-run time-average criteria are considered. We establish general and at the same time natural conditions under which the dynamic programming approach results in an optimal feedback policy. The established theoretical results are applied to Internet congestion control: by analytically solving the nontrivial underlying optimal control problems, we obtain a simple threshold-based active queue management scheme which takes into account the main parameters of the transmission control protocols and improves the fairness among the connections in a given network.
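
    A toy rendering of such a threshold-based scheme, with invented dynamics and parameters: the queue evolves under fluid arrivals, and the impulsive control halves the source window whenever the queue crosses the threshold (a TCP-like multiplicative decrease).

      def simulate_threshold_aqm(threshold=40.0, capacity=1.0, steps=5000):
          queue, window, signals = 0.0, 8.0, 0
          for _ in range(steps):
              queue = max(queue + window * 0.1 - capacity, 0.0)  # arrivals - service
              if queue > threshold:
                  window /= 2.0          # impulsive control: signal the source
                  signals += 1
              else:
                  window += 0.1          # additive increase between impulses
          return queue, window, signals

      print(simulate_threshold_aqm())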

  13. A continuous analog of run length distributions reflecting accumulated fractionation events.

    PubMed

    Yu, Zhe; Sankoff, David

    2016-11-11

    We propose a new, continuous model of the fractionation process (duplicate gene deletion after polyploidization) on the real line. The aim is to infer how much DNA is deleted at a time, based on segment lengths for alternating deleted (invisible) and undeleted (visible) regions. After deriving a number of analytical results for "one-sided" fractionation, we undertake a series of simulations that help us identify the distribution of segment lengths as a gamma with shape and rate parameters evolving over time. This leads to an inference procedure based on observed length distributions for visible and invisible segments. We suggest extensions of this mathematical and simulation work to biologically realistic discrete models, including two-sided fractionation.
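
    The inference idea can be mimicked with a toy simulation: accumulate deletion events on a discrete line, then fit a gamma to the visible run lengths by the method of moments. All parameter values below are invented and the discrete setup only loosely stands in for the paper's continuous model.

      import random

      def simulate(n=200000, events=60000, mean_del=3.0, seed=0):
          # Each event deletes a roughly exponential-length run at a random spot
          rng = random.Random(seed)
          alive = [True] * n
          for _ in range(events):
              i = rng.randrange(n)
              length = 1 + int(rng.expovariate(1.0 / mean_del))
              for j in range(i, min(i + length, n)):
                  alive[j] = False
          return alive

      def run_lengths(alive, visible=True):
          out, run = [], 0
          for a in alive:
              if a == visible:
                  run += 1
              elif run:
                  out.append(run); run = 0
          if run:
              out.append(run)
          return out

      vis = run_lengths(simulate(), visible=True)
      mean = sum(vis) / len(vis)
      var = sum((x - mean) ** 2 for x in vis) / (len(vis) - 1)
      shape, rate = mean ** 2 / var, mean / var   # gamma method-of-moments fit
      print(f"visible segments: {len(vis)}, gamma shape ~ {shape:.2f}, rate ~ {rate:.2f}")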

  14. Simultaneous determination of guanidinoacetate, creatine and creatinine in urine and plasma by un-derivatized liquid chromatography-tandem mass spectrometry.

    PubMed

    Carling, R S; Hogg, S L; Wood, T C; Calvin, J

    2008-11-01

    Creatine plays an important role in the storage and transmission of phosphate-bound energy. The cerebral creatine deficiency syndromes (CCDS) comprise three inherited defects in creatine biosynthesis and transport. They are characterized by mental retardation, speech and language delay and epilepsy. All three disorders cause low-creatine signal on brain magnetic resonance spectroscopy (MRS); however, MRS may not be readily available and even when it is, biochemical tests are required to determine the underlying disorder. Analysis was performed by liquid chromatography-tandem mass spectrometry in positive ionization mode. Samples were analysed underivatized using a rapid 'dilute and shoot' approach. Chromatographic separation of the three compounds was achieved. Stable isotope internal standards were used for quantification. Creatine, creatinine and guanidinoacetate were measured with a 2.5-minute run time. For guanidinoacetate, the standard curve was linear to at least 5000 µmol/L and for creatine and creatinine it was linear to at least 25 mmol/L. The lower limit of quantitation was 0.4 µmol/L for creatine and guanidinoacetate and 0.8 µmol/L for creatinine. Recoveries ranged from 86% to 106% for the three analytes. Intra- and inter-assay variation for each analyte was <10% in both urine and plasma. A tandem mass spectrometric method has been developed and validated for the underivatized determination of guanidinoacetate, creatine and creatinine in human urine and plasma. Minimal sample preparation coupled with a rapid run time makes the method applicable to the routine screening of patients with suspected CCDS.

  15. Gas chromatography and ultra high performance liquid chromatography tandem mass spectrometry methods for the determination of selected endocrine disrupting chemicals in human breast milk after stir-bar sorptive extraction.

    PubMed

    Rodríguez-Gómez, R; Zafra-Gómez, A; Camino-Sánchez, F J; Ballesteros, O; Navalón, A

    2014-07-04

    In the present work, two specific, accurate and sensitive methods for the determination of endocrine disrupting chemicals (EDCs) in human breast milk are developed and validated. Bisphenol A and its main chlorinated derivatives, five benzophenone-UV filters and four parabens were selected as target analytes. The method involves a stir-bar sorptive extraction (SBSE) procedure followed by solvent desorption prior to GC-MS/MS or UHPLC-MS/MS analysis. A derivatization step is also necessary when GC analysis is performed. The GC column used was a capillary HP-5MS with a run time of 26 min. For UHPLC analysis, the stationary phase was a non-polar Acquity UPLC(®) BEH C18 column and the run time was 10 min. In both cases, the analytes were detected and quantified using a triple quadrupole mass spectrometer (QqQ). Quality parameters such as linearity, accuracy (trueness and precision), sensitivity and selectivity were examined and yielded good results. The limits of quantification (LOQs) ranged from 0.3 to 5.0 ng mL(-1) for GC and from 0.2 to 1.0 ng mL(-1) for LC. The relative standard deviation (RSD) was lower than 15% and the recoveries ranged from 92 to 114% in all cases, with the LC results being slightly less favorable. The methods were satisfactorily applied for the determination of target compounds in human milk samples from 10 randomly selected women. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Data Driven Smart Proxy for CFD Application of Big Data Analytics & Machine Learning in Computational Fluid Dynamics, Report Two: Model Building at the Cell Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansari, A.; Mohaghegh, S.; Shahnam, M.

    To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting a large number of multiphase CFD simulations, which is typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. A figure in the report demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
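
    The sketch below captures the smart-proxy workflow in miniature: fit a cheap regression to a few "runs" of a stand-in pressure model, then validate against a blind run at an unseen inlet velocity. The stand-in function, features, and values are all invented; a real proxy would train AI/DM pattern-recognition models on MFiX cell-level outputs.

      import numpy as np

      def fake_cfd_pressure(v_inlet, height):
          # Stand-in for an expensive CFD solve: bed pressure vs height
          return (1.0 - height) * (2000.0 + 300.0 * v_inlet) + 50.0 * np.sin(3.0 * height)

      heights = np.linspace(0.0, 1.0, 50)
      train_v = [0.5, 1.0, 2.0, 3.0]                 # training runs
      X = np.array([[v, h, v * h, h * h, 1.0] for v in train_v for h in heights])
      y = np.concatenate([fake_cfd_pressure(v, heights) for v in train_v])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # the "proxy" = linear model

      # Blind validation at an inlet velocity never seen in training
      v_blind = 1.5
      Xb = np.array([[v_blind, h, v_blind * h, h * h, 1.0] for h in heights])
      err = np.abs(Xb @ coef - fake_cfd_pressure(v_blind, heights))
      print(f"max blind-run error: {err.max():.1f} (proxy evaluates in milliseconds)")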

  17. Working with Real Data: Getting Analytic Element Groundwater Model Results to Honor Field Data

    NASA Astrophysics Data System (ADS)

    Congdon, R. D.

    2014-12-01

    Models of groundwater flow often work best when very little field data exist. In such cases, some knowledge of the depth to the water table, annual precipitation totals, and basic geological makeup is sufficient to produce a reasonable-looking and potentially useful model. However, in this case, where a good deal of information is available regarding depth to the bottom of a dune field aquifer, attempting to incorporate the data set into the model has variously resulted in convergence, failure to achieve target water level criteria, or complete failure to converge. The first model did not take the data set into consideration, but used general information that the aquifer was thinner in the north and thicker in the south. This model would run and produce apparently useful results. The first attempt at satisfying the data set, in this case 51 wells showing the bottom elevation of a Pacific coast sand dune aquifer, was to use the isopach interpretation of Robinson (OFR 73-241). Using inhomogeneities (areas of equal characteristics) delineated by Robinson's isopach diagram did not enable an adequate fit to the water table lakes, and caused convergence problems when adding pumping wells. The second attempt was to use a Thiessen polygon approach, creating an aquifer thickness zone for each data point. The results for the non-pumping scenario were better, but run times were considerably greater. Also, there were frequent runs with non-convergence, especially when water supply wells were added. Non-convergence may be the result of the lake line-sinks crossing the polygon boundaries or the proximity of pumping wells to inhomogeneity boundaries. The third approach was to merge adjacent polygons of similar depths, in this case within 5% of each other. The results and run times were better, but matching lake levels was not satisfactory. The fourth approach was to reduce the number of inhomogeneities to four, and to average the depth data over the inhomogeneity. The thicknesses were varied within 5% of the average until the lake levels were closely matched. This last methodology proved satisfactory and stable. The data were honored and the solver worked relatively quickly, thus preserving the simplicity and speed of the Analytic Element method, and various pumping scenarios were stable.
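
    The Thiessen-polygon step of the second attempt amounts to a nearest-well assignment, and the later merging step groups similar thicknesses; the sketch below illustrates both with invented well coordinates, and it ignores the spatial-adjacency test a real merge would apply.

      import math, random

      rng = random.Random(42)
      # 51 data wells: (x, y, aquifer thickness), all values invented
      wells = [(rng.random() * 10.0, rng.random() * 10.0, rng.uniform(5.0, 30.0))
               for _ in range(51)]

      def nearest_thickness(x, y):
          # Thiessen assignment: take the thickness of the nearest data well
          return min(wells, key=lambda w: math.hypot(w[0] - x, w[1] - y))[2]

      # Merge wells into zones whose thicknesses agree within 5%
      zones = []
      for _, _, thick in sorted(wells, key=lambda w: w[2]):
          if zones and thick <= zones[-1][-1] * 1.05:
              zones[-1].append(thick)
          else:
              zones.append([thick])

      print(len(wells), "wells ->", len(zones), "merged thickness zones")
      print("thickness at (5, 5):", round(nearest_thickness(5.0, 5.0), 1))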

  18. Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.

    2016-12-01

    The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS), which combines scalable storage with the ability to perform in-situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of data stored include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes. In addition to the POSIX file system, the NCCS has deployed file system connectors to enable emerging analytics built on top of the Hadoop File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements, from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
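
    As a reminder of the programming model behind benchmarks like Wordcount, here is the map/shuffle/reduce pattern in plain Python; a production job would of course run on Spark or Hadoop over the DASS-hosted files rather than an in-memory list.

      from collections import defaultdict
      from itertools import chain

      docs = ["merra2 reanalysis data", "landsat data", "reanalysis data analytics"]

      # map: emit (word, 1) pairs for every word in every document
      mapped = chain.from_iterable(((w, 1) for w in d.split()) for d in docs)

      # shuffle: group emitted pairs by key
      groups = defaultdict(list)
      for word, one in mapped:
          groups[word].append(one)

      # reduce: sum the counts per key
      counts = {word: sum(ones) for word, ones in groups.items()}
      print(sorted(counts.items(), key=lambda kv: -kv[1]))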

  19. Direct analysis of six antibiotics in wastewater samples using rapid high-performance liquid chromatography coupled with diode array detector: a chemometric study towards green analytical chemistry.

    PubMed

    Vosough, Maryam; Rashvand, Masoumeh; Esfahani, Hadi M; Kargosha, Kazem; Salemi, Amir

    2015-04-01

    In this work, a rapid HPLC-DAD method has been developed for the analysis of six antibiotics (amoxicillin, metronidazole, sulfamethoxazole, ofloxacine, sulfadiazine and sulfamerazine) in sewage treatment plant influent and effluent samples. Decreasing the chromatographic run time to less than 4 min, as well as lowering the cost per analysis, was achieved through direct injection of the samples into the HPLC system followed by chemometric analysis. The resulting incomplete separation of the analytes from each other and/or from the matrix ingredients was resolved a posteriori. The performance of MCR/ALS and U-PLS/RBL, as second-order algorithms, was studied, and comparable results were obtained from the application of these modeling methods. It was demonstrated that the proposed methods could be used promisingly as green analytical strategies for the detection and quantification of the targeted pollutants in wastewater samples while avoiding more complicated, high-cost instrumentation. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. LC-MS based analysis of endogenous steroid hormones in human hair.

    PubMed

    Gao, Wei; Kirschbaum, Clemens; Grass, Juliane; Stalder, Tobias

    2016-09-01

    The quantification of endogenous steroid hormone concentrations in hair is increasingly used as a method for obtaining retrospective information on long-term integrated hormone exposure. Several different analytical procedures have been employed for hair steroid analysis, with liquid chromatography-mass spectrometry (LC-MS) being recognized as a particularly powerful analytical tool. Several methodological aspects affect the performance of LC-MS systems for hair steroid analysis, including sample preparation and pretreatment, steroid extraction, post-incubation purification, LC methodology, ionization techniques and MS specifications. Here, we critically review the differential value of such protocol variants for hair steroid hormone analysis, focusing on both analytical quality and practical feasibility issues. Our results show that, when methodological challenges are adequately addressed, LC-MS protocols not only yield excellent sensitivity and specificity but also allow relatively simple sample processing and short run times. This makes LC-MS based hair steroid protocols particularly suitable as a high-quality option for routine application in research contexts requiring the processing of larger numbers of samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Novel HPLC-UV Method for Simultaneous Determination of Fat-soluble Vitamins and Coenzyme Q10 in Medicines and Supplements.

    PubMed

    Temova-Rakuša, Žane; Srečnik, Eva; Roškar, Robert

    2017-09-01

    A precise, accurate and rapid HPLC-UV method for simultaneous determination of fat-soluble vitamins (vitamin D3, E-acetate, K1, β-carotene, A-palmitate) and coenzyme Q10 was developed and validated according to ICH guidelines. Optimal chromatographic separation of the analytes in minimal analysis time (8 min) was achieved on a Luna C18 150 × 4.6 mm column using a mixture of acetonitrile, tetrahydrofuran and water (50:45:5, v/v/v). The described reversed phase HPLC method is the first published for quantification of these five fat-soluble vitamins and coenzyme Q10 within a single chromatographic run. The method was further applied for quantification of the analytes in selected liquid and solid dosage forms, registered as nutritional supplements and prescription medicines, which confirmed its suitability for routine analysis.

  2. Simultaneous determination of bromhexine hydrochloride and methyl and propyl p-hydroxybenzoate and determination of dextromethorphan hydrobromide in cough-cold syrup by high-performance liquid chromatography.

    PubMed

    Rauha, J P; Salomies, H; Aalto, M

    1996-11-01

    Liquid chromatographic methods were developed for the determination of bromhexine hydrochloride, methyl p-hydroxybenzoate and propyl p-hydroxybenzoate (method A) and dextromethorphan hydrobromide (method B) in cough-cold syrup formulations. Reversed-phase analytical columns (150 mm x 3.9 mm i.d.) were used with (A) C18 and (B) phenyl as stationary phases and mixtures of (A) acetonitrile and aqueous 15 mM triethylamine solution (43:57) and (B) methanol and aqueous 3% ammonium formate buffer solution (53:47) as mobile phases at a flow rate of 1.0 ml min-1. Both aqueous components were adjusted to pH 3.9. UV detection of analytes was at (A) 245 nm and (B) 278 nm. In both methods, the time required for an HPLC run giving good separations and recoveries was less than 8 min.

  3. Single-step transesterification with simultaneous concentration and stable isotope analysis of fatty acid methyl esters by gas chromatography-combustion-isotope ratio mass spectrometry.

    PubMed

    Panetta, Robert J; Jahren, A Hope

    2011-05-30

    Gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is increasingly applied to food and metabolic studies for stable isotope analysis (δ(13)C), with the quantification of analyte concentration often obtained via a second alternative method. We describe a rapid direct transesterification of triacylglycerides (TAGs) for fatty acid methyl ester (FAME) analysis by GC-C-IRMS demonstrating robust simultaneous quantification of amount of analyte (mean r(2) = 0.99, accuracy ±2% for 37 FAMEs) and δ(13)C (±0.13‰) in a single analytical run. The maximum FAME yield and optimal δ(13)C values are obtained by derivatizing with 10% (v/v) acetyl chloride in methanol for 1 h, while lower levels of acetyl chloride and shorter reaction times skewed the δ(13)C values by as much as 0.80‰. A Bland-Altman evaluation of the GC-C-IRMS measurements resulted in excellent agreement for pure oils (±0.08‰) and oils extracted from French fries (±0.49‰), demonstrating reliable simultaneous quantification of FAME concentration and δ(13)C values. Thus, we conclude that for studies requiring both the quantification of analyte and δ(13)C data, such as authentication or metabolic flux studies, GC-C-IRMS can be used as the sole analytical method. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Fame and obsolescence: Disentangling growth and aging dynamics of patent citations.

    PubMed

    Higham, K W; Governale, M; Jaffe, A B; Zülicke, U

    2017-04-01

    We present an analysis of citations accrued over time by patents granted by the United States Patent and Trademark Office in 1998. In contrast to previous studies, a disaggregation by technology category is performed, and exogenously caused citation-number growth is controlled for. Our approach reveals an intrinsic citation rate that clearly separates into an aging function (in the long run, exponentially time-dependent) and a completely time-independent preferential-attachment-type growth kernel. For the general case of such a separable citation rate, we obtain the time-dependent citation distribution analytically in a form that is valid for any functional form of its aging and growth parts. Good agreement between theory and long-time characteristics of patent-citation data establishes our work as a useful framework for addressing still open questions about knowledge-propagation dynamics, such as the observed excess of citations at short times.
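
    A separable rate of this kind is easy to simulate. The sketch below draws yearly citation counts from a rate equal to a preferential-attachment growth kernel times an exponential aging function; all parameter values are invented for illustration.

      import math, random

      rng = random.Random(0)
      k0, tau, scale, n_patents, n_years = 1.0, 8.0, 0.25, 5000, 20
      counts = [0] * n_patents

      def poisson(lam):
          # Knuth's algorithm; adequate for the small rates used here
          L, k, p = math.exp(-lam), 0, 1.0
          while True:
              p *= rng.random()
              if p <= L:
                  return k
              k += 1

      for year in range(n_years):
          aging = math.exp(-year / tau)                  # time-only factor
          for i in range(n_patents):
              lam = scale * (counts[i] + k0) * aging     # growth-only factor
              counts[i] += poisson(lam)

      counts.sort()
      print("median citations:", counts[n_patents // 2], " max:", counts[-1])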

  6. Consumer Search, Rationing Rules, and the Consequence for Competition

    NASA Astrophysics Data System (ADS)

    Ruebeck, Christopher S.

    Firms' conjectures about demand are consequential in oligopoly games. Through agent-based modeling of consumers' search for products, we can study the rationing of demand between capacity-constrained firms offering homogeneous products and explore the robustness of analytically solvable models' results. After algorithmically formalizing short-run search behavior rather than assuming a long-run average, this study predicts stronger competition in a two-stage capacity-price game.
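
    A minimal agent-based sketch of the rationing mechanism: consumers search the two firms in random order and buy from the first acceptable firm with stock remaining, so the lower-priced but capacity-constrained firm sells out and its rival serves the residual demand. All numbers are invented.

      import random

      def demand_split(prices, capacities, n_consumers=1000, reservation=10.0, seed=1):
          rng = random.Random(seed)
          stock = list(capacities)
          sales = [0, 0]
          for _ in range(n_consumers):
              order = [0, 1] if rng.random() < 0.5 else [1, 0]  # search order
              for firm in order:
                  if prices[firm] <= reservation and stock[firm] > 0:
                      stock[firm] -= 1
                      sales[firm] += 1
                      break
          return sales

      # Firm 0 undercuts but is capacity constrained; firm 1 serves the residual
      print(demand_split(prices=[6.0, 8.0], capacities=[300, 800]))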

  7. The effect of gas dynamics on semi-analytic modelling of cluster galaxies

    NASA Astrophysics Data System (ADS)

    Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.

    2008-12-01

    We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of 'orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.

  8. Study of a two-dimension transient heat propagation in cylindrical coordinates by means of two finite difference methods

    NASA Astrophysics Data System (ADS)

    Dumencu, A.; Horbaniuc, B.; Dumitraşcu, G.

    2016-08-01

    The analytical treatment of unsteady conduction heat transfer under actual conditions presents a very difficult (if not insurmountable) problem because of the difficulty of finding analytical solutions for the conduction heat transfer equation. Various techniques have been developed to overcome these difficulties, among them the alternating-directions method and the decomposition method. Both are particularly suited to two-dimensional heat propagation. The paper deals with both techniques in order to verify whether the results provided are in good accordance. The studied case consists of a long hollow cylinder, and considers that the time-dependent temperature field varies both in the radial and the axial directions. The implicit technique is used in both methods and involves the simultaneous solving of a set of equations for all of the nodes for each time step, successively for each of the two directions. Gauss elimination is used to obtain the solution of the set, representing the nodal temperatures. After applying the two techniques the results show very good agreement, and since the decomposition method is easier to use in terms of computer code and running time, it seems preferable.
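
    Each implicit sweep in schemes of this kind reduces to solving a tridiagonal system per grid line, for which the Thomas algorithm is the standard special case of the Gauss elimination mentioned above. The sketch below runs one backward-Euler step of u_t = alpha u_xx with invented parameters and boundary temperatures.

      def thomas(a, b, c, d):
          # Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs
          n = len(b)
          cp, dp = c[:], d[:]
          cp[0] /= b[0]; dp[0] /= b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = dp[:]
          for i in range(n - 2, -1, -1):
              x[i] -= cp[i] * x[i + 1]
          return x

      # One implicit time step on n interior nodes, fixed boundary temperatures
      n, alpha, dt, dx = 20, 1.0e-5, 1.0, 0.01
      r = alpha * dt / dx ** 2
      u = [100.0] + [20.0] * n + [100.0]            # initial nodal temperatures
      a = [-r] * n; b = [1.0 + 2.0 * r] * n; c = [-r] * n
      d = [u[i + 1] for i in range(n)]
      d[0] += r * u[0]; d[-1] += r * u[-1]          # boundary contributions
      print([round(v, 2) for v in thomas(a, b, c, d)[:5]])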

  9. The impact of repeat-testing of common chemistry analytes at critical concentrations.

    PubMed

    Onyenekwu, Chinelo P; Hudson, Careen L; Zemlin, Annalise E; Erasmus, Rajiv T

    2014-12-01

    Early notification of critical values by the clinical laboratory to the treating physician is a requirement for accreditation and is essential for effective patient management. Many laboratories automatically repeat a critical value before reporting it to prevent possible misdiagnosis. Given today's advanced instrumentation and quality assurance practices, we questioned the validity of this approach. We performed an audit of repeat-testing in our laboratory to assess whether there were significant differences between initial and repeated test results, to estimate the delay caused by repeat-testing, and to quantify the cost of repeating these assays. A retrospective audit of repeat-tests for sodium, potassium, calcium and magnesium in the first quarter of 2013 at Tygerberg Academic Laboratory was conducted. Data on the initial and repeat-test values and the times at which they were performed were extracted from our laboratory information system. The Clinical Laboratory Improvement Amendments criteria for allowable error were employed to assess for significant differences between results. A total of 2308 repeated tests were studied. There was no significant difference in 2291 (99.3%) of the samples. The average delay ranged from 35 min for magnesium to 42 min for sodium and calcium. At least 2.9% of laboratory running costs for these analytes was spent on repeating them. The practice of repeating a critical test result appears unnecessary, as it yields similar results, delays notification to the treating clinician and increases laboratory running costs.
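
    The audit logic itself is simple to express: compare each initial/repeat pair against an allowable-error limit and tally the added reporting delay. The pairs and limits below are placeholders, not the actual CLIA values or the study's data.

      pairs = [  # (analyte, initial, repeat, minutes_delay), mmol/L, invented
          ("sodium",    112, 113, 42), ("potassium", 6.8, 6.7, 38),
          ("calcium",   1.4, 1.5, 42), ("magnesium", 0.4, 0.4, 35),
          ("potassium", 7.2, 6.1, 40),
      ]
      allowable = {"sodium": 4.0, "potassium": 0.5, "calcium": 0.25, "magnesium": 0.12}

      discordant, delay = 0, 0
      for analyte, first, second, minutes in pairs:
          delay += minutes
          if abs(first - second) > allowable[analyte]:
              discordant += 1
      print(f"{discordant}/{len(pairs)} discordant pairs "
            f"({100 * (1 - discordant / len(pairs)):.1f}% agreement), "
            f"{delay} reporting minutes added")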

  10. Differential metabolite levels in response to spawning-induced inappetence in Atlantic salmon Salmo salar.

    PubMed

    Cipriano, Rocco C; Smith, McKenzie L; Vermeersch, Kathleen A; Dove, Alistair D M; Styczynski, Mark P

    2015-03-01

    Atlantic salmon Salmo salar undergo months-long inappetence during spawning, but it is not known whether this inappetence is a pathological state or one for which the fish are adapted. Recent work has shown that inappetent whale sharks can exhibit circulating metabolite profiles similar to ketosis known to occur in humans during starvation. In this work, metabolite profiling was used to explore differences in analyte profiles between a cohort of inappetent spawning run Atlantic salmon and captively reared animals that were fed up to and through the time of sampling. The two classes of animals were easily distinguished by their metabolite profiles. The sea-run fish had elevated ω-9 fatty acids relative to the domestic feeding animals, while other fatty acid concentrations were reduced. Sugar alcohols were generally elevated in inappetent animals, suggesting potentially novel metabolic responses or pathways in fish that feature these compounds. Compounds expected to indicate a pathological catabolic state were not more abundant in the sea-run fish, suggesting that the animals, while inappetent, were not stressed in an unnatural way. These findings demonstrate the power of discovery-based metabolomics for exploring biochemistry in poorly understood animal models. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Night-time lights: A global, long term look at links to socio-economic trends

    PubMed Central

    Zavala-Araiza, Daniel; Wagner, Gernot

    2017-01-01

    We use a parallelized spatial analytics platform to process the twenty-one year totality of the longest-running time series of night-time lights data—the Defense Meteorological Satellite Program (DMSP) dataset—surpassing the narrower scope of prior studies to assess changes in area lit of countries globally. Doing so allows a retrospective look at the global, long-term relationships between night-time lights and a series of socio-economic indicators. We find the strongest correlations with electricity consumption, CO2 emissions, and GDP, followed by population, CH4 emissions, N2O emissions, poverty (inverse) and F-gas emissions. Relating area lit to electricity consumption shows that while a basic linear model provides a good statistical fit, regional and temporal trends are found to have a significant impact. PMID:28346500
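
    A basic linear fit of the kind described can be sketched as follows on invented country aggregates; the real analysis is run at global scale and accounts for regional and temporal trends.

      import numpy as np

      # Synthetic country totals: electricity consumption vs area lit (invented)
      elec_twh = np.array([12.0, 55.0, 140.0, 300.0, 520.0, 910.0])
      area_lit_km2 = np.array([3.1e3, 1.2e4, 3.0e4, 6.5e4, 1.1e5, 1.9e5])

      slope, intercept = np.polyfit(elec_twh, area_lit_km2, 1)   # basic linear model
      r = np.corrcoef(elec_twh, area_lit_km2)[0, 1]
      print(f"area_lit ~ {slope:.0f} * TWh + {intercept:.0f}, r = {r:.3f}")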

  12. Ultra-high Performance Liquid Chromatography Tandem Mass-Spectrometry for Simple and Simultaneous Quantification of Cannabinoids

    PubMed Central

    Jamwal, Rohitash; Topletz, Ariel R.; Ramratnam, Bharat; Akhlaghi, Fatemeh

    2017-01-01

    Cannabis is used widely in the United States, both recreationally and for medical purposes. Current methods for analysis of cannabinoids in human biological specimens rely on complex extraction processes and lengthy analysis times. We established a rapid and simple assay for quantification of Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), 11-hydroxy Δ9-tetrahydrocannabinol (11-OH THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH) in human plasma by U-HPLC-MS/MS using Δ9-tetrahydrocannabinol-D3 as the internal standard. Chromatographic separation was achieved on an Acquity BEH C18 column using a gradient comprising water (0.1% formic acid) and methanol (0.1% formic acid) over a 6 min run time. Analytes from 200 µL plasma were extracted using acetonitrile (containing 1% formic acid and THC-D3). Mass spectrometry was performed in positive ionization mode, and the total ion chromatogram was used for quantification of analytes. The assay was validated according to guidelines set forth by the Food and Drug Administration of the United States. An eight-point calibration curve was fitted with quadratic regression (r2>0.99) from 1.56 to 100 ng mL−1 and a lower limit of quantification (LLOQ) of 1.56 ng mL−1 was achieved. Accuracy and precision calculated from six calibration curves were between 85% and 115%, while the mean extraction recovery was >90% for all the analytes. Several plasma phospholipids eluted after the analytes and thus did not interfere with the assay. Bench-top, freeze-thaw, auto-sampler and short-term stability ranged from 92.7 to 106.8% of nominal values. Application of the method was evaluated by quantification of analytes in human plasma from six subjects. PMID:28192758
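
    The quadratic calibration step can be sketched as follows with synthetic response ratios: fit the eight-point curve, check r^2, and back-calculate a concentration by taking the root that falls inside the calibrated range. Real curves are typically weighted and built from analyte/internal-standard peak-area ratios.

      import numpy as np

      conc = np.array([1.56, 3.125, 6.25, 12.5, 25, 50, 75, 100])   # ng/mL
      ratio = 0.021 * conc - 2.0e-5 * conc ** 2 + 0.003             # synthetic response

      coef = np.polyfit(conc, ratio, 2)                  # quadratic regression
      fit = np.polyval(coef, conc)
      ss_res = np.sum((ratio - fit) ** 2)
      ss_tot = np.sum((ratio - ratio.mean()) ** 2)
      print("r^2 =", 1 - ss_res / ss_tot)                # should be > 0.99

      # Back-calculate a concentration from a measured ratio: keep the root
      # that lies inside the calibration range
      measured = 0.35
      roots = np.roots([coef[0], coef[1], coef[2] - measured])
      print("back-calculated ng/mL:",
            [r.real for r in roots if 0 < r.real <= 100 and abs(r.imag) < 1e-9])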

  13. Long-Term Stability of Volatile Nitrosamines in Human Urine.

    PubMed

    Hodgson, James A; Seyler, Tiffany H; Wang, Lanqing

    2016-07-01

    Volatile nitrosamines (VNAs) are established teratogens and carcinogens in animals and classified as probable (group 2A) and possible (group 2B) carcinogens in humans by the IARC. High levels of VNAs have been detected in tobacco products and in both mainstream and sidestream smoke. VNA exposure may lead to lipid peroxidation and oxidative stress (e.g., inflammation), chronic diseases (e.g., diabetes) and neurodegenerative diseases (e.g., Alzheimer's disease). To conduct epidemiological studies on the effects of VNA exposure, short-term and long-term stabilities of VNAs in the urine matrix are needed. In this report, the stability of six VNAs (N-nitrosodimethylamine, N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitrosopiperidine, N-nitrosopyrrolidine and N-nitrosomorpholine) in human urine is analyzed for the first time using in vitro blank urine pools fortified with a standard mixture of all six VNAs. Over a 24-day period, analytes were monitored in samples stored at ∼20°C (collection temperature), 4-10°C (transit temperature) and -20 and -70°C (long-term storage temperatures). All six analytes were stable for 24 days at all temperatures (n = 15). The analytes were then analyzed over a longer time period at -70°C; all analytes were stable for up to 1 year (n = 62). A subset of 44 samples was prepared as a single batch and stored at -20°C, the temperature at which prepared samples are stored. These prepared samples were run in duplicate weekly over 10 weeks, and all six analytes were stable over the entire period (n = 22). Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  14. Hair analysis for abused drugs by capillary zone electrophoresis with field-amplified sample stacking.

    PubMed

    Tagliaro, F; Manetto, G; Crivellente, F; Scarcella, D; Marigo, M

    1998-04-05

    The present paper describes the methodological optimisation and validation of a capillary zone electrophoresis method for the determination of morphine, cocaine and 3,4-methylenedioxymethamphetamine (MDMA) in hair, with injection based on field-amplified sample stacking. Diode array UV absorption detection was used to improve analytical selectivity and identification power. Analytical conditions: running buffer 100 mM potassium phosphate adjusted to pH 2.5 with phosphoric acid, applied potential 10 kV, temperature 20 degrees C, injection by electromigration at 10 kV for 10 s, detection by UV absorption at the fixed wavelength of 200 nm or by recording the full spectrum between 190 and 400 nm. Injection conditions: the dried hair extracts were reconstituted with a low-conductivity solvent (0.1 mM formic acid), the injection end of the capillary was dipped in water for 5 s without applying pressure (external rinse step), then a plug of 0.1 mM phosphoric acid was loaded by applying 0.5 psi for 10 s and, finally, the sample was injected electrokinetically at 10 kV for 10 s. Under the described conditions, the limit of detection was 2 ng/ml for MDMA, 8 ng/ml for cocaine and 6 ng/ml for morphine (with a signal-to-noise ratio of 5). The lowest concentration suitable for recording interpretable spectra was about 10-20 times the limit of detection of each analyte. The intraday and day-to-day reproducibility of migration times (n = 6), with internal standardisation, was characterised by R.S.D. values ≤ 0.6%; peak area R.S.D.s were better than 10% in intraday and 15% in day-to-day experiments. Analytical linearity was good, with R2 better than 0.9990 for all the analytes.

  15. Analytical condition inspection and extension of time between overhaul of F3-30 engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakao, M.; Ikeyama, M.; Abe, S.

    1992-04-01

    F3-30 is the low-bypass-ratio turbofan engine developed to power the T-4 intermediate trainer for the Japan Air Self Defense Force (JASDF). Actual field service started in September 1988. This paper reports on the ongoing program to extend the time between overhaul (TBO) of the F3-30. Analytical condition inspection (ACI) and accelerated mission testing (AMT) were conducted to confirm sufficient durability to extend the TBO. Most deteriorations of parts and performance due to AMT were also found by ACI after field operation, with approximately the same deterioration rate. On the other hand, some deteriorations were found by ACI only. These results show that ACI after field operation is also necessary to confirm the TBO extension, although AMT simulates the deterioration in field operations very well. The deteriorations that would be caused by field operation during one extended TBO were estimated from the results of ACI and AMT, and it was concluded that the F3-30 has sufficient durability for TBO extension to the next step.

  16. Determination of opiates and cocaine in urine by high pH mobile phase reversed phase UPLC-MS/MS.

    PubMed

    Berg, Thomas; Lundanes, Elsa; Christophersen, Asbjørg S; Strand, Dag Helge

    2009-02-01

    A fast and selective ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for the determination of opiates (morphine, codeine, 6-monoacetylmorphine (6-MAM), pholcodine, oxycodone, ethylmorphine), cocaine and benzoylecgonine in urine has been developed and validated. Sample preparation was performed by solid phase extraction (SPE) on a mixed mode cation exchange (MCX) cartridge. For optimized chromatographic performance with repeatable retention times, narrow and symmetrical peaks, and focusing of all analytes at the column inlet at gradient start, a basic mobile phase consisting of 5mM ammonium bicarbonate, pH 10.2, and methanol (MeOH) was chosen. Positive electrospray ionization (ESI(+)) MS/MS detection was performed with a minimum of two multiple reaction monitoring (MRM) transitions for each analyte. Deuterium labelled-internal standards were used for six of the analytes. Between-assay retention time repeatabilities (n=10 series, 225 injections in total) had relative standard deviation (RSD) values within 0.1-0.6%. Limit of detection (LOD) and limit of quantification (LOQ) values were in the range 0.003-0.05 microM (0.001-0.02 microg/mL) and 0.01-0.16 microM (0.003-0.06 microg/mL), respectively. The RSD values of the between-assay repeatabilities of concentrations were

  17. An improved algorithm for balanced POD through an analytic treatment of impulse response tails

    NASA Astrophysics Data System (ADS)

    Tu, Jonathan H.; Rowley, Clarence W.

    2012-06-01

    We present a modification of the balanced proper orthogonal decomposition (balanced POD) algorithm for systems with simple impulse response tails. In this new method, we use dynamic mode decomposition (DMD) to estimate the slowly decaying eigenvectors that dominate the long-time behavior of the direct and adjoint impulse responses. This is done using a new, low-memory variant of the DMD algorithm, appropriate for large datasets. We then formulate analytic expressions for the contribution of these eigenvectors to the controllability and observability Gramians. These contributions can be accounted for in the balanced POD algorithm by simply appending the impulse response snapshot matrices (direct and adjoint, respectively) with particular linear combinations of the slow eigenvectors. Aside from these additions to the snapshot matrices, the algorithm remains unchanged. By treating the tails analytically, we eliminate the need to run long impulse response simulations, lowering storage requirements and speeding up ensuing computations. To demonstrate its effectiveness, we apply this method to two examples: the linearized, complex Ginzburg-Landau equation, and the two-dimensional fluid flow past a cylinder. As expected, reduced-order models computed using an analytic tail match or exceed the accuracy of those computed using the standard balanced POD procedure, at a fraction of the cost.
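
    For orientation, here is a minimal SVD-based exact-DMD sketch of the kind used to estimate slowly decaying eigenvectors from snapshot pairs, applied to synthetic decaying data; the paper's low-memory DMD variant and the analytic Gramian contributions are not reproduced.

      import numpy as np

      def dmd_modes(X, Y, r):
          # Given snapshot pairs Y ~ A X, return leading eigenvalues/modes of A
          U, s, Vh = np.linalg.svd(X, full_matrices=False)
          U, s, Vh = U[:, :r], s[:r], Vh[:r]
          A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)    # A projected on POD basis
          eigvals, W = np.linalg.eig(A_tilde)
          modes = Y @ Vh.T @ np.diag(1.0 / s) @ W        # exact DMD modes
          return eigvals, modes

      # Synthetic impulse-response tail: two decaying modes plus tiny noise
      t = np.arange(60)
      x = np.linspace(0.0, 1.0, 40)[:, None]
      data = (np.sin(3.0 * x) * 0.95 ** t + np.cos(7.0 * x) * 0.80 ** t
              + 1e-6 * np.random.default_rng(0).random((40, 60)))
      lam, modes = dmd_modes(data[:, :-1], data[:, 1:], r=2)
      print("recovered decay rates:", np.sort(np.abs(lam)))   # ~ [0.80, 0.95]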

  18. Analytical Eco-Scale for Assessing the Greenness of a Developed RP-HPLC Method Used for Simultaneous Analysis of Combined Antihypertensive Medications.

    PubMed

    Mohamed, Heba M; Lamie, Nesrine T

    2016-09-01

    In the past few decades the analytical community has been focused on eliminating or reducing the usage of hazardous chemicals and solvents, in different analytical methodologies, that have been ascertained to be extremely dangerous to human health and environment. In this context, environmentally friendly, green, or clean practices have been implemented in different research areas. This study presents a greener alternative of conventional RP-HPLC methods for the simultaneous determination and quantitative analysis of a pharmaceutical ternary mixture composed of telmisartan, hydrochlorothiazide, and amlodipine besylate, using an ecofriendly mobile phase and short run time with the least amount of waste production. This solvent-replacement approach was feasible without compromising method performance criteria, such as separation efficiency, peak symmetry, and chromatographic retention. The greenness profile of the proposed method was assessed and compared with reported conventional methods using the analytical Eco-Scale as an assessment tool. The proposed method was found to be greener in terms of usage of hazardous chemicals and solvents, energy consumption, and production of waste. The proposed method can be safely used for the routine analysis of the studied pharmaceutical ternary mixture with a minimal detrimental impact on human health and the environment.
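
    The Eco-Scale assessment amounts to subtracting penalty points from an ideal score of 100, with scores above roughly 75 regarded as excellent green analyses; the tally below is a toy illustration, and the penalty values shown are placeholders rather than the published tables.

      # Toy analytical Eco-Scale tally (penalty values are placeholders)
      penalties = {
          "solvent amount/hazard": 8,
          "energy use > 1.5 kWh per sample": 2,
          "waste volume, no treatment": 8,
          "occupational hazard": 0,
      }
      score = 100 - sum(penalties.values())
      verdict = "excellent" if score > 75 else "acceptable" if score > 50 else "inadequate"
      print(f"Eco-Scale score: {score} ({verdict} green analysis)")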

  19. Determination of ∆-9-Tetrahydrocannabinol (THC), 11-hydroxy-THC, 11-nor-9-carboxy-THC and Cannabidiol in Human Plasma using Gas Chromatography-Tandem Mass Spectrometry.

    PubMed

    Andrenyak, David M; Moody, David E; Slawson, Matthew H; O'Leary, Daniel S; Haney, Margaret

    2017-05-01

    Two marijuana compounds of particular medical interest are delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD). A gas chromatography-tandem mass spectrometry (GC-MS-MS) method was developed to test for CBD, THC, hydroxy-THC (OH-THC) and carboxy-THC (COOH-THC) in human plasma. Calibrators (THC and OH-THC, 0.1 to 100; CBD, 0.25 to 100; COOH-THC, 0.5-500 ng/mL) and controls (0.3, 5 and 80 ng/mL, except COOH-THC at 1.5, 25 and 400 ng/mL) were prepared in blank matrix. Deuterated (d3) internal standards were added to 1-mL samples. Preparation involved acetonitrile precipitation, liquid-liquid extraction (hexane:ethyl acetate, 9:1), and MSTFA derivatization. An Agilent 7890A GC was interfaced with an Agilent 7000 triple quadrupole MS. Selected reaction monitoring was employed. Blood samples were provided from a marijuana smoking study (two participants) and a CBD ingestion study (eight participants). Three analytes with the same transitions (THC, OH-THC and COOH-THC) were chromatographically separated. Matrix selectivity studies showed that endogenous chromatographic peak area ratios (PAR) at the analyte retention times were <20% of the analyte limit of quantitation PAR. The intra-assay accuracy ranged from 83.5% to 118% of target and the intra-run imprecision ranged from 2.0% to 19.1%. The inter-assay accuracy ranged from 90.3% to 104% of target and the inter-run imprecision ranged from 6.5% to 12.0%. Stability was established for 25 hours at room temperature, 207 days at -20°C, after three freeze-thaw cycles and for 26 days for rederivatized processed samples. After smoking marijuana, predictable concentrations of THC, OH-THC and COOH-THC were seen; low concentrations of CBD were detected at early time points. In moderate users who had not smoked for at least 9 hours before ingesting an 800 mg oral dose of CBD, the method was sensitive enough to follow residual concentrations of THC and OH-THC; sustained COOH-THC concentrations over 50 ng/mL validated its higher analytical range. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  2. High-precision measurement of ¹⁸⁶Os/¹⁸⁸Os and ¹⁸⁷Os/¹⁸⁸Os: isobaric oxide corrections with in-run measured oxygen isotope ratios.

    PubMed

    Chu, Zhu-Yin; Li, Chao-Feng; Chen, Zhi; Xu, Jun-Jie; Di, Yan-Kun; Guo, Jing-Hui

    2015-09-01

    We present a novel method for the high-precision measurement of ¹⁸⁶Os/¹⁸⁸Os and ¹⁸⁷Os/¹⁸⁸Os ratios, applying an isobaric oxide interference correction based on in-run measurements of oxygen isotope ratios. For this purpose, we set up a static data-collection routine that measures the main Os¹⁶O₃⁻ ion beams with Faraday cups connected to conventional 10¹¹ Ω amplifiers, and the ¹⁹²Os¹⁶O₂¹⁷O⁻ and ¹⁹²Os¹⁶O₂¹⁸O⁻ ion beams with Faraday cups connected to 10¹² Ω amplifiers. Because of the limited number of Faraday cups, ¹⁸⁴Os¹⁶O₃⁻ and ¹⁸⁹Os¹⁶O₃⁻ were not measured simultaneously in-run, but this setup had no significant influence on the final ¹⁸⁶Os/¹⁸⁸Os and ¹⁸⁷Os/¹⁸⁸Os data. Analyses of UMd, DROsS, an in-house Os solution standard, and several rock reference materials, including WPR-1, WMS-1a, and Gpt-5, showed that the in-run measured oxygen isotope ratios yield accurate Os isotopic data. However, the ¹⁸⁶Os/¹⁸⁸Os and ¹⁸⁷Os/¹⁸⁸Os data obtained with in-run O isotopic compositions for the solution standards and rock reference materials show minimal improvement in internal and external precision compared with the conventional oxygen correction method. We conclude that the small variations of oxygen isotopes during OsO₃⁻ analytical sessions are probably not the main source of error in high-precision Os isotopic analysis. Nevertheless, using run-specific O isotopic compositions remains the better choice for Os isotopic data reduction and eliminates the need for separate measurements of the oxygen isotope ratios.

  3. Element Verification and Comparison in Sierra/Solid Mechanics Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohashi, Yuki; Roth, William

    2016-05-01

    The goal of this project was to study the effects of element selection on the Sierra/SM solutions to five common solid mechanics problems. A total of nine element formulations were used for each problem. The models were run multiple times with varying spatial and temporal discretization in order to ensure convergence. The first four problems were compared to analytical solutions, and all numerical results were found to be sufficiently accurate. The penetration problem was found to have a high mesh dependence in terms of element type, mesh discretization, and meshing scheme. Also, the time to solution is shown for each problem in order to facilitate element selection when computer resources are limited.

  4. Shear-rate dependence of the viscosity of the Lennard-Jones liquid at the triple point

    NASA Astrophysics Data System (ADS)

    Ferrario, M.; Ciccotti, G.; Holian, B. L.; Ryckaert, J. P.

    1991-11-01

    High-precision molecular-dynamics (MD) data are reported for the shear viscosity η of the Lennard-Jones liquid at its triple point, as a function of the shear rate ε̇, for a large system (N = 2048). The Green-Kubo (GK) value η(ε̇ = 0) = 3.24 ± 0.04 is estimated from a run of 3.6 × 10⁶ steps (40 ns). We find no numerical evidence of a t⁻³/² long-time tail in the GK integrand (the stress-stress time-correlation function). From our nonequilibrium MD results, obtained at both small and large values of ε̇, a consistent picture emerges that supports an analytical (quadratic at low shear rate) dependence of the viscosity on ε̇.
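
    For reference, the GK integrand mentioned above enters the standard Green-Kubo relation for the zero-shear viscosity (textbook form, with V the system volume, T the temperature, and σ_xy an off-diagonal component of the stress tensor):

      \eta(\dot{\varepsilon} = 0) \;=\; \frac{V}{k_B T} \int_0^\infty \left\langle \sigma_{xy}(0)\, \sigma_{xy}(t) \right\rangle \, dt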

  5. Linear response approach to active Brownian particles in time-varying activity fields

    NASA Astrophysics Data System (ADS)

    Merlitz, Holger; Vuijk, Hidde D.; Brader, Joseph; Sharma, Abhinav; Sommer, Jens-Uwe

    2018-05-01

    In a theoretical and simulation study, active Brownian particles (ABPs) in three-dimensional bulk systems are exposed to time-varying sinusoidal activity waves that are running through the system. A linear response (Green-Kubo) formalism is applied to derive fully analytical expressions for the torque-free polarization profiles of non-interacting particles. The activity waves induce fluxes that strongly depend on the particle size and may be employed to de-mix mixtures of ABPs or to drive the particles into selected areas of the system. Three-dimensional Langevin dynamics simulations are carried out to verify the accuracy of the linear response formalism, which is shown to work best when the particles are small (i.e., highly Brownian) or operating at low activity levels.

  6. Special Issue on a Fault Tolerant Network on Chip Architecture

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan

    2010-06-01

    In this paper, a fast and efficient spare-switch selection algorithm is presented for FERNA, a reliable NoC architecture in which a specific application is mapped onto a mesh topology. Based on the ring concept used in FERNA, the algorithm achieves results equivalent to an exhaustive search with much less run time while improving two parameters: system response time and extra communication cost. Inputs to the FERNA algorithm for minimizing these two parameters are derived from transaction-level simulation using SystemC TLM and from mathematical formulation, respectively. The results demonstrate that improving these parameters raises overall system reliability, which is calculated analytically. The mapping algorithm is also investigated as a factor affecting extra bandwidth requirements and system reliability.

  7. Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.

    PubMed

    Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas

    2016-01-01

    A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may mean a cost-effective method for low-throughput in-house biochemical analyses for first-opinion practice. The objectives of our study were to determine imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples, and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributed to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transposition of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
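
    The sigma metrics referred to above are conventionally computed from the total allowable error (TEa), bias, and imprecision; a minimal sketch in Python, with illustrative numbers rather than values from the study:

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """Sigma = (TEa - |bias|) / CV, with all inputs in percent."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      # Illustrative only: TEa 10%, bias 2%, CV 1.8% -> sigma ~ 4.4 ("good", > 4)
      print(round(sigma_metric(10.0, 2.0, 1.8), 2))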

  8. AmO2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO2 Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent

    Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace-element content, including ²³⁷Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy, and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that no suitable AmO2 reference materials are currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.

  9. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power-train system design is one of the key R&D areas in the development of a new automobile, because the system design yields an optimally sized engine with an adaptable power transmission that can meet the design requirements of the new vehicle. For electric vehicle design in particular, a highly reliable design algorithm for the power-train system is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power-train system of an electric vehicle. The principal theory behind the simulation algorithm is conservation of energy, combined with analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation, the running resistance of a designed vehicle is obtained as a function of operating conditions such as road inclination and vehicle speed. The tractive performance of the model vehicle with a given power-train system is also calculated at each gear ratio of the transmission. By comparing these two results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and this estimate can be used to evaluate how well the designed power-train system suits the vehicle.
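
    The running-resistance calculation described above is, in essence, a sum of rolling, grade, and aerodynamic terms; a minimal sketch under that assumption (the coefficients are illustrative, not from the paper):

      import math

      def running_resistance(mass_kg, v_mps, slope_rad,
                             c_rr=0.012, rho=1.2, cd=0.30, area_m2=2.2):
          """Total road load (N): rolling resistance + grade + aerodynamic drag."""
          g = 9.81
          rolling = c_rr * mass_kg * g * math.cos(slope_rad)
          grade = mass_kg * g * math.sin(slope_rad)
          drag = 0.5 * rho * cd * area_m2 * v_mps ** 2
          return rolling + grade + drag

      # e.g., a 1500 kg vehicle at 100 km/h on a 2% grade
      print(round(running_resistance(1500, 100 / 3.6, math.atan(0.02)), 1))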

  10. Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.

    PubMed

    Rudašová, Marína; Masár, Marián

    2016-01-01

    A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. External calibration and internal standard methods were used to evaluate the results. The internal standard method effectively eliminated variations in various working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values <0.7% and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals also qualifies the proposed method for accurate analysis. This work, in general, indicates the analytical possibilities of microchip isotachophoresis for the quantitative analysis of simplified samples, such as pharmaceuticals, that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
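
    The internal standard method mentioned above normalizes the analyte response by that of a co-added standard before calibration, cancelling injection-volume drift; a minimal sketch (the concentrations and ratios are hypothetical):

      import numpy as np

      conc_std = np.array([50.0, 100.0, 200.0])    # calibrator concentrations (mg/L)
      ratio_std = np.array([0.52, 1.03, 2.08])     # analyte/IS peak-area ratios
      slope, intercept = np.polyfit(conc_std, ratio_std, 1)

      ratio_sample = 1.55                          # measured ratio for a sample
      print((ratio_sample - intercept) / slope)    # back-calculated concentration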

  11. A big data geospatial analytics platform - Physical Analytics Integrated Repository and Services (PAIRS)

    NASA Astrophysics Data System (ADS)

    Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.

    2015-12-01

    A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and to run these models in real time. A geospatial data platform called Physical Analytics Integrated Repository and Services (PAIRS) was developed on top of an open-source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that combines location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to ingest, curate, and analyze the data sets while remaining robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with each analytics task broken into smaller areas/volumes, analyzed independently, and then reassembled for the original geographic area. The differentiating aspect of PAIRS is its ability to accelerate model development across large geographic regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data layers on specific conditions (e.g., analyzing the flooding risk of a property based on topography, the soil's ability to hold water, and forecasted precipitation) or to retrieve locations that share similar weather and vegetation patterns during extreme weather events such as heat waves.
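
    As a rough illustration of the pixel-indexing idea (the key layout and grid convention here are assumptions for the sketch, not the actual PAIRS schema), a combined space-time key for a nested global grid might look like:

      def grid_key(lat, lon, level, timestamp):
          """Map (lat, lon) to a cell index at a grid level whose resolution
          doubles with each level, and append the observation timestamp."""
          cells = 2 ** level                        # latitude cells at this level
          row = min(int((lat + 90.0) / 180.0 * cells), cells - 1)
          col = min(int((lon + 180.0) / 360.0 * 2 * cells), 2 * cells - 1)
          return f"L{level}:{row}:{col}:{timestamp}"

      print(grid_key(40.7, -74.0, 12, "2015-12-01T00:00"))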

  12. Robust algorithm for aligning two-dimensional chromatograms.

    PubMed

    Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel

    2012-11-06

    Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points whose locations are indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, L. F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743, and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.

  13. Optimization of microwave-assisted extraction of hydrocarbons in marine sediments: comparison with the Soxhlet extraction method.

    PubMed

    Vázquez Blanco, E; López Mahía, P; Muniategui Lorenzo, S; Prada Rodríguez, D; Fernández Fernández, E

    2000-02-01

    Microwave energy was applied to extract polycyclic aromatic hydrocarbons (PAHs) and linear aliphatic hydrocarbons (LAHs) from marine sediments. The influence of experimental conditions, such as different extracting solvents and mixtures, microwave power, irradiation time and number of samples extracted per run has been tested using real marine sediment samples; volume of the solvent, sample quantity and matrix effects were also evaluated. The yield of extracted compounds obtained by microwave irradiation was compared with that obtained using the traditional Soxhlet extraction. The best results were achieved with a mixture of acetone and hexane (1:1), and recoveries ranged from 92 to 106%. The extraction time is dependent on the irradiation power and the number of samples extracted per run, so when the irradiation power was set to 500 W, the extraction times varied from 6 min for 1 sample to 18 min for 8 samples. Analytical determinations were carried out by high-performance liquid chromatography (HPLC) with an ultraviolet-visible photodiode-array detector for PAHs and gas chromatography (GC) using a FID detector for LAHs. To test the accuracy of the microwave-assisted extraction (MAE) technique, optimized methodology was applied to the analysis of standard reference material (SRM 1941), obtaining acceptable results.

  14. Novel approach to high-throughput determination of endocrine disruptors using recycled diatomaceous earth as a green sorbent phase for thin-film solid-phase microextraction combined with 96-well plate system.

    PubMed

    Kirschner, Nicolas; Dias, Adriana Neves; Budziak, Dilma; da Silveira, Cristian Berto; Merib, Josias; Carasek, Eduardo

    2017-12-15

    A sustainable approach to TF-SPME is presented using recycled diatomaceous earth, obtained from a beer purification process, as a green sorbent phase for the determination of bisphenol A (BPA), benzophenone (BzP), triclocarban (TCC), 4-methylbenzylidene camphor (4-MBC) and 2-ethylhexyl-p-methoxycinnamate (EHMC) in environmental water samples. TF-SPME was combined with a 96-well plate system, allowing for high-throughput analysis through the simultaneous extraction/desorption of up to 96 samples. The proposed sorbent phase exhibited good stability in organic solvents, as well as satisfactory analytical performance. The optimized method consisted of 240 min of extraction at pH 6 with the addition of NaCl (15% w/v). A mixture of MeOH:ACN (50:50 v/v) was used for desorption of the analytes, with a desorption time of 30 min. Limits of detection varied from 1 μg L⁻¹ for BzP and TCC to 8 μg L⁻¹ for the other analytes, and R² ranged from 0.9926 for 4-MBC to 0.9988 for BPA. This novel and straightforward approach offers an environmentally friendly and very promising alternative for routine analysis. The total sample preparation time per sample was approximately 2.8 min, which is a significant advantage when a large number of analytical runs is required. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. SolarPILOT | Concentrating Solar Power | NREL

    Science.gov Websites

    Unlike exclusively ray-tracing tools, SolarPILOT runs an analytical simulation engine alongside a ray-tracing core (the SolTrace simulation engine) for more detailed simulations.

  16. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
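
    As a loose illustration of the provenance idea described above (this is not the BEE interface; the image name and record fields are invented for the sketch), a launcher might record how each containerized step was started:

      import json, subprocess, time

      def launch_with_provenance(image, cmd, log_path="provenance.json"):
          """Run one containerized workflow step and log how it was launched."""
          record = {"image": image, "cmd": cmd, "start": time.time()}
          subprocess.run(["docker", "run", "--rm", image] + cmd, check=True)
          record["end"] = time.time()
          with open(log_path, "a") as f:
              f.write(json.dumps(record) + "\n")

      launch_with_provenance("ubuntu:20.04", ["echo", "workflow step done"])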

  17. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
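
    For readers unfamiliar with the recursion, one predict/update cycle of a scalar Kalman filter looks as follows; this is the textbook filter rather than the adaptive variant the paper develops, and the noise variances q and r are assumed known:

      def kalman_step(x, p, z, q=1e-4, r=1e-2):
          """One cycle of a scalar random-walk Kalman filter."""
          p = p + q                  # predict: state carries over, variance grows
          k = p / (p + r)            # Kalman gain
          x = x + k * (z - x)        # update the estimate with measurement z
          p = (1 - k) * p            # update the estimate variance
          return x, p

      x, p = 0.0, 1.0
      for z in [0.9, 1.1, 1.0, 0.95]:   # illustrative incoming samples
          x, p = kalman_step(x, p, z)
      print(round(x, 3))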

  18. Factors affecting a cyanogen bromide-based assay of thiamin.

    PubMed

    Wyatt, D T; Lee, M; Hillman, R E

    1989-11-01

    We extensively analyzed a modified thiochrome method for thiamin analysis. Acid phosphatase (EC 3.1.3.2) from potato was superior to either alpha-amylase or acid phosphatase from wheat germ as a dephosphorylating agent. Timing of cyanogen bromide exposure was important, but the assay had good precision and accuracy. The standard curve was linear from 10 to 3000 nmol/L. The within-run and between-run coefficients of variation for total thiamin in whole blood were 3.6% and 7.4%, respectively. Analytical recoveries for low, intermediate, and high additions of thiamin to whole blood were 93-109%. Sample yield was increased by 41% (±29% SD) with pre-assay freezing. Samples were stable for two days at room temperature, for seven days when refrigerated, and for two years when frozen. Previously unreported interference was seen with penicillin derivatives and with several commonly used diuretic and antiepileptic medications. This assay may be suitable for population screening; 200 samples could be analyzed weekly at a cost of $0.20 per sample.

  19. Vigabatrin in dried plasma spots: validation of a novel LC-MS/MS method and application to clinical practice.

    PubMed

    Kostić, Nađa; Dotsikas, Yannis; Jović, Nebojša; Stevanović, Galina; Malenović, Anđelija; Medenica, Mirjana

    2014-07-01

    This paper presents an LC-MS/MS method for the determination of the antiepileptic drug vigabatrin in dried plasma spots (DPS). Due to its zwitterionic chemical structure, a pre-column derivatization procedure was performed, aiming to yield enhanced ionization efficiency and improved chromatographic behaviour. Propyl chloroformate, in the presence of propanol, was selected as the best derivatization reagent, providing a strong signal along with a reasonable run time. A relatively novel sample collection technique, DPS, was utilized, offering easy sample handling and analysis from a micro-volume sample (∼5 μL). Derivatized vigabatrin and its internal standard, 4-aminocyclohexanecarboxylic acid, were extracted by liquid-liquid extraction (LLE) and determined in positive ion mode by applying two SRM transitions per analyte. A Zorbax Eclipse XDB-C8 column (150 × 4.6 mm, 5 μm particle size), maintained at 30°C, was utilized with a mobile phase composed of acetonitrile: 0.15% formic acid (85:15, v/v). The flow rate was 550 μL/min and the total run time 4.5 min. The assay exhibited excellent linearity over the concentration range of 0.500–50.0 μg/mL, which is suitable for the determination of vigabatrin levels after per os administration in children and youths with epilepsy who were on vigabatrin therapy, with or without co-medication. Specificity, accuracy, precision, recovery, matrix effect and stability were also estimated and assessed within acceptance criteria. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. SPIRE Data Evaluation and Nuclear IR Fluorescence Processes.

    DTIC Science & Technology

    1982-11-30

    so that all isotopes can be dealt with in a single run rather than a number of separate runs. At lower altitudes the radiance calculation needs to be ... approximation can be inferred from the work of Neuendorffer (1982) on developing an analytic expression for the absorption of a single non-overlapping line ... personnel by using prominent atmospheric infrared features such as the OH maximum, the HNO3 maximum, the CO2 4.3 μm knee, etc. The azimuth however

  2. Observational constraints on loop quantum cosmology.

    PubMed

    Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji

    2011-11-18

    In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.

  3. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.

  4. $ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe

    USGS Publications Warehouse

    McGee, James J.

    1983-01-01

    $ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.

  5. New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications

    NASA Astrophysics Data System (ADS)

    Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris

    2016-05-01

    Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such a voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold-standard methodology. Given that YVAN is simply a new primitive object, it does not require any modifications of the underlying MC navigation code. The new primitive was first assessed through a simple MCS. Results from the YVAN primitive were compared against an MCS using a pure analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and computational time improvements obtained with YVAN depend on the relative proportion of analytical and voxelized structures in the simulation, as well as the size and number of triangles used to describe the analytical object meshes.

  6. Trace Level Determination of Mesityl Oxide and Diacetone Alcohol in Atazanavir Sulfate Drug Substance by a Gas Chromatography Method.

    PubMed

    Raju, K V S N; Pavan Kumar, K S R; Siva Krishna, N; Madhava Reddy, P; Sreenivas, N; Kumar Sharma, Hemant; Himabindu, G; Annapurna, N

    2016-01-01

    A capillary gas chromatography method with a short run time, using a flame ionization detector, has been developed for the quantitative trace-level determination of mesityl oxide and diacetone alcohol in the atazanavir sulfate drug substance. The chromatographic separation was achieved on a fused-silica capillary column coated with a 5% diphenyl/95% dimethyl polysiloxane stationary phase (Rtx-5, 30 m × 0.53 mm × 5.0 µm). The run time was 20 min, employing a programmed temperature with a split mode (1:5), and the method was validated for specificity, sensitivity, precision, linearity, and accuracy. The detection and quantitation limits were 5 µg/g and 10 µg/g, respectively, for both analytes. The method was found to be linear in the range of 10 µg/g to 150 µg/g with correlation coefficients greater than 0.999, and the average recoveries in atazanavir sulfate were 102.0% and 103.7% for mesityl oxide and diacetone alcohol, respectively. The developed method was found to be robust and rugged. The detailed experimental results are discussed in this research paper.

  7. Simultaneous detection, typing and quantitation of oncogenic human papillomavirus by multiplex consensus real-time PCR.

    PubMed

    Jenkins, Andrew; Allum, Anne-Gry; Strand, Linda; Aakre, Randi Kersten

    2013-02-01

    A consensus multiplex real-time PCR test (PT13-RT) for the oncogenic human papillomavirus (HPV) types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59 and 66 is described. The test targets the L1 gene. Analytical sensitivity is between 4 and 400 GU (genomic units) in the presence of 500 ng of human DNA, corresponding to 75,000 human cells. HPV types are grouped into multiplex groups of 3 or 4, resulting in the use of 4 wells per sample and permitting up to 24 samples per run (including controls) in a standard 96-well real-time PCR instrument. False-negative results are avoided by (a) measuring sample DNA concentration to verify that sufficient cellular material is present and (b) including HPV type 6 as a homologous internal control to detect PCR inhibition or competition from other (non-oncogenic) HPV types. Analysis time from refrigerator to report is 8 h, including 2.5 h hands-on time. Relative to the HC2 test, the sensitivity and specificity were 98% and 83%, respectively, the lower specificity being attributable to the higher analytical sensitivity of PT13-RT. To assess type determination, comparison was made with a reversed line-blot test. Type concordance was high (κ=0.79), with discrepancies occurring mostly in multiple-positive samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer steps are needed and each step takes less time, enabling fast MD simulations. We study the computational performance of MD simulations of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.

  9. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  10. Simultaneous quantitation of delamanid (OPC-67683) and its eight metabolites in human plasma using UHPLC-MS/MS.

    PubMed

    Meng, Min; Smith, Benjamin; Johnston, Brad; Carter, Spencer; Brisson, Jerry; Roth, Sharin E

    2015-10-01

    Delamanid (OPC-67683) is a novel nitro-dihydroimidazo-oxazole derivative being developed by Otsuka Pharmaceutical Co., Ltd., Japan (referred to as Otsuka hereafter) for the treatment of tuberculosis (TB). An ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method has been developed and validated for the determination of OPC-67683 and its eight metabolites, DM-6704, DM-6705, DM-6706, DM-6717, DM-6718, DM-6720, DM-6721 and DM-6722, in human plasma to support regulated clinical development. During method development, several technical challenges such as poor chromatography, separation of structural isomers, conversion of the analytes, instability in matrix, and long cycle time were encountered and overcome. Protein precipitation extraction (PPE) was used to extract plasma samples (50 μL), and the resulting extracts were analyzed using reversed-phase UHPLC-MS/MS with electrospray ionization (ESI) and selected reaction monitoring (SRM). The method was fully validated over the calibration range of 1.00-500 ng/mL for all nine analytes with linear regression and 1/x² weighting, according to regulatory guidance for bioanalysis. Based on three inter-day precision and accuracy runs, the between-run relative standard deviation (RSD) for all nine analytes varied from 0.0 to 11.9%, and accuracy ranged from 92.7% to 102.5% of nominal at all quality control (QC) concentrations, including the lower limit of quantitation (LLOQ) QC at 1.00 ng/mL. The extraction recoveries of OPC-67683 and its eight metabolites were above 95%. Various short-term and long-term solution and matrix stabilities were established, including the stability of OPC-67683 and its eight metabolites in human plasma for 708 days at -70°C. Although this method has been used to support regulated clinical studies during the last decade and over ten thousand samples have been analyzed, this is the first time that the method development process and validation data have been published. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
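
    To make the linear-algebra view concrete, here is a minimal sketch (ours, not from the paper) of breadth-first search expressed as repeated sparse matrix-vector products, in the spirit of GraphBLAS:

      import numpy as np
      from scipy.sparse import csr_matrix

      # Adjacency matrix of a 4-node directed graph: 0->1, 0->2, 1->3, 2->3
      A = csr_matrix(np.array([[0, 1, 1, 0],
                               [0, 0, 0, 1],
                               [0, 0, 0, 1],
                               [0, 0, 0, 0]]))

      frontier = np.array([1, 0, 0, 0])        # start BFS at node 0
      visited = frontier.copy()
      while frontier.any():
          # one BFS level: neighbors of the frontier, masked by unvisited nodes
          frontier = (A.T @ frontier > 0).astype(int) & (1 - visited)
          visited |= frontier
      print(visited)                           # -> [1 1 1 1]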

  12. Quantitative Profiling of Endogenous Fat-Soluble Vitamins and Carotenoids in Human Plasma Using an Improved UHPSFC-ESI-MS Interface.

    PubMed

    Petruzziello, Filomena; Grand-Guillaume Perrenoud, Alexandre; Thorimbert, Anita; Fogwill, Michael; Rezzi, Serge

    2017-07-18

    Analytical solutions enabling the quantification of circulating levels of liposoluble micronutrients such as vitamins and carotenoids are currently limited to either a single analyte or a reduced panel of analytes. The need for multiple approaches hampers the investigation of biological variability across large numbers of samples in a time- and cost-efficient manner. With the goal of developing high-throughput and robust quantitative methods for the profiling of micronutrients in human plasma, we introduce a novel, validated workflow for the determination of 14 fat-soluble vitamins and carotenoids in a single run. Automated supported liquid extraction was optimized and implemented to process 48 samples in parallel in 1 h, and the analytes were measured using ultrahigh-performance supercritical fluid chromatography coupled to tandem mass spectrometry in less than 8 min. Improved mass spectrometry interface hardware was built to minimize the post-decompression volume and to allow better control of the chromatographic effluent density on its route toward and into the ion source. In addition, a specific make-up solvent condition was developed to ensure the solubility of both analytes and matrix constituents after mobile-phase decompression. The optimized interface resulted in improved spray-plume stability and maintained solubility of matrix compounds, leading to enhanced hyphenation robustness while ensuring suitable analytical repeatability and improved detection sensitivity. The overall methodology gives recoveries within 85-115%, with within-day and between-day coefficients of variation of 2% and 14%, respectively.

  13. A QC approach to the determination of day-to-day reproducibility and robustness of LC-MS methods for global metabolite profiling in metabonomics/metabolomics.

    PubMed

    Gika, Helen G; Theodoridis, Georgios A; Earll, Mark; Wilson, Ian D

    2012-09-01

    An approach to the determination of the day-to-day analytical robustness of LC-MS-based methods for global metabolic profiling, using a pooled QC sample, is presented for the evaluation of metabonomic/metabolomic data. A set of 60 urine samples was repeatedly analyzed on five different days and the day-to-day reproducibility of the data was determined. Multivariate statistical analysis was performed with the aim of evaluating variability, and selected peaks were assessed and validated in terms of retention time stability, mass accuracy and intensity. The methodology enables the repeatability/reproducibility of extended analytical runs in large-scale studies to be determined, allowing the elimination of analytical (as opposed to biological) variability in order to discover true patterns and correlations within the data. The day-to-day variability of the data revealed by this process suggested that, for this particular system, 3 days of continuous operation was possible without the need for maintenance and cleaning. Variation was generally based on signal intensity changes over the 7-day period of the study, and was mainly a result of source contamination.
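
    A minimal sketch of the kind of QC check this approach implies (the peak-table layout and acceptance threshold are assumptions): compute per-feature relative standard deviations across repeated pooled-QC injections and keep the features that pass.

      import numpy as np

      # rows = repeated pooled-QC injections, columns = metabolite features
      qc = np.array([[1000.0, 52.0, 310.0],
                     [1040.0, 48.0, 305.0],
                     [ 980.0, 55.0, 295.0],
                     [1010.0, 43.0, 300.0]])

      rsd = qc.std(axis=0, ddof=1) / qc.mean(axis=0) * 100
      keep = rsd < 20.0            # assumed acceptance threshold, in percent
      print(np.round(rsd, 1), keep)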

  14. Coupling photochemical reaction detection based on singlet oxygen sensitization to capillary electrochromatography

    PubMed

    Dickson; Odom; Ducheneaux; Murray; Milofsky

    2000-07-15

    Despite the impressive separation efficiency afforded by capillary electrochromatography (CEC), the detection of UV-absorbing compounds following separation in capillary dimensions remains limited by the short path length (5-75 μm) through the column. Moreover, analytes that are poor chromophores present an additional challenge for sensitive detection in CEC. This paper illustrates a new photochemical reaction detection scheme for CEC that takes advantage of the catalytic nature of type II photooxidation reactions. The sensitive detection scheme is selective toward molecules capable of photosensitizing the formation of singlet molecular oxygen (¹O₂). Following separation by CEC, UV-absorbing analytes promote ground-state ³O₂ to an excited state (¹O₂), which reacts rapidly with tert-butyl-3,4,5-trimethylpyrrolecarboxylate added to the running buffer. Detection is based on the loss of the pyrrole. The reaction is catalytic in nature, since one analyte molecule may absorb light many times, producing large amounts of ¹O₂. The detection limit for 9-acetylanthracene, following separation by CEC, is approximately 6 × 10⁻⁹ M (S/N = 3). Optimization of the factors affecting the S/N for four model compounds is discussed.

  15. Green analytical chemistry introduction to chloropropanols determination at no economic and analytical performance costs?

    PubMed

    Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek

    2016-01-15

    In this study we rank analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces using the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios - metrological, economic and environmental - by applying different weights to the decision-making criteria. All three scenarios indicate the capillary electrophoresis-based procedure as the most preferable, although the details of the ranking results differ among them. A second run of rankings was done for scenarios that include only metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic criteria, while there is no correlation with the metrological ones. This implies that green analytical chemistry can be brought into laboratories without analytical-performance costs, and that it is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Surface hopping, transition state theory and decoherence. I. Scattering theory and time-reversibility

    NASA Astrophysics Data System (ADS)

    Jain, Amber; Herman, Michael F.; Ouyang, Wenjun; Subotnik, Joseph E.

    2015-10-01

    We provide an in-depth investigation of transmission coefficients as computed using the augmented-fewest switches surface hopping algorithm in the low energy regime. Empirically, microscopic reversibility is shown to hold approximately. Furthermore, we show that, in some circumstances, including decoherence on top of surface hopping calculations can help recover (as opposed to destroy) oscillations in the transmission coefficient as a function of energy; these oscillations can be studied analytically with semiclassical scattering theory. Finally, in the spirit of transition state theory, we also show that transmission coefficients can be calculated rather accurately starting from the curve crossing point and running trajectories forwards and backwards.

  17. The frequency of very young galaxies in the local Universe: I. A test for galaxy formation and cosmological models

    NASA Astrophysics Data System (ADS)

    Tweed, D. P.; Mamon, G. A.; Thuan, T. X.; Cattaneo, A.; Dekel, A.; Menci, N.; Calura, F.; Silk, J.

    2018-06-01

    In the local Universe, the existence of very young galaxies (VYGs), having formed at least half their stellar mass in the last 1 Gyr, is debated. We predict the present-day fraction of VYGs among central galaxies as a function of galaxy stellar mass. For this, we apply four analytical models of galaxy formation to high-mass-resolution Monte Carlo halo merger trees (MCHMTs): in three models the ratio of stellar to halo mass is a function of halo mass and redshift, while in the fourth the mass growth rate is. Galaxy merging is delayed until orbital decay by dynamical friction. With starbursts associated with halo mergers, our models predict typically 1 per cent of VYGs up to galaxy masses of m = 10¹⁰ M⊙, falling rapidly at higher masses, and VYGs are usually associated with recent major mergers of their haloes. Without these starbursts, two of the models have VYG fractions reduced by 1 or 2 dex at low or intermediate stellar masses, and VYGs are rarely associated with major halo mergers. In comparison, the state-of-the-art semi-analytical model (SAM) of Henriques et al. produces only 0.01 per cent of VYGs at intermediate masses. Finally, the Menci et al. SAM run on MCHMTs with Warm Dark Matter cosmology generates 10 times more VYGs at m < 10⁸ M⊙ than when run with Cold Dark Matter. The wide range in these VYG fractions illustrates the usefulness of VYGs for constraining both galaxy formation and cosmological models.

  18. Customisation of the exome data analysis pipeline using a combinatorial approach.

    PubMed

    Pattnaik, Swetansu; Vaidyanathan, Srividya; Pooja, Durgad G; Deepak, Sa; Panda, Binay

    2012-01-01

    The advent of next-generation sequencing (NGS) technologies has revolutionised the way biologists produce, analyse and interpret data. Although NGS platforms provide a cost-effective way to discover genome-wide variants from a single experiment, variants discovered by NGS need follow-up validation due to the high error rates associated with various sequencing chemistries. Recently, whole-exome sequencing has been proposed as an affordable option compared to whole-genome runs, but it still requires follow-up validation of all the novel exomic variants. Customarily, a consensus approach is used to overcome the systematic errors inherent to the sequencing technology, alignment and post-alignment variant-detection algorithms. However, this approach warrants the use of multiple sequencing chemistries, multiple alignment tools and multiple variant callers, which may not be viable in terms of time and money for individual investigators with limited informatics know-how. Biologists often lack the requisite training to deal with the huge amount of data produced by NGS runs and face difficulty in choosing from the list of freely available analytical tools for NGS data analysis. Hence, there is a need to customise the NGS data-analysis pipeline to preferentially retain true variants by minimising the incidence of false positives, and to make the choice of the right analytical tools easier. To this end, we have sampled different freely available tools used at the alignment and post-alignment stages, suggesting the most suitable combination as determined by a simple framework of pre-existing metrics to create significant datasets.

  19. Determination of aliskiren in human serum quantities by HPLC-tandem mass spectrometry appropriate for pediatric trials.

    PubMed

    Burckhardt, Bjoern B; Ramusovic, Sergej; Tins, Jutta; Laeer, Stephanie

    2013-04-01

    The orally active direct renin inhibitor aliskiren is approved for the treatment of essential hypertension in adults. Analytical methods utilized in clinical studies of efficacy and safety have not been fully described in the literature and require large sample volumes, ranging from 200 to 700 μL, rendering them unsuitable particularly for pediatric applications. In the assay presented, only 100 μL of serum is needed for mixed-mode solid-phase extraction. The chromatographic separation was performed on XSelect™ C18 CSH columns with a mobile phase consisting of methanol-water-formic acid (75:25:0.005, v/v/v) at a flow rate of 0.4 mL/min. Running in positive electrospray ionization and multiple reaction monitoring mode, the mass spectrometer was set to analyze the precursor ion at m/z 552.2 [M + H]⁺ to the product ion at m/z 436.2 during a total run time of 5 min. The method covers a linear calibration range of 0.146-1200 ng/mL. Intra-run and inter-run precisions were 0.4-7.2% and 0.6-12.9%, respectively. Mean recovery was at least 89%. Selectivity, accuracy and stability results comply with current European Medicines Agency and Food and Drug Administration guidelines. This successfully validated LC-MS/MS method, with a wide linear calibration range and requiring only small serum amounts, is suitable for pharmacokinetic investigations of aliskiren in pediatrics, adults and the elderly. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Optimization of detection conditions and single-laboratory validation of a multiresidue method for the determination of 135 pesticides and 25 organic pollutants in grapes and wine by gas chromatography time-of-flight mass spectrometry.

    PubMed

    Dasgupta, Soma; Banerjee, Kaushik; Dhumal, Kondiba N; Adsule, Pandurang G

    2011-01-01

    This paper describes single-laboratory validation of a multiresidue method for the determination of 135 pesticides, 12 dioxin-like polychlorinated biphenyls, 12 polyaromatic hydrocarbons, and bisphenol A in grapes and wine by GC/time-of-flight MS in a total run time of 48 min. The method is based on extraction with ethyl acetate in a sample-to-solvent ratio of 1:1, followed by selective dispersive SPE cleanup for grapes and wine. The GC/MS conditions were optimized for the chromatographic separation and to achieve highest S/N for all 160 target analytes, including the temperature-sensitive compounds, like captan and captafol, that are prone to degradation during analysis. An average recovery of 80-120% with RSD < 10% could be attained for all analytes except 17, for which the average recoveries were 70-80%. LOQ ranged within 10-50 ng/g, with < 25% expanded uncertainties, for 155 compounds in grapes and 151 in wine. In the incurred grape and wine samples, the residues of buprofezin, chlorpyriphos, metalaxyl, and myclobutanil were detected, with an RSD of < 5% (n = 6); the results were statistically similar to previously reported validated methods.

  1. Analytic barrage attack model. Final report, January 1986-January 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.

    An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
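
    The FORTRAN 77 source itself is not reproduced in the record, but the quantity the analytic model approximates can be sketched numerically. Below is a hedged Monte Carlo sketch, in Python, of barrage probability of damage with a cookie-cutter damage radius, Gaussian aiming error expressed through the CEP, and weapon reliability; all parameter values are hypothetical.

        import math, random

        def barrage_pd(grid, spacing, R, cep, reliability, trials=20000):
            sigma = cep / 1.1774  # CEP = 1.1774 * sigma for a circular normal
            aims = [(i * spacing, j * spacing)
                    for i in range(grid) for j in range(grid)]
            span = (grid - 1) * spacing
            hits = 0
            for _ in range(trials):
                # target uniformly distributed over the barrage pattern
                tx, ty = random.uniform(0, span), random.uniform(0, span)
                for ax, ay in aims:
                    if random.random() > reliability:
                        continue                      # weapon failed
                    ix = ax + random.gauss(0, sigma)  # impact with aiming error
                    iy = ay + random.gauss(0, sigma)
                    if math.hypot(ix - tx, iy - ty) <= R:  # cookie-cutter damage
                        hits += 1
                        break
            return hits / trials

        print(barrage_pd(grid=4, spacing=2.0, R=1.2, cep=0.5, reliability=0.8))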

  2. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: Runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: Runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: Manages strategies for distributing the data over nodes. Data Manager: Handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: A kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a python script or application, or a javascript-based web application. Client packages in python or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.
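
    As the abstract notes, the API can be driven from a python script. The sketch below shows a generic OGC WPS-style Execute request issued with the requests library; the endpoint URL, process identifier and input names are hypothetical placeholders, not actual CDAS deployment values.

        import requests  # third-party HTTP client

        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "timeseries.average",  # hypothetical kernel name
            "datainputs": "variable=tas;domain=global;t0=1980-01;t1=2010-12",
        }
        response = requests.get("https://cds.example.org/wps",
                                params=params, timeout=60)
        print(response.status_code)
        print(response.text[:500])  # WPS responses are XML status/result documents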

  3. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    NASA Technical Reports Server (NTRS)

    Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  4. A computer-based maintenance reminder and record-keeping system for clinical laboratories.

    PubMed

    Roberts, B I; Mathews, C L; Walton, C J; Frazier, G

    1982-09-01

    "Maintenance" is all the activity an organization devotes to keeping instruments within performance specifications to assure accurate and precise operation. The increasing use of complex analytical instruments as "workhorses" in clinical laboratories requires more maintenance awareness by laboratory personnel. Record-keeping systems that document maintenance completion and that should prompt the continued performance of maintenance tasks have not kept up with instrumentation development. We report here a computer-based record-keeping and reminder system that lists weekly the maintenance items due for each work station in the laboratory, including the time required to complete each item. Written in BASIC, the system uses a DATABOSS data base management system running on a time-shared Digital Equipment Corporation PDP 11/60 computer with a RSTS V 7.0 operating system.

  5. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or by using subjective experience - analytic and non-analytic processes, respectively. The current studies disentangled the contributions of each process. In one condition, learners studied paired-associates, made a memory prediction, completed a short run of retrieval practice and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggested non-analytic processes play a key role in participants reducing their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. A generalized chemistry version of SPARK

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.

    1988-01-01

    An extension of the reacting H2-air computer code SPARK is presented, which enables the code to be used on any reacting flow problem. Routines are developed that calculate, in a general fashion, the reaction rates and chemical Jacobians of any reacting system. In addition, an equilibrium routine is added so that the code has frozen, finite rate, and equilibrium capabilities. The reaction rate for each species is determined from the law of mass action using Arrhenius expressions for the rate constants. The Jacobian routines are determined by numerically or analytically differentiating the law of mass action for each species. The equilibrium routine is based on a Gibbs free energy minimization routine. The routines are written in FORTRAN 77, with special consideration given to vectorization. Run times for the generalized routines are generally 20 percent longer than those of reaction-specific routines. The numerical efficiency of the generalized analytical Jacobian, however, is nearly 300 percent better than that of the reaction-specific numerical Jacobian used in SPARK.
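
    A minimal Python sketch of the generalized approach described above: species production rates from the law of mass action with (modified) Arrhenius rate constants. The reaction step and all coefficients are illustrative placeholders, not SPARK data.

        import math

        R_UNIV = 8.314  # J/(mol K)

        def arrhenius(A, b, Ea, T):
            """Modified Arrhenius rate constant k = A * T**b * exp(-Ea/(R*T))."""
            return A * T**b * math.exp(-Ea / (R_UNIV * T))

        def production_rates(reactants, products, k, conc):
            """Law of mass action for one irreversible step.
            reactants/products: dicts of species -> stoichiometric coefficient."""
            rate = k
            for sp, nu in reactants.items():
                rate *= conc[sp] ** nu
            wdot = {sp: 0.0 for sp in conc}
            for sp, nu in reactants.items():
                wdot[sp] -= nu * rate   # reactants consumed
            for sp, nu in products.items():
                wdot[sp] += nu * rate   # products formed
            return wdot

        conc = {"H2": 1.0e-3, "O2": 5.0e-4, "OH": 0.0}     # illustrative state
        k = arrhenius(A=1.7e13, b=0.0, Ea=2.0e5, T=1500.0)  # placeholder values
        print(production_rates({"H2": 1, "O2": 1}, {"OH": 2}, k, conc))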

  7. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
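
    A hedged sketch of the shooting idea behind the second algorithm: guess the unknown initial slope, integrate forward, and correct the guess until the terminal boundary condition is met. The boundary-value problem y'' = -y, y(0) = 0, y(1) = 1 is purely illustrative and far simpler than a trajectory optimization problem.

        def integrate(s, n=1000, T=1.0):
            """Forward Euler on y'' = -y with y(0) = 0, y'(0) = s; returns y(T)."""
            h = T / n
            y, v = 0.0, s
            for _ in range(n):
                y, v = y + h * v, v - h * y
            return y

        def shoot(target=1.0, s0=0.0, s1=2.0, tol=1e-10):
            """Secant iteration on the terminal mismatch y(T) - target."""
            f0, f1 = integrate(s0) - target, integrate(s1) - target
            while abs(f1) > tol:
                s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)  # secant update
                f0, f1 = f1, integrate(s1) - target
            return s1

        print(shoot())  # exact answer is 1/sin(1) ~= 1.1884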

  8. The properties of and analytical methods for detection of LiOH and Li2CO3

    NASA Technical Reports Server (NTRS)

    Selvaduray, Guna

    1991-01-01

    Lithium hydroxide (LiOH) is used as a CO2 absorbent in the Shuttle Extravehicular Mobility Unit (EMU) Portable Life Support System (PLSS). The first objective was to survey parameters that may be used to indicate conversion of LiOH to Li2CO3, and compile a list of all possible properties, including physical, chemical, structural, and electrical, that may serve to indicate the occurrence of reaction. These properties were compiled for the reactant (LiOH), the intermediate monohydrate compound (LiOH.H2O), and the final product (Li2CO3). The second objective was to survey measurement and analytical techniques which may be used in conjunction with each of the properties identified above, to determine the extent of conversion of LiOH to Li2CO3. Both real-time and post-run techniques were of interest. The techniques were also evaluated in terms of complexity, technology readiness, materials/equipment availability, and cost, where possible.

  9. Clonazepam quantification in human plasma by high-performance liquid chromatography coupled with electrospray tandem mass spectrometry in a bioequivalence study.

    PubMed

    Cavedal, Luiz E; Mendes, Fabiana D; Domingues, Claudia C; Patni, Anil K; Monif, Tausif; Reyar, Simrit; Pereira, Alberto Dos S; Mendes, Gustavo D; De Nucci, Gilberto

    2007-01-01

    A rapid, sensitive and specific method for quantifying clonazepam in human plasma using diazepam as the internal standard (IS) is described. The analyte and the IS were extracted from plasma by liquid-liquid extraction using a hexane/diethylether (20 : 80, v/v) solution. The extracts were analysed by high-performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS-MS). Chromatography was performed on a Jones Genesis C8 4 microm analytical column (100 x 2.1 mm i.d.). The method had a chromatographic run time of 3.0 min and a linear calibration curve over the range 0.5-50 ng/ml (r2 > 0.9965). The limit of quantification was 0.5 ng/ml. This HPLC/MS/MS procedure was used to assess the bioequivalence of two clonazepam 2 mg tablet formulations (clonazepam test formulation from Ranbaxy Laboratories Ltd and Rivotril from Roche Laboratórios Ltda as standard reference formulation). Copyright 2006 John Wiley & Sons, Ltd.
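
    A minimal sketch of the calibration step behind figures such as "linear over 0.5-50 ng/ml (r2 > 0.9965)": an ordinary least-squares line through spiked calibrators, with r2 computed from the residuals. The response values are invented for illustration.

        conc = [0.5, 1, 2, 5, 10, 20, 50]                       # ng/ml
        resp = [0.021, 0.044, 0.082, 0.215, 0.42, 0.86, 2.11]   # peak-area ratios

        n = len(conc)
        mx, my = sum(conc) / n, sum(resp) / n
        sxx = sum((x - mx) ** 2 for x in conc)
        sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((y - (slope * x + intercept)) ** 2
                     for x, y in zip(conc, resp))
        ss_tot = sum((y - my) ** 2 for y in resp)
        r2 = 1 - ss_res / ss_tot

        print(f"slope={slope:.5f}, intercept={intercept:.5f}, r2={r2:.5f}")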

  10. Renormalization Group scale-setting in astrophysical systems

    NASA Astrophysics Data System (ADS)

    Domazet, Silvije; Štefančić, Hrvoje

    2011-09-01

    A more general scale-setting procedure for General Relativity with Renormalization Group corrections is proposed. Theoretical aspects of the scale-setting procedure and the interpretation of the Renormalization Group running scale are discussed. The procedure is elaborated for several highly symmetric systems with matter in the form of an ideal fluid and for two models of running of the Newton coupling and the cosmological term. For a static spherically symmetric system with the matter obeying the polytropic equation of state the running scale-setting is performed analytically. The obtained result for the running scale matches the Ansatz introduced in a recent paper by Rodrigues, Letelier and Shapiro which provides an excellent explanation of rotation curves for a number of galaxies. A systematic explanation of the galaxy rotation curves using the scale-setting procedure introduced in this Letter is identified as an important future goal.

  11. Real-Time and Retrospective Health-Analytics-as-a-Service: A Novel Framework.

    PubMed

    Khazaei, Hamzeh; McGregor, Carolyn; Eklund, J Mikael; El-Khatib, Khalil

    2015-11-18

    Analytics-as-a-service (AaaS) is one of the latest provisions emerging from the cloud services family. Utilizing this paradigm of computing in health informatics will benefit patients, care providers, and governments significantly. This work is a novel approach to realize health analytics as services in critical care units in particular. To design, implement, evaluate, and deploy an extendable big-data compatible framework for health-analytics-as-a-service that offers both real-time and retrospective analysis. We present a novel framework that can realize health data analytics-as-a-service. The framework is flexible and configurable for different scenarios by utilizing the latest technologies and best practices for data acquisition, transformation, storage, analytics, knowledge extraction, and visualization. We have instantiated the proposed method through the Artemis project, that is, a customization of the framework for live monitoring and retrospective research on premature babies and ill term infants in neonatal intensive care units (NICUs). We demonstrated the proposed framework in this paper for monitoring NICUs and refer to it as the Artemis-In-Cloud (Artemis-IC) project. A pilot of Artemis has been deployed in the SickKids hospital NICU. By feeding the output of this pilot setup into an analytical model, we predict important performance measures for the final deployment of Artemis-IC. This process can be carried out for other hospitals following the same steps with minimal effort. SickKids' NICU has 36 beds and can classify the patients generally into 5 different types including surgical and premature babies. The arrival rate is estimated as 4.5 patients per day, and the average length of stay was calculated as 16 days. The mean number of medical monitoring algorithms per patient is 9, which renders 311 live algorithms for the whole NICU running on the framework. The memory and computation power required for Artemis-IC to handle the SickKids NICU will be 32 GB and 16 CPU cores, respectively. The required amount of storage was estimated as 8.6 TB per year. On average, there will be 34.9 patients in the SickKids NICU. Currently, 46% of patients cannot get admitted to SickKids NICU due to lack of resources. By increasing the capacity to 90 beds, all patients can be accommodated. For such a provisioning, Artemis-IC will need 16 TB of storage per year, 55 GB of memory, and 28 CPU cores. Our contributions in this work relate to a cloud architecture for the analysis of physiological data for clinical decision support for tertiary care use. We demonstrate how to size the equipment needed in the cloud for that architecture based on a very realistic assessment of the patient characteristics and the associated clinical decision support algorithms that would be required to run for those patients. We show the principle of how this could be performed and furthermore that it can be replicated for any critical care setting within a tertiary institution.
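
    The sizing logic quoted above can be reproduced as back-of-envelope arithmetic with Little's law (L = lambda * W), using only figures from the abstract; the final resource figures come from the authors' analytical model, so the sketch below is illustrative, not a re-derivation.

        arrival_rate = 4.5      # patients/day (from the abstract)
        length_of_stay = 16.0   # days
        beds = 36

        offered_load = arrival_rate * length_of_stay   # L = lambda * W = 72
        print(f"offered load: {offered_load:.1f} patients vs {beds} beds")
        # 72 > 36 beds, so a large share of arrivals must be turned away
        # (the abstract reports 46% blocked, average census 34.9 patients).

        census = 34.9
        algos_per_patient = 9
        print(f"live algorithms: ~{census * algos_per_patient:.0f}")
        # ~314, consistent with the 311 quoted in the abstract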

  12. Real-Time and Retrospective Health-Analytics-as-a-Service: A Novel Framework

    PubMed Central

    McGregor, Carolyn; Eklund, J Mikael; El-Khatib, Khalil

    2015-01-01

    Background Analytics-as-a-service (AaaS) is one of the latest provisions emerging from the cloud services family. Utilizing this paradigm of computing in health informatics will benefit patients, care providers, and governments significantly. This work is a novel approach to realize health analytics as services in critical care units in particular. Objective To design, implement, evaluate, and deploy an extendable big-data compatible framework for health-analytics-as-a-service that offers both real-time and retrospective analysis. Methods We present a novel framework that can realize health data analytics-as-a-service. The framework is flexible and configurable for different scenarios by utilizing the latest technologies and best practices for data acquisition, transformation, storage, analytics, knowledge extraction, and visualization. We have instantiated the proposed method through the Artemis project, that is, a customization of the framework for live monitoring and retrospective research on premature babies and ill term infants in neonatal intensive care units (NICUs). Results We demonstrated the proposed framework in this paper for monitoring NICUs and refer to it as the Artemis-In-Cloud (Artemis-IC) project. A pilot of Artemis has been deployed in the SickKids hospital NICU. By feeding the output of this pilot setup into an analytical model, we predict important performance measures for the final deployment of Artemis-IC. This process can be carried out for other hospitals following the same steps with minimal effort. SickKids’ NICU has 36 beds and can classify the patients generally into 5 different types including surgical and premature babies. The arrival rate is estimated as 4.5 patients per day, and the average length of stay was calculated as 16 days. The mean number of medical monitoring algorithms per patient is 9, which renders 311 live algorithms for the whole NICU running on the framework. The memory and computation power required for Artemis-IC to handle the SickKids NICU will be 32 GB and 16 CPU cores, respectively. The required amount of storage was estimated as 8.6 TB per year. On average, there will be 34.9 patients in the SickKids NICU. Currently, 46% of patients cannot get admitted to SickKids NICU due to lack of resources. By increasing the capacity to 90 beds, all patients can be accommodated. For such a provisioning, Artemis-IC will need 16 TB of storage per year, 55 GB of memory, and 28 CPU cores. Conclusions Our contributions in this work relate to a cloud architecture for the analysis of physiological data for clinical decision support for tertiary care use. We demonstrate how to size the equipment needed in the cloud for that architecture based on a very realistic assessment of the patient characteristics and the associated clinical decision support algorithms that would be required to run for those patients. We show the principle of how this could be performed and furthermore that it can be replicated for any critical care setting within a tertiary institution. PMID:26582268

  13. Development and validation of a multiplex real-time PCR method to simultaneously detect 47 targets for the identification of genetically modified organisms.

    PubMed

    Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong

    2013-08-01

    Considering the increase of the total cultivated land area dedicated to genetically modified organisms (GMO), the consumers' perception toward GMO and the need to comply with various local GMO legislations, efficient and accurate analytical methods are needed for their detection and identification. Considered as the gold standard for GMO analysis, the real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on simultaneous 24 multiplex RTi-PCR running on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets on seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was also included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost efficient, this multiplex approach fits the purpose of GMO testing laboratories.

  14. Nevirapine quantification in human plasma by high-performance liquid chromatography coupled to electrospray tandem mass spectrometry. Application to bioequivalence study.

    PubMed

    Laurito, Tiago L; Santagada, Vincenzo; Caliendo, Giuseppe; Oliveira, Celso H; Barrientos-Astigarraga, Rafael E; De Nucci, Gilberto

    2002-04-01

    A rapid, sensitive and specific method to quantify nevirapine in human plasma using dibenzepine as the internal standard (IS) was developed and validated. The method employed a liquid-liquid extraction. The analyte and the IS were chromatographed on a C(18) analytical column (150 x 4.6 mm i.d., 4 microm) and analyzed by tandem mass spectrometry in the multiple reaction monitoring mode. The method had a chromatographic run time of 5.0 min and a linear calibration curve over the range 10-5000 ng ml(-1) (r(2) > 0.9970). The between-run precision, based on the relative standard deviation for replicate quality controls, was 1.3% (30 ng ml(-1)), 2.8% (300 ng ml(-1)) and 3.6% (3000 ng ml(-1)). The between-run accuracy was 4.0, 7.0 and 6.2% for the above-mentioned concentrations, respectively. This method was employed in a bioequivalence study of two nevirapine tablet formulations (Nevirapina from Far-Manguinhos, Brazil, as a test formulation, and Viramune from Boehringer Ingelheim do Brasil Química e Farmacêutica, as a reference formulation) in 25 healthy volunteers of both sexes who received a single 200 mg dose of each formulation. The study was conducted using an open, randomized, two-period crossover design with a 3 week washout interval. The 90% confidence interval (CI) of the individual ratio geometric mean for Nevirapina/Viramune was 96.4-104.5% for AUC((0-last)), 91.4-105.1% for AUC((0-infinity)) and 95.3-111.6% for C(max) (AUC = area under the curve; C(max) = peak plasma concentration). Since the 90% CIs for AUC((0-last)), AUC((0-infinity)) and C(max) were all included in the 80-125% interval proposed by the US Food and Drug Administration, Nevirapina was considered bioequivalent to Viramune according to both the rate and extent of absorption. Copyright 2002 John Wiley & Sons, Ltd.
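
    A hedged sketch of the acceptance criterion applied in such studies: the 90% confidence interval for the test/reference geometric mean ratio, computed on log-transformed data, must lie within 80-125%. A paired-difference calculation is shown for brevity (the study itself would use a crossover ANOVA); the per-subject AUC values are invented.

        import math
        from statistics import mean, stdev

        auc_test = [4820, 5110, 4650, 5400, 4990, 5230]
        auc_ref  = [4900, 4980, 4800, 5350, 5100, 5050]

        # within-subject log differences
        d = [math.log(t) - math.log(r) for t, r in zip(auc_test, auc_ref)]
        n = len(d)
        se = stdev(d) / math.sqrt(n)
        t90 = 2.015  # t quantile, 0.95 one-sided, n - 1 = 5 degrees of freedom

        lo = math.exp(mean(d) - t90 * se)
        hi = math.exp(mean(d) + t90 * se)
        print(f"90% CI for GMR: {100*lo:.1f}%-{100*hi:.1f}% "
              f"(accept if within 80-125%)")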

  15. Colloidal Mechanisms of Gold Nanoparticle Loss in Asymmetric Flow Field-Flow Fractionation.

    PubMed

    Jochem, Aljosha-Rakim; Ankah, Genesis Ngwa; Meyer, Lars-Arne; Elsenberg, Stephan; Johann, Christoph; Kraus, Tobias

    2016-10-07

    Flow field-flow fractionation is a powerful method for the analysis of nanoparticle size distributions, but its widespread use has been hampered by large analyte losses, especially of metal nanoparticles. Here, we report on the colloidal mechanisms underlying these losses. We studied gold nanoparticles (AuNPs) during asymmetrical flow field-flow fractionation (AF4) by systematic variation of the particle properties and the eluent composition. Recoveries of AuNPs (core diameter 12 nm) stabilized by citrate or polyethylene glycol (PEG) at different ionic strengths were determined. We used online UV-vis detection and off-line elemental analysis to follow particle losses during full analysis runs, runs without cross-flow, and runs with parts of the instrument bypassed. The combination allowed us to calculate relative and absolute analyte losses at different stages of the analytical protocol. We found different loss mechanisms depending on the ligand. Citrate-stabilized particles degraded during analysis and suffered large losses (up to 74%). PEG-stabilized particles had smaller relative losses at moderate ionic strengths (1-20%) that depended on PEG length. Long PEGs at higher ionic strengths (≥5 mM) caused particle loss due to bridging adsorption at the membrane. Bulk agglomeration was not a relevant loss mechanism at low ionic strengths ≤5 mM for any of the studied particles. An unexpectedly large fraction of particles was lost at tubing and other internal surfaces. We propose that the colloidal mechanisms observed here are relevant loss mechanisms in many particle analysis protocols and discuss strategies to avoid them.

  16. Determination of short chain carboxylic acids in vegetable oils and fats using ion exclusion chromatography electrospray ionization mass spectrometry.

    PubMed

    Viidanoja, Jyrki

    2015-02-27

    A new method for the quantification of short chain C1-C6 carboxylic acids in vegetable oils and fats employing Liquid Chromatography Mass Spectrometry (LC-MS) has been developed. The method requires minor sample preparation and applies non-conventional Electrospray Ionization (ESI) liquid phase chemistry. Samples are first dissolved in chloroform and then extracted using water that has been spiked with stable isotope labeled internal standards, which are used for signal normalization and absolute quantification of selected acids. The analytes are separated using Ion Exclusion Chromatography (IEC) and detected with Electrospray Ionization Mass Spectrometry (ESI-MS) as deprotonated molecules. Prior to ionization, the eluent, which contains hydrochloric acid, is modified post-column to ensure good ionization efficiency of the analytes. The averaged within-run and between-run precisions were generally lower than 8%. The accuracy was between 85 and 115% for most of the analytes. The Lower Limit of Quantification (LLOQ) ranged from 0.006 to 7 mg/kg. It is shown that this method offers good selectivity in cases where UV detection fails to produce reliable results. Copyright © 2015 Elsevier B.V. All rights reserved.
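
    A minimal sketch of the stable-isotope internal-standard quantification described above: the analyte signal is normalized to a labeled internal standard spiked at known concentration and converted to an amount through a response factor. All numbers are illustrative.

        def quantify(area_analyte, area_is, conc_is, response_factor=1.0):
            """Concentration = (analyte area / IS area) * IS concentration / RF."""
            return (area_analyte / area_is) * conc_is / response_factor

        # e.g. a formic acid peak vs its labeled analogue spiked at 2.0 mg/kg
        print(quantify(area_analyte=15400, area_is=12800, conc_is=2.0))
        # -> ~2.41 mg/kg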

  17. Development of liquid chromatography high resolution mass spectrometry strategies for the screening of complex organic matter: Application to astrophysical simulated materials.

    PubMed

    Eddhif, Balkis; Allavena, Audrey; Liu, Sylvie; Ribette, Thomas; Abou Mrad, Ninette; Chiavassa, Thierry; d'Hendecourt, Louis Le Sergeant; Sternberg, Robert; Danger, Gregoire; Geffroy-Rodier, Claude; Poinot, Pauline

    2018-03-01

    The present work aims at developing two LC-HRMS setups for the screening of organic matter in astrophysical samples. Their analytical development was demonstrated on a 100-µg residue coming from the photo-thermo chemical processing of a cometary ice analog produced in the laboratory. The first 1D-LC-HRMS setup combines a serially coupled columns configuration with HRMS detection. It allowed discrimination among different chemical families (amino acids, sugars, nucleobases and oligopeptides) in a single chromatographic run, without either a priori acid hydrolysis or chemical derivatisation. The second setup is a dual-LC configuration which connects a series of trapping columns with analytical reverse-phase columns. By coupling on-line these two distinct LC units with HRMS detection, high mass compounds (350

  18. Evaluation of FUS-2000 urine analyzer: analytical properties and particle recognition.

    PubMed

    Beňovská, Miroslava; Wiewiorka, Ondřej; Pinkavová, Jana

    This study evaluates the performance of the microscopic part of the hybrid analyzer FUS-2000 (Dirui Industrial Co., Changchun, China), its analytical properties and particle recognition. The evaluation of trueness, repeatability, detection limit, carry-over, linearity range and analytical stability was performed according to Dirui protocol guidelines, designed by the Dirui Company to guarantee the quality of the instrument. Trueness for low, medium and high-value concentrations was calculated with bias of 15.5, 4.7 and -6.6%, respectively. A detection limit of 5 Ery/μl was confirmed. Coefficients of variation of 11.0, 5.2 and 3.8% were measured for within-run repeatability at low, medium and high concentrations. Between-run repeatability for daily quality control had a coefficient of variation of 3.0%. Carry-over did not exceed 0.05%. Linearity was confirmed for the range of 0-16,000 particles/μl (R2 = 0.9997). The analytical stability had a coefficient of variation of 4.3%. Out of 1258 analyzed urine samples, 362 positive samples were subjected to light microscopy urine sediment analysis and compared to the analyzer results. Cohen's kappa coefficients were calculated to express the concordance. Squared kappa coefficients were 0.927 (red blood cells), 0.888 (white blood cells), 0.908 (squamous epithelia), 0.634 (transitional epithelia), 0.628 (hyaline casts), 0.843 (granular casts) and 0.623 (bacteria). Single kappa coefficients were 0.885 (yeasts) and 0.756 (crystals), respectively. These results show good analytical performance of the analyzer and tight agreement with light microscopy of urine sediment.
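
    For reference, the agreement statistic used above can be computed from a 2x2 confusion matrix as in the sketch below (simple, unweighted Cohen's kappa; the counts are invented and chosen only to land near the reported red-blood-cell value).

        def cohens_kappa(a, b, c, d):
            """a = both positive, b = analyzer+/microscopy-,
            c = analyzer-/microscopy+, d = both negative."""
            n = a + b + c + d
            p_obs = (a + d) / n                                   # observed agreement
            p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        print(round(cohens_kappa(120, 8, 10, 224), 3))  # -> 0.892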

  19. Development and application of GC-MS method for monitoring of long-term exposure to the pesticide cypermethrin.

    PubMed

    Kavvalakis, Matthaios P; Tzatzarakis, Manolis N; Alegakis, Athanasios K; Vynias, Dionysios; Tsakalof, Andreas K; Tsatsakis, Aristidis M

    2014-06-01

    Cypermethrin (CPMN) is a synthetic pyrethroid used as an insecticide in large-scale commercial agricultural applications as well as for domestic purposes. In the present study a gas chromatography-mass spectrometry (GC-MS) based method was developed and validated for the quantitation of the CPMN metabolites 3-phenoxybenzoic acid (3-PBA) and cis- and trans-3-(2,2-dichlorovinyl)-2,2-dimethylcyclopropane carboxylic acid (cis- and trans-Cl2CA). The developed method was applied for the monitoring of CPMN metabolites in hair of laboratory animals (rabbits) intentionally exposed per os to CPMN at 40 (low dose) and 80 (high dose) mg/kg weight/day for 16 weeks. The analytical method comprises three main steps: isolation of analytes from hair, analyte derivatization, and subsequent instrumental analysis by GC-MS. The limits of detection ensured by the method are 4.0, 3.9 and 1.0 pg/mg hair for cis-Cl2CA, trans-Cl2CA and 3-PBA, respectively. The instrument response is linear (r(2) > 0.99) in the investigated concentration range from 25 to 1000 pg/mg. Within-run and between-run precision as well as accuracy were estimated and found satisfactory. Analytes were efficiently isolated by solid-liquid extraction from hair, with recoveries greater than 84.8% for cis-Cl2CA, 87.2% for trans-Cl2CA and 96.4% for 3-PBA. Rabbit hair showed increasing levels of all metabolites over time and in a dose-dependent manner. The developed experimental procedure could be used for biomonitoring of population exposure to CPMN. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Non-normal dynamics and positive feedback between motion and sensation boosts run-and-tumble navigation.

    NASA Astrophysics Data System (ADS)

    Long, Junjiajia; Zucker, Steven W.; Emonet, Thierry

    The capability to navigate environmental gradients is of critical importance for survival. Countless organisms (microbes, human cells, worms, larvae, and insects) as well as human-made robots use a run-and-tumble strategy to do so. The classical drawback of this approach is that runs in the wrong direction are wasteful. We show analytically that organisms can overcome this fundamental limitation by exploiting the non-normal dynamics and intrinsic nonlinearities inherent to the positive feedback between motion and sensation. Most importantly, this nonlinear amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a ``ratchet-like'' gradient climbing behavior with drift speeds that can approach half the maximum run speed of the organism. By extending the theoretical study of run-and-tumble navigation into the non-mean-field, nonlinear, and non-normal domains, our results provide a new level of understanding about this basic strategy. We thank Yale HPC, NIGMS 1R01GM106189, and the Allen Distinguished Investigator Program through The Paul G. Allen Frontiers Group for support.

  1. A power function profile of a ski jumping in-run hill.

    PubMed

    Zanevskyy, Ihor

    2011-01-01

    The aim of the research was to find a function for the curvilinear segment profile which would make it possible to avoid an instantaneous increase in curvature and to replace a circle arc segment on the in-run of a ski jump without any correction of the angles of inclination or the length of the straight-line segments. The methods of analytical geometry and trigonometry were used to calculate an optimal in-run hill profile. There were two fundamental conditions of the model: smooth borders between the curvilinear segment and the straight-line segments of the in-run hill, and concavity of the curvilinear segment. Within the framework of this model, the problem has been solved with a reasonable precision. Four functions for the curvilinear segment profile of the in-run hill were investigated: circle arc, inclined quadratic parabola, inclined cubic parabola, and power function. The application of a power function to the in-run profile satisfies the stated conditions for replacing a circle arc segment. Geometrical parameters of 38 modern ski jumps were investigated using the methods proposed.
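
    A hedged sketch of the slope-matching condition that makes the power-function profile work: a segment y = c*x**p joins the two straight segments with matching tangents, and the two slope conditions determine p and c in closed form. The angles, abscissas and sign conventions are simplified, hypothetical choices.

        import math

        gamma, alpha = math.radians(35.0), math.radians(10.0)  # segment inclinations
        x1, x2 = 10.0, 60.0                                    # join abscissas (m)

        # y' = c*p*x**(p-1) must equal tan(gamma) at x1 and tan(alpha) at x2
        m1, m2 = math.tan(gamma), math.tan(alpha)
        p = 1 + math.log(m1 / m2) / math.log(x1 / x2)
        c = m1 / (p * x1 ** (p - 1))

        print(f"p = {p:.4f}, c = {c:.4f}")
        print("slope check:", c * p * x1 ** (p - 1), m1)  # equal by construction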

  2. Real-time orbit estimation for ATS-6 from redundant attitude sensors

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.

    1975-01-01

    A program installed in the ATSOCC on-line computer operates with attitude sensor data to produce a smoothed real-time orbit estimate. This estimate is obtained from a Kalman filter, which enables the estimate to be maintained in the absence of T/M data. The results of analytical and numerical investigations into the sensitivity of Control Center output to the position errors resulting from the real-time estimation are described. The results of the numerical investigation, which used several segments of ATS-6 data gathered during the Sensor Data Acquisition run on August 19, 1974, show that the implemented system can achieve absolute position determination with an error of about 100 km, implying pointing errors of less than 0.2 deg in latitude and longitude. This compares very favorably with ATS-6 specifications of approximately 0.5 deg in latitude-longitude.
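
    A minimal scalar Kalman-filter sketch of the estimation idea: predict by inflating the error variance with process noise, then blend each new measurement through the Kalman gain. The constant-state model and noise levels are illustrative and far simpler than an orbit filter.

        import random

        q, r = 1e-4, 0.04      # process and measurement noise variances (assumed)
        x_hat, p = 0.0, 1.0    # initial state estimate and its variance
        truth = 1.0            # the constant state being tracked

        for _ in range(50):
            p += q                               # predict: variance grows
            z = truth + random.gauss(0, r**0.5)  # noisy measurement
            k = p / (p + r)                      # Kalman gain
            x_hat += k * (z - x_hat)             # update estimate
            p *= (1 - k)                         # update variance

        print(f"estimate {x_hat:.3f}, variance {p:.5f}")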

  3. Maximising profits for an EPQ model with unreliable machine and rework of random defective items

    NASA Astrophysics Data System (ADS)

    Pal, Brojeswar; Sankar Sana, Shib; Chaudhuri, Kripasindhu

    2013-03-01

    This article deals with an economic production quantity (EPQ) model in an imperfect production system. The production system may shift from an 'in-control' state to an 'out-of-control' state after a certain time that follows a probability density function. The density function varies with the reliability of the machinery system, which may be improved through new technologies at the expense of additional investment. The defective items produced in the 'out-of-control' state are reworked at a cost just after the regular production time. Occurrence of the 'out-of-control' state during or after the regular production-run time is analysed and also graphically illustrated separately. Finally, an expected profit function regarding the inventory cost, unit production cost and selling price is maximised analytically. Sensitivity analysis of the model with respect to key parameters of the system is carried out. Two numerical examples are considered to test the model and one of them is illustrated graphically.

  4. Immunochemistry for high-throughput screening of human exhaled breath condensate (EBC) media: implementation of automated Quanterix SIMOA instrumentation.

    PubMed

    Pleil, Joachim D; Angrish, Michelle M; Madden, Michael C

    2015-12-11

    Immunochemistry is an important clinical tool for indicating biological pathways leading towards disease. Standard enzyme-linked immunosorbent assays (ELISA) are labor intensive and lack sensitivity at low-level concentrations. Here we report on emerging technology implementing fully-automated ELISA capable of molecular level detection and describe its application to exhaled breath condensate (EBC) samples. The Quanterix SIMOA HD-1 analyzer was evaluated for analytical performance for inflammatory cytokines (IL-6, TNF-α, IL-1β and IL-8). The system was challenged with human EBC, representing the most dilute and analytically difficult of the biological media. Calibrations from synthetic samples and spiked EBC showed excellent linearity at trace levels (r(2) > 0.99). Sensitivities varied by analyte, but were robust from ~0.006 (IL-6) to ~0.01 (TNF-α) pg ml(-1). All analytes demonstrated response suppression when diluted with deionized water, and so assay buffer diluent was found to be a better choice. Analytical runs required ~45 min setup time for loading samples, reagents, calibrants, etc., after which the instrument performs without further intervention for up to 288 separate samples. Currently available kits are limited to single-plex analyses, and so sample volumes require adjustments. Sample dilutions should be made with assay diluent to avoid response suppression. Automation performs seamlessly and data are automatically analyzed and reported in spreadsheet format. The internal 5-parameter logistic (5PL) calibration model should be supplemented with a linear regression spline at the very lowest analyte levels (<1.3 pg ml(-1)). The implementation of the automated Quanterix platform was successfully demonstrated using EBC, which poses the greatest challenge to ELISA due to limited sample volumes and low protein levels.
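
    For reference, a sketch of the 5-parameter logistic (5PL) calibration function mentioned above; the parameter values are illustrative, not instrument settings.

        def five_pl(x, a, d, c, b, g):
            """Response at concentration x: d + (a - d) / (1 + (x / c)**b)**g.
            a/d are the lower/upper asymptotes, c the midpoint, b the slope,
            g the asymmetry factor."""
            return d + (a - d) / (1 + (x / c) ** b) ** g

        # e.g. low-end behaviour of an assumed IL-6 curve
        for x in (0.006, 0.1, 1.0, 10.0):
            print(x, round(five_pl(x, a=0.02, d=3.0, c=1.5, b=1.1, g=0.9), 4))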

  5. New trends in the analytical determination of emerging contaminants and their transformation products in environmental waters.

    PubMed

    Agüera, Ana; Martínez Bueno, María Jesús; Fernández-Alba, Amadeo R

    2013-06-01

    Since the so-called emerging contaminants were established as a new group of pollutants of environmental concern, a great effort has been devoted to understanding their distribution, fate and effects in the environment. After more than 20 years of work, a significant improvement in knowledge about these contaminants has been achieved, but there is still a large gap in information on the growing number of new potential contaminants that are appearing, and especially on their unpredictable transformation products. Although the environmental problem arising from emerging contaminants must be addressed from an interdisciplinary point of view, it is obvious that analytical chemistry plays an important role as the first step of the study, as it allows establishing the presence of chemicals in the environment, estimating their concentration levels, identifying sources and determining their degradation pathways. These tasks involve serious difficulties requiring different analytical solutions adjusted to purpose. Thus, the complexity of the matrices requires highly selective analytical methods; the large number and variety of compounds potentially present in the samples demands the application of wide-scope methods; the low concentrations at which these contaminants are present in the samples require high detection sensitivity; and high demands on confirmation and structural information are needed for the characterisation of unknowns. New developments in analytical instrumentation have been applied to solve these difficulties. Furthermore, and no less important, has been the development of new specific software packages intended for data acquisition and, in particular, for post-run analysis. Thus, the use of sophisticated software tools has allowed successful screening analysis, determining several hundreds of analytes, and assisted in the structural elucidation of unknown compounds in a timely manner.

  6. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics

    PubMed Central

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-01-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379

  7. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    PubMed

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
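
    A toy sketch of a deviation-based utility in the spirit of SeeDB: normalize the aggregate distribution of the studied subset and of the full dataset, and score a candidate visualization by the distance between them. The L1 distance and the data are illustrative choices; SeeDB's own metric generalizes this idea.

        def normalize(xs):
            s = sum(xs)
            return [x / s for x in xs]

        def utility(target_agg, reference_agg):
            """L1 distance between normalized aggregate distributions."""
            t, r = normalize(target_agg), normalize(reference_agg)
            return sum(abs(a - b) for a, b in zip(t, r))

        # e.g. sales by region: the studied subset vs the full table
        print(utility([120, 30, 10], [400, 380, 390]))    # high deviation: recommend
        print(utility([130, 120, 125], [400, 380, 390]))  # low deviation: skip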

  8. Speciation analysis of mercury by solid-phase microextraction and multicapillary gas chromatography hyphenated to inductively coupled plasma-time-of-flight-mass spectrometry.

    PubMed

    Jitaru, Petru; Adams, Freddy C

    2004-11-05

    This paper reports the development of an analytical approach for speciation analysis of mercury at ultra-trace levels on the basis of solid-phase microextraction and multicapillary gas chromatography hyphenated to inductively coupled plasma-time-of-flight mass spectrometry. Headspace solid-phase microextraction with a carboxen/polydimethylsyloxane fiber is used for extraction/preconcentration of mercury species after derivatization with sodium tetraethylborate and subsequent volatilization. Isothermal separation of methylmercury (MeHg), inorganic mercury (Hg2+) and propylmercury (PrHg), used as internal standard, is achieved within a chromatographic run below 45 s without the introduction of spectral skew. Method detection limits (3 x standard deviation criteria) calculated for 10 successive injections of the analytical reagent blank are 0.027 pg g(-1) (as metal) for MeHg and 0.27 pg g(-1) for Hg2+. The repeatability (R.S.D., %) is 3.3% for MeHg and 3.8% for Hg2+ for 10 successive injections of a 10 pg standard mixture. The method's accuracy for MeHg and total mercury is validated through the analysis of marine and estuarine sediment reference materials. A comparison of the sediment data with those obtained by a purge-and-trap injection (PTI) method is also addressed. The analytical procedure is illustrated with some results for the ultra-trace level analysis of ice from Antarctica, for which the accuracy is assessed by spike recovery experiments.

  9. Determination of the design space of the HPLC analysis of water-soluble vitamins.

    PubMed

    Wagdy, Hebatallah A; Hanafi, Rasha S; El-Nashar, Rasha M; Aboul-Enein, Hassan Y

    2013-06-01

    The analysis of water-soluble vitamins has been extensively investigated over the last decades. A multitude of HPLC methods have been reported with a variety of advantages and shortcomings, yet the design space of the HPLC analysis of these vitamins was not defined in any of these reports. As per the Food and Drug Administration (FDA), implementing the quality by design approach for the analysis of commercially available mixtures is hypothesized to enhance the pharmaceutical industry via facilitating the process of analytical method development and approval. This work illustrates a multifactorial optimization of three measured plus seven calculated influential HPLC parameters for the analysis of a mixture containing seven common water-soluble vitamins (B1, B2, B6, B12, C, PABA, and PP). The three measured parameters are gradient time, temperature, and ternary eluent composition (B1/B2), and the seven calculated parameters are flow rate, column length, column internal diameter, dwell volume, extracolumn volume, %B (start), and %B (end). The design is based on 12 experiments in which the multifactorial effects of these 3 + 7 parameters on the critical resolution and selectivity were examined by systematic variation of all the parameters simultaneously. The 12 basic runs were based on two different gradient times, each at two different temperatures, repeated at three different ternary eluent compositions (methanol or acetonitrile or a mixture of both). Multidimensional robust regions of high critical R(s) were defined and graphically verified. The optimum method was selected based on the best resolution of the separation in the shortest run time for a synthetic mixture, followed by application to two pharmaceutical preparations available in the market. The predicted retention times of all peaks were found to be in good agreement with the virtual ones. In conclusion, the presented report offers an accurate determination of the design space for critical resolution in the analysis of water-soluble vitamins by HPLC, which would help the regulatory authorities to judge the validity of presented analytical methods for approval. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
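
    The 12-run structure described above is a full factorial over the three measured parameters (2 gradient times x 2 temperatures x 3 ternary compositions), as the sketch below enumerates; the level values are hypothetical.

        from itertools import product

        gradient_times = (10, 30)                  # min
        temperatures = (25, 45)                    # deg C
        ternary = ("MeOH", "ACN", "MeOH/ACN")      # organic modifier composition

        runs = list(product(gradient_times, temperatures, ternary))
        for i, (tg, temp, b) in enumerate(runs, 1):
            print(f"run {i:2d}: tG={tg} min, T={temp} C, B={b}")
        print(len(runs), "runs")  # 12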

  10. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in the ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
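
    The Poisson-statistics point can be checked numerically: when a run records on average N events, the relative statistical error of the count, and hence of rate-like quantities derived from it, scales as 1/sqrt(N). The rate and time values in the sketch are arbitrary.

        import random

        def poisson_count(rate, t):
            """Number of events of a rate-`rate` Poisson process within time t."""
            n, elapsed = 0, random.expovariate(rate)
            while elapsed < t:
                n += 1
                elapsed += random.expovariate(rate)
            return n

        rate, t = 5.0, 10.0   # expected N = rate * t = 50 events per run
        samples = [poisson_count(rate, t) for _ in range(2000)]
        m = sum(samples) / len(samples)
        sd = (sum((x - m) ** 2 for x in samples) / (len(samples) - 1)) ** 0.5
        print(f"relative error {sd / m:.3f} vs 1/sqrt(N) = {(rate * t) ** -0.5:.3f}")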

  11. Capillary Electrophoresis-Mass Spectrometry for the Analysis of Heparin Oligosaccharides and Low Molecular Weight Heparin.

    PubMed

    Sun, Xiaojun; Lin, Lei; Liu, Xinyue; Zhang, Fuming; Chi, Lianli; Xia, Qiangwei; Linhardt, Robert J

    2016-02-02

    Heparins, highly sulfated, linear polysaccharides also known as glycosaminoglycans, are among the most challenging biopolymers to analyze. Hyphenated techniques in conjunction with mass spectrometry (MS) offer rapid analysis of complex glycosaminoglycan mixtures, providing detailed structural and quantitative data. Previous analytical approaches have often relied on liquid chromatography (LC)-MS, and some have limitations including long separation times, low resolution of oligosaccharide mixtures, incompatibility of eluents, and often require oligosaccharide derivatization. This study examines the analysis of glycosaminoglycan oligosaccharides using a novel electrokinetic pump-based capillary electrophoresis (CE)-MS interface. CE separation and electrospray were optimized using a volatile ammonium bicarbonate electrolyte and a methanol-formic acid sheath fluid. The online analyses of highly sulfated heparin oligosaccharides, ranging from disaccharides to low molecular weight heparins, were performed within a 10 min time frame, offering an opportunity for higher-throughput analysis. Disaccharide compositional analysis as well as top-down analysis of low molecular weight heparin was demonstrated. Using normal polarity CE separation and positive-ion electrospray ionization MS, excellent run-to-run reproducibility (relative standard deviation of 3.6-5.1% for peak area and 0.2-0.4% for peak migration time) and sensitivity (limit of quantification of 2.0-5.9 ng/mL and limit of detection of 0.6-1.8 ng/mL) could be achieved.

  12. A Lean Six Sigma approach to the improvement of the selenium analysis method.

    PubMed

    Cloete, Bronwyn C; Bester, André

    2012-11-02

    Reliable results represent the pinnacle assessment of quality for an analytical laboratory, and therefore variability is considered to be a critical quality problem associated with the selenium analysis method executed at the Western Cape Provincial Veterinary Laboratory (WCPVL). The elimination and control of variability is undoubtedly of significant importance because of the narrow margin of safety between toxic and deficient doses of the trace element for good animal health. A quality methodology known as Lean Six Sigma was believed to present the most feasible solution for overcoming the adverse effect of variation, through steps towards analytical process improvement. Lean Six Sigma represents a form of scientific method that is empirical, inductive, deductive and systematic, and that relies on data and is fact-based. The Lean Six Sigma methodology comprises five macro-phases, namely Define, Measure, Analyse, Improve and Control (DMAIC). Both qualitative and quantitative laboratory data were collected in terms of these phases. Qualitative data were collected by using quality tools, namely an Ishikawa diagram, a Pareto chart, Kaizen analysis and a Failure Mode Effect analysis tool. Quantitative laboratory data, based on the analytical chemistry test method, were collected through a controlled experiment. The controlled experiment entailed 13 replicated runs of the selenium test method, whereby 11 samples were repetitively analysed, whilst Certified Reference Material (CRM) was also included in 6 of the runs. Laboratory results obtained from the controlled experiment were analysed by using statistical methods commonly associated with quality validation of chemistry procedures. Analysis of both sets of data yielded an improved selenium analysis method, believed to provide greater reliability of results, in addition to a greatly reduced cycle time and superior control features. Lean Six Sigma may therefore be regarded as a valuable tool in any laboratory, and represents both a management discipline and a standardised approach to problem solving and process optimisation.

  13. Results of continuous monitoring of the performance of rubella virus IgG and hepatitis B virus surface antibody assays using trueness controls in a multicenter trial.

    PubMed

    Kruk, Tamara; Ratnam, Sam; Preiksaitis, Jutta; Lau, Allan; Hatchette, Todd; Horsman, Greg; Van Caeseele, Paul; Timmons, Brian; Tipples, Graham

    2012-10-01

    We conducted a multicenter trial in Canada to assess the value of using trueness controls (TC) for rubella virus IgG and hepatitis B virus surface antibody (anti-HBs) serology to determine test performance across laboratories over time. TC were obtained from a single source with known international units. Seven laboratories using different test systems and kit lots included the TC in routine assay runs of the analytes. TC measurements of 1,095 rubella virus IgG and 1,195 anti-HBs runs were plotted on Levey-Jennings control charts for individual laboratories and analyzed using a multirule quality control (MQC) scheme as well as a single three-standard-deviation (3-SD) rule. All rubella virus IgG TC results were "in control" in only one of the seven laboratories. Among the rest, "out-of-control" results ranged from 5.6% to 10% with an outlier at 20.3% by MQC and from 1.1% to 5.6% with an outlier at 13.4% by the 3-SD rule. All anti-HBs TC results were "in control" in only two laboratories. Among the rest, "out-of-control" results ranged from 3.3% to 7.9% with an outlier at 19.8% by MQC and from 0% to 3.3% with an outlier at 10.5% by the 3-SD rule. In conclusion, through the continuous monitoring of assay performance using TC and quality control rules, our trial detected significant intra- and interlaboratory, test system, and kit lot variations for both analytes. In most cases the assay rejections could be attributable to the laboratories rather than to kit lots. This has implications for routine diagnostic screening and clinical practice guidelines and underscores the value of using an approach as described above for continuous quality improvement in result reporting and harmonization for these analytes.
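
    A minimal sketch of the single-rule screening used alongside the multirule scheme: flag any trueness-control measurement falling more than 3 SD from the established mean on a Levey-Jennings chart. The target mean, SD and run values are invented.

        def three_sd_flags(values, mean, sd):
            """Indices of runs that are 'out of control' under the 3-SD rule."""
            return [i for i, v in enumerate(values) if abs(v - mean) > 3 * sd]

        tc_mean, tc_sd = 100.0, 4.0   # IU/ml, from control-material history
        runs = [98.2, 103.5, 96.7, 114.3, 101.1, 87.0, 99.4]
        print(three_sd_flags(runs, tc_mean, tc_sd))  # -> [3, 5]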

  14. Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Graß, Tobias

    2012-03-01

    We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations. Program summary: Program title: Strongdeco. Catalogue identifier: AELA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5475. No. of bytes in distributed program, including test data, etc.: 31 071. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any computer on which Mathematica can be installed. Operating system: Linux, Windows, Mac. Classification: 2.9. Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.

  15. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from several perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for selecting adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  16. Development of a validated liquid chromatographic method for quantification of sorafenib tosylate in the presence of stress-induced degradation products and in biological matrix employing analytical quality by design approach.

    PubMed

    Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder

    2018-05-01

    The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to embark upon the analytical target profile and critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate, potentially affecting the chosen critical analytical attributes. Systematic optimization of the chosen critical method parameters using response surface methodology was carried out employing a two-factor, three-level, 13-run face-centered cubic design. A method operable design region was earmarked providing optimum method performance using numerical and graphical optimization. The optimum method employed a mobile phase composition consisting of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C18 column. Method validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A (R2). Mass spectrometry studies showed that SFN degrades under strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug was per se found to be photostable. Oxidative hydrolysis using 30% H2O2 showed maximum degradation, with products at retention times of 3.35, 3.65, 4.20 and 5.67 min. The absence of any significant change in the retention time of SFN and degradation products, formed under different stress conditions, ratified the selectivity and specificity of the systematically developed method. Copyright © 2017 John Wiley & Sons, Ltd.
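
    For readers unfamiliar with the design named above, the 13 runs of a two-factor face-centered central composite design are easy to enumerate. A minimal Python sketch follows; the decoded factor ranges are hypothetical placeholders, not the study's actual settings:

        # Illustrative sketch: build the 13-run, two-factor face-centered central
        # composite ("face-centered cubic") design in coded levels (-1, 0, +1),
        # then map to hypothetical ranges for the two critical method parameters.
        import itertools

        factorial = list(itertools.product([-1, 1], repeat=2))    # 4 corner runs
        axial     = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # alpha = 1 (faces)
        center    = [(0, 0)] * 5                                  # 5 center replicates
        design    = factorial + axial + center                    # 13 runs total

        # Hypothetical decoding: organic fraction 55-75 %v/v, flow 0.6-1.0 mL/min
        def decode(coded, lo, hi):
            return lo + (coded + 1) * (hi - lo) / 2

        for i, (x1, x2) in enumerate(design, 1):
            print(f"run {i:2d}: ACN {decode(x1, 55, 75):.0f} %v/v, "
                  f"flow {decode(x2, 0.6, 1.0):.2f} mL/min")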

  17. Direct high-performance liquid chromatography method with refractometric detection designed for stability studies of treosulfan and its biologically active epoxy-transformers.

    PubMed

    Główka, Franciszek K; Romański, Michał; Teżyk, Artur; Żaba, Czesław

    2013-01-01

    Treosulfan (TREO) is an alkylating agent registered for treatment of advanced platin-resistant ovarian carcinoma. Nowadays, TREO is increasingly applied i.v. in high doses as a promising myeloablative agent with low organ toxicity in children. Under physiological conditions it undergoes pH-dependent transformation into epoxy-transformers (S,S-EBDM and S,S-DEB). The mechanism of this reaction is generally known, but not its kinetic details. In order to investigate the kinetics of TREO transformation, an HPLC method with refractometric detection for simultaneous determination of the three analytes in one analytical run has been developed for the first time. The samples containing TREO, S,S-EBDM, S,S-DEB and acetaminophen (internal standard) were directly injected onto the reversed-phase column. To assure stability of the analytes and obtain their complete resolution, a mobile phase composed of acetate buffer pH 4.5 and acetonitrile was applied. The linear range of the calibration curves of TREO, S,S-EBDM and S,S-DEB spanned concentrations of 20-6000, 34-8600 and 50-6000 μM, respectively. Intra- and interday precision and accuracy of the developed method fulfilled analytical criteria. The stability of the analytes in experimental samples was also established. The validated HPLC method was successfully applied to the investigation of the kinetics of TREO activation to S,S-EBDM and S,S-DEB. At pH 7.4 and 37 °C the transformation of TREO followed first-order kinetics with a half-life of 1.5 h. Copyright © 2012 Elsevier B.V. All rights reserved.
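
    The reported kinetics admit a one-line check: for first-order disappearance with a half-life of 1.5 h, the rate constant is k = ln 2 / 1.5 ≈ 0.46 1/h and C(t) = C0 exp(-kt). A small Python sketch with a hypothetical starting concentration:

        # Worked example: first-order disappearance of TREO at pH 7.4 and 37 °C.
        # With t_half = 1.5 h, k = ln(2)/t_half and C(t) = C0 * exp(-k * t).
        import math

        t_half = 1.5                      # h, from the study
        k = math.log(2) / t_half          # first-order rate constant, 1/h (~0.462)
        C0 = 6000.0                       # µM, hypothetical starting concentration

        for t in (0.0, 0.5, 1.0, 1.5, 3.0, 6.0):
            print(f"t = {t:4.1f} h  ->  TREO ≈ {C0 * math.exp(-k * t):7.1f} µM")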

  18. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, to acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving through production and fulfillment, and evaluating results, is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user can easily run a successful campaign from a desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through technology, all while providing the benefits of a professionally managed campaign.

  19. An Analytic Tool to Investigate the Effect of Binder on the Sensitivity of HMX-Based Plastic Bonded Explosives in the Skid Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, D. W.

    This project will develop an analytical tool to calculate performance of HMX based PBXs in the skid test. The skid test is used as a means to measure sensitivity for large charges in handling situations. Each series of skid tests requires dozens of drops of large billets. It is proposed that the reaction (or lack of one) of PBXs in the skid test is governed by the mechanical properties of the binder. If true, one might be able to develop an analytical tool to estimate skid test behavior for new PBX formulations. Others over the past 50 years have tried to develop similar models. This project will research and summarize the works of others and couple the work of three of them into an analytical tool that can be run on a PC to calculate drop height of HMX based PBXs. Detonation due to dropping a billet is argued to be a dynamic thermal event. To avoid detonation, the heat created due to friction at impact must be conducted into the charge or the target faster than the chemical kinetics can create additional energy. The methodology will involve numerically solving the Frank-Kamenetskii equation in one dimension. The analytical problem needs to be bounded in terms of how much heat is introduced to the billet and for how long. Assuming an inelastic collision with no rebound, the billet will be in contact with the target for a short duration determined by the equations of motion. For the purposes of the calculations, it will be assumed that if a detonation is to occur, it will transpire within that time. The surface temperature will be raised according to the friction created using the equations of motion of dropping the billet on a rigid surface. The study will connect the works of Charles Anderson, Alan Randolph, Larry Hatler, Alfonse Popolato, and Charles Mader into a single PC based analytic tool. Anderson's equations of motion will be used to calculate the temperature rise upon impact, the time this temperature is maintained (contact time) will be obtained from the work of Hatler et al., and the reactive temperature rise will be obtained from Mader's work. Finally, the assessment of when a detonation occurs will be derived from Bowden and Yoffe's thermal explosion theory (hot spot).
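
    A minimal numerical sketch of the approach just outlined (one-dimensional conduction with an Arrhenius self-heating source and a friction-heated face held hot for the contact time) is given below in Python; all material constants are placeholders, not HMX/PBX data, and the scheme is a plain explicit finite-difference solver rather than the author's tool:

        # Hedged sketch: 1-D transient heat conduction with an Arrhenius source
        # (a Frank-Kamenetskii-type problem) while the impacted face is held at
        # a friction-heated temperature for a short contact time.
        import numpy as np

        L, nx = 0.01, 101                   # m, slab depth into billet; grid points
        dx = L / (nx - 1)
        alpha = 1.0e-7                      # m^2/s, thermal diffusivity (placeholder)
        QA_rc = 1.0e12                      # K/s, Q*A/(rho*c) pre-factor (placeholder)
        E_R = 12000.0                       # K, activation temperature E/R (placeholder)
        T0, T_face = 300.0, 700.0           # K, initial and friction-heated face temps
        t_contact, t_end = 2.0e-3, 1.0e-2   # s, contact duration and total sim time

        dt = min(0.4 * dx**2 / alpha, t_contact / 50)  # stability + resolve contact
        T = np.full(nx, T0)
        t = 0.0
        while t < t_end:
            T[0] = T_face if t < t_contact else T0     # crude boundary: heated, then cooled
            T[-1] = T0
            lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
            T[1:-1] += dt * (alpha * lap + QA_rc * np.exp(-E_R / T[1:-1]))
            t += dt
            if T.max() > 1500.0:            # runaway threshold -> treat as ignition
                print(f"thermal runaway at t = {t:.2e} s"); break
        else:
            print(f"no runaway; peak T = {T.max():.0f} K")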

  20. Containment of composite fan blades

    NASA Technical Reports Server (NTRS)

    Stotler, C. L.; Coppa, A. P.

    1979-01-01

    A lightweight containment was developed for turbofan engine fan blades. Subscale ballistic-type tests were first run on a number of concepts. The most promising configuration was selected and further evaluated by larger scale tests in a rotating test rig. Weight savings made possible by the use of this new containment system were determined and extrapolated to a CF6-size engine. An analytical technique was also developed to predict the released blade's motion when involved in the blade/casing interaction process. Initial checkout of this procedure was accomplished using several of the tests run during the program.

  1. An analytical procedure for evaluating shuttle abort staging aerodynamic characteristics

    NASA Technical Reports Server (NTRS)

    Meyer, R.

    1973-01-01

    An engineering analysis and computer code (AERSEP) for predicting Space Shuttle Orbiter-HO Tank longitudinal aerodynamic characteristics during abort separation has been developed. Computed results are applicable at Mach numbers above 2 for angles of attack between plus and minus 10 degrees. No practical restrictions on orbiter-tank relative positioning are indicated for tank-under-orbiter configurations. Input data requirements and computer running times are minimal, facilitating program use for parametric studies, test planning, and trajectory analysis. In a majority of cases, AERSEP orbiter-tank interference predictions are as accurate as state-of-the-art estimates for interference-free or isolated-vehicle configurations. AERSEP isolated-orbiter predictions also show excellent correlation with data.

  2. Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual

    NASA Technical Reports Server (NTRS)

    Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.

    1986-01-01

    The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described, as well as user requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

  3. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    PubMed

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed in the association between vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that the running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of the biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  4. Fast and comprehensive analysis of secondary metabolites in cocoa products using ultra high-performance liquid chromatography directly after pressurized liquid extraction.

    PubMed

    Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F

    2016-08-01

    Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized regarding speed and separation efficiency. Extraction by pressurized liquid extraction is automated and the extracts are analyzed by rapid reversed-phase ultra high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single run of 33 min, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders and two food supplements based on cocoa extract. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Fabrication of powdery polymer aerogel as the stationary phase for high-resolution gas chromatographic separation.

    PubMed

    Zheng, Juan; Lu, Cuiming; Huang, Junlong; Chen, Luyi; Ni, Chuyi; Xie, Xintong; Zhu, Fang; Wu, Dingcai; Ouyang, Gangfeng

    2018-08-15

    A novel powdery polymer aerogel (PPA), prepared via (micro)emulsion polymerization followed by a hypercrosslinking reaction, was fabricated as the stationary phase of a capillary column for the first time. Due to its powdery morphology, unique 3D nano-network structure, high surface area and good thermostability, the PPA-coated capillary column demonstrated high-resolution chromatographic separation of nonpolar and weakly polar organic compounds, including benzene series, n-alkanes, ketone mixtures and trichlorobenzenes. Moreover, the reproducibility, quantitative analysis ability and thermostability of the PPA-coated capillary column were also evaluated. The relative standard deviations for three replicate determinations of selected analytes were 0.02-0.11%, 0.12-0.26% and 1.2-3.6% for run-to-run, day-to-day and column-to-column analyses, respectively. The PPA demonstrated good thermostability, and the PPA-coated capillary column proved to be heat-resistant (270 °C). The results of this study show that PPA is an excellent candidate stationary phase for capillary gas chromatography. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.

    PubMed

    Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu

    2012-06-01

    In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations specifically using the Hamilton MicroLab(®) STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation, specifically, the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves in all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention and eliminates sample-handling error.

  7. Quantification of tamoxifen and three of its phase-I metabolites in human plasma by liquid chromatography/triple-quadrupole mass spectrometry.

    PubMed

    Binkhorst, Lisette; Mathijssen, Ron H J; Ghobadi Moghaddam-Helmantel, Inge M; de Bruijn, Peter; van Gelder, Teun; Wiemer, Erik A C; Loos, Walter J

    2011-12-15

    In view of future pharmacokinetic studies, a highly sensitive ultra performance liquid chromatography/tandem mass spectrometry (UPLC-MS/MS) method has been developed for the simultaneous quantification of tamoxifen and three of its main phase I metabolites in human lithium heparinized plasma. The analytical method has been thoroughly validated in agreement with FDA recommendations. Plasma samples of 200 μl were purified by liquid-liquid extraction with 1 ml n-hexane/isopropanol, after deproteination through addition of 50 μl acetone and 50 μl deuterated internal standards in acetonitrile. Tamoxifen, N-desmethyl-tamoxifen, 4-hydroxy-tamoxifen and endoxifen were chromatographically separated on an Acquity UPLC(®) BEH C18 1.7 μm 2.1 mm×100 mm column eluted at a flow-rate of 0.300 ml/min on a gradient of 0.2 mM ammonium formate and acetonitrile, both acidified with 0.1% formic acid. The overall run time of the method was 10 min, with elution times of 2.9, 3.0, 4.1 and 4.2 min for endoxifen, 4-hydroxy-tamoxifen, N-desmethyl-tamoxifen and tamoxifen, respectively. Tamoxifen and its metabolites were quantified by triple-quadrupole mass spectrometry in the positive ion electrospray ionization mode. The multiple reaction monitoring transitions were set at 372>72 (m/z) for tamoxifen, 358>58 (m/z) for N-desmethyl-tamoxifen, 388>72 (m/z) for 4-hydroxy-tamoxifen and 374>58 (m/z) for endoxifen. The analytical method was highly sensitive, with the lower limit of quantification validated at 5.00 nM for tamoxifen and N-desmethyl-tamoxifen and 0.500 nM for 4-hydroxy-tamoxifen and endoxifen, which is equivalent to 1.86, 1.78, 0.194 and 0.187 ng/ml for tamoxifen, N-desmethyl-tamoxifen, 4-hydroxy-tamoxifen and endoxifen, respectively. The method was also precise and accurate, with within-run and between-run precisions within 12.0% and accuracy ranging from 89.5 to 105.3%. The method has been applied to samples from a clinical study and cross-validated with a validated LC-MS/MS method in serum. Copyright © 2011 Elsevier B.V. All rights reserved.
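
    The quoted LLOQ unit conversions follow from C[ng/mL] = C[nmol/L] x MW[g/mol] / 1000. A short Python check using approximate molecular weights reproduces the reported values:

        # Worked check of the reported LLOQ conversions; molecular weights are
        # approximate free-base values (g/mol).
        mw = {"tamoxifen": 371.5, "N-desmethyl-tamoxifen": 357.5,
              "4-hydroxy-tamoxifen": 387.5, "endoxifen": 373.5}
        lloq_nM = {"tamoxifen": 5.00, "N-desmethyl-tamoxifen": 5.00,
                   "4-hydroxy-tamoxifen": 0.500, "endoxifen": 0.500}

        for name, c in lloq_nM.items():
            ng_per_ml = c * mw[name] / 1000.0   # nmol/L x g/mol = ng/L; /1000 -> ng/mL
            print(f"{name}: {c} nM ≈ {ng_per_ml:.3f} ng/mL")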

  8. Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.

    2014-01-01

    The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.

  9. Validated UHPLC-MS/MS assay for quantitative determination of etoposide, gemcitabine, vinorelbine and their metabolites in patients with lung cancer.

    PubMed

    Gong, Xiaobin; Yang, Le; Zhang, Feng; Liang, Youtian; Gao, Shouhong; Liu, Ke; Chen, Wansheng

    2017-11-01

    A fully validated UHPLC-MS/MS method was developed for the determination of etoposide, gemcitabine, vinorelbine and their metabolites (etoposide catechol, 2',2'-difluorodeoxyuridine and 4-O-deacetylvinorelbine) in human plasma. The multiple reaction monitoring mode was used with an electrospray ionization interface operating in positive or negative ion mode, as appropriate for each compound. The method required only 100 μL plasma with a one-step simple de-proteinization procedure, and a short run time of 7.5 min per sample. A Waters ACQUITY UPLC HSS T3 column (2.1 × 100 mm, 1.8 μm) provided chromatographic separation of analytes using a binary mobile phase gradient (A, 0.1% formic acid in acetonitrile, v/v; B, 0.1% formic acid in water, v/v). Linear coefficients of correlation were >0.995 for all analytes. The relative deviation of this method was <10% for intra- and inter-day assays and the accuracy ranged between 86.35% and 113.44%. The mean extraction recovery and matrix effect of all the analytes were 62.07-105.46% and 93.67-105.87%, respectively. This method was successfully applied to clinical samples from patients with lung cancer. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Liquid chromatography tandem mass spectrometry for the simultaneous determination of mequindox and its metabolites in porcine tissues.

    PubMed

    Zeng, Dongping; Shen, Xiangguang; He, Limin; Ding, Huanzhong; Tang, Youzhi; Sun, Yongxue; Fang, Binghu; Zeng, Zhenling

    2012-06-01

    A rapid liquid chromatography tandem mass spectrometric method was developed for the simultaneous determination of mequindox and its five metabolites (2-isoethanol mequindox, 2-isoethanol 1-desoxymequindox, 1-desoxymequindox, 1,4-bisdesoxymequindox, and 2-isoethanol bisdesoxymequindox) in porcine muscle, liver, and kidney, fulfilling confirmation criteria with two transitions for each compound with acceptable relative ion intensities. The method involved acid hydrolysis, purification by solid-phase extraction, and subsequent analysis with liquid chromatography tandem mass spectrometry using electrospray ionization operated in positive polarity with a total run time of 15 min. The decision limit values of five analytes in porcine tissues ranged from 0.6 to 2.9 μg/kg, and the detection capability values ranged from 1.2 to 5.7 μg/kg. The results of the inter-day study, which was performed by fortifying porcine muscle (2, 4, and 8 μg/kg), liver, and kidney (10, 20, and 40 μg/kg) samples on three separate days, showed that the accuracy of the method for the various analytes ranged between 75.3 and 107.2% with relative standard deviation less than 12% for each analyte. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. HPLC-ESI-MS/MS validated method for simultaneous quantification of zopiclone and its metabolites, N-desmethyl zopiclone and zopiclone-N-oxide in human plasma.

    PubMed

    Mistri, Hiren N; Jangid, Arvind G; Pudage, Ashutosh; Shrivastav, Pranav

    2008-03-15

    A simple, selective and sensitive isocratic HPLC method with triple quadrupole mass spectrometry detection has been developed and validated for simultaneous quantification of zopiclone and its metabolites in human plasma. The analytes were extracted using solid phase extraction, separated on a Symmetry Shield RP8 column (150 mm x 4.6 mm i.d., 3.5 μm particle size) and detected by tandem mass spectrometry with a turbo ion spray interface. Metaxalone was used as an internal standard. The method had a chromatographic run time of 4.5 min and linear calibration curves over the concentration range of 0.5-150 ng/mL for both zopiclone and N-desmethyl zopiclone and 1-150 ng/mL for zopiclone-N-oxide. The intra-batch and inter-batch accuracy and precision evaluated at the lower limit of quantification and quality control levels were within 89.5-109.1% and 3.0-14.7%, respectively, for all the analytes. The recoveries calculated for the analytes and internal standard were ≥90% from spiked plasma samples. The validated method was successfully employed for a comparative bioavailability study after oral administration of 7.5 mg zopiclone (test and reference) to 16 healthy volunteers under fasted conditions.

  12. [The concept of the development of the state of chemical-analytical environmental monitoring].

    PubMed

    Rakhmanin, Iu A; Malysheva, A G

    2013-01-01

    Chemical-analytical monitoring of environmental quality is based on accounting for trace amounts of substances. Given the multicomponent composition of the environment and the transformation processes continually running within it, assessing the danger posed to population health by chemical pollution requires an evaluation that simultaneously accounts for the complex of substances actually present in the environment and arriving from different sources. Analytical monitoring of environmental quality and safety must therefore shift from an orientation toward investigating specific target substances to estimating the real complex of compounds present.

  13. Performance-based, cost- and time-effective pcb analytical methodology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarado, J. S.

    1998-06-11

    Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently at low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.

  14. Surrogate Reservoir Model

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Shahab

    2010-05-01

    Surrogate Reservoir Model (SRM) is a new solution for fast-track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. SRM is defined as a replica of the full field reservoir simulation model that runs and provides accurate results in real time (one simulation run takes only a fraction of a second). SRM mimics the capabilities of a full field model with high accuracy. Reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full field reservoir simulation models have become the major source of information for analysis, prediction and decision making. Large prolific fields usually go through several versions (updates) of their model. Each new version usually is a major improvement over the previous version. The updated model includes the latest available information, incorporated along with adjustments that usually result from single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several million. As reservoir models grow in size, so does the time required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the required speed for tasks such as: field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model, and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs. Furthermore, with the new push for smart fields in the oil/gas industry, a natural outgrowth of smart completions and smart wells, the need for real-time reservoir modeling becomes more pronounced. SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry to perform accurate, but high-speed, simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models …) that require hundreds of simulation runs for development, SRM is developed with only a few (10 to 30) simulation runs. SRM can be developed regularly (as new versions of the full field model become available) off-line and can be put online for real-time processing to guide important decisions. SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that was able to accurately mimic the behavior of the reservoir simulation model. In a comprehensive reservoir analysis that included millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters one and two were the best candidates for rate relaxation with minimal, long-term water production, while wells in clusters four and five were susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct.
    While incremental oil production increased in all wells (wells in cluster 1 produced the most, followed by wells in clusters 2, 3 …), the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of SRM. As correctly predicted, wells in clusters 1 and 2 actually experienced a reduction in water cut, while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full field simulation model.
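
    The surrogate idea itself is compact enough to sketch. The hypothetical Python snippet below (a generic machine-learning proxy, not Mohaghegh's SRM and not fuzzy pattern recognition) fits a small neural network to input/output pairs from a handful of expensive "simulator" runs and then queries it cheaply:

        # Hedged sketch of the surrogate idea: train on few expensive runs, then
        # query in a fraction of a second. The quadratic "simulator" below is a
        # stand-in for a real reservoir model.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def full_simulator(x):              # placeholder for an hours-long run
            return 3 * x[:, 0]**2 - 2 * x[:, 0] * x[:, 1] + x[:, 1]

        X_train = rng.uniform(0, 1, size=(20, 2))       # ~20 expensive runs
        y_train = full_simulator(X_train)

        srm = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                           random_state=0).fit(X_train, y_train)

        X_query = rng.uniform(0, 1, size=(5, 2))        # millions would be as cheap
        print(np.c_[full_simulator(X_query), srm.predict(X_query)])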

  15. LABORATORY CAPACITY NEEDS ASSESSMENT OF DRINKING WATER UTILITIES: A GLOBAL PERSPECTIVE

    EPA Science Inventory

    Fully-functioning analytical laboratories capable of producing quality data are essential components of well-run drinking water utilities. In Europe and the US, drinking water laboratory performance is closely monitored and regulated; this is not always the case in the less indu...

  16. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  17. Internet-based interface for STRMDEPL08

    USGS Publications Warehouse

    Reeves, Howard W.; Asher, A. Jeremiah

    2010-01-01

    The core of the computer program STRMDEPL08, which estimates streamflow depletion by a pumping well with one of four analytical solutions, was rewritten in the JavaScript language and made available through an internet-based interface (web page). In the internet-based interface, the user enters data for one of the four analytical solutions (Glover and Balmer, 1954; Hantush, 1965; Hunt, 1999; Hunt, 2003), and the solution is run for constant pumping for a desired number of simulation days. Results are returned in tabular form to the user. For intermittent pumping, the interface allows the user to request that the header information for an input file for the stand-alone executable STRMDEPL08 be created. The user would add the pumping information to this header information and run the STRMDEPL08 executable, which is available for download through the U.S. Geological Survey. Results for the internet-based and stand-alone versions of STRMDEPL08 are shown to match.
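
    Of the four solutions listed, Glover and Balmer (1954) is the simplest to state: the depletion fraction after time t of constant pumping is Qs/Qw = erfc(sqrt(d^2 S / (4 T t))), with d the well-stream distance, S the storage coefficient, and T the transmissivity. A short Python sketch with illustrative parameter values:

        # Hedged sketch of the Glover-Balmer (1954) solution; parameter values
        # are illustrative only, not from the USGS documentation.
        from math import sqrt, erfc

        d = 100.0        # m, distance from well to stream
        S = 0.2          # storage coefficient (dimensionless)
        T = 500.0        # m^2/day, aquifer transmissivity

        for t in (1, 10, 30, 90, 365):   # days of constant pumping
            frac = erfc(sqrt(d * d * S / (4.0 * T * t)))
            print(f"day {t:4d}: depletion fraction = {frac:.3f}")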

  18. Multiplex Real-Time PCR Method for Simultaneous Identification and Toxigenic Type Characterization of Clostridium difficile From Stool Samples

    PubMed Central

    Alam, Mohammad J.; Tisdel, Naradah L.; Shah, Dhara N.; Yapar, Mehmet; Lasco, Todd M.; Garey, Kevin W.

    2015-01-01

    Background: The aim of this study was to develop and validate a multiplex real-time PCR assay for simultaneous identification and toxigenic type characterization of Clostridium difficile. Methods: The multiplex real-time PCR assay targeted and simultaneously detected the triose phosphate isomerase (tpi) and binary toxin (cdtA) genes, and the toxin A (tcdA) and B (tcdB) genes, in the first and second tubes, respectively. The results of multiplex real-time PCR were compared to those of the BD GeneOhm Cdiff assay, targeting the tcdB gene alone. The toxigenic culture was used as the reference, where toxin genes were detected by multiplex real-time PCR. Results: A total of 351 stool samples from consecutive patients were included in the study. Fifty-five stool samples (15.6%) were determined to be positive for the presence of C. difficile by using multiplex real-time PCR. Of these, 48 (87.2%) were toxigenic (46 tcdA- and tcdB-positive, two positive for only tcdB) and 11 (22.9%) were cdtA-positive. The sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) of the multiplex real-time PCR compared with the toxigenic culture were 95.6%, 98.6%, 91.6%, and 99.3%, respectively. The analytical sensitivity of the multiplex real-time PCR assay was determined to be 10³ colony-forming units (CFU)/g of spiked stool sample and 0.0625 pg of genomic DNA from culture. Analytical specificity, determined by using 15 enteric and non-clostridial reference strains, was 100%. Conclusions: The multiplex real-time PCR assay accurately detected C. difficile isolates from diarrheal stool samples and characterized its toxin genes in a single PCR run. PMID:25932438
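
    The reported diagnostic metrics come from the usual 2x2 comparison against the reference method. The Python sketch below shows the formulas with hypothetical counts, not the study's contingency table:

        # Illustrative formulas for sensitivity, specificity, PPV and NPV from a
        # 2x2 table versus a reference method; counts below are hypothetical.
        def diagnostics(tp, fp, fn, tn):
            return {"sensitivity": tp / (tp + fn),
                    "specificity": tn / (tn + fp),
                    "PPV": tp / (tp + fp),
                    "NPV": tn / (tn + fn)}

        # hypothetical counts for a 351-sample comparison
        print(diagnostics(tp=43, fp=2, fn=2, tn=304))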

  19. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
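
    The scaling implied by the shortened run period is straightforward: one representative week per quarter, weighted by the 13 weeks in each quarter. A minimal Python sketch with hypothetical weekly simulation outputs:

        # Hedged sketch of the shortened-run idea: simulate one representative
        # week per quarter, then scale to annual (13 weeks per quarter, 52 total).
        # Weekly energy numbers are hypothetical simulation outputs (kWh).
        weekly_use = {"Q1": 820.0, "Q2": 610.0, "Q3": 905.0, "Q4": 780.0}
        weeks_per_quarter = 13

        annual_estimate = weeks_per_quarter * sum(weekly_use.values())
        print(f"estimated annual use ≈ {annual_estimate:.0f} kWh "
              f"(vs. a full 52-week run)")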

  20. MSTor version 2013: A new version of the computer code for the multi-structural torsional anharmonicity, now with a coupled torsional potential

    NASA Astrophysics Data System (ADS)

    Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.

    2013-08-01

    We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and the Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method.
    Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies.
    Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes, the symmetry program for determining the point group symmetry of a molecule, and the seven utility codes described above.
    Additional comments: The program package includes a manual, installation script, and input and output files for a test suite.
    Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s.
    References: [1] MS-T(C) method: J. Zheng and D.G. Truhlar, "Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential", Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, and D.G. Truhlar, "Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation", Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
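
    The torsional eigenvalue summation mentioned above can be illustrated compactly: diagonalize a hindered-rotor Hamiltonian in a free-rotor basis and sum Boltzmann factors over the eigenvalues. The Python sketch below uses placeholder barrier and inertia values and is not the MSTor implementation:

        # Hedged sketch of a 1-D torsional partition function by eigenvalue
        # summation: V(phi) = (V0/2)(1 - cos(n*phi)) in a free-rotor basis
        # exp(i m phi); Q(T) = sum_j exp(-eps_j / (kB T)).
        import numpy as np

        V0 = 0.010          # hartree, barrier height (placeholder)
        n = 3               # threefold torsion
        I_red = 1.0e4       # reduced moment of inertia, atomic units (placeholder)
        kB = 3.1668e-6      # hartree/K
        mmax = 60           # basis: m = -mmax..mmax

        m = np.arange(-mmax, mmax + 1)
        H = np.diag(m**2 / (2.0 * I_red) + V0 / 2.0)   # kinetic + constant part
        # -(V0/4)(e^{i n phi} + e^{-i n phi}) couples m with m +/- n
        for i, mi in enumerate(m):
            if mi + n <= mmax:
                H[i, i + n] = H[i + n, i] = -V0 / 4.0
        eps = np.linalg.eigvalsh(H)
        eps -= eps[0]                                  # zero at torsional ground state

        for T in (200.0, 298.15, 500.0, 1000.0):
            Q = np.exp(-eps / (kB * T)).sum()
            print(f"T = {T:7.2f} K  Q_tor = {Q:8.3f}")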

  1. Simultaneous determination of antidementia drugs in human plasma: procedure transfer from HPLC-MS to UPLC-MS/MS.

    PubMed

    Noetzli, Muriel; Ansermot, Nicolas; Dobrinas, Maria; Eap, Chin B

    2012-05-01

    A previously developed high-performance liquid chromatography mass spectrometry (HPLC-MS) procedure for the simultaneous determination of antidementia drugs, including donepezil, galantamine, memantine, rivastigmine and its metabolite NAP 226-90, was transferred to an ultra performance liquid chromatography system coupled to a tandem mass spectrometer (UPLC-MS/MS). The drugs and their internal standards ([(2)H(7)]-donepezil, [(13)C,(2)H(3)]-galantamine, [(13)C(2),(2)H(6)]-memantine, [(2)H(6)]-rivastigmine) were extracted from 250 μL human plasma by protein precipitation with acetonitrile. Chromatographic separation was achieved on a reverse-phase column (BEH C18 2.1 mm × 50 mm; 1.7 μm) with a gradient elution of an ammonium acetate buffer at pH 9.3 and acetonitrile at a flow rate of 0.4 mL/min and an overall run time of 4.5 min. The analytes were detected on a tandem quadrupole mass spectrometer operated in positive electrospray ionization mode, and quantification was performed using multiple reaction monitoring. The method was validated according to the recommendations of international guidelines over a calibration range of 1-300 ng/mL for donepezil, galantamine and memantine, and 0.2-50 ng/mL for rivastigmine and NAP 226-90. The trueness (86-108%), repeatability (0.8-8.3%), intermediate precision (2.3-10.9%) and selectivity of the method were found to be satisfactory. Matrix effects variability was below 15% for the analytes and below 5% after correction by internal standards. A method comparison was performed with patients' samples, showing similar results between the HPLC-MS and UPLC-MS/MS procedures. Thus, this validated UPLC-MS/MS method makes it possible to reduce the required amount of plasma, to use a simplified sample preparation, and to obtain higher sensitivity and specificity with a much shorter run time. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    [Only table-of-contents fragments were extracted for this record: baseline manning (runs 1-3) and baseline statistics interpretation, a key statistic matrix for runs 1-12 (Appendix B), an appendix titled "Blue Dart", and paired t-test comparisons of ECC completion times (run 5 vs. run 6; run 3 vs. run 9).]

  3. Street-running LRT may not affect a neighbour's sleep

    NASA Astrophysics Data System (ADS)

    Sarkar, S. K.; Wang, J.-N.

    2003-10-01

    A comprehensive dynamic finite difference model and analysis was conducted simulating LRT running at a speed of 24 km/h on a city street. The analysis predicted ground-borne vibration (GBV) to remain at or below the FTA criterion of an RMS velocity of 72 VdB (0.004 in/s) at the nearest residence. In the model, site-specific stratigraphy and dynamic soil and rock properties were used that were determined from in situ testing. The dynamic input load from an LRT vehicle running at 24 km/h was computed from actual measured data from Portland, Oregon's West Side LRT project, which used a low-floor vehicle similar to the one proposed for the NJ Transit project. During initial trial runs of the LRT system, vibration and noise measurements were taken at three street locations while the vehicles were running at about the 20-24 km/h operating speed. The measurements confirmed the predictions and satisfied FTA criteria for noise and vibration for frequent events. This paper presents the analytical model, GBV predictions, site measurement data and a comparison with the FTA criterion.
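
    The quoted criterion converts as expected between velocity level and RMS velocity: VdB = 20 log10(v / v_ref), with the common FTA reference v_ref = 1e-6 in/s. A short Python check:

        # Worked check of the vibration criterion quoted above.
        import math

        def vdb(v_rms_in_s, v_ref=1e-6):
            return 20.0 * math.log10(v_rms_in_s / v_ref)

        def v_rms(level_vdb, v_ref=1e-6):
            return v_ref * 10.0 ** (level_vdb / 20.0)

        print(f"72 VdB -> {v_rms(72):.4f} in/s (≈ 0.004 in/s, as quoted)")
        print(f"0.004 in/s -> {vdb(0.004):.1f} VdB")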

  4. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.

  5. 10 CFR 26.5 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...

  6. 10 CFR 26.5 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...

  7. 10 CFR 26.5 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...

  8. 10 CFR 26.5 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...

  9. 10 CFR 26.5 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...

  10. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  11. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  12. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  13. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  14. 40 CFR 799.9410 - TSCA chronic toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...

  15. 40 CFR 86.537-90 - Dynamometer test runs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  16. 40 CFR 86.537-90 - Dynamometer test runs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  17. 40 CFR 86.537-90 - Dynamometer test runs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...

  18. Semi-analytical model of the axial movements of an oil-well drillstring in vertical wellbores

    NASA Astrophysics Data System (ADS)

    Hovda, Sigve

    2018-03-01

    A lumped element model for the axial movement of an oil-well drillstring is presented. In this paper, the model is restricted to vertical holes, where damping is due to skin friction from time-dependent Newtonian annular Couette-Poiseuille flow. The drillstring is constructed of pipes with different diameters and the diameter of the hole varies as a function of depth. Under these assumptions, the axial movement anywhere in the drillstring is basically a convolution between the axial movement on the top and a semi-analytical function that is derived in this paper. Expressions are given for transfer functions for downhole movements and pressures (surge and swab). In a vertical drilling situation, the motion is clearly underdamped, even when the hole is tight. The semi-analytical model illuminates various factors that are shown to be important for describing downhole pressure and motion, in particular the effects of added mass, the steady-state viscous forces, the Basset viscous forces, and the distribution of pipe sizes in the hole. These factors have non-negligible impacts on where the resonant frequencies are located, how much they are amplified, and what happens to the downhole pressure. Together with statistical power spectra of ocean wave patterns and the response amplitude operators for a floating structure, this model illustrates design concerns related to heave motion and how fast one can run the drillstring into the hole. Moreover, because of the computational simplicity of computing the convolution, the model is well suited for a real-time implementation.
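
    The convolution structure described above can be illustrated with a stand-in kernel. In the hypothetical Python sketch below, a damped-oscillator impulse response takes the place of the paper's semi-analytical function, and a two-component sinusoid stands in for surface heave:

        # Hedged illustration of the convolution structure (not the paper's
        # transfer function): downhole motion = surface motion * impulse response.
        import numpy as np

        dt = 0.05                                  # s, sample interval
        t = np.arange(0.0, 60.0, dt)

        # stand-in kernel: underdamped mode, as in a vertical drilling situation
        zeta, f_n = 0.08, 0.9                      # damping ratio, natural freq (Hz)
        wn = 2.0 * np.pi * f_n
        wd = wn * np.sqrt(1.0 - zeta**2)
        h = np.exp(-zeta * wn * t) * np.sin(wd * t) * (wn / np.sqrt(1 - zeta**2))

        # surface motion: heave-like sum of two sinusoids (hypothetical)
        x_top = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.2 * np.sin(2 * np.pi * 0.25 * t)

        x_downhole = np.convolve(x_top, h)[:t.size] * dt   # discrete convolution
        print(f"peak surface motion   : {np.abs(x_top).max():.2f} m")
        print(f"peak downhole response: {np.abs(x_downhole).max():.2f} m")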

  19. Modelling Ocean Dissipation in Icy Satellites: A Comparison of Linear and Quadratic Friction

    NASA Astrophysics Data System (ADS)

    Hay, H.; Matsuyama, I.

    2015-12-01

    Although subsurface oceans are confirmed in Europa, Ganymede, and Callisto, and strongly suspected in Enceladus and Titan, the exact mechanism required to heat and maintain these liquid reservoirs over Solar System history remains a mystery. Radiogenic heating can supply enough energy for large satellites, whereas tidal dissipation provides the best explanation for the presence of oceans in small icy satellites. The amount of thermal energy actually contributed to the interiors of these icy satellites through oceanic tidal dissipation is largely unquantified. Presented here is a numerical model that builds upon previous work for quantifying tidally dissipated energy in the subsurface oceans of the icy satellites. Recent semi-analytical models (Tyler, 2008; Matsuyama, 2014) have solved the Laplace Tidal Equations to estimate the time-averaged energy flux over an orbital period in icy satellite oceans, neglecting the presence of a solid icy shell. These models are only able to consider linear Rayleigh friction. The numerical model presented here is compared to one of these semi-analytical models, finding excellent agreement between velocity and displacement solutions for all three terms of the tidal potential. Time-averaged energy flux is within 2-6% of the analytical values. Quadratic (bottom) friction is then incorporated into the model, replacing linear friction. This approach is commonly applied in terrestrial ocean dissipation studies, where dissipation scales nonlinearly with velocity. A suite of simulations is also run for the quadratic friction case and compared against the recent scaling laws developed by Chen and Nimmo (2013).
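
    The two friction laws compared above differ in how dissipation scales with flow speed u: linear (Rayleigh) drag dissipates at a rate proportional to u^2, quadratic bottom drag at a rate proportional to |u|^3. A generic Python comparison (values not tuned to any particular satellite):

        # Hedged illustration of the two friction laws: per unit area, linear
        # drag dissipates ~ rho*h*alpha*u^2, quadratic drag ~ rho*cD*|u|^3.
        import numpy as np

        rho, h = 1000.0, 1.0e5        # ocean density (kg/m^3), thickness (m)
        alpha = 1.0e-8                # 1/s, linear friction coefficient
        cD = 2.0e-3                   # dimensionless bottom-drag coefficient

        u = np.linspace(0.001, 0.1, 5)             # tidal flow speeds, m/s
        lin = rho * h * alpha * u**2               # W/m^2, linear dissipation
        quad = rho * cD * np.abs(u)**3             # W/m^2, quadratic dissipation

        for ui, li, qi in zip(u, lin, quad):
            print(f"u = {ui:5.3f} m/s  linear: {li:.2e} W/m^2  quadratic: {qi:.2e} W/m^2")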

  20. Simultaneous determination of nitroimidazoles, benzimidazoles, and chloramphenicol components in bovine milk by ultra-high performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Wang, Yuanyuan; Li, Xiaowei; Zhang, Zhiwen; Ding, Shuangyang; Jiang, Haiyang; Li, Jiancheng; Shen, Jianzhong; Xia, Xi

    2016-02-01

    A sensitive, confirmatory ultra-high performance liquid chromatography-tandem mass spectrometric method was developed and validated to detect 23 veterinary drugs and metabolites (nitroimidazoles, benzimidazoles, and chloramphenicol components) in bovine milk. Compounds of interest were sequentially extracted from milk with acetonitrile and basified acetonitrile using sodium chloride to induce liquid-liquid partition. The extract was purified on a mixed mode solid-phase extraction cartridge. Using rapid polarity switching in electrospray ionization, a single injection was capable of detecting both positively and negatively charged analytes in a 9 min chromatography run time. Recoveries based on matrix-matched calibrations and isotope labeled internal standards for milk ranged from 51.7% to 101.8%. The detection limits and quantitation limits of the analytical method were found to be within the range of 2-20 ng/kg and 5-50 ng/kg, respectively. The recommended method is simple, specific, and reliable for the routine monitoring of nitroimidazoles, benzimidazoles, and chloramphenicol components in bovine milk samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. A Comparison of Tension and Compression Creep in a Polymeric Composite and the Effects of Physical Aging on Creep Behavior

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Veazie, David R.; Brinson, L. Catherine

    1996-01-01

    Experimental and analytical methods were used to investigate the similarities and differences of the effects of physical aging on creep compliance of IM7/K3B composite loaded in tension and compression. Two matrix-dominated loading modes, shear and transverse, were investigated for two load cases, tension and compression. The tests, run over a range of sub-glass-transition temperatures, provided material constants, material master curves and aging-related parameters. Comparing results from the short-term data indicated that although trends in the data with respect to aging time and aging temperature are similar, differences exist due to load direction and mode. The analytical model used for predicting long-term behavior from short-term data worked equally well for the tension- and compression-loaded cases. Comparison of the loading modes indicated that the predictive model provided more accurate long-term predictions for the shear mode than for the transverse mode. Parametric studies showed the usefulness of the predictive model as a tool for investigating long-term performance and compliance acceleration due to temperature.

  2. Applications of flight control system methods to an advanced combat rotorcraft

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.; Fletcher, Jay W.; Morris, Patrick M.; Tucker, George T.

    1989-01-01

    Advanced flight control system design, analysis, and testing methodologies developed at the Ames Research Center are applied in an analytical and flight test evaluation of the Advanced Digital Optical Control System (ADOCS) demonstrator. The primary objectives are to describe the knowledge gained about the implications of digital flight control system design for rotorcraft, and to illustrate the analysis of the resulting handling-qualities in the context of the proposed new handling-qualities specification for rotorcraft. Topics covered in-depth are digital flight control design and analysis methods, flight testing techniques, ADOCS handling-qualities evaluation results, and correlation of flight test results with analytical models and the proposed handling-qualities specification. The evaluation of the ADOCS demonstrator indicates desirable response characteristics based on equivalent damping and frequency, but undesirably large effective time delays (exceeding 240 msec in all axes). Piloted handling-qualities are found to be desirable or adequate for all low, medium, and high pilot-gain tasks; but handling-qualities are inadequate for ultra-high-gain tasks such as slope and running landings.

  3. Advances in analytical methodologies to guide bioprocess engineering for bio-therapeutics.

    PubMed

    Saldova, Radka; Kilcoyne, Michelle; Stöckmann, Henning; Millán Martín, Silvia; Lewis, Amanda M; Tuite, Catherine M E; Gerlach, Jared Q; Le Berre, Marie; Borys, Michael C; Li, Zheng Jian; Abu-Absi, Nicholas R; Leister, Kirk; Joshi, Lokesh; Rudd, Pauline M

    2017-03-01

    This study was performed to monitor the glycoform distribution of a recombinant antibody fusion protein expressed in CHO cells over the course of fed-batch bioreactor runs using high-throughput methods to accurately determine the glycosylation status of the cell culture and its product. Three different bioreactors running similar conditions were analysed at the same five time-points using the advanced methods described here. N-glycans from cell and secreted glycoproteins from CHO cells were analysed by HILIC-UPLC and MS, and the total glycosylation (both N- and O-linked glycans) secreted from the CHO cells were analysed by lectin microarrays. Cell glycoproteins contained mostly high mannose type N-linked glycans with some complex glycans; sialic acid was α-(2,3)-linked, galactose β-(1,4)-linked, with core fucose. Glycans attached to secreted glycoproteins were mostly complex with sialic acid α-(2,3)-linked, galactose β-(1,4)-linked, with mostly core fucose. There were no significant differences noted among the bioreactors in either the cell pellets or supernatants using the HILIC-UPLC method and only minor differences at the early time-points of days 1 and 3 by the lectin microarray method. In comparing different time-points, significant decreases in sialylation and branching with time were observed for glycans attached to both cell and secreted glycoproteins. Additionally, there was a significant decrease over time in high mannose type N-glycans from the cell glycoproteins. A combination of the complementary methods HILIC-UPLC and lectin microarrays could provide a powerful and rapid HTP profiling tool capable of yielding qualitative and quantitative data for a defined biopharmaceutical process, which would allow valuable near 'real-time' monitoring of the biopharmaceutical product. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. New software solutions for analytical spectroscopists

    NASA Astrophysics Data System (ADS)

    Davies, Antony N.

    1999-05-01

    Analytical spectroscopists must be computer literate to carry out their assigned tasks effectively. This has often been resisted within organizations with insufficient funds to equip their staff properly, a lack of desire to deliver the essential training, and a basic resistance amongst staff to learning the new techniques required for computer-assisted analysis. In the past these problems were compounded by seriously flawed software being sold for spectroscopic applications. Owing to the limited market for such complex products, the analytical spectroscopist was often faced with buying incomplete and unstable tools if the price was to remain reasonable. Long product lead times meant spectrometer manufacturers often ended up offering systems running under outdated and sometimes obscure operating systems. Not only did this mean special staff training for each instrument, with knowledge gained on one system not transferable to the neighbouring system, but these spectrometers were often capable of running only in stand-alone mode, cut off from the rest of the laboratory environment. Fortunately, a number of developments in recent years have substantially changed this depressing picture. A true multi-tasking operating system with a simple graphical user interface, Microsoft Windows NT4, has been widely introduced into the spectroscopic computing environment; it has proved more stable and robust while requiring better programming techniques of software vendors. The opening up of the Internet has provided an easy way to access new tools for data handling and has forced a substantial re-think about results delivery (for example, Chemical MIME types and IUPAC spectroscopic data exchange standards). Improved computing power and cheaper hardware now allow large spectroscopic data sets to be handled without too many problems, including the ability to carry out chemometric operations in minutes rather than hours. Fast networks now enable data analysis of even multi-dimensional spectroscopic data sets remote from the measuring instrument. A strong tendency towards a more unified and substantially more user-friendly graphical user interface allows even inexperienced users to become rapidly acquainted with complex mathematical analyses. Some examples of new spectroscopic software products are given to demonstrate these points and highlight the ease of integration into a modern analytical spectroscopy workplace.

  5. Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes

    NASA Astrophysics Data System (ADS)

    Goncharov, K. A.; Denisov, I. A.

    2018-03-01

    The problematic issues of the design of the bridge-type trolley crane, connected with ensuring uniform load distribution between the running wheels, are considered. The shortcomings of the existing methods of calculation of reference pressures are described. The results of the analytical calculation of the pressure of the support wheels are compared with the results of the numerical solution of this problem for various schemes of trolley supporting frames. Conclusions are given on the applicability of various methods for calculating vertical pressures, depending on the type of metal structures used in the trolley.

  6. Evaluation of a handheld point-of-care analyser for measurement of creatinine in cats.

    PubMed

    Reeve, Jenny; Warman, Sheena; Lewis, Daniel; Watson, Natalie; Papasouliotis, Kostas

    2017-02-01

    Objectives: The aim of the study was to evaluate whether a handheld creatinine analyser (StatSensor Xpress; SSXp), available for human patients, can be used to measure creatinine reliably in cats. Methods: Analytical performance was evaluated by determining within- and between-run coefficients of variation (CV, %), total error observed (TE_obs, %) and sigma metrics. Fifty client-owned cats presenting for investigation of clinical disease had creatinine measured simultaneously using the SSXp (whole blood and plasma) and a reference instrument (Konelab, serum); 48 paired samples were included in the study. Creatinine correlation between methodologies (SSXp vs Konelab) and sample types (SSXp whole blood vs SSXp plasma) was assessed by Spearman's correlation coefficient, and agreement was determined using Bland-Altman difference plots. Each creatinine value was assigned an IRIS stage (1-4); correlation and agreement between Konelab and SSXp IRIS stages were evaluated. Results: Within-run CV (4.23-8.85%), between-run CV (8.95-11.72%), TE_obs (22.15-34.92%) and sigma metrics (≤3) did not meet desired analytical requirements. Correlation between sample types was high (SSXp whole blood vs SSXp plasma; r = 0.89), and between instruments was high (SSXp whole blood vs Konelab serum; r = 0.85) to very high (SSXp plasma vs Konelab serum; r = 0.91). Konelab and SSXp whole-blood IRIS scores exhibited high correlation (r = 0.76). Packed cell volume did not significantly affect SSXp determination of creatinine. Bland-Altman difference plots identified a positive bias for the SSXp (7.13 μmol/l for whole blood; 20.23 μmol/l for plasma) compared with the Konelab. Outliers (1/48 whole blood; 2/48 plasma) occurred exclusively at very high creatinine concentrations. The SSXp failed to identify 2/21 azotaemic cats. Conclusions and relevance: Analytical performance of the SSXp in feline patients is not considered acceptable. The SSXp exhibited a high to very high correlation compared with the reference methodology but the two instruments cannot be used interchangeably. Improvements in the SSXp analytical performance are needed before its use can be recommended in feline clinical practice.
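
    For reference, a minimal sketch of how such performance metrics are commonly computed, using one widespread convention (TE_obs = |bias%| + 2·CV%; sigma = (TEa − |bias%|)/CV%); the replicate values and allowable total error below are invented for illustration, not the study's data:

    ```python
    import numpy as np

    replicates = np.array([142.0, 155.0, 149.0, 161.0, 146.0, 152.0])  # µmol/L
    reference = 140.0   # comparative-method value [µmol/L], assumed
    TEa = 20.0          # allowable total error [%], assumed quality goal

    mean = replicates.mean()
    cv = replicates.std(ddof=1) / mean * 100.0      # within-run CV [%]
    bias = (mean - reference) / reference * 100.0   # bias [%]

    te_obs = abs(bias) + 2.0 * cv                   # observed total error [%]
    sigma = (TEa - abs(bias)) / cv                  # sigma metric

    print(f"CV = {cv:.2f}%, bias = {bias:.2f}%, "
          f"TE_obs = {te_obs:.2f}%, sigma = {sigma:.2f}")
    ```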

  7. Rapid detection of G6PD mutations by multicolor melting curve analysis.

    PubMed

    Xia, Zhongmin; Chen, Ping; Tang, Ning; Yan, Tizhen; Zhou, Yuqiu; Xiao, Qizhi; Huang, Qiuying; Li, Qingge

    2016-09-01

    The MeltPro G6PD assay is the first commercial genetic test for glucose-6-phosphate dehydrogenase (G6PD) deficiency. This multicolor melting curve analysis-based real-time PCR assay is designed to genotype 16 G6PD mutations prevalent in the Chinese population. We comprehensively evaluated both the analytical and clinical performances of this assay. All 16 mutations were accurately genotyped, and the standard deviation of the measured Tm was <0.3°C. The limit of detection was 1.0 ng/μL of human genomic DNA. The assay could be run on four mainstream models of real-time PCR machines. The shortest running time (150 min) was obtained with the LightCycler 480 II. A clinical study using 763 samples collected from three hospitals indicated that, of 433 samples with reduced G6PD activity, the MeltPro assay identified 423 samples as mutant, yielding a clinical sensitivity of 97.7% (423/433). Of the 117 male samples with normal G6PD activity, the MeltPro assay confirmed that 116 samples were wild type, yielding a clinical specificity of 99.1% (116/117). Moreover, the MeltPro assay demonstrated 100% concordance with DNA sequencing for all targeted mutations. We concluded that the MeltPro G6PD assay is useful as a diagnostic or screening tool for G6PD deficiency in clinical settings. Copyright © 2016 Elsevier Inc. All rights reserved.
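
    The reported clinical sensitivity and specificity follow directly from the counts quoted above, as a quick check shows:

    ```python
    # Counts quoted in the abstract: 423 of 433 reduced-activity samples were
    # called mutant; 116 of 117 normal male samples were called wild type.
    tp, fn = 423, 433 - 423
    tn, fp = 116, 117 - 116

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    print(f"clinical sensitivity = {sensitivity:.1%}")   # ~97.7%
    print(f"clinical specificity = {specificity:.1%}")   # ~99.1%
    ```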

  8. Parametric Study of Carbon Nanotube Production by Laser Ablation Process

    NASA Technical Reports Server (NTRS)

    Arepalli, Sivaram; Nikolaev, Pavel; Holmes, William; Hadjiev, Victor; Scott, Carl

    2002-01-01

    Carbon nanotubes form a new class of nanomaterials that are presumed to have extraordinary mechanical, electrical and thermal properties. The single wall nanotubes (SWNTs) are estimated to be 100 times stronger than steel with 1/6th the weight, with an electrical carrying capacity better than copper and thermal conductivity better than diamond. Applications of these SWNTs include possible weight reduction of aerospace structures, multifunctional materials, nanosensors and nanoelectronics. The double pulsed laser vaporization process produces SWNTs with the highest percentage of nanotubes in the output material. The normal operating conditions include a green laser pulse closely followed by an infrared laser pulse. The lasers ablate a metal-containing graphite target located in a flow tube maintained in an oven at 1473 K with an argon flow of 100 sccm at 500 Torr pressure. In the present work a number of production runs were carried out, changing one operating condition at a time. We have studied the effects of nine parameters, including the sequencing of the laser pulses, pulse separation times, laser energy densities, the type of buffer gas used, oven temperature, operating pressure, flow rate and inner flow tube diameters. All runs were done using the same graphite target. The collected nanotube material was characterized by a variety of analytical techniques including scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman and thermogravimetric analysis (TGA). Results indicate trends that could be used to optimize the process and increase the efficiency of the production process.

  9. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    PubMed

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  10. Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk

    PubMed Central

    Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Background Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. Objectives We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods Running was assessed on the medical history questionnaire by leisure-time activity. Results During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions Running, even 5-10 minutes per day and at slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581

  11. Determination of cocaine in postmortem human liver exposed to overdose. Application of an innovative and efficient extraction/clean up procedure and gas chromatography-mass spectrometry analysis.

    PubMed

    Magalhães, Elisângela Jaqueline; Ribeiro de Queiroz, Maria Eliana Lopes; Penido, Marcus Luiz de Oliveira; Paiva, Marco Antônio Ribeiro; Teodoro, Janaína Aparecida Reis; Augusti, Rodinei; Nascentes, Clésia Cristina

    2013-09-27

    A simple and efficient method was developed for the determination of cocaine in post-mortem samples of human liver via solid-liquid extraction with low temperature partitioning (SLE-LTP) and analysis by gas chromatography coupled to mass spectrometry (GC-MS). The extraction procedure was optimized by evaluating the influence of the following variables: pH of the extract, volume and composition of the extractor solvent, addition of a sorbent material (PSA: primary-secondary amine) and NaCl to clean up and increase the ionic strength of the extract. A bovine liver sample that was free of cocaine was used as a blank for the optimization of the SLE-LTP extraction procedure. The highest recovery was obtained when crushed bovine liver (2 g) was treated with 2 mL of ultrapure water plus 8 mL of acetonitrile at physiological pH (7.4). The results also indicated no need for using PSA and NaCl. The complete analytical procedure was validated for the following figures of merit: selectivity, lower limit of quantification (LLOQ), calibration curve, recovery, precision and accuracy (for within-run and between-run experiments), matrix effect, dilution integrity and stability. The within-run and between-run precision (at four levels) varied from 2.1% to 9.4% and from 4.0% to 17.0%, respectively. A maximum deviation of 11.62% for the within-run and between-run accuracies in relation to the nominal concentrations was observed. Moreover, the LLOQ value for cocaine was 50.0 ng g⁻¹, whereas no significant effects were noticed in the assays of dilution integrity and stability. To assess its overall performance, the optimized method was applied to the analysis of eight human liver samples collected from individuals who died due to the abusive consumption of cocaine. Due to the existence of a significant matrix effect, a blank human liver was used to construct a matrix-matched analytical curve. The concentrations of cocaine found in these samples ranged from 333.5 to 5969 ng g⁻¹. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Development of Fully Automated Low-Cost Immunoassay System for Research Applications.

    PubMed

    Wang, Guochun; Das, Champak; Ledden, Bradley; Sun, Qian; Nguyen, Chien

    2017-10-01

    Enzyme-linked immunosorbent assay (ELISA) automation for routine operation in a small research environment would be very attractive. A portable, fully automated, low-cost immunoassay system was designed, developed, and evaluated with several protein analytes. It features disposable capillary columns as the reaction sites and uses real-time calibration for improved accuracy. It reduces the overall assay time to less than 75 min and can easily be adapted to new testing targets. The running cost is extremely low due to the nature of automation, as well as reduced material requirements. Details about system configuration, components selection, disposable fabrication, system assembly, and operation are reported. The performance of the system was initially established with a rabbit immunoglobulin G (IgG) assay, and an example of assay adaptation with an interleukin 6 (IL6) assay is shown. This system is ideal for research use, but could work for broader testing applications with further optimization.

  13. Sustainability and optimal control of an exploited prey predator system through provision of alternative food to predator.

    PubMed

    Kar, T K; Ghosh, Bapan

    2012-08-01

    In the present paper, we develop a simple two-species prey-predator model in which the predator is partially coupled with an alternative prey. The aim is to study the consequences of providing additional food to the predator, as well as the effects of harvesting efforts applied to both species. It is observed that the provision of alternative food to the predator is not always beneficial to the system. A complete picture of the long-run dynamics of the system is discussed, based on the effort pair as control parameters. Optimal augmentations of prey and predator biomass at final time have been investigated by optimal control theory. The short- and long-time effects of the application of optimal control have also been discussed. Finally, some numerical illustrations are given to verify our analytical results with the help of different sets of parameters. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
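
    A minimal sketch of this kind of harvested prey-predator system, using a generic logistic-prey model with harvesting efforts E1 and E2 as the control pair and a constant alternative-food term A; the equations and parameter values are illustrative stand-ins, not the paper's exact model:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    r, K = 1.0, 10.0      # prey growth rate and carrying capacity
    a, b = 0.3, 0.2       # predation and conversion coefficients
    d, A = 0.4, 0.1       # predator death rate, alternative-food supply
    q1, q2 = 1.0, 1.0     # catchability coefficients
    E1, E2 = 0.2, 0.1     # harvesting efforts (the control pair)

    def rhs(t, z):
        x, y = z
        dx = r * x * (1 - x / K) - a * x * y - q1 * E1 * x
        dy = b * a * x * y + A * y - d * y - q2 * E2 * y
        return [dx, dy]

    sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 2.0])
    x_end, y_end = sol.y[:, -1]
    print(f"long-run biomasses: prey = {x_end:.3f}, predator = {y_end:.3f}")
    ```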

  14. Carbon catalyzed SO2 oxidation by NO2 and O3

    NASA Technical Reports Server (NTRS)

    Cofer, W. R., III; Schryer, D. R.; Rogowski, R. S.

    1982-01-01

    The oxidation of SO2 to sulfate on carbon particles by trace quantities of NO2 and O3 was studied. Either particulate carbon black was directly exposed on the pan of a microbalance to various humidified mixtures of SO2 and oxidant gas and the resultant weight gains monitored, or the gas mixtures were bubbled through aqueous suspensions of carbon black and pure-water blanks. In each set of experiments the run times were varied appropriately and the yields of sulfate were determined analytically. Conversion of SO2 to sulfate was thus characterized as a function of exposure time and of oxidant gas. Carbon black was determined to be an excellent catalyst for SO2 oxidation to sulfate by both NO2 and O3. No saturation effects were observed in either experimental approach. Conversions of SO2 to sulfate did not appear to be pH dependent.

  15. A rapid method for the simultaneous quantification of the major tocopherols, carotenoids, free and esterified sterols in canola (Brassica napus) oil using normal phase liquid chromatography.

    PubMed

    Flakelar, Clare L; Prenzler, Paul D; Luckett, David J; Howitt, Julia A; Doran, Gregory

    2017-01-01

    A normal phase high performance liquid chromatography (HPLC) method was developed to simultaneously quantify several prominent bioactive compounds in canola oil, viz. α-tocopherol, γ-tocopherol, δ-tocopherol, β-carotene, lutein, β-sitosterol, campesterol and brassicasterol. The use of sequential diode array detection (DAD) and tandem mass spectrometry (MS/MS) allowed direct injection of oils, diluted in hexane without derivatisation or saponification, greatly reducing sample preparation time, and permitting the quantification of both free sterols and intact sterol esters. Further advantages over existing methods included increased analytical selectivity, and a chromatographic run time substantially less than other reported normal phase methods. The HPLC-DAD-MS/MS method was applied to freshly extracted canola oil samples as well as commercially available canola, palm fruit, sunflower and olive oils. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems. Ph.D. Thesis, 1992

    NASA Technical Reports Server (NTRS)

    Long, Junsheng

    1994-01-01

    This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Module Redundancy (NMR) scheme uses to mask them off. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluations have shown that this strategy can provide a nearly error-free execution time even under faults, with a lower average redundancy than NMR.

  17. Formulation of the relativistic moment implicit particle-in-cell method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noguchi, Koichi; Tronci, Cesare; Zuccaro, Gianluca

    2007-04-15

    A new formulation is presented for the implicit moment method applied to the time-dependent relativistic Vlasov-Maxwell system. The new approach is based on a specific formulation of the implicit moment method that allows us to retain the same formalism that is valid in the classical case, despite the formidable complication introduced by the nonlinear nature of the relativistic equations of motion. To demonstrate the validity of the new formulation, an implicit finite difference algorithm is developed to solve Maxwell's equations and the equations of motion. A number of benchmark problems are run: two-stream instability, ion acoustic wave damping, Weibel instability, and Poynting flux acceleration. The numerical results are all in agreement with analytical solutions.

  18. You can run, you can hide: The epidemiology and statistical mechanics of zombies

    NASA Astrophysics Data System (ADS)

    Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.

    2015-11-01

    We use a popular fictional disease, zombies, in order to introduce techniques used in modern epidemiology modeling, and ideas and techniques used in the numerical study of critical phenomena. We consider variants of zombie models, from fully connected continuous-time dynamics to a full-scale exact stochastic dynamic simulation of a zombie outbreak on the continental United States. Along the way, we offer a closed-form analytical expression for the fully connected differential equation, and demonstrate that the single-person-per-site, two-dimensional square-lattice version of zombies lies in the percolation universality class. We end with a quantitative study of the full-scale US outbreak, including the average susceptibility of different geographical regions.
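
    A minimal sketch of a fully connected (well-mixed) susceptible-zombie model of the kind described above, in which an encounter either turns a human into a zombie (rate beta) or destroys the zombie (rate kappa); rates and populations are illustrative, and the paper's closed-form solution is not reproduced here:

    ```python
    from scipy.integrate import solve_ivp

    beta, kappa = 1.1e-3, 0.8e-3    # bite and kill rates per encounter

    def szr(t, y):
        s, z = y
        ds = -beta * s * z               # humans bitten
        dz = (beta - kappa) * s * z      # zombies created minus destroyed
        return [ds, dz]

    sol = solve_ivp(szr, (0.0, 50.0), [1000.0, 1.0], max_step=0.1)
    print(f"survivors at t=50: {sol.y[0, -1]:.1f}")
    print(f"zombies   at t=50: {sol.y[1, -1]:.1f}")
    ```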

  19. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  20. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  1. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  2. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

    Simulation was introduced to solve problems posed as systems. Two kinds of problem can be overcome with this technique. First, a problem that has an analytical solution, but for which running a physical experiment would be too costly in money or lives. Second, a problem that has no analytical solution at all. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form a pseudo sampling distribution that will lead to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, being used to verify analytical solutions in inference. This paper also discusses the resampling techniques as simulation techniques. The misunderstandings about these two techniques are examined. The successful usages of both techniques are also explained.
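
    A minimal sketch of the resampling idea: build a pseudo sampling distribution of a statistic (here the median) from bootstrap resamples and read off a percentile confidence interval; the data are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.lognormal(mean=0.0, sigma=0.8, size=40)   # skewed toy data

    # Resample with replacement many times to form the pseudo sampling
    # distribution of the sample median.
    boot_medians = np.array([
        np.median(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(10_000)
    ])

    lo, hi = np.percentile(boot_medians, [2.5, 97.5])
    print(f"sample median    = {np.median(sample):.3f}")
    print(f"95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
    ```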

  3. Validated HPLC method for determination of sennosides A and B in senna tablets.

    PubMed

    Sun, Shao Wen; Su, Hsiu Ting

    2002-07-31

    This study developed an efficient and reliable ion-pair liquid chromatographic method for quantitation of sennosides A and B in commercial senna tablets. Separation was conducted on a Hypersil C18 column (250 x 4.6 mm, 5 μm) at a temperature of 40°C, using a mixture of 0.1 M acetate buffer (pH 6.0) and acetonitrile (70:30, v/v) containing 5 mM tetrahexylammonium bromide as mobile phase. Sennosides A and B were completely separated from other constituents within 14 min. The developed method was validated. Both run-to-run repeatability (n=10) and day-to-day reproducibility (n=3) of peak area were below 0.4% RSD. Linearity of peak area was tested in the range 30-70 μg/ml (r>0.9997). Accuracy was assessed with recovery, and the recoveries for sennosides A and B were 101.73 ± 1.30% and 101.81 ± 2.18% (n=3 x 6), respectively. Robustness of the analytical method was tested using a three-level Plackett-Burman design in which 11 factors were assessed with 23 experiments. Eight factors (column, concentration of ion-pair reagent, % of organic modifier (acetonitrile), buffer pH, column temperature, flow rate, time constant and detection wavelength) were investigated in a specified range above and below the nominal method conditions. It was found that: (1) column and % acetonitrile affected significantly resolution and retention time, (2) column, % acetonitrile, column temperature, flow rate and time constant affected significantly the plate number of sennoside A, and (3) column and time constant affected significantly the tailing factor.

  4. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
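
    The universal N^(-1/2) decay for two sequences is easy to check by Monte Carlo; the sketch below estimates S_N for a few values of N and shows that S_N · N^(1/2) is roughly constant:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ordered_fraction(N, trials=10_000):
        # Running maxima of two independent sequences of N uniform variables;
        # count how often the first stays strictly ahead at every step.
        a = np.maximum.accumulate(rng.random((trials, N)), axis=1)
        b = np.maximum.accumulate(rng.random((trials, N)), axis=1)
        return np.all(a > b, axis=1).mean()

    for N in (8, 64, 512):
        s = ordered_fraction(N)
        print(f"N={N:4d}  S_N ≈ {s:.4f}   S_N * N^(1/2) = {s * N**0.5:.3f}")
    ```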

  5. 40 CFR Appendix B to Part 60 - Performance Specifications

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 6216-98 is the reference for design specifications, manufacturer's performance specifications, and test... representative of a group of monitors produced during a specified period or lot, for conformance with the design... technique and a single analytical program are used. One Run may include results for more than one test...

  6. Run-D.M.C.: A Mnemonic Aid for Explaining Mass Transfer in Electrochemical Systems

    ERIC Educational Resources Information Center

    Miles, Deon T.

    2013-01-01

    Electrochemistry is a significant area of analytical chemistry encompassing electrical measurements of chemical systems. The applications associated with electrochemistry appear in many aspects of everyday life: explaining how batteries work, how the human nervous system functions, and how metal corrosion occurs. The most common electrochemical…

  7. 40 CFR Table 2 to Subpart Llll of... - Emission Limits and Standards for New Multiple Hearth Sewage Sludge Incineration Units

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...

  8. 40 CFR Table 2 to Subpart Llll of... - Emission Limits and Standards for New Multiple Hearth Sewage Sludge Incineration Units

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...

  9. Managing Offshore Branch Campuses: An Analytical Framework for Institutional Strategies

    ERIC Educational Resources Information Center

    Shams, Farshid; Huisman, Jeroen

    2012-01-01

    The aim of this article is to develop a framework that encapsulates the key managerial complexities of running offshore branch campuses. In the transnational higher education (TNHE) literature, several managerial ramifications and impediments have been addressed by scholars and practitioners. However, the strands of the literature are highly…

  10. Women Match Men when Learning a Spatial Skill

    ERIC Educational Resources Information Center

    Spence, Ian; Yu, Jingjie Jessica; Feng, Jing; Marshman, Jeff

    2009-01-01

    Meta-analytic studies have concluded that although training improves spatial cognition in both sexes, the male advantage generally persists. However, because some studies run counter to this pattern, a closer examination of the anomaly is warranted. The authors investigated the acquisition of a basic skill (spatial selective attention) using a…

  11. A Performance-Based Method of Student Evaluation

    ERIC Educational Resources Information Center

    Nelson, G. E.; And Others

    1976-01-01

    The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle to use in performance-based evaluation. A test-run use of the record is reported. (JT)

  12. A Characteristics Approach to the Evaluation of Economics Software Packages.

    ERIC Educational Resources Information Center

    Lumsden, Keith; Scott, Alex

    1988-01-01

    Utilizes Bloom's Taxonomy to identify elements of teacher and student interest. Depicts the way in which these interests are developed into characteristics for use in analytically evaluating software. Illustrates the use of this evaluating technique by appraising the much used software package "Running the British Economy." (KO)

  13. Cash on Demand: A Framework for Managing a Cash Liquidity Position.

    ERIC Educational Resources Information Center

    Augustine, John H.

    1995-01-01

    A well-run college or university will seek to accumulate and maintain an appropriate cash reserve or liquidity position. A rigorous analytic process for estimating the size and cost of a liquidity position, based on judgments about the institution's operating risks and opportunities, is outlined. (MSE)

  14. Visualising Disability in the Past

    ERIC Educational Resources Information Center

    Devlieger, Patrick; Grosvenor, Ian; Simon, Frank; Van Hove, Geert; Vanobbergen, Bruno

    2008-01-01

    In recent years there has been a growth in interdisciplinary work which has argued that disability is not an isolated, individual medical pathology but instead a key defining social category like "race", class and gender. Seen in this way disability provides researchers with another analytic tool for exploring the nature of power. Running almost…

  15. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
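
    The Titan IV expressions quoted above can be evaluated directly; the band center frequencies below are standard 1/3 octave values chosen for illustration:

    ```python
    import numpy as np

    # Titan IV PLF values from the abstract: T_o = 1.14 sec for the overall
    # level; T_oi = 4.88 * f_i**(-0.2) sec for each 1/3 octave band, where
    # f_i is the band center frequency in Hz.
    f_centers = np.array([31.5, 63.0, 125.0, 250.0, 500.0, 1000.0, 2000.0])
    T_oi = 4.88 * f_centers ** -0.2

    for f, T in zip(f_centers, T_oi):
        print(f"{f:7.1f} Hz band: optimum averaging time = {T:.2f} sec")
    print("overall level  : optimum averaging time = 1.14 sec")
    ```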

  16. The baseline serum value of α-amylase is a significant predictor of distance running performance.

    PubMed

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Tarperi, Cantor; La Torre, Antonio; Guidi, Gian Cesare; Schena, Federico

    2015-02-01

    This study was planned to investigate whether serum α-amylase concentration may be associated with running performance, physiological characteristics and other clinical chemistry analytes in a large sample of recreational athletes undergoing distance running. Forty-three amateur runners successfully concluded a 21.1 km half-marathon at 75%-85% of their maximal oxygen uptake (VO2max). Blood was drawn during warm up and 15 min after conclusion of the run. After correction for body weight change, significant post-run increases were observed for serum values of alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, bilirubin, creatine kinase (CK), iron, lactate dehydrogenase (LDH), triglycerides, urea and uric acid, whereas the values of body weight, glomerular filtration rate, total and low density lipoprotein-cholesterol were significantly decreased. The concentration of serum α-amylase was unchanged. In univariate analysis, significant associations with running performance were found for gender, VO2max, training regimen and pre-run serum values of α-amylase, CK, glucose, high density lipoprotein-cholesterol, LDH, urea and uric acid. In multivariate analysis, only VO2max (p=0.042) and baseline α-amylase (p=0.021) remained significant predictors of running performance. The combination of these two variables predicted 71% of variance in running performance. The baseline concentration of serum α-amylase was positively correlated with variation of serum glucose during the trial (r=0.345; p=0.025) and negatively with capillary blood lactate at the end of the run (r=-0.352; p=0.021). We showed that the baseline serum α-amylase concentration significantly and independently predicts distance running performance in recreational runners.

  17. CE-UV for the characterization of passion fruit juices provenance by amino acids profile with the aid of chemometric tools.

    PubMed

    Passos, Heloisa Moretti; Cieslarova, Zuzana; Simionato, Ana Valéria Colnaghi

    2016-07-01

    A separation method was developed in order to quantify free amino acids in passion fruit juices using CE-UV. A selective derivatization reaction with FMOC followed by MEKC analysis was chosen due to the highly interconnected mobilities of the analytes, enabling the separation of 22 amino acids by lipophilicity differences, as will be further discussed. To achieve such results, the method was optimized concerning BGE composition (concentrations, pH, and addition of organic modifier) and running conditions (temperature and applied voltage). The optimized running conditions were a BGE composed of 60 mmol/L borate buffer at pH 10.1, 30 mmol/L SDS and 5% methanol, run for 40 min at 23°C and 25 kV. The method was validated and applied to eight brands plus one fresh natural juice, detecting 12 amino acids. Quantification of six analytes combined with principal component analysis was capable of characterizing different types of juices and showed potential to detect adulteration in industrial juices. Glutamic acid was found to be the most concentrated amino acid in all juices, exceeding 1 g/L in all samples, and was also crucial for the correct classification of a natural juice, which presented a concentration of 22 g/L. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Determination of total and polycyclic aromatic hydrocarbons in aviation jet fuel.

    PubMed

    Bernabei, M; Reda, R; Galiero, R; Bocchinfuso, G

    2003-01-24

    The aviation jet fuel widely used in turbine-engine aircraft is manufactured from straight-run kerosene. The combustion quality of jet fuel is largely related to the hydrocarbon composition of the fuel itself; paraffins have better burning properties than aromatic compounds, especially naphthalenes and light polycyclic aromatic hydrocarbons (PAHs), which are characterised as soot and smoke producers. For this reason the burning quality of fuel is generally measured as smoke formation. This evaluation is carried out with UV spectrophotometric determination of total naphthalene hydrocarbons and a chromatographic analysis to determine the total aromatic compounds. These methods can be considered insufficient to evaluate the human health impact of these compounds, owing to their inability to measure trace (ppm) amounts of each aromatic hydrocarbon and each PAH in accordance with limitations imposed because of their toxicological properties. In this paper two analytical methods are presented. Both are based on a gas chromatographic technique with a mass detector operating in the selected ion monitoring mode. The first method was able to determine more than 60 aromatic hydrocarbons in a fuel sample in a 35-min chromatographic run, while the second was able to carry out the analysis of more than 30 PAHs in a 40-min chromatographic run. The linearity and sensitivity of the methods in measuring these analytes at trace levels are described.

  19. Hydro-economic performances of streamflow withdrawal strategies: the case of small run-of-river power plants

    NASA Astrophysics Data System (ADS)

    Basso, Stefano; Lazzaro, Gianluca; Schirmer, Mario; Botter, Gianluca

    2014-05-01

    River flow withdrawals to supply small run-of-river hydropower plants have increased significantly in recent years - particularly in the Alpine area - as a consequence of public incentives aimed at enhancing energy production from renewable sources. This growth has further raised the anthropic pressure in areas traditionally characterized by intense exploitation of water resources, thereby triggering social conflicts among local communities, hydropower investors and public authorities. This brought to the attention of scientists and the public the urgency of novel, quantitative tools for assessing the hydrologic impact of this type of plant and for trading off economic interests against ecologic concerns. In this contribution we propose an analytical framework that allows for the estimate of the streamflow availability for hydropower production and the selection of the run-of-river plant capacity, as well as the assessment of the related profitability and environmental impacts. The method highlights the key role of streamflow variability in the design process, by showing the significant control exerted by the coefficient of variation of daily flows on the duration of the optimal capacity of small run-of-river plants. Moreover, the analysis reveals a gap between energy and economic optimizations, which may result in the under-exploitation of the available hydropower potential at large scales. The disturbances to the natural flow regime produced between the intake and the outflow of run-of-river power plants are also estimated within the proposed framework. The altered hydrologic regime, described through the probability distribution and the correlation function of streamflows, is analytically expressed as a function of the natural regime for different management strategies. The deviations of a set of hydrologic statistics from pristine conditions are used, jointly with an economic index, to compare environmental and economic outcomes of alternative plant setups and management strategies. Benefits connected to ecosystem services provided by unimpaired riverine environments can also be included in the analysis, possibly accounting for the disruptive effect of multiple run-of-river power plants built in cascade along the same river. The application to case studies in the Alpine region shows the potential of the tool to assess different management strategies and design solutions, and to evaluate local- and catchment-scale impacts of small run-of-river hydropower development.
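
    A minimal sketch of the capacity-selection trade-off under different streamflow variabilities, using a lognormal toy flow distribution and invented plant economics (head, efficiency, residual-flow rule, energy price, and annualized capacity cost are all illustrative assumptions, not the paper's framework):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def mean_power(daily_q, q_cap, q_env=0.5, head=25.0, eta=0.85):
        """Mean power [W] of a plant with design discharge q_cap [m^3/s]."""
        rho_g = 9810.0                                   # rho * g [N/m^3]
        usable = np.clip(daily_q - q_env, 0.0, q_cap)    # residual-flow rule
        return eta * rho_g * head * usable.mean()

    def annual_profit(daily_q, q_cap, price=0.10, capacity_cost=8.0e4):
        # price [EUR/kWh]; capacity_cost = annualized cost per m^3/s installed
        kwh = mean_power(daily_q, q_cap) * 8760.0 / 1000.0
        return kwh * price - capacity_cost * q_cap

    for cv in (0.5, 1.5):                                # low vs high variability
        sigma = np.sqrt(np.log(1.0 + cv**2))
        q = rng.lognormal(np.log(5.0) - 0.5 * sigma**2, sigma, size=10 * 365)
        profit, q_best = max((annual_profit(q, qc), qc)
                             for qc in np.linspace(0.5, 20.0, 40))
        print(f"CV = {cv}: most profitable Q_cap ≈ {q_best:.1f} m^3/s "
              f"(profit ≈ {profit / 1e3:.0f} kEUR/yr)")
    ```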

  20. Gravitational Instability of Small Particles in Stratified Dusty Disks

    NASA Astrophysics Data System (ADS)

    Shi, J.; Chiang, E.

    2012-12-01

    Self-gravity is an attractive means of forming the building blocks of planets, a.k.a. the first-generation planetesimals. For ensembles of dust particles to aggregate into self-gravitating, bound structures, they must first collect into regions of extraordinarily high density in circumstellar gas disks. We have modified the ATHENA code to simulate dusty, compressible, self-gravitating flows in a 3D shearing box configuration, working in the limit that dust particles are small enough to be perfectly entrained in gas. We have used our code to determine the critical density thresholds required for disk gas to undergo gravitational collapse. In the strict limit that the stopping times of particles in gas are infinitesimally small, our numerical simulations and analytic calculations reveal that the critical density threshold for gravitational collapse is orders of magnitude above what has been commonly assumed. We discuss how finite but still short stopping times under realistic conditions can lower the threshold to a level that may be attainable. [Figure captions: (1) Nonlinear development of gravitational instability in a stratified dusty disk: volume renderings of dust density for the bottom half of a disk at t = 0, 6, 8, and 9 Omega^(-1); the disk first develops shearing density waves, which steepen into long azimuthal filaments that eventually break into very dense dust clumps. (2) Time evolution of the maximum dust density within the simulation box, for a standard run with average Toomre Q = 0.5 and for variants with Q ≳ 1 that double the metallicity, the gas Toomre Q, or the midplane dust-to-gas ratio, or whose midplane density exceeds the Roche criterion.]

  1. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
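
    A minimal sketch of the one-run standard-curve idea: four label channels carry known amounts of the standard peptide, the fifth carries the unknown, and the unknown is read off a linear fit; all intensities below are invented numbers:

    ```python
    import numpy as np

    spiked_fmol = np.array([10.0, 50.0, 100.0, 500.0])     # 4 standard channels
    intensities = np.array([2.1e5, 9.8e5, 2.05e6, 9.9e6])  # their peak areas
    unknown_intensity = 3.4e6                              # 5th (sample) channel

    # Linear calibration: intensity = slope * amount + intercept.
    slope, intercept = np.polyfit(spiked_fmol, intensities, 1)
    unknown_fmol = (unknown_intensity - intercept) / slope

    print(f"calibration: I = {slope:.3e} * amount + {intercept:.3e}")
    print(f"estimated analyte amount ≈ {unknown_fmol:.1f} fmol")
    ```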

  2. Metabolic Factors Limiting Performance in Marathon Runners

    PubMed Central

    Rapoport, Benjamin I.

    2010-01-01

    Each year in the past three decades has seen hundreds of thousands of runners register to run a major marathon. Of those who attempt to race over the marathon distance of 26 miles and 385 yards (42.195 kilometers), more than two-fifths experience severe and performance-limiting depletion of physiologic carbohydrate reserves (a phenomenon known as ‘hitting the wall’), and thousands drop out before reaching the finish lines (approximately 1–2% of those who start). Analyses of endurance physiology have often either used coarse approximations to suggest that human glycogen reserves are insufficient to fuel a marathon (making ‘hitting the wall’ seem inevitable), or implied that maximal glycogen loading is required in order to complete a marathon without ‘hitting the wall.’ The present computational study demonstrates that the energetic constraints on endurance runners are more subtle, and depend on several physiologic variables including the muscle mass distribution, liver and muscle glycogen densities, and running speed (exercise intensity as a fraction of aerobic capacity) of individual runners, in personalized but nevertheless quantifiable and predictable ways. The analytic approach presented here is used to estimate the distance at which runners will exhaust their glycogen stores as a function of running intensity. In so doing it also provides a basis for guidelines ensuring the safety and optimizing the performance of endurance runners, both by setting personally appropriate paces and by prescribing midrace fueling requirements for avoiding ‘the wall.’ The present analysis also sheds physiologically principled light on important standards in marathon running that until now have remained empirically defined: The qualifying times for the Boston Marathon. PMID:20975938
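
    A back-of-envelope sketch of the glycogen-limited distance idea, assuming the common ~1 kcal per kg per km energy cost of running and a crude linear rule for the carbohydrate fraction versus intensity (both rough illustrative assumptions, not the paper's calibrated model):

    ```python
    mass_kg = 70.0
    glycogen_kcal = 2000.0            # accessible liver + leg muscle stores

    def carb_fraction(intensity):
        # crude linear rule: ~55% carbs at 60% VO2max, rising toward 100%
        return min(1.0, 0.55 + (intensity - 0.60) * (0.45 / 0.35))

    for intensity in (0.60, 0.75, 0.90):
        f = carb_fraction(intensity)
        d_max = glycogen_kcal / (1.0 * mass_kg * f)   # km until stores empty
        print(f"{intensity:.0%} VO2max: carbs supply {f:.0%} "
              f"-> 'wall' at ≈ {d_max:.0f} km")
    ```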

  3. Gender difference and age-related changes in performance at the long-distance duathlon.

    PubMed

    Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver

    2013-02-01

    The gender differences and age-related changes in triathlon (i.e., swimming, cycling, and running) performance have been previously investigated, but data are missing for the duathlon (i.e., running, cycling, and running). We investigated the participation and performance trends, the gender difference, and the age-related decline in performance at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men, respectively). Linear regression analyses for the 3 split times and the total event time demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5% for the 10-km run, 150-km cycle, 30-km run, and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time-period effect, ranging between 1.3% (first run) and 9.8% (bike split), with 3.3% for the overall race time. In accordance with previous observations in triathlons, the age-related decline in duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the careers of long-distance duathletes, with the age of peak performance between 25 and 39 years for both women and men.

  4. Fast gradient separation by very high pressure liquid chromatography: reproducibility of analytical data and influence of delay between successive runs.

    PubMed

    Stankovich, Joseph J; Gritti, Fabrice; Beaver, Lois Ann; Stevenson, Paul G; Guiochon, Georges

    2013-11-29

    Five methods were used to implement fast gradient separations: constant flow rate, constant column-wall temperature, constant inlet pressure at moderate and at high pressure (controlled by a pressure controller), and programmed-flow constant pressure. For programmed-flow constant pressure, the flow rates and gradient compositions are controlled through the method input instead of the pressure controller; minor fluctuations in the inlet pressure therefore do not affect the mobile phase flow rate. The reproducibilities of the retention times, the response factors, and the eluted band widths of six successive separations of the same sample (9 components) were measured with equilibration times between 0 and 15 min. The influence of the length of the equilibration time on these reproducibilities is discussed. The results show that the average column temperature may increase from one separation to the next and that this contributes to fluctuation of the results.

  5. Isotemporal Substitution Paradigm for Physical Activity Epidemiology and Weight Change

    PubMed Central

    Willett, Walter C.; Hu, Frank B.; Ding, Eric L.

    2009-01-01

    For a fixed amount of time engaged in physical activity, activity choice may affect body weight differently depending partly on other activities’ displacement. Typical models used to evaluate effects of physical activity on body weight do not directly address these substitutions. An isotemporal substitution paradigm was developed as a new analytic model to study the time-substitution effects of one activity for another. In 1991–1997, the authors longitudinally examined the associations of discretionary physical activities, with varying activity displacements, with 6-year weight loss maintenance among 4,558 healthy, premenopausal US women who had previously lost >5% of their weight. Results of isotemporal substitution models indicated widely heterogeneous relations with each physical activity type (P < 0.001) depending on the displaced activities. Notably, whereas 30 minutes/day of brisk walking substituted for 30 minutes/day of jogging/running was associated with weight increase (1.57 kg, 95% confidence interval: 0.33, 2.82), brisk walking was associated with lower weight when substituted for slow walking (−1.14 kg, 95% confidence interval: −1.75, −0.53) and with even lower weight when substituted for TV watching. Similar heterogeneous relations with weight change were found for each activity type (TV watching, slow walking, brisk walking, jogging/running) when displaced by other activities across these various models. The isotemporal substitution paradigm may offer new insights for future public health recommendations. PMID:19584129
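
    For readers unfamiliar with the paradigm, an isotemporal substitution model is an ordinary regression that omits the activity being displaced while retaining total activity time, so each remaining coefficient estimates the effect of swapping that activity for the omitted one. A minimal sketch on synthetic data (variable names and effect sizes are invented, not taken from this study):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "tv": rng.uniform(0, 3, n),          # hours/day, synthetic
            "slow_walk": rng.uniform(0, 2, n),
            "brisk_walk": rng.uniform(0, 2, n),
            "jog": rng.uniform(0, 1, n),
        })
        df["total"] = df[["tv", "slow_walk", "brisk_walk", "jog"]].sum(axis=1)
        # True effects per hour in this toy world: TV +1.0, brisk walking -0.5,
        # jogging -1.2 kg; slow walking is neutral.
        df["weight_change"] = (1.0 * df["tv"] - 0.5 * df["brisk_walk"]
                               - 1.2 * df["jog"] + rng.normal(0, 1, n))

        # Isotemporal substitution: drop the displaced activity (jogging) but
        # keep total time; the brisk_walk coefficient then estimates the effect
        # of substituting brisk walking for jogging (expected -0.5 + 1.2 = +0.7).
        model = smf.ols("weight_change ~ tv + slow_walk + brisk_walk + total", df).fit()
        print(model.params)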

  6. Precompetition warm-up in elite and subelite rhythmic gymnastics.

    PubMed

    Guidetti, Laura; Di Cagno, Alessandra; Gallotta, Maria Chiara; Battaglia, Claudia; Piazza, Marina; Baldari, Carlo

    2009-09-01

    The aim of this study was to investigate which precompetition warm-up methodologies resulted in the best overall performance in rhythmic gymnastics. The coaches of national and international clubs (60 elite and 90 subelite) were interviewed, and the relationship between sport performance and precompetition warm-up routines was examined. A total of 49% of the coaches interviewed spent more than 1 hour preparing their athletes for the competition, including 45 minutes dedicated to warm-up exercises. In spite of previous studies' suggestions, the time between the end of warm-up and the beginning of competition was more than 5 minutes for 68% of those interviewed. A slow run was the activity of choice used to begin the warm-up (96%). Significant differences between elite and subelite gymnasts were found concerning the total duration of warm-up, the duration of slow running, the use of rhythmic steps and leaps during the warm-up, the use of dynamic flexibility exercises, the repetition of competition performances (p < 0.01), and the use of imagery (p < 0.05). A precompetition warm-up in rhythmic gymnastics would include static stretching exercises at least 60 minutes before the competition start time, and active stretching exercises alternated with analytic muscle strengthening aimed at increasing muscle temperature. Rhythmic gymnastics coaches at all levels can use these data as a review of precompetition warm-up practices and a possible source of new ideas.

  7. Hydrophilic interaction liquid chromatography/positive ion electrospray ionization mass spectrometry method for the quantification of alprazolam and α-hydroxy-alprazolam in human plasma.

    PubMed

    Kalogria, Eleni; Pistos, Constantinos; Panderi, Irene

    2013-12-30

    A hydrophilic interaction liquid chromatography/positive ion electrospray-mass spectrometry (HILIC-ESI/MS) method has been developed and fully validated for the quantification of alprazolam and its main metabolite, α-hydroxy-alprazolam, in human plasma. The assay is based on 50μL plasma samples, following liquid-liquid extraction. All analytes and the internal standard (tiamulin) were separated by hydrophilic interaction liquid chromatography using an X-Bridge-HILIC analytical column (150.0mm×2.1mm i.d., particle size 3.5μm) under isocratic elution. The mobile phase was composed of a 7% 10mM ammonium formate water solution in acetonitrile and pumped at a flow rate of 0.20mLmin(-1). Running in positive electrospray ionization and selected ion monitoring (SIM) mode, the mass spectrometer was set to analyze the protonated molecules [M+H](+) at m/z 309, 325 and 494 for alprazolam, α-hydroxy-alprazolam and tiamulin (ISTD), respectively. The assay was linear over the concentration range of 2.5-250ngmL(-1) for alprazolam and 2.5-50ngmL(-1) for α-hydroxy-alprazolam. Intermediate precision was less than 4.1% over the tested concentration ranges. The method is the first reported application of HILIC to the analysis of benzodiazepines in human plasma. With a small sample size (50μL human plasma) and a run time of less than 10.0min per sample, the method can be used to support a wide range of clinical studies concerning alprazolam quantification. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

    We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
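
    The data-point bookkeeping in these experiments follows directly from the duty cycle: every additional ionization mode lengthens the acquisition cycle and divides the number of points collected across a chromatographic peak. A rough sketch with assumed timing values (the paper does not publish its dwell and switching times):

        # Assumed, not published: peak width and per-mode cycle cost.
        peak_width_s = 2.0      # UPLC peak base width
        time_per_mode_s = 0.05  # dwell plus inter-mode switching, per ionization mode

        def points_per_peak(n_modes):
            cycle_s = n_modes * time_per_mode_s  # one pass over all modes
            return peak_width_s / cycle_s

        print(points_per_peak(4))  # ~10 points/peak: marginal for quantification
        print(points_per_peak(1))  # ~40 points/peak: comfortable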

  9. Development of a method for the determination of cocaine, cocaethylene and norcocaine in human breast milk using liquid phase microextraction and gas chromatography-mass spectrometry.

    PubMed

    Silveira, Gabriela de Oliveira; Belitsky, Íris Tikkanen; Loddi, Silvana; Rodrigues de Oliveira, Carolina Dizioli; Zucoloto, Alexandre Dias; Fruchtengarten, Ligia Veras Gimenez; Yonamine, Mauricio

    2016-08-01

    Most licit and illicit substances consumed by a nursing mother may be excreted in breast milk, with potential short- and long-term harmful effects for the breastfed infant. The extraction of substances from this matrix represents an analytical challenge due to its high protein and fat content and the fact that its composition changes during the postpartum period. The aim of the present study was to develop a liquid phase microextraction (LPME) method for the detection of the active substances cocaine (COC), cocaethylene (CE) and norcocaine (NCOC) in human breast milk using gas chromatography-mass spectrometry (GC-MS). Validation was performed on spiked human breast milk samples. The limits of detection (LOD) and quantification (LOQ) were 6 and 12 ng/mL, respectively, for all analytes. Calibration curves were linear over a concentration range of 12.0-1000 ng/mL (r(2)=0.99). No interferences were noticed at the retention times of interest. Within-run and between-run precision was always less than or equal to 15% relative standard deviation, and bias ranged from 3 to 18%. Forty-six milk samples were analyzed. Only one sample was confirmed to be COC positive (138 ng/mL) and another presented a COC concentration near the LOD (6 ng/mL). This method has shown to be a reliable alternative for the determination of cocaine, cocaethylene and norcocaine in human breast milk in the fields of clinical and forensic toxicology. The LPME procedure proved to be a promising, low-cost and environmentally friendly technique for the purpose of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Monolithic column modified with bifunctional ionic liquid and styrene stationary phases for capillary electrochromatography.

    PubMed

    Mao, Zhenkun; Chen, Zilin

    2017-01-13

    A novel monolithic column with an ionic liquid and styrene-modified bifunctional group was prepared for capillary electrochromatography (CEC) by in situ copolymerization in a ternary porogenic solvent. The ionic liquid (1-allyl-methylimidazolium chloride, AlMeIm(+)Cl(-)) and styrene served as the bifunctional monomers, while ethylene dimethacrylate (EDMA) was used as the cross-linker. The AlMeIm(+)Cl(-) monomer was introduced as the anion-exchange group and styrene as the hydrophobic and aromatic group; the similar conjugated structures of AlMeIm(+)Cl(-) and styrene were beneficial in producing an obvious synergistic effect. The bifunctional stationary phase possessed powerful selectivity for the separation of neutral compounds, acidic analytes and phenols. The highest column efficiency was 2.70×10(5) plates m(-1) (theoretical plates, N) for toluene. A relatively strong electroosmotic flow (EOF) was obtained over a wide pH range from 2.0 to 12.0, which allowed rapid separation of the analytes within 10 min. The proposed monolithic column was characterized by scanning electron microscopy (SEM) and Fourier transform infrared (FT-IR) spectroscopy. The results indicated that the resultant monolithic column had good permeability and excellent mechanical stability. Good reproducibility was obtained, with relative standard deviations (RSDs) of the retention time in the range of 0.24-0.47% and 0.81-2.17% for run-to-run (n=5) and day-to-day (n=5), and 1.09-2.70% and 0.98-1.70% for column-to-column (n=3) and batch-to-batch (n=3), respectively. The combination of AlMeIm(+)Cl(-) and styrene is a promising option in the fabrication of organic polymer monolithic columns. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Development and validation of a high-throughput stereoselective LC-MS/MS assay for bupropion, hydroxybupropion, erythrohydrobupropion, and threohydrobupropion in human plasma

    PubMed Central

    Teitelbaum, Aaron M.; Flaker, Alicia M.; Kharasch, Evan D.

    2016-01-01

    A stereoselective analytical method was developed and validated for the quantification of bupropion and its principal metabolites hydroxybupropion, erythrohydrobupropion and threohydrobupropion in human plasma. Separation of the individual enantiomers (R)-bupropion, (S)-bupropion, (R,R)-hydroxybupropion, (S,S)-hydroxybupropion, (1S,2S)-threohydrobupropion, (1R,2R)-threohydrobupropion, (1R,2S)-erythrohydrobupropion, and (1S,2R)-erythrohydrobupropion was achieved using an α1-acid glycoprotein column within a 12-minute run time. Chromatographic separation was significantly influenced by mobile phase pH and by variability between columns. Analytes were quantified by positive ion electrospray tandem mass spectrometry following plasma protein precipitation with 20% trichloroacetic acid. Identification of the erythrohydrobupropion and threohydrobupropion enantiomer peaks was achieved by sodium borohydride reduction of enantiopure (R)- and (S)-bupropion. Initial assay validation and sensitivity determination were performed on AB Sciex 3200, 4000 QTRAP, and 6500 mass spectrometers. Accuracy and precision were within 15% for each analyte. The assay was fully validated over analyte-specific concentrations using an AB Sciex 3200 mass spectrometer. Intra- and inter-assay precision and accuracy were within 12% for each analyte. The limits of quantification for bupropion (R and S), hydroxybupropion (R,R and S,S), threohydrobupropion (1S,2S and 1R,2R), and erythrohydrobupropion (1R,2S and 1S,2R) were 0.5, 2, 1, and 1 ng/mL, respectively. All analytes were stable following freeze-thaw cycles at −80°C and while stored at 4°C in the instrument autosampler. The method was applied to clinical pharmacokinetic investigations of bupropion in patients. This is the first chromatographic method to resolve the erythrohydrobupropion and threohydrobupropion enantiomers, and the first stereoselective LC-MS/MS assay to quantify bupropion and its principal metabolites hydroxybupropion, erythrohydrobupropion, and threohydrobupropion in human plasma. PMID:26963497
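
    Precision and accuracy figures of the kind reported here come from replicate quality-control measurements. A generic sketch of the two calculations (synthetic values, not this assay's data):

        import numpy as np

        # Synthetic QC replicates (ng/mL) at one nominal concentration.
        nominal = 10.0
        qc = np.array([9.6, 10.3, 9.9, 10.4, 9.7, 10.1])

        cv_pct = qc.std(ddof=1) / qc.mean() * 100.0         # precision, %CV
        bias_pct = (qc.mean() - nominal) / nominal * 100.0  # accuracy, % bias
        print(f"precision {cv_pct:.1f}%CV, bias {bias_pct:+.1f}%")
        # Typical acceptance criteria are within 15% (20% at the LLOQ).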

  12. The Abbott Architect c8000: analytical performance and productivity characteristics of a new analyzer applied to general chemistry testing.

    PubMed

    Pauli, Daniela; Seyfarth, Michael; Dibbelt, Leif

    2005-01-01

    Applying basic potentiometric and photometric assays, we evaluated the fully automated random access chemistry analyzer Architect c8000, a new member of the Abbott Architect system family, with respect to both its analytical and operational performance, and compared it to an established high-throughput chemistry platform, the Abbott Aeroset. Our results demonstrate that the intra- and inter-assay imprecision, inaccuracy, lower limit of detection and linear range of the c8000 generally meet current requirements of laboratory diagnosis; there were only rare exceptions, e.g. the assays for plasma lipase and urine uric acid, which apparently need to be improved by additional rinsing of the reagent pipettors. Even with plasma exhibiting CK activities as high as 40,000 U/l, sample carryover by the c8000 could not be detected. Comparison of methods run on the c8000 and the Aeroset revealed correlation coefficients of 0.98-1.00; when identical chemistries were applied on both analyzers, the slopes of the regression lines approached unity. With typical laboratory workloads including 10-20% STAT samples and up to 10% samples with high analyte concentrations demanding dilutional reruns, steady-state throughput of 700 to 800 tests per hour was obtained with the c8000. The system generally responded to STAT orders within 2 minutes, yielding analytical STAT order completion times of 5 to 15 minutes depending on the type and number of assays requested per sample. Due to its extended test and sample processing capabilities and convenient software, the c8000 may meet the varying needs of clinical laboratories rather well.

  13. Development and validation of a simple UHPLC-MS/MS method for the simultaneous determination of trimethylamine N-oxide, choline, and betaine in human plasma and urine.

    PubMed

    Ocque, Andrew J; Stubbs, Jason R; Nolin, Thomas D

    2015-05-10

    A simple, sensitive, and precise ultra-high performance liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of trimethylamine N-oxide, choline, and betaine in human plasma and urine. Sample preparation involved protein precipitation with methanol containing internal standards. Chromatographic separation was achieved using an Acquity BEH Amide (2.1mm×50mm, 1.7μm) analytical column with gradient elution of solvent A (10mM ammonium formate, pH 3.5) and solvent B (acetonitrile). The flow rate was 0.4mL/min and the total run time was 5min. Detection of analytes was performed using heated electrospray ionization (positive mode) and selected reaction monitoring. Excellent linearity was observed over the standard curve concentration ranges of 0.010-5.00μg/mL (plasma) and 1.00-150μg/mL (urine) for all analytes. The intra- and inter-day accuracy and precision for all quality controls were within ±10%. Excellent recovery was observed. The method is rapid, accurate and reproducible, and was successfully applied to a pilot study of markers of atherosclerosis in patients with kidney disease who underwent successful kidney transplantation. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Supercritical fluid chromatography versus high performance liquid chromatography for enantiomeric and diastereoisomeric separations on coated polysaccharides-based stationary phases: Application to dihydropyridone derivatives.

    PubMed

    Hoguet, Vanessa; Charton, Julie; Hecquet, Paul-Emile; Lakhmi, Chahinaze; Lipka, Emmanuelle

    2018-05-11

    For analytical applications, SFC has always remained in the shadow of LC. The analytical enantioseparation of eight dihydropyridone derivatives was run in both High Performance Liquid Chromatography and Supercritical Fluid Chromatography. Four polysaccharide-based chiral stationary phases, namely amylose and cellulose tris(3,5-dimethylphenylcarbamate), amylose tris((S)-α-phenylethylcarbamate) and cellulose tris(4-methylbenzoate), with four mobile phases consisting of either n-hexane/ethanol or propan-2-ol (80:20 v:v) or carbon dioxide/ethanol or propan-2-ol (80:20 v:v) mixtures, were investigated under the same operating conditions (temperature and flow rate). The elution strength, enantioselectivity and resolution were compared between the two methodologies. For these compounds, under most conditions, HPLC afforded shorter retention times and higher resolution than SFC. HPLC appears particularly suitable for the separation of compounds bearing two chiral centers. For instance, compound 7 was baseline-resolved on the OD-H CSP under n-Hex/EtOH 80:20, with resolution values of 2.98, 1.55 and 4.52 between the four stereoisomers, in less than 17 min, whereas in SFC it was not fully separated in 23 min under similar eluting conditions. After the analytical screenings, the best conditions were transferred to the semi-preparative scale. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. A Label-Free Porous Silicon Immunosensor for Broad Detection of Opiates in a Blind Clinical Study and Result Comparison to Commercial Analytical Chemistry Techniques

    PubMed Central

    Bonanno, Lisa M.; Kwong, Tai C.; DeLouise, Lisa A.

    2010-01-01

    In this work we evaluate for the first time the performance of a label-free porous silicon (PSi) immunosensor assay in a blind clinical study designed to screen authentic patient urine specimens for a broad range of opiates. The PSi opiate immunosensor achieved 96% concordance with liquid chromatography-mass spectrometry/tandem mass spectrometry (LC-MS/MS) results on samples that underwent standard opiate testing (n=50). In addition, successful detection of a commonly abused opiate, oxycodone, resulted in 100% qualitative agreement between the PSi opiate sensor and LC-MS/MS. In contrast, a commercial broad opiate immunoassay technique (CEDIA®) achieved 65% qualitative concordance with LC-MS/MS. Evaluation of important performance attributes including precision, accuracy, and recovery was completed on blank urine specimens spiked with test analytes. Variability of morphine detection as a model opiate target was < 9% both within-run and between-day at and above the cutoff limit of 300 ng ml−1. This study validates the analytical screening capability of label-free PSi opiate immunosensors in authentic patient samples and is the first semi-quantitative demonstration of the technology’s successful clinical use. These results motivate future development of PSi technology to reduce complexity and cost of diagnostic testing particularly in a point-of-care setting. PMID:21062030

  16. Analytical method development for directed enzyme evolution research: a high throughput high-performance liquid chromatography method for analysis of ribose and ribitol and a capillary electrophoresis method for the separation of ribose enantiomers.

    PubMed

    Sun, Baoguo; Miller, Gregory; Lee, Wan Yee; Ho, Kelvin; Crowe, Michael A; Partridge, Leslie

    2013-01-04

    Analytical methods were developed for a directed enzyme evolution research programme, which pursued high-performance enzymes to produce high-quality L-ribose in a large-scale biocatalytic reaction. A high-throughput HPLC method with evaporative light-scattering detection was developed to test ribose and ribitol in the enzymatic reaction. A β-cyclobond 2000 analytical column separated ribose and ribitol in 2.3 min, a C(18) guard column was used as an on-line filter to clean up the enzyme sample matrix, and a short gradient was applied to wash the column; the enzymatic reaction solution could be injected directly after quenching. The total run time for each sample was approximately 4 min, providing the capability of screening 4×96-well plates/day/instrument. Meanwhile, a capillary electrophoresis method was developed for the separation of ribose enantiomers, with 7-aminonaphthalene-1,3-disulfonic acid as the derivatisation reagent and 25 mM tetraborate with 5 mM β-cyclodextrin as the electrolyte. 0.35% of D-ribose in L-ribose could be detected, which translates into 99.3% ee of L-ribose. The derivatisation reagent and sample matrix did not interfere with the measurement. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Mean platelet volume (MPV) predicts middle distance running performance.

    PubMed

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable and age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running performance. The significant association between baseline MPV and running time suggests that hyperactive platelets may exert some pleiotropic effects on endurance performance.

  18. Thermal behavior of spiral bevel gears. Ph.D. Thesis - Case Western Univ., Aug. 1993

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.

    1995-01-01

    An experimental and analytical study of the thermal behavior of spiral bevel gears is presented. Experimental data were taken using thermocoupled test hardware and an infrared microscope. Many operational parameters were varied to investigate their effects on the thermal behavior. The data taken were also used to validate the boundary conditions applied to the analytical model. A finite element-based solution sequence was developed. The three-dimensional model was developed based on the manufacturing process for these gears. Contact between the meshing gears was found using tooth contact analysis to describe the location, curvatures, orientations, and surface velocities. This information was then used in a three-dimensional Hertzian contact analysis to predict contact ellipse size and maximum pressure. From these results, an estimate of the heat flux magnitude and the location on the finite element model was made. The finite element model used time-averaged boundary conditions to permit the solution to attain steady state in a computationally efficient manner. Then time- and position-varying boundary conditions were applied to the model to analyze the cyclic heating and cooling due to the gears meshing and transferring heat to the surroundings, respectively. The model was run in this mode until the temperature behavior stabilized. The transient flash temperature on the surface was therefore described. The analysis can be used to predict the overall expected thermal behavior of spiral bevel gears. The experimental and analytical results were compared for this study and also with a limited number of other studies. The experimental and analytical results attained in the current study were basically within 10% of each other for the cases compared. The experimental comparison was for bulk thermocouple locations and data taken with an infrared microscope. The results of a limited number of other studies were compared with those obtained herein and predicted the same basic behavior.

  19. Rockets for spin recovery

    NASA Technical Reports Server (NTRS)

    Whipple, R. D.

    1980-01-01

    The potential effectiveness of rockets as an auxiliary means for an aircraft to effect recovery from spins was investigated. The advances in rocket technology produced by the space effort suggested that currently available systems might obviate many of the problems encountered in earlier rocket systems. A modern fighter configuration known to exhibit a flat spin mode was selected. An analytical study was made of the thrust requirements for a rocket spin recovery system for the subject configuration. These results were then applied to a preliminary systems study of rocket components appropriate to the problem. Subsequent spin tunnel tests were run to evaluate the analytical results.

  20. Integrated Rapid-Diagnostic-Test Reader Platform on a Cellphone

    PubMed Central

    Mudanyali, Onur; Dimitrov, Stoyan; Sikora, Uzair; Padmanabhan, Swati; Navruz, Isa; Ozcan, Aydogan

    2012-01-01

    We demonstrate a cellphone based Rapid-Diagnostic-Test (RDT) reader platform that can work with various lateral flow immuno-chromatographic assays and similar tests to sense the presence of a target analyte in a sample. This compact and cost-effective digital RDT reader, weighing only ~65 grams, mechanically attaches to the existing camera unit of a cellphone, where various types of RDTs can be inserted to be imaged in reflection or transmission modes under light-emitting-diode (LED) based illumination. Captured raw images of these tests are then digitally processed (within less than 0.2 sec/image) through a smart application running on the cellphone for validation of the RDT as well as for automated reading of its diagnostic result. The same smart application running on the cellphone then transmits the resulting data, together with the RDT images and other related information (e.g., demographic data) to a central server, which presents the diagnostic results on a world-map through geo-tagging. This dynamic spatio-temporal map of various RDT results can then be viewed and shared using internet browsers or through the same cellphone application. We tested this platform using malaria, tuberculosis (TB) as well as HIV RDTs by installing it on both Android based smart-phones as well as an iPhone. Providing real-time spatio-temporal statistics for the prevalence of various infectious diseases, this smart RDT reader platform running on cellphones might assist health-care professionals and policy makers to track emerging epidemics worldwide and help epidemic preparedness. PMID:22596243
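
    The paper does not spell out its image-processing pipeline, but a generic way to read a lateral-flow strip from a grayscale image is to compare the darkening at the control- and test-line positions against the membrane background. A hypothetical sketch along those lines (line positions, band width and thresholds are invented):

        import numpy as np

        def line_signals(strip_gray, control_row, test_row, band=3):
            """Mean darkening at the control/test line rows of a 2-D
            grayscale strip image, relative to the membrane background."""
            bg = np.median(strip_gray)
            ctrl = bg - strip_gray[control_row - band:control_row + band].mean()
            test = bg - strip_gray[test_row - band:test_row + band].mean()
            return ctrl, test

        # Synthetic strip: bright membrane with two darker lines.
        img = np.full((100, 40), 200.0)
        img[30:33] -= 80   # strong control line
        img[60:63] -= 25   # faint test line
        ctrl, test = line_signals(img, control_row=31, test_row=61)
        print(f"control={ctrl:.1f}, test={test:.1f}, "
              f"valid={ctrl > 10}, positive={test > 10}")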

  1. Governing Laws of Complex System Predictability under Co-evolving Uncertainty Sources: Theory and Nonlinear Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Perdigão, R. A. P.

    2017-12-01

    Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.

  2. A comparative assessment of tools for ecosystem services quantification and valuation

    USGS Publications Warehouse

    Bagstad, Kenneth J.; Semmens, Darius; Waage, Sissel; Winthrop, Robert

    2013-01-01

    To enter widespread use, ecosystem service assessments need to be quantifiable, replicable, credible, flexible, and affordable. With recent growth in the field of ecosystem services, a variety of decision-support tools has emerged to support more systematic ecosystem services assessment. Despite the growing complexity of the tool landscape, thorough reviews of tools for identifying, assessing, modeling and in some cases monetarily valuing ecosystem services have generally been lacking. In this study, we describe 17 ecosystem services tools and rate their performance against eight evaluative criteria that gauge their readiness for widespread application in public- and private-sector decision making. We describe each of the tools' intended uses, services modeled, analytical approaches, data requirements, and outputs, as well as the time required to run seven of the tools in a first comparative concurrent application of multiple tools to a common location – the San Pedro River watershed in southeast Arizona, USA, and northern Sonora, Mexico. Based on this work, we offer conclusions about these tools' current 'readiness' for widespread application within both public- and private-sector decision-making processes. Finally, we describe potential pathways forward to reduce the resource requirements for running ecosystem services models, which are essential to facilitate their more widespread use in environmental decision making.

  3. RAY-RAMSES: a code for ray tracing on the fly in N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barreira, Alexandre; Llinares, Claudio; Bose, Sownak

    2016-05-01

    We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies, as well as modified gravity. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
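
    The core idea, accumulating an observable cell by cell along the line of sight as the simulation advances, reduces to summing kernel-weighted contributions per cell. A toy one-dimensional sketch (the lensing kernel is stripped to its geometric factor and all values are illustrative, so this is a schematic of the bookkeeping, not of RAY-RAMSES itself):

        import numpy as np

        def lensing_kernel(chi, chi_s):
            """Geometric part of a weak-lensing weight, illustration only."""
            return chi * (chi_s - chi) / chi_s if chi < chi_s else 0.0

        chi_edges = np.linspace(0.0, 1000.0, 201)    # comoving cell edges
        dchi = np.diff(chi_edges)
        chi_mid = 0.5 * (chi_edges[:-1] + chi_edges[1:])
        rng = np.random.default_rng(1)
        delta = rng.normal(0.0, 0.1, chi_mid.size)   # per-cell overdensity, synthetic

        kappa = 0.0
        for d, chi, dx in zip(delta, chi_mid, dchi):  # cell-by-cell accumulation
            kappa += d * lensing_kernel(chi, chi_s=1000.0) * dx
        print(f"integrated convergence (toy units): {kappa:.3f}")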

  4. New approaches with two cyano columns to the separation of acetaminophen, phenylephrine, chlorpheniramine and related compounds.

    PubMed

    Olmo, B; García, A; Marín, A; Barbas, C

    2005-03-25

    The development of new pharmaceutical forms of classical active compounds generates new analytical problems, as in the case of sugar-free sachets of cough-cold products containing acetaminophen, phenylephrine hydrochloride and chlorpheniramine maleate. Two cyanopropyl stationary phases were employed to tackle the problem. The Discovery cyanopropyl (SUPELCO) column permitted the separation of the three actives, maleate and the excipients (mainly saccharine and orange flavour) with a constant proportion of aqueous/organic solvent (95:5, v/v) and a pH gradient from 7.5 to 2. The run lasted 14 min. This technique avoids many problems related to baseline shifts with classical organic solvent gradients and opens up possibilities for modifying selectivity that are not generally used in reversed-phase HPLC. On the other hand, the Agilent Zorbax SB-CN column, with a different retention profile, permitted us to separate not only the three actives and the excipients but also the three known related compounds 4-aminophenol, 4-chloracetanilide and 4-nitrophenol in an isocratic method with a run time under 30 min. This method was validated following ICH guidelines, and the validation parameters showed that it could be employed as a stability-indicating method for this pharmaceutical form.

  5. Ultra Performance Liquid Chromatography with Tandem Mass Spectrometry for the Quantitation of Seventeen Sedative Hypnotics in Six Common Toxicological Matrices.

    PubMed

    Mata, Dani C; Davis, John F; Figueroa, Ariana K; Stanford, Mary June

    2016-01-01

    An ultra performance liquid chromatography triple quadrupole mass spectrometry (LC-MS-MS) method for the quantification of 14 benzodiazepines and three sedative hypnotics is presented. The fast and inexpensive assay was developed for California's Orange County Crime Lab for use in antemortem (AM) and postmortem casework. The drugs were rapidly cleaned up from AM blood, postmortem blood, urine, liver, brain and stomach contents using DPX(®) Weak Anion Exchange (DPX WAX) tips fitted on a pneumatic extractor, which can process up to 48 samples at one time. Assay performance was determined for validation based on recommendations by the Scientific Working Group for Forensic Toxicology for linearity, limit of quantitation, limit of detection, bias, precision (within run and between run), dilution integrity, carry-over, selectivity, recovery, ion suppression and extracted sample stability. Linearity was verified using the therapeutic and toxic ranges of all 17 analytes. Final verification of the method was confirmed by four analysts using 20 blind matrix matched samples. All results were within 20% of each other and the expected value. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Quantitation of phlorizin and phloretin using an ultra high performance liquid chromatography-electrospray ionization tandem mass spectrometric method.

    PubMed

    Lijia, Xu; Guo, Jianru; Chen, QianQian; Baoping, Jiang; Zhang, Wei

    2014-06-01

    A sensitive and selective ultra high performance liquid chromatography-tandem mass spectrometric (UHPLC-MS/MS) method for the determination of phlorizin and phloretin in human plasma has been developed for the first time. Samples were prepared by protein precipitation and analyzed on a C18 column interfaced with a triple quadrupole tandem mass spectrometer. Negative electrospray ionization was employed as the ionization source. The mobile phase consisted of acetonitrile-water (0.02% formic acid), using a gradient procedure. The analytes and the internal standard dihydroquercetin were detected using multiple reaction monitoring mode. The method was linear over the concentration range of 2.5-1000.0 ng/mL. The lower limit of quantification (LLOQ) was 2.5 ng/mL. The intra- and inter-day relative standard deviation across three validation runs over the entire concentration range was less than 9.2%. The accuracy determined at three concentrations was within ± 7.3% in terms of relative error. The total run time was 12.0 min. This assay offers advantages in terms of expediency and suitability for the analysis of phlorizin and phloretin in various biological fluids. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Preparation and evaluation of open-tubular capillary columns modified with metal-organic framework incorporated polymeric porous layer for liquid chromatography.

    PubMed

    Zhu, Manman; Zhang, Lingyi; Chu, Zhanying; Wang, Shulei; Chen, Kai; Zhang, Weibing; Liu, Fan

    2018-07-01

    An open-tubular capillary liquid chromatographic column (1 m × 25 µm i.d. × 375 µm o.d.) was prepared by incorporating metal-organic framework particles modified with vancomycin directly into a zwitterionic polymer coating synthesized by the copolymerization of [2-(methacryloyloxy)ethyl]dimethyl-(3-sulfopropyl) ammonium hydroxide and N,N'-methylenebisacrylamide. The incorporation of IRMOF-3 (isoreticular metal organic framework-3) particles improved the selectivity of the zwitterionic polymer coating, with absolute column efficiency reaching 79,900 plates for p-xylene. Besides strong hydrophilic interaction, the separation of neutral, basic, and acidic compounds demonstrates that π-π stacking interactions and the coordination effect of the unsaturated Zn(2+) of the MOF also contribute to the separation of various analytes. The RSD values (run-to-run, day-to-day, column-to-column, n = 3) of the retention times of neutral compounds were less than 0.71%, 0.69% and 3.08%, respectively, suggesting good repeatability. In addition, the column was applied to the analysis of a trypsin digest of bovine serum albumin, revealing its potential for separating biological samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Dual dispersive liquid-liquid microextraction for determination of phenylpropenes in oils by gas chromatography-mass spectrometry.

    PubMed

    Tsai, Chia-Ju; Li, Jih-Heng; Feng, Chia-Hsien

    2015-09-04

    A novel, simple and quick sample preparation method was developed and used for the pre-concentration and extraction of six phenylpropenes (anethole, estragole, eugenol, methyl eugenol, safrole and myristicin) from oil samples by dual dispersive liquid-liquid microextraction. Gas chromatography-mass spectrometry was used for the separation and determination of the compounds. Several experimental parameters affecting extraction efficiency were evaluated and optimized, including forward-extractant type and volume, surfactant type and concentration, water volume, and back-extractant type and volume. For all analytes (10-1000 ng/mL), the limits of detection (S/N≥3) ranged from 1.0 to 3.0 ng/mL, the limits of quantification (S/N≥10) ranged from 2.5 to 10.0 ng/mL, and enrichment factors ranged from 3.2 to 37.1. Within-run and between-run relative standard deviations (n=6) were less than 2.61% and 4.33%, respectively. Linearity was excellent, with determination coefficients (r(2)) above 0.9977. The experiments showed that the proposed method is a simple, effective, and environmentally friendly way of analyzing phenylpropenes in oil samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. On-line sequential injection-capillary electrophoresis for near-real-time monitoring of extracellular lactate in cell culture flasks.

    PubMed

    Alhusban, Ala A; Gaudry, Adam J; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2014-01-03

    Cell culture has replaced many in vivo studies because of ethical and regulatory measures as well as the possibility of increased throughput. Analytical assays to determine (bio)chemical changes are often based on end-point measurements rather than on a series of sequential determinations. The purpose of this work was to develop an analytical system for monitoring cell culture based on sequential injection-capillary electrophoresis (SI-CE) with capacitively coupled contactless conductivity detection (C(4)D). The system was applied to monitoring lactate production, an important metabolic indicator, during mammalian cell culture. Using a background electrolyte consisting of 25mM tris(hydroxymethyl)aminomethane and 35mM cyclohexyl-2-aminoethanesulfonic acid with 0.02% poly(ethyleneimine) (PEI) at pH 8.65 and a multilayer polymer coated capillary, lactate could be resolved from the other compounds present in the media, with a relative standard deviation of 0.07% for intraday electrophoretic mobility and an analysis time of less than 10 min. Using the human embryonic kidney cell line HEK293, lactate concentrations in the cell culture medium were measured every 20 min over 3 days, requiring only 8.73 μL of sample per run. Combining simplicity, portability, automation, high sample throughput, low limits of detection, low sample consumption and the ability to scale up and out, this new methodology represents a promising technique for near real-time monitoring of chemical changes in diverse cell culture applications. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) for intraoperative neurochemical monitoring.

    PubMed

    Kimble, Christopher J; Johnson, David M; Winter, Bruce A; Whitlock, Sidney V; Kressin, Kenneth R; Horne, April E; Robinson, Justin C; Bledsoe, Jonathan M; Tye, Susannah J; Chang, Su-Youne; Agnesi, Filippo; Griessenauer, Christoph J; Covey, Daniel; Shon, Young-Min; Bennet, Kevin E; Garris, Paul A; Lee, Kendall H

    2009-01-01

    The Wireless Instantaneous Neurotransmitter Concentration Sensing System (WINCS) measures extracellular neurotransmitter concentration in vivo and displays the data graphically in nearly real time. WINCS implements two electroanalytical methods, fast-scan cyclic voltammetry (FSCV) and fixed-potential amperometry (FPA), to measure neurotransmitter concentrations at an electrochemical sensor, typically a carbon-fiber microelectrode. WINCS comprises a battery-powered patient module and a custom software application (WINCSware) running on a nearby personal computer. The patient module impresses upon the electrochemical sensor either a constant potential (for FPA) or a time-varying waveform (for FSCV). A transimpedance amplifier converts the resulting current to a signal that is digitized and transmitted to the base station via a Bluetooth radio link. WINCSware controls the operational parameters for FPA or FSCV, and records the transmitted data stream. Filtered data is displayed in various formats, including a background-subtracted plot of sequential FSCV scans - a representation that enables users to distinguish the signatures of various analytes with considerable specificity. Dopamine, glutamate, adenosine and serotonin were selected as analytes for test trials. Proof-of-principle tests included in vitro flow-injection measurements and in vivo measurements in rat and pig. Further testing demonstrated basic functionality in a 3-Tesla MRI unit. WINCS was designed in compliance with consensus standards for medical electrical device safety, and it is anticipated that its capability for real-time intraoperative monitoring of neurotransmitter release at an implanted sensor will prove useful for advancing functional neurosurgery.
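
    The background-subtracted FSCV display mentioned above is conceptually simple: average a set of scans recorded before the event of interest and subtract that template from every later scan, so the large, slowly drifting charging current cancels and faradaic changes stand out. A minimal sketch on synthetic scans (array shapes and values are illustrative, not WINCS internals):

        import numpy as np

        def background_subtract(scans, n_background=10):
            """Subtract the mean of the first n scans from every scan.
            scans: (n_scans, n_samples) array, one voltammogram per row."""
            return scans - scans[:n_background].mean(axis=0)

        rng = np.random.default_rng(2)
        scans = rng.normal(0.0, 0.01, (100, 500)) + 1.0  # charging-current baseline
        scans[50:, 240:260] += 0.2                       # analyte appears mid-recording
        sub = background_subtract(scans)
        print(f"signal after subtraction: {sub[80, 240:260].mean():.2f}")  # ~0.2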

  12. GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA

    NASA Technical Reports Server (NTRS)

    Stark, M.

    1994-01-01

    Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.

  13. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    NASA Astrophysics Data System (ADS)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met within a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programmes are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies pursued by the LHC communities, with the help of industry, to close the technological gap in processing and storage needs expected in Run 3 and Run 4.

  14. A preliminary evaluation of nearshore extreme sea level and wave models for fringing reef environments

    NASA Astrophysics Data System (ADS)

    Hoeke, R. K.; Reyns, J.; O'Grady, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    Oceanic islands are widely perceived as vulnerable to sea level rise and are characterized by steep nearshore topography and fringing reefs. In such settings, nearshore dynamics and (non-tidal) water level variability tend to be dominated by wind-wave processes. These processes are highly sensitive to reef morphology and roughness and to regional wave climate. Thus sea level extremes tend to be highly localized, and their likelihood can be expected to change in the future (beyond simple extrapolation of sea level rise scenarios): e.g., sea level rise may increase the effective mean depth of reef crests and flats, and ocean acidification and/or increased temperatures may lead to changes in reef structure. The problem is sufficiently complex that analytic or numerical approaches are necessary to estimate current hazards and explore potential future changes. In this study, we evaluate the capacity of several analytic/empirical approaches and phase-averaged and phase-resolved numerical models at sites in the insular tropical Pacific. We consider their ability to predict time-averaged wave setup and instantaneous water level exceedance probability (or dynamic wave run-up) as well as their computational cost; where possible, we compare the model results with in situ observations from a number of previous studies. Preliminary results indicate analytic approaches are by far the most computationally efficient, but tend to perform poorly when alongshore straight and parallel morphology cannot be assumed. Phase-averaged models tend to perform well with respect to wave setup in such situations, but are unable to predict processes related to individual waves or wave groups, such as infragravity motions or wave run-up. Phase-resolved models tend to perform best, but come at high computational cost, an important consideration when exploring possible future scenarios. A new approach combining an unstructured computational grid with a quasi-phase-averaged approach (i.e., only phase-resolving motions below a frequency cutoff) shows promise as a good compromise between computational efficiency and resolving processes such as wave run-up and overtopping in more complex bathymetric situations.
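
    To make the 'analytic/empirical' end of this model spectrum concrete, one widely used parameterization is the run-up formula of Stockdon et al. (2006). It was fitted to sandy beaches rather than reefs, which is part of why such formulas degrade where alongshore-uniform morphology cannot be assumed; it is quoted here only as an example, not as a model used in this study:

        import math

        def stockdon_r2(H0, T, beta):
            """2% exceedance run-up (m), Stockdon et al. (2006) sandy-beach fit.
            H0: deep-water wave height (m), T: peak period (s), beta: foreshore slope."""
            L0 = 9.81 * T**2 / (2.0 * math.pi)        # deep-water wavelength
            setup = 0.35 * beta * math.sqrt(H0 * L0)  # time-averaged wave setup term
            swash = math.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004)) / 2.0
            return 1.1 * (setup + swash)

        print(f"R2% ~ {stockdon_r2(H0=2.0, T=12.0, beta=0.1):.2f} m")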

  15. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time of writing, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.

  16. 40 CFR Table 1 to Subpart Llll of... - Emission Limits and Standards for New Fluidized Bed Sewage Sludge Incineration Units

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...

  17. 40 CFR Table 1 to Subpart Llll of... - Emission Limits and Standards for New Fluidized Bed Sewage Sludge Incineration Units

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...

  18. 78 FR 42099 - Announcement of Requirements and Registration for the “Stay Covered Challenge” and the “Churn...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... criteria Analytic design plan includes: selecting sample based on criteria and running descriptive... to behavioral health treatment and recovery. The National Survey on Drug Use and Health estimates... ``Stay Covered Challenge'' calls for the development of a marketing/outreach campaign designed for use by...

  19. Rethinking Intensive Quantities via Guided Mediated Abduction

    ERIC Educational Resources Information Center

    Abrahamson, Dor

    2012-01-01

    Some intensive quantities, such as slope, velocity, or likelihood, are perceptually privileged in the sense that they are experienced as holistic, irreducible sensations. However, the formal expression of these quantities uses "a/b" analytic metrics; for example, the slope of a line is the quotient of its rise and run. Thus, whereas students'…

  20. π-Extended triptycene-based material for capillary gas chromatographic separations.

    PubMed

    Yang, Yinhui; Wang, Qinsi; Qi, Meiling; Huang, Xuebin

    2017-10-02

    Triptycene-based materials feature favorable physicochemical properties and a unique molecular recognition ability that offer good potential as stationary phases for capillary gas chromatography (GC). Herein, we report the investigation of a π-extended triptycene material (denoted TQPP) for GC separations. The TQPP capillary column exhibited a high column efficiency of 4030 plates m⁻¹ and high-resolution performance for a wide range of analytes, especially structural and positional isomers. Interestingly, the TQPP stationary phase showed unique shape selectivity for alkane isomers and preferential retention of analytes with halogen atoms and H-bonding character, mainly through halogen-bonding and H-bonding interactions. In addition, the TQPP column had good repeatability and reproducibility, with RSD values of 0.02-0.34% run-to-run, 0.09-0.80% day-to-day and 1.4-5.2% column-to-column, and favorable thermal stability up to 280 °C. This work demonstrates the promising future of triptycene-based materials as a new class of stationary phases for GC separations. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Determination of glycerol in oils and fats using liquid chromatography chloride attachment electrospray ionization mass spectrometry.

    PubMed

    Jin, Chunfen; Viidanoja, Jyrki

    2017-01-15

    An existing liquid chromatography-mass spectrometry method for the analysis of short-chain carboxylic acids was expanded and validated to also cover the measurement of glycerol in oils and fats. The method employs chloride anion attachment and two ions, [glycerol+³⁵Cl]⁻ and [glycerol+³⁷Cl]⁻, as alternative quantifiers for improved selectivity of the glycerol measurement. The averaged within-run precision, between-run precision and accuracy ranged between 0.3-7%, 0.4-6% and 94-99%, respectively, depending on the analyte ion and sample matrix. Selected renewable diesel feedstocks were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. mr: A C++ library for the matching and running of the Standard Model parameters

    NASA Astrophysics Data System (ADS)

    Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.

    2016-09-01

    We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 517613. No. of bytes in distributed program, including test data, etc.: 2358729. Distribution format: tar.gz. Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB. Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3]. Nature of problem: The running parameters of the Standard Model renormalized in the MS-bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks, including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions. Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux, and with Mathematica 10 under Mac OS X. Running time: less than 1 second. References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arXiv:1110.3397 [cs.MS]].
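
    mr performs the matching and running at two to four loops; as a greatly simplified, self-contained analogue, the sketch below evolves the strong coupling with the one-loop QCD renormalization group equation only.

    ```python
    # One-loop analogue of the matching-and-running performed (at two to four
    # loops) by mr: evolve alpha_s from the Z mass to a user-chosen scale.
    import math

    def alpha_s_one_loop(mu, alpha_mz=0.1181, mz=91.1876, nf=5):
        """One-loop QCD running of alpha_s to the scale mu [GeV]."""
        b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
        return alpha_mz / (1.0 + b0 * alpha_mz * math.log(mu**2 / mz**2))

    for mu in (10.0, 91.1876, 1000.0, 1.0e16):
        print(f"alpha_s({mu:g} GeV) = {alpha_s_one_loop(mu):.4f}")
    ```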

  3. Emergency radiobioassay preparedness exercises through the NIST radiochemistry intercomparison program.

    PubMed

    Nour, Svetlana; LaRosa, Jerry; Inn, Kenneth G W

    2011-08-01

    The present challenge for the international emergency radiobioassay community is to analyze contaminated samples rapidly while maintaining high-quality results. The National Institute of Standards and Technology (NIST) runs a radiobioassay measurement traceability testing program to evaluate the radioanalytical capabilities of participating laboratories. The NIST Radiochemistry Intercomparison Program (NRIP) started more than 10 years ago, and emergency performance testing was added to the program seven years ago. Radiobioassay turnaround times under the NRIP program are 60 d for routine production and 8 h under emergency response scenarios. Because measurement accuracy and sample turnaround time are critical in a radiological emergency, response laboratories' analytical systems are best evaluated and improved through traceable performance testing (PT) programs. The NRIP provides participant laboratories with metrology tools to evaluate and improve their performance. The program motivates the laboratories to optimize their methodologies and minimize the turnaround time of their results. Likewise, NIST has to make adjustments and periodic changes to the bioassay test samples in order to continually challenge the participating laboratories. With practice, radioanalytical measurement turnaround times can be reduced to 3-4 h.

  4. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    PubMed

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests a decreased load on the lower limbs when the shoe cleat is placed more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short-distance triathlon.

  5. Influence of hydrodynamic thrust bearings on the nonlinear oscillations of high-speed rotors

    NASA Astrophysics Data System (ADS)

    Chatzisavvas, Ioannis; Boyaci, Aydin; Koutsovasilis, Panagiotis; Schweizer, Bernhard

    2016-10-01

    This paper investigates the effect of hydrodynamic thrust bearings on the nonlinear vibrations and the bifurcations occurring in rotor/bearing systems. In order to examine the influence of thrust bearings, run-up simulations may be carried out. To be able to perform such run-up calculations, a computationally efficient thrust bearing model is mandatory. Direct discretization of the Reynolds equation for thrust bearings by means of a Finite Element or Finite Difference approach entails rather large simulation times, since in every time-integration step a discretized model of the Reynolds equation has to be solved simultaneously with the rotor model. Such a coupled rotor/bearing model may be implemented by a co-simulation approach; that approach, however, prevents a thorough analysis of the rotor/bearing system based on extensive parameter studies. A major point of this work is the derivation of a very time-efficient but rather precise model for transient simulations of rotors with hydrodynamic thrust bearings. The presented model makes use of a global Galerkin approach, where the pressure field is approximated by global trial functions. For the considered problem, an analytical evaluation of the relevant integrals is possible. As a consequence, the system of equations of the discretized bearing model is obtained symbolically. In combination with a proper decomposition of the governing system matrix, a numerically efficient implementation can be achieved. Using run-up simulations with the proposed model, the effect of thrust bearings on the bifurcation points as well as on the amplitudes and frequencies of the subsynchronous rotor oscillations is investigated. In particular, the influence of the magnitude of the axial force, the geometry of the thrust bearing and the oil parameters is examined. It is shown that the thrust bearing exerts a large influence on the nonlinear rotor oscillations, especially those related to the conical mode of the rotor. A comparison between a full co-simulation approach and the reduced Galerkin implementation is carried out; a speed-up of 10-15 times may be obtained with the Galerkin model at the same accuracy.
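
    To make the global Galerkin idea concrete, here is a 1D toy (not the authors' 2D, symbolically integrated model): a slider-bearing Reynolds-type equation d/dx(h³ dp/dx) = dh/dx with p(0) = p(1) = 0, discretized with global sine trial functions and solved for the pressure field.

    ```python
    # Global Galerkin sketch for a 1D Reynolds-type equation. The weak form
    # gives A a = b with A_kj = int h^3 phi_j' phi_k' dx and b_k = -int h' phi_k dx.
    import numpy as np

    N = 8                                    # number of global trial functions
    x = np.linspace(0.0, 1.0, 2001)
    h = 2.0 - x                              # film thickness: converging wedge
    dh = np.gradient(h, x)

    phi  = np.array([np.sin((k + 1) * np.pi * x) for k in range(N)])
    dphi = np.array([(k + 1) * np.pi * np.cos((k + 1) * np.pi * x) for k in range(N)])

    A = np.array([[np.trapz(h**3 * dphi[j] * dphi[k], x) for j in range(N)]
                  for k in range(N)])
    b = np.array([-np.trapz(dh * phi[k], x) for k in range(N)])

    a = np.linalg.solve(A, b)                # Galerkin coefficients
    p = a @ phi                              # approximate pressure field
    print(f"peak pressure ~ {p.max():.4f} at x = {x[p.argmax()]:.3f}")
    ```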

  6. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process, in which changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is classically described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic behavior of the run time distribution is not exponential but a heavy-tailed power-law decay, at odds with the classical motility picture. We investigate the consequences of this motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model in which bacterial dwelling times on the surfaces are related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. The model fails, however, to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is used. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
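
    A hedged sketch of the mechanism described (not the authors' surface-dwelling model): 1D run-and-tumble walkers with exponential versus heavy-tailed (Pareto) run times of equal mean spread very differently, illustrating why the tail of the run-time distribution controls macroscopic transport.

    ```python
    # Compare the spread of run-and-tumble walkers under two run-time laws.
    import numpy as np

    rng = np.random.default_rng(0)

    def final_positions(draw_run, n_walkers=2000, t_max=200.0, v=1.0):
        pos = np.zeros(n_walkers)
        for i in range(n_walkers):
            t, x, d = 0.0, 0.0, 1.0
            while t < t_max:
                tau = min(draw_run(), t_max - t)   # one run, truncated at t_max
                x += d * v * tau
                t += tau
                d = rng.choice((-1.0, 1.0))        # tumble: pick a new direction
            pos[i] = x
        return pos

    mean_run = 1.0                                  # ~1 s characteristic run time
    alpha = 1.5                                     # heavy tail, finite mean
    xm = mean_run * (alpha - 1.0) / alpha           # Pareto scale for equal mean
    exp_pos = final_positions(lambda: rng.exponential(mean_run))
    par_pos = final_positions(lambda: xm * (1.0 + rng.pareto(alpha)))

    print(f"std (exponential runs): {exp_pos.std():7.1f}")
    print(f"std (power-law runs):   {par_pos.std():7.1f}")
    ```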

  7. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, Richard

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used to illustrate these techniques.
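
    The paper's techniques are C++-specific (templates and strong types). As a language-neutral sketch of the same idea, the snippet below uses Python's NewType so that a static checker such as mypy rejects a swapped-argument bug before the program ever runs; the device/channel API is invented for illustration.

    ```python
    # Moving a run-time bug to a static check: distinct integer-like types
    # make argument mix-ups a type error for mypy, not a run-time surprise.
    from typing import NewType

    DeviceId  = NewType("DeviceId", int)
    ChannelId = NewType("ChannelId", int)

    def read_setting(dev: DeviceId, chan: ChannelId) -> float:
        # hypothetical hardware access; the signature is what matters here
        return 0.0

    dev, chan = DeviceId(7), ChannelId(2)
    read_setting(dev, chan)    # OK
    read_setting(chan, dev)    # flagged by mypy: argument types swapped
    ```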

  8. SevenOperators, a Mathematica script for harmonic oscillator nuclear matrix elements arising in semileptonic electroweak interactions

    NASA Astrophysics Data System (ADS)

    Haxton, Wick; Lunardini, Cecilia

    2008-09-01

    Semi-leptonic electroweak interactions in nuclei—such as β decay, μ capture, charged- and neutral-current neutrino reactions, and electron scattering—are described by a set of multipole operators carrying definite parity and angular momentum, obtained by projection from the underlying nuclear charge and three-current operators. If these nuclear operators are approximated by their one-body forms and expanded in the nucleon velocity through order |p|/M, where p and M are the nucleon momentum and mass, a set of seven multipole operators is obtained. Nuclear structure calculations are often performed in a basis of Slater determinants formed from harmonic oscillator orbitals, a choice that allows translational invariance to be preserved. Harmonic-oscillator single-particle matrix elements of the multipole operators can be evaluated analytically and expressed in terms of finite polynomials in q, where q is the magnitude of the three-momentum transfer. While results for such matrix elements are available in tabular form, with certain restrictions on quantum numbers, the task of determining the analytic form of a response function can still be quite tedious, requiring the folding of the tabulated matrix elements with the nuclear density matrix, and subsequent algebra to evaluate products of operators. Here we provide a Mathematica script for generating these matrix elements, which will allow users to carry out all such calculations by symbolic manipulation. This will eliminate the errors that may accompany hand calculations and speed the calculation of electroweak nuclear cross sections and rates. We illustrate the use of the new script by calculating the cross sections for charged- and neutral-current neutrino scattering in 12C. Program summary: Program title: SevenOperators. Catalogue identifier: AEAY_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAY_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2227. No. of bytes in distributed program, including test data, etc.: 19382. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any computer running Mathematica; tested on Mac OS X PowerPC (32-bit) running Mathematica 6.0.0. Operating system: Any running Mathematica. RAM: Memory requirements determined by Mathematica; 512 MB or greater RAM and hard drive space of at least 3.0 GB recommended. Classification: 17.16, 17.19. Nature of problem: Algebraic evaluation of harmonic oscillator nuclear matrix elements for the one-body multipole operators governing semi-leptonic weak interactions, such as charged- or neutral-current neutrino scattering off nuclei. Solution method: Mathematica evaluation of the associated angular momentum algebra and spherical Bessel function radial integrals. Running time: Depends on the complexity of the one-body density matrix employed, but times of a few seconds are typical.
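
    As a numeric cross-check of the kind of quantity the script produces in closed form, the sketch below evaluates a harmonic-oscillator single-particle matrix element of a spherical Bessel function by quadrature; the quantum numbers and oscillator length are illustrative.

    ```python
    # Quadrature check of a harmonic-oscillator matrix element <n'l'|j_L(qr)|nl>,
    # of the type SevenOperators returns analytically as a finite polynomial in q
    # (times a Gaussian). Quantum numbers and oscillator length b are illustrative.
    import numpy as np
    from math import factorial
    from scipy.integrate import quad
    from scipy.special import gamma, genlaguerre, spherical_jn

    def radial(n, l, b):
        """Normalized 3D harmonic-oscillator radial wavefunction R_nl(r)."""
        norm = np.sqrt(2.0 * factorial(n) / (b**3 * gamma(n + l + 1.5)))
        lag = genlaguerre(n, l + 0.5)
        return lambda r: norm * (r / b)**l * np.exp(-r**2 / (2 * b**2)) * lag(r**2 / b**2)

    def matrix_element(n2, l2, n1, l1, lmult, q, b=1.7):
        integrand = lambda r: (radial(n2, l2, b)(r) * spherical_jn(lmult, q * r)
                               * radial(n1, l1, b)(r) * r**2)
        return quad(integrand, 0.0, 40.0 * b)[0]

    # 0p -> 0p monopole element: tends to 1 as q -> 0 (orthonormality check)
    for q in (1e-6, 0.5, 1.0):
        print(f"q = {q:g} fm^-1: {matrix_element(0, 1, 0, 1, 0, q):.4f}")
    ```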

  9. Novel Approaches to the Production of Higher Alcohols From Synthesis Gas. Quarterly report, January 1 - March 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, George W

    1998-12-11

    A modified analytical system was assembled and calibrated in preparation for a second run with the cesium (Cs)-promoted "zinc chromite" catalyst. A new column for the on-line gas chromatograph (GC) was purchased for the analysis of various light olefin and paraffin isomers. A run was carried out in the continuous stirred autoclave using the Cs-promoted catalyst, with decahydronaphthalene as the slurry liquid. Reaction conditions were 375°C, 2000 psig total pressure, 0.5 H₂/CO ratio, and 5000 sL/kg(cat.)-hr. Analysis of the data from this run is in progress. A manuscript on the thermal stability of potential slurry liquids was submitted to 'Industrial and Engineering Chemistry Research,' and a paper was presented at the 1997 Spring National Meeting of the American Institute of Chemical Engineers, Houston, Texas.

  10. Validated green high-performance liquid chromatographic methods for the determination of coformulated pharmaceuticals: a comparison with reported conventional methods.

    PubMed

    Elzanfaly, Eman S; Hegazy, Maha A; Saad, Samah S; Salem, Maissa Y; Abd El Fattah, Laila E

    2015-03-01

    The introduction of sustainable development concepts to analytical laboratories has recently gained interest; however, most conventional high-performance liquid chromatography methods consider neither the environmental effect of the chemicals used nor the amount of waste produced. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and generate minimal waste, allowing routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study: clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline, either as bulk powders or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profile, and were found to be greener and more time- and solvent-saving than the reported ones; hence they can be used for routine analysis of the studied mixtures without harming the environment. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chemerisov, S.; Bailey, J.; Heltemes, T.

    A series of four one-day irradiations was conducted with 100Mo-enriched disk targets. After irradiation, the enriched disks were removed from the target and dissolved. The resulting solution was processed using a NorthStar RadioGenix™ 99mTc generator either at Argonne National Laboratory or at the NorthStar Medical Radioisotopes facility. Runs on the RadioGenix system produced inconsistent analytical results for 99mTc in the Tc/Mo solution; these inconsistencies were attributed to impurities in the solution or improper column packing. During the irradiations, the performance of the optical transition radiation (OTR) and infrared (IR) cameras was tested in a high radiation field. The OTR cameras survived all irradiations, while the IR cameras failed every time. The addition of X-ray and neutron shielding improved camera survivability and decreased the number of upsets.

  12. Improved Gaussian Beam-Scattering Algorithm

    NASA Technical Reports Server (NTRS)

    Lock, James A.

    1995-01-01

    The localized model of the beam-shape coefficients for Gaussian beam-scattering theory by a spherical particle provides a great simplification in the numerical implementation of the theory. We derive an alternative form for the localized coefficients that is more convenient for computation and that provides physical insight into the details of the scattering process. We construct a FORTRAN program for Gaussian beam scattering with the localized model and compare its run time on a personal computer with that of a traditional Mie scattering program and with three other published methods for computing Gaussian beam scattering. We show that the analytical form of the beam-shape coefficients makes evident the fact that the excitation rate of morphology-dependent resonances is greatly enhanced for far off-axis incidence of the Gaussian beam.
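
    A minimal sketch of the on-axis localized approximation underlying this class of calculation: the beam-shape coefficients reduce to g_n = exp(-(s(n+1/2))²) with s = 1/(kw₀), so high partial waves are exponentially suppressed. Beam parameters are illustrative, and the paper derives an alternative, more convenient equivalent form.

    ```python
    # Localized-approximation beam-shape coefficients for an on-axis Gaussian beam.
    import numpy as np

    wavelength = 0.6328e-6            # He-Ne laser [m]
    w0 = 5.0e-6                       # beam waist radius [m]
    k = 2.0 * np.pi / wavelength
    s = 1.0 / (k * w0)                # beam confinement parameter

    n = np.arange(1, 101)
    g = np.exp(-(s * (n + 0.5))**2)   # partial-wave weighting
    print(f"s = {s:.4f}; g_1 = {g[0]:.4f}, g_100 = {g[-1]:.4f}")
    ```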

  13. Flow Asymmetric Propargylation: Development of Continuous Processes for the Preparation of a Chiral β-Amino Alcohol.

    PubMed

    Li, Hui; Sheeran, Jillian W; Clausen, Andrew M; Fang, Yuan-Qing; Bio, Matthew M; Bader, Scott

    2017-08-01

    The development of a flow chemistry process for asymmetric propargylation using allene gas as a reagent is reported. The connected continuous process of allene dissolution, lithiation, Li-Zn transmetallation, and asymmetric propargylation provides homopropargyl β-amino alcohol 1 with high regio- and diastereoselectivity in high yield. This flow process enables practical use of an unstable allenyllithium intermediate. The process uses the commercially available and recyclable (1S,2R)-N-pyrrolidinyl norephedrine as a ligand to promote the highly diastereoselective (32:1) propargylation. Judicious selection of mixers based on the chemistry requirement and real-time monitoring of the process using process analytical technology (PAT) enabled stable and scalable flow chemistry runs. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Separation and simultaneous quantitation of PGF2α and its epimer 8-iso-PGF2α using modifier-assisted differential mobility spectrometry tandem mass spectrometry.

    PubMed

    Liang, Chunsu; Sun, Hui; Meng, Xiangjun; Yin, Lei; Fawcett, J Paul; Yu, Huaidong; Liu, Ting; Gu, Jingkai

    2018-03-01

    Because many therapeutic agents are contaminated by epimeric impurities or form epimers as a result of metabolism, analytical tools capable of determining epimers are increasingly in demand. This article is a proof-of-principle report of a novel DMS-MS/MS method to separate and simultaneously quantify epimers, taking PGF2α and its 8-epimer, 8-iso-PGF2α, as an example. Good accuracy and precision were achieved in the range of 10-500 ng/mL with a run time of only 1.5 min. Isopropanol as organic modifier facilitated a good combination of sensitivity and separation. The method is the first example of the quantitation of epimers without chromatographic separation.

  15. Monte Carlo Solution to Find Input Parameters in Systems Design Problems

    NASA Astrophysics Data System (ADS)

    Arsham, Hossein

    2013-06-01

    Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single simulation run is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
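
    The validation step is easy to reproduce in miniature: estimate the MTTF of a two-unit parallel subsystem by Monte Carlo and compare with the known analytic value, 1.5/λ for i.i.d. exponential lifetimes (a subsystem chosen for this sketch, not necessarily the paper's).

    ```python
    # Monte Carlo MTTF of a two-unit parallel system versus the analytic value.
    import numpy as np

    rng = np.random.default_rng(42)
    lam = 0.01                                   # failure rate [1/h]
    n = 200_000                                  # number of simulated systems

    # the parallel system fails when the longer-lived of the two units fails
    lifetimes = rng.exponential(1.0 / lam, size=(n, 2)).max(axis=1)
    print(f"Monte Carlo MTTF: {lifetimes.mean():8.1f} h")
    print(f"Analytic MTTF:    {1.5 / lam:8.1f} h")
    ```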

  16. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as the total energy expended remains the same (the exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·h per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10⁻¹⁵) for running than non-running exercise for BMI (male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m² per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m² per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.

  17. Methods Developed by the Tools for Engine Diagnostics Task to Monitor and Predict Rotor Damage in Real Time

    NASA Technical Reports Server (NTRS)

    Baaklini, George Y.; Smith, Kevin; Raulerson, David; Gyekenyesi, Andrew L.; Sawicki, Jerzy T.; Brasche, Lisa

    2003-01-01

    Tools for Engine Diagnostics is a major task in the Propulsion System Health Management area of the Single Aircraft Accident Prevention project under NASA's Aviation Safety Program. The major goal of the Aviation Safety Program is to reduce fatal aircraft accidents by 80 percent within 10 years and by 90 percent within 25 years. The goal of the Propulsion System Health Management area is to eliminate propulsion system malfunctions as a primary or contributing factor in aircraft accidents. The purpose of Tools for Engine Diagnostics, a 2-yr-old task, is to establish and improve tools for engine diagnostics and prognostics that measure the deformation and damage of rotating engine components at the ground level and that perform intermittent or continuous monitoring on-wing. In this work, nondestructive-evaluation- (NDE-) based technology is combined with model-dependent disk spin experimental simulation systems, like finite element modeling (FEM) and modal norms, to monitor and predict rotor damage in real time. Fracture-mechanics-based time-dependent fatigue crack growth and damage-mechanics-based life estimation are being developed, and their potential use investigated. In addition, wireless eddy current and advanced acoustics are being developed for on-wing and just-in-time NDE engine inspection to provide deeper access and higher sensitivity, to extend on-wing capabilities, and to improve inspection readiness. In the long run, these methods could establish a base for prognostic sensing while an engine is running, without any overt actions, like inspections. This damage-detection strategy includes experimentally acquired vibration-, eddy-current-, and capacitance-based displacement measurements and analytically computed FEM-, modal-norms-, and conventional rotordynamics-based models of well-defined damage and critical mass imbalances in rotating disks and rotors.

  18. Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.

    PubMed

    Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M

    2015-05-01

    Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study design: randomized crossover trial (level of evidence 3). Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.

  19. Running of the spectral index in deformed matter bounce scenarios with Hubble-rate-dependent dark energy

    NASA Astrophysics Data System (ADS)

    Arab, M.; Khodam-Mohammadi, A.

    2018-03-01

    As a deformed matter bounce scenario with a dark energy component, we propose one deformed with a running vacuum model (RVM), in which the dark energy density ρ_Λ is written as a power series of H² and Ḣ with a constant equation-of-state parameter, the same as the cosmological constant, w = -1. Our analytical and numerical results show that in some cases, as in the ΛCDM bounce scenario, although the spectral index may achieve good consistency with observations, a positive value of the running of the spectral index (α_s) is obtained, which is not compatible with the inflationary paradigm, where a small negative value of α_s is predicted. However, by extending the power series up to H⁴, ρ_Λ = n₀ + n₂H² + n₄H⁴, and estimating a set of consistent parameters, we obtain the spectral index n_s, a small negative running α_s, and the tensor-to-scalar ratio r, revealing a degeneracy between the deformed matter bounce scenario with RVM-DE and inflationary cosmology.

  20. Development of advanced Czochralski growth process to produce low cost 150 kg silicon ingots from a single crucible for technology readiness

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The process development continued, with a total of nine crystal growth runs. One of these was a 150 kg run of 5 crystals of approximately 30 kg each. Several machine and process problems were corrected and the 150 kg run was as successful as previous long runs on CG2000 RC's. The accelerated recharge and growth will be attempted when the development program resumes at full capacity in FY '82. The automation controls (Automatic Grower Light Computer System) were integrated to the seed dip temperature, shoulder, and diameter sensors on the CG2000 RC development grower. Test growths included four crystals, which were grown by the computer/sensor system from seed dip through tail off. This system will be integrated on the Mod CG2000 grower during the next quarter. The analytical task included the completion and preliminary testing of the gas chromatograph portion of the Furnace Atmosphere Analysis System. The system can detect CO concentrations and will be expanded to oxygen and water analysis in FY '82.

  1. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    NASA Astrophysics Data System (ADS)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emption compared with run-time methods like deadline-monotonic scheduling.
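
    The compile-time guarantee the authors require is the role played, in classical fixed-priority theory, by offline schedulability tests. As a hedged illustration (a textbook test, not the authors' combined static/pre-emptive method), the sketch below applies the Liu and Layland rate-monotonic utilization bound.

    ```python
    # Offline (compile-time) schedulability check: Liu & Layland bound for
    # fixed-priority, rate-monotonic scheduling. Sufficient, not necessary.
    def rms_schedulable(tasks):
        """tasks: list of (compute_time, period) pairs, same time units."""
        n = len(tasks)
        u = sum(c / t for c, t in tasks)          # total processor utilization
        bound = n * (2.0 ** (1.0 / n) - 1.0)      # utilization bound for n tasks
        return u, bound, u <= bound

    u, bound, ok = rms_schedulable([(1.0, 4.0), (1.5, 6.0), (1.0, 10.0)])
    print(f"U = {u:.3f}, bound = {bound:.3f}, guaranteed schedulable: {ok}")
    ```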

  2. Benchmarking distributed data warehouse solutions for storing genomic variant information

    PubMed Central

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored so far in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management Systems (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with a large generated content of genomic variants and phenotypic data. Next, we benchmarked the performance of a number of combinations of distributed storage back-ends and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) was used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where a low-latency response is expected, while still offering decent performance for analytical queries. In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
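
    To show the shape of the "simple genome range query" benchmarked above, here is a self-contained analogue using SQLite (not one of the benchmarked engines); the table layout and coordinates are illustrative, and the benchmarked systems accept essentially the same SQL.

    ```python
    # Minimal genome range query against an in-memory variant table.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE variants (
        sample_id TEXT, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT)""")
    con.executemany("INSERT INTO variants VALUES (?,?,?,?,?)", [
        ("S1", "chr17", 41197695, "G", "A"),
        ("S2", "chr17", 41244000, "T", "C"),
        ("S1", "chr2",  25457242, "C", "T"),
    ])
    con.execute("CREATE INDEX idx_pos ON variants(chrom, pos)")

    # range query over an illustrative chr17 interval
    rows = con.execute("""SELECT sample_id, pos, ref, alt FROM variants
                          WHERE chrom = 'chr17'
                            AND pos BETWEEN 41196312 AND 41277500""").fetchall()
    print(rows)
    ```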

  3. 16 CFR 803.10 - Running of time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Running of time. 803.10 Section 803.10 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 TRANSMITTAL RULES § 803.10 Running of time. (a...

  4. Analytic Results for e+e- --> tt-bar and gammagamma --> tt-bar Observables near the Threshold up to the Next-to-Next-to-Leading Order of NRQCD

    NASA Astrophysics Data System (ADS)

    Penin, A. A.; Pivovarov, A. A.

    2001-02-01

    We present an analytical description of top-antitop pair production near threshold in e+e- annihilation and γγ collisions. The set of basic observables considered includes the total cross sections, the forward-backward asymmetry and the top quark polarization. The threshold effects relevant for the basic observables are described by three universal functions related to S-wave production, P-wave production and S-P interference. These functions are computed analytically up to the next-to-next-to-leading order of NRQCD. The total e+e- → tt-bar cross section near threshold is obtained in the next-to-next-to-leading order in closed form, including the contribution due to the axial coupling of the top quark, mediated by the Z boson. The effects of the running of the strong coupling constant and of the finite top quark width are taken into account analytically for the P-wave production and the S-P wave interference.

  5. Spatializing Environmental Education: Critical Territorial Consciousness and Radical Place-Making in Public Schooling

    ERIC Educational Resources Information Center

    Stahelin, Nicolas

    2017-01-01

    In this case study of an environmental education (EE) program run in public schools of Rio de Janeiro, I use a constructivist spatial analytic to interrogate notions of space, place, and territory in critical EE practices. I examine the connections between socioenvironmental relations, counter-hegemonic political activity, and education by delving…

  6. Preparing Tutors to Hit the Ground Running: Lessons from New Tutors' Experiences

    ERIC Educational Resources Information Center

    Calma, Angelito

    2013-01-01

    Tutor development is an essential part of academic staff development, yet is comparatively under-researched. This article examines what tutors value as most and least important in a program. Using data from more than 300 participants over three years, and using the dimensions of worth, merit and success as an analytical framework, the article…

  7. Production of Computer Animated Movies for Educational Purposes.

    ERIC Educational Resources Information Center

    Elberg, H. H.

    A detailed account is given in this paper of the procedures and the equipment used in producing six computer-animated instructional movies. First, the sequence of events was described in a script, which, together with the analytical expressions that were dealt with, formed the basis of a program. Then, the program was run on a computer and the…

  8. The effect on reliability and sensitivity to level of training of combining analytic and holistic rating scales for assessing communication skills in an internal medicine resident OSCE.

    PubMed

    Daniels, Vijay John; Harley, Dwight

    2017-07-01

    Although previous research has compared checklists to rating scales for assessing communication, the purpose of this study was to compare the effect on reliability and sensitivity to level of training of an analytic, a holistic, and a combined analytic-holistic rating scale in assessing communication skills. The University of Alberta Internal Medicine Residency runs OSCEs for postgraduate year (PGY) 1 and 2 residents and another for PGY-4 residents. Communication stations were scored with an analytic scale (empathy, non-verbal skills, verbal skills, and coherence subscales) and a holistic scale. The authors analyzed the reliability of the individual and combined scales using generalizability theory and evaluated each scale's sensitivity to level of training. For the analytic, holistic, and combined scales, 12, 12, and 11 stations, respectively, yielded a Phi of 0.8 for the PGY-1,2 cohort, and 16, 16, and 14 stations yielded a Phi of 0.8 for the PGY-4 cohort. PGY-4 residents scored higher on the combined scale, the analytic rating scale, and the non-verbal and coherence subscales. A combined analytic-holistic rating scale increased score reliability and was sensitive to level of training. Given the increased validity evidence, OSCE developers should consider combining analytic and holistic scales when assessing communication skills. Copyright © 2017 Elsevier B.V. All rights reserved.
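
    The station counts above come from a decision study. As a hedged sketch of that computation, the snippet below projects the dependability coefficient Phi for a persons-crossed-with-stations design as stations are added; the variance components are invented for illustration.

    ```python
    # D-study projection of the dependability (absolute) coefficient Phi for a
    # p x s generalizability design. Variance components are hypothetical.
    def phi(var_p, var_s, var_pse, n_stations):
        """Phi = person variance over person variance plus absolute error."""
        return var_p / (var_p + (var_s + var_pse) / n_stations)

    var_p, var_s, var_pse = 0.30, 0.10, 0.90   # made-up variance components
    for n in (6, 9, 12, 16):
        print(f"{n:2d} stations: Phi = {phi(var_p, var_s, var_pse, n):.2f}")
    ```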

  9. Altered Running Economy Directly Translates to Altered Distance-Running Performance.

    PubMed

    Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger

    2016-11-01

    Our goal was to quantify if small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing, and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m·s⁻¹ by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.
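
    The headline numbers reduce to simple arithmetic, sketched below using the slopes reported in the abstract.

    ```python
    # Predicted 3000-m slowdown from the reported linear fit:
    # ~0.78% slower per 100 g added per shoe, on a 626.1 s baseline.
    baseline_s = 626.1                    # mean 3000-m time in control shoes [s]
    for added_g in (100, 300):
        slowdown = 0.78 * added_g / 100.0          # percent, from the linear fit
        predicted = baseline_s * (1.0 + slowdown / 100.0)
        print(f"+{added_g} g/shoe: ~{slowdown:.2f}% slower -> {predicted:.1f} s")
    ```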

  10. Method development for the determination of coumarin compounds by capillary electrophoresis with indirect laser-induced fluorescence detection.

    PubMed

    Wang, Weiping; Tang, Jianghong; Wang, Shumin; Zhou, Lei; Hu, Zhide

    2007-04-27

    A capillary zone electrophoresis (CZE) method with indirect laser-induced fluorescence detection (ILIFD) is described for the simultaneous determination of esculin, esculetin, isofraxidin, genistein, naringin and sophoricoside. Baseline separation was achieved within 5 min with a running buffer (pH 9.4) composed of 5 mM borate with 20% (v/v) methanol as organic modifier and 10⁻⁷ M fluorescein sodium as background fluorophore, at an applied voltage of 20 kV and a cartridge temperature of 30°C. Good linear relationships (correlation coefficients >0.9900) between the second-order derivative peak heights (RFU) and the concentrations of the analytes (mol L⁻¹) were obtained. The detection limits for all analytes in the second-order derivative electropherograms were in the range of 3.8-15 μM. The intra-day RSDs for migration times and second-order derivative peak heights were less than 0.95% and 5.02%, respectively. The developed method was applied to the analysis of the coumarin compounds in herb plants, with recoveries in the range of 94.7-102.1%. Although the detection sensitivity is lower than that of direct LIF, the method extends the application range of LIF detection.

  11. Comparing the results of an analytical model of the no-vent fill process with no-vent fill test results for a 4.96 cubic meters (175 cubic feet) tank

    NASA Technical Reports Server (NTRS)

    Taylor, William J.; Chato, David J.

    1993-01-01

    The NASA Lewis Research Center (NASA/LeRC) has been investigating a no-vent fill method for refilling cryogenic storage tanks in low gravity. Analytical modeling based on the heat transfer of a droplet has successfully represented the process in 0.034 and 0.142 cubic meter commercial dewars using liquid nitrogen and hydrogen. Recently, a large (4.96 cubic meter) lightweight tank, representative of spacecraft construction, was tested with hydrogen. This paper presents efforts to model the large tank test data. The droplet heat transfer model is found to overpredict the tank pressure level when compared to the large tank data. A new model based on equilibrium thermodynamics has therefore been formulated. This new model is compared to the published large-scale tank test results as well as some additional test runs with the same equipment. The results are shown to match the test results within the measurement uncertainty of the test data, except for the initial transient wall cooldown, where the model is conservative (i.e., it overpredicts the initial pressure spike found in this time frame).

  12. Plasma creatinine and creatine quantification by capillary electrophoresis diode array detector.

    PubMed

    Zinellu, Angelo; Caria, Marcello A; Tavera, Claudio; Sotgia, Salvatore; Chessa, Roberto; Deiana, Luca; Carru, Ciriaco

    2005-07-15

    Traditional clinical assays for nonprotein nitrogen compounds, such as creatine and creatinine, have focused on the use of enzymes or chemical reactions that allow measurement of each analyte separately. Most of these assays are directed mainly at urine quantification, so their application to plasma samples is frequently difficult. This work describes a simple free-zone capillary electrophoresis method for the simultaneous measurement of creatinine and creatine in human plasma. The effect of analytical parameters such as the concentration and pH of the Tris-phosphate running buffer and the cartridge temperature on resolution, migration times, peak areas, and efficiency was investigated. Good separation was achieved in less than 8 min using a 60.2 cm × 75 μm uncoated silica capillary and 75 mmol/L Tris-phosphate buffer, pH 2.25, at 15°C. We compared the present method to a validated capillary electrophoresis assay by measuring plasma creatinine in 120 normal subjects; the data were compared by Passing-Bablok regression and the Bland-Altman test. Moreover, the performance of the developed method was assessed by measuring creatine and creatinine in 16 volunteers prior to and after moderate physical exercise.

  13. Automatically measuring brain ventricular volume within PACS using artificial intelligence.

    PubMed

    Yepes-Calderon, Fernando; Nelson, Marvin D; McComb, J Gordon

    2018-01-01

    The picture archiving and communications system (PACS) is currently the standard platform to manage medical images but lacks analytical capabilities. Staying within PACS, the authors have developed an automatic method to retrieve the medical data and access it at the voxel level, decrypted and uncompressed, which provides analytical capabilities without perturbing the system's daily operation. Additionally, the strategy is secure and vendor independent. Cerebral ventricular volume is important for the diagnosis and treatment of many neurological disorders. A significant change in ventricular volume is readily recognized, but subtle changes, especially over longer periods of time, may be difficult to discern. Clinical imaging protocols and parameters are often varied, making it difficult to use a general solution with standard segmentation techniques. Presented is a segmentation strategy based on an algorithm that uses four features extracted from the medical images to create a statistical estimator capable of determining ventricular volume. When compared with manual segmentations, the correlation was 94%, and the approach holds promise for even better accuracy by incorporating the unlimited data available. The volume of any segmentable structure can be accurately determined utilizing the machine learning strategy presented, which runs fully automatically within the PACS.

  14. Dynamic analysis and numerical experiments for balancing of the continuous single-disc and single-span rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi

    2017-03-01

    Identification of rotor unbalance is critical for the normal operation of rotating machinery. The single-disc and single-span rotor, as the most fundamental rotor-bearing system, has long attracted research attention. In this paper, the continuous single-disc and single-span rotor is modeled as a homogeneous and elastic Euler-Bernoulli beam, and the forces applied by the bearings and disc on the shaft are treated as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary conditions is solved analytically, expressing the unbalance response as a function of position, rotor unbalance, and the stiffness and damping coefficients of the bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance during operation. The method requires only the unbalance response measured at four selected cross-sections of the rotor-shaft under steady-state operating conditions. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way to balance rotors without test runs or external excitations.

  15. Nearshore Tsunami Inundation Model Validation: Toward Sediment Transport Applications

    USGS Publications Warehouse

    Apotsos, Alex; Buckley, Mark; Gelfenbaum, Guy; Jaffe, Bruce; Vatvani, Deepak

    2011-01-01

    Model predictions from a numerical model, Delft3D, based on the nonlinear shallow water equations are compared with analytical results and laboratory observations from seven tsunami-like benchmark experiments, and with field observations from the 26 December 2004 Indian Ocean tsunami. The model accurately predicts the magnitude and timing of the measured water levels and flow velocities, as well as the magnitude of the maximum inundation distance and run-up, for both breaking and non-breaking waves. The shock-capturing numerical scheme employed describes well the total decrease in wave height due to breaking, but does not reproduce the observed shoaling near the break point. The maximum water levels observed onshore near Kuala Meurisi, Sumatra, following the 26 December 2004 tsunami are well predicted given the uncertainty in the model setup. The good agreement between the model predictions and the analytical results and observations demonstrates that the numerical solution and wetting and drying methods employed are appropriate for modeling tsunami inundation for breaking and non-breaking long waves. Extension of the model to include sediment transport may be appropriate for long, non-breaking tsunami waves. Using available sediment transport formulations, the sediment deposit thickness at Kuala Meurisi is predicted generally within a factor of 2.

  16. CZE separation of strawberry anthocyanins with acidic buffer and comparison with HPLC.

    PubMed

    Comandini, Patrizia; Blanda, Giampaolo; Cardinali, Andrea; Cerretani, Lorenzo; Bendini, Alessandra; Caboni, Maria Fiorenza

    2008-10-01

    Anthocyanins, the major colourants of strawberries, are polar pigments that are positively charged at low pH. Herein, we assess a new analytical method for the separation of anthocyanins using CZE. Acidic buffer solutions (pH <2) were employed in order to maintain the pigments in the cationic flavylium form and achieve high molar absorptivity at 510 nm. These spectral properties enabled us to identify strawberry anthocyanins in a preliminary stage by detection in the visible range, although the method was optimised at 280 nm to obtain the best S/N. The effects of buffer composition highlighted the necessity of adding an organic modifier to the running buffer to obtain a suitable separation. The electrophoretic method permitted the separation of the three main anthocyanins of strawberry extracts, namely pelargonidin 3-glucoside (Pg-glu), pelargonidin 3-rutinoside and cyanidin 3-glucoside. The electrophoretic results, expressed as retention time and separation efficiency of the major anthocyanin (Pg-glu), were compared to those achieved with HPLC, the analytical technique traditionally used for the investigation of anthocyanins in vegetable matrices. The content of Pg-glu in strawberries (cv. Camarosa) determined by the HPCE and HPLC methods was 11.41 mg/L and 11.37 mg/L, respectively.

  17. High-performance liquid chromatography determination of red wine tannin stickiness.

    PubMed

    Revelette, Matthew R; Barak, Jennifer A; Kennedy, James A

    2014-07-16

    Red wine astringency is generally considered to be the sensory result of salivary protein precipitation following tannin-salivary protein interaction and/or of tannin adhering to the oral mucosa. Astringency in red wine is often described using qualitative terms, such as hard and soft, and differences in qualitative description are thought to be due in part to tannin structure. The contribution of tannin chemistry to qualitative description has been shown to correlate with the enthalpy of interaction between tannin and a hydrophobic surface. On the basis of these findings, a method was developed that enables the routine determination of the thermodynamics of the tannin interaction with a hydrophobic surface (polystyrene divinylbenzene) for tannins in red wine following direct injection. The optimized analytical method monitors elution at four different column temperatures (25-40 °C, in 5 °C increments), has a 20 min run time, and uses detection at 280 nm. The results of this study confirm that the calculated thermodynamics of the interaction are intensive and therefore provide specific thermodynamic information. Variation in the enthalpy of interaction between tannin and a hydrophobic surface (tannin stickiness) is a unique, concentration-independent analytical parameter. In addition to tannin stickiness, the method provides the tannin concentration.
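
    The thermodynamic step implied above is, in standard chromatographic practice, a van't Hoff analysis: fitting ln k against 1/T gives the interaction enthalpy from the slope. A hedged sketch follows, with retention factors invented for illustration and no claim to reproduce the paper's exact calculation.

    ```python
    # Van't Hoff fit: slope of ln k vs 1/T equals -dH/R.
    import numpy as np

    R = 8.314                                    # gas constant [J mol^-1 K^-1]
    T = np.array([25.0, 30.0, 35.0, 40.0]) + 273.15   # column temperatures [K]
    k = np.array([4.10, 3.55, 3.10, 2.72])       # hypothetical retention factors

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    dH = -R * slope                              # enthalpy of interaction
    print(f"dH = {dH / 1000.0:.1f} kJ/mol")      # negative => exothermic binding
    ```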

  18. Development and validation of a high throughput assay for the quantification of multiple green tea-derived catechins in human plasma.

    PubMed

    Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B

    2018-06-19

    A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography - tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter/intra-batch precision and accuracy within suitable thresholds (precision within 13.8% and accuracy within 12.4%). Recoveries of analytes were consistent between different matrix samples, were compensated for using suitable internal markers, and were within the performance of the instrumentation used. Similarly, chromatographic interferences have been corrected using the selected internal markers. Stability of all analytes in matrix is demonstrated over 32 days and throughout extraction conditions. This method is suitable for high-throughput sample analysis studies. This article is protected by copyright. All rights reserved.

  19. Rapid analysis of charge variants of monoclonal antibodies using non-linear salt gradient in cation-exchange high performance liquid chromatography.

    PubMed

    Joshi, Varsha; Kumar, Vijesh; Rathore, Anurag S

    2015-08-07

    A method is proposed for rapid development of a short, analytical cation-exchange high performance liquid chromatography method for analysis of charge heterogeneity in monoclonal antibody products. The parameters investigated and optimized include pH, shape of the elution gradient and length of the column. It is found that the most important parameter for development of a shorter method is the choice of the shape of the elution gradient. In this paper, we propose a step-by-step approach to develop a non-linear, sigmoidal-shape gradient for analysis of charge heterogeneity for two different monoclonal antibody products. The use of this gradient not only decreases the run time of the method from more than 40 min (conventional method) to 4 min, but also retains resolution. Superiority of the phosphate gradient over the sodium chloride gradient for elution of mAbs is also observed. The method has been successfully evaluated for specificity, sensitivity, linearity, limit of detection, and limit of quantification. Application of this method as a potential at-line process analytical technology tool has been suggested. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Separation and detection of amino acid metabolites of Escherichia coli in microbial fuel cell with CE.

    PubMed

    Wang, Wei; Ma, Lihong; Lin, Ping; Xu, Kaixuan

    2016-07-01

    In this work, CE-LIF was employed to investigate the amino acid metabolites produced by Escherichia coli (E. coli) in a microbial fuel cell (MFC). Two peptides, l-carnosine and l-alanyl-glycine, together with six amino acids (cystine, alanine, lysine, methionine, tyrosine and arginine), were first separated and detected by a CE-LIF system coupled with a homemade spontaneous injection device. The injection device was devised to alleviate the effect of electrical discrimination of analytes during sample injection. All analytes could be completely separated within 8 min with detection limits of 20-300 nmol/L. This method was then applied to analyze the substrate solution containing amino acid metabolites produced by E. coli. l-Carnosine, l-alanyl-glycine, and cystine were used as the carbon, nitrogen, and sulfur sources for the E. coli culture in the MFC to investigate the amino acid metabolites during metabolism. Two MFCs were used to compare the metabolic activity of the bacteria. In the sample collected after 200 h of MFC operation, methionine was detected as a metabolite at a concentration of 23.3 μg/L. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Study of tethered satellite active attitude control

    NASA Technical Reports Server (NTRS)

    Colombo, G.

    1982-01-01

    Existing software was adapted for the study of tethered subsatellite rotational dynamics, an analytic solution for a stable configuration of a tethered subsatellite was developed, the analytic and numerical integrator (computer) solutions for this "test case" were compared in a two mass tether model program (DUMBEL), the existing multiple mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics, the analytic "test case" was verified, and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effects of all tether "inputs," including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether are included.

  2. Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser-induced fluorescence detection.

    PubMed

    Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R; Seliskar, Carl J; Limbach, Patrick A; Heineman, William R

    2010-08-01

    Parallel separations using CE on a multilane microchip with multiplexed LIF detection is demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be determined in parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pKa determination of small molecule analytes is demonstrated with the multilane microchip.

  3. Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser induced fluorescence detection

    PubMed Central

    Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R.; Seliskar, Carl J.; Limbach, Patrick A.; Heineman, William R.

    2010-01-01

    Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser induced fluorescence detection is demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be analyzed on parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pKa determination of small molecule analytes is demonstrated with the multilane microchip. PMID:20737446

  4. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    PubMed

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

    Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server side for the storage of microarray datasets collected from various resources. The client side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. The MAMA implementation will integrate several analytical methods, including meta-analysis, within an open-source framework that offers other developers the flexibility to plug in additional statistical algorithms.

  5. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    PubMed

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

  6. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
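
    The auxiliary randomization step can be illustrated with a toy sketch: each expensive model run is reused for several stochastic HUP draws, multiplying the ensemble size. The model and uncertainty processor below are stand-ins, not the paper's hydrologic model or meta-Gaussian HUP.

        import numpy as np

        rng = np.random.default_rng(0)

        def hydrologic_model(precip):
            """Stand-in deterministic model: maps an input time series
            (precipitation) to an output time series (river stage)."""
            return 0.8 * np.cumsum(precip) / np.arange(1, precip.size + 1)

        def hup_draw(output, sigma=0.1):
            """Toy hydrologic uncertainty processor: stochastic perturbation
            of the model output (the real HUP is meta-Gaussian)."""
            return output + rng.normal(0.0, sigma, size=output.shape)

        m_runs, k_draws, horizon = 50, 20, 24
        inputs = rng.gamma(2.0, 1.0, size=(m_runs, horizon))   # input ensemble

        outputs = [hydrologic_model(x) for x in inputs]        # m expensive runs
        # EBFSR idea: k HUP draws per model run -> m*k predictand members
        ensemble = np.array([hup_draw(y) for y in outputs for _ in range(k_draws)])
        print(ensemble.shape)  # (1000, 24): 1000 members from only 50 model runs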

  7. Using Queuing Theory and Simulation Modelling to Reduce Waiting Times in An Iranian Emergency Department

    PubMed Central

    Haghighinejad, Hourvash Akbari; Kharazmi, Erfan; Hatam, Nahid; Yousefi, Sedigheh; Hesami, Seyed Ali; Danaei, Mina; Askarian, Mehrdad

    2016-01-01

    Background: Hospital emergencies have an essential role in health care systems. In the last decade, developed countries have paid great attention to the overcrowding crisis in emergency departments. Simulation analysis of complex models whose conditions change over time is much more effective than analytical solutions, and the emergency department (ED) is one of the most complex models for analysis. This study aimed to determine the number of waiting patients and the waiting time for emergency department services in an Iranian hospital ED, and to propose scenarios to reduce its queue and waiting time. Methods: This is a cross-sectional study in which simulation software (Arena, version 14) was used. The input information was extracted from the hospital database as well as through sampling. The objective was to evaluate the response variables of waiting time, number waiting and utilization of each server, and to test three scenarios to improve them. Results: Running the models for 30 days revealed that a total of 4088 patients left the ED after being served and 1238 patients were waiting in the queue for admission to the ED bed area at the end of the run (these patients actually received services beyond the defined capacity). In the first scenario, the number of beds had to be increased from 81 to 179 in order for the number waiting at the “bed area” server to become almost zero. The second scenario, which attempted to limit hospitalization time in the ED bed area to the third quartile of the serving-time distribution, could decrease the number waiting to 586 patients. Conclusion: Doubling the bed capacity in the emergency department, and other resources and capacity accordingly, can solve the problem. This includes bed capacity requirements for both critically ill and less critically ill patients. Classification of ED internal sections based on severity of illness instead of medical specialty is another solution. PMID:26793727
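
    For a single service station, queuing theory offers an analytical cross-check on such simulation results. Below is a minimal sketch of the M/M/c (Erlang C) waiting formulas in Python, with hypothetical arrival and service rates; this is a generic textbook model, not the Arena model used in the study.

        from math import factorial

        def erlang_c(arrival_rate, service_rate, servers):
            """Probability that an arriving patient must wait, and the mean
            wait, in an M/M/c queue (Erlang C). A rough analytical sanity
            check for a single service station, not a full ED model."""
            a = arrival_rate / service_rate            # offered load (Erlangs)
            rho = a / servers                          # utilization, must be < 1
            if rho >= 1:
                raise ValueError("Unstable queue: utilization >= 1")
            inv_p0 = sum(a**k / factorial(k) for k in range(servers))
            tail = a**servers / (factorial(servers) * (1 - rho))
            p_wait = tail / (inv_p0 + tail)
            mean_wait = p_wait / (servers * service_rate - arrival_rate)
            return p_wait, mean_wait

        # Hypothetical figures: 8 patients/hour, 20-minute mean service, 4 beds
        p, w = erlang_c(8, 3, 4)
        print(f"P(wait) = {p:.2f}, mean wait = {60 * w:.1f} minutes")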

  8. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  9. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  10. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  11. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  12. Effect of ionization suppression by trace impurities in mobile phase water on the accuracy of quantification by high-performance liquid chromatography/mass spectrometry.

    PubMed

    Herath, H M D R; Shaw, P N; Cabot, P; Hewavitharana, A K

    2010-06-15

    The high-performance liquid chromatography (HPLC) column is capable of enrichment/pre-concentration of trace impurities in the mobile phase during column equilibration, prior to sample injection and elution. These impurities elute during gradient elution and result in significant chromatographic peaks. Three types of purified water were tested for their impurity levels, and hence their performance as mobile phases, in HPLC followed by the total ion current (TIC) mode of MS. Two types of HPLC-grade water produced 3-4 significant peaks in solvent blanks, while LC/MS-grade water produced no peaks (although even LC/MS-grade water produced peaks after a few days of standing). None of the three waters produced peaks in HPLC followed by UV-Vis detection. These peaks, if co-eluted with the analyte, are capable of suppressing or enhancing the analyte signal in a MS detector. As it is not common practice to run solvent blanks in TIC mode, when quantification is commonly carried out using single ion monitoring (SIM) or single or multiple reaction monitoring (SRM or MRM), the effect of co-eluting impurities on the analyte signal, and hence on the accuracy of the results, is often unknown to the analyst. Running solvent blanks in TIC mode, regardless of the MS mode used for quantification, is essential in order to detect this problem and to take subsequent precautions. Copyright (c) 2010 John Wiley & Sons, Ltd.

  13. Ultrasensitive analysis of lysergic acid diethylamide and its C-8 isomer in hair by capillary zone electrophoresis in combination with a stacking technique and laser induced fluorescence detection.

    PubMed

    Airado-Rodríguez, Diego; Cruces-Blanco, Carmen; García-Campaña, Ana M

    2015-03-25

    This article deals with the development and validation of a novel capillary zone electrophoresis (CZE) with laser induced fluorescence detection method for the analysis of lysergic acid diethylamide (LSD) and its isomer iso-LSD in hair samples. The separation of both analytes has been achieved in less than 13 min in a 72-cm effective length capillary with 75-μm internal diameter. As running buffer, 25 mM citrate at pH 6.0 was employed, and a separation temperature and voltage of 20 °C and 13 kV, respectively, were applied. Field amplified sample injection (FASI) was employed for on-line sample preconcentration, using ultrapure water containing 117 μM H3PO4 as the optimum injection medium. Injection voltage and time were optimized by means of experimental design, obtaining values of 7 kV and 15 s, respectively. Methylergonovine was employed as internal standard in order to compensate for irreproducibility from electrokinetic injection. The analytical method was applied to hair samples after extraction of the target analytes by ultrasound-assisted solid-liquid extraction at 40 °C for 2.5 h, employing acetonitrile as extracting solvent. Linear responses were found for LSD and iso-LSD in matrix-matched calibrations from around 0.400 up to 50.0 pg mg(-1). LODs (3 S/N) on the order of 0.100 pg mg(-1) were calculated for both analytes, with satisfactory recovery percentages obtained for this kind of sample. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Proteomic analysis of serum and sputum analytes distinguishes controlled and poorly controlled asthmatics.

    PubMed

    Kasaian, M T; Lee, J; Brennan, A; Danto, S I; Black, K E; Fitz, L; Dixon, A E

    2018-04-17

    A major goal of asthma therapy is to achieve disease control, with maintenance of lung function, reduced need for rescue medication, and prevention of exacerbation. Despite current standard of care, up to 70% of patients with asthma remain poorly controlled. Analysis of serum and sputum biomarkers could offer insights into parameters associated with poor asthma control. To identify signatures as determinants of asthma disease control, we performed proteomics using Olink proximity extension analysis. Up to 3 longitudinal serum samples were collected from 23 controlled and 25 poorly controlled asthmatics. Nine of the controlled and 8 of the poorly controlled subjects also provided 2 longitudinal sputum samples. The study included an additional cohort of 9 subjects whose serum was collected within 48 hours of asthma exacerbation. Two separate pre-defined Proseek Multiplex panels (INF and CVDIII) were run to quantify 181 separate protein analytes in serum and sputum. Panels consisting of 9 markers in serum (CCL19, CCL25, CDCP1, CCL11, FGF21, FGF23, Flt3L, IL-10Rβ, IL-6) and 16 markers in sputum (tPA, KLK6, RETN, ADA, MMP9, Chit1, GRN, PGLYRP1, MPO, HGF, PRTN3, DNER, PI3, Chi3L1, AZU1, and OPG) distinguished controlled and poorly controlled asthmatics. The sputum analytes were consistent with a pattern of neutrophil activation associated with poor asthma control. The serum analyte profile of the exacerbation cohort resembled that of the controlled group rather than that of the poorly controlled asthmatics, possibly reflecting a therapeutic response to systemic corticosteroids. Proteomic profiles in serum and sputum distinguished controlled and poorly controlled asthmatics, and were maintained over time. Findings support a link between sputum neutrophil markers and loss of asthma control. © 2018 John Wiley & Sons Ltd.

  15. Determination of microbial phenolic acids in human faeces by UPLC-ESI-TQ MS.

    PubMed

    Sánchez-Patán, Fernando; Monagas, María; Moreno-Arribas, M Victoria; Bartolomé, Begoña

    2011-03-23

    The aim of the present work was to develop a reproducible, sensitive, and rapid UPLC-ESI-TQ MS analytical method for determination of microbial phenolic acids and other related compounds in faeces. A total of 47 phenolic compounds including hydroxyphenylpropionic, hydroxyphenylacetic, hydroxycinnamic, hydroxybenzoic, and hydroxymandelic acids and simple phenols were considered. To prepare an optimum pool standard solution, analytes were classified into 5 different groups with different starting concentrations according to their MS response. The developed UPLC method allowed a high resolution of the pool standard solution within an 18 min injection run time. The LOD of phenolic compounds ranged from 0.001 to 0.107 μg/mL and the LOQ from 0.003 to 0.233 μg/mL. The method precision met acceptance criteria (<15% RSD) for all analytes, and accuracy was >80%. The method was applied to faecal samples collected before and after the intake of a flavan-3-ol supplement by a healthy volunteer. Both external and internal calibration methods were considered for quantification purposes, using 4-hydroxybenzoic-2,3,4,5-d4 acid as internal standard. For most analytes and samples, the level of microbial phenolic acids did not differ between the two calibration methods. The results revealed an increase in protocatechuic, syringic, benzoic, p-coumaric, phenylpropionic, 3-hydroxyphenylacetic, and 3-hydroxyphenylpropionic acids, although differences due to the intake were only significant for the latter compound. In conclusion, the UPLC-DAD-ESI-TQ MS method developed is suitable for targeted analysis of microbial-derived phenolic metabolites in faecal samples from human intervention or in vitro fermentation studies, which require high sensitivity and throughput.
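
    Internal calibration of the kind described here fits the analyte/internal-standard response ratio against standard concentration, so that injection and matrix variability largely cancel in the ratio. A minimal sketch with hypothetical responses (the deuterated internal standard is assumed to be spiked at a fixed level):

        import numpy as np

        def internal_std_calibration(conc_std, resp_analyte, resp_is):
            """Fit a straight line through (concentration, analyte/IS
            response ratio) pairs; returns a function mapping a sample's
            response ratio back to concentration."""
            ratios = np.asarray(resp_analyte) / np.asarray(resp_is)
            slope, intercept = np.polyfit(conc_std, ratios, 1)
            return lambda r: (r - intercept) / slope

        # Hypothetical calibration standards at 0.01-1 ug/mL with the
        # internal standard spiked at a constant level in every standard
        quantify = internal_std_calibration(
            conc_std=[0.01, 0.05, 0.1, 0.5, 1.0],
            resp_analyte=[120, 610, 1190, 6050, 12100],
            resp_is=[1000, 1010, 995, 1005, 1000],
        )
        sample_ratio = 3500 / 998          # analyte peak area / IS peak area
        print(f"Estimated concentration: {quantify(sample_ratio):.3f} ug/mL")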

  16. Advantages of using tetrahydrofuran-water as mobile phases in the quantitation of cyclosporin A in monkey and rat plasma by liquid chromatography-tandem mass spectrometry.

    PubMed

    Li, Austin C; Li, Yinghe; Guirguis, Micheal S; Caldwell, Robert G; Shou, Wilson Z

    2007-01-04

    A new analytical method is described here for the quantitation of the anti-inflammatory drug cyclosporin A (CyA) in monkey and rat plasma. The method used tetrahydrofuran (THF)-water mobile phases to elute the analyte and internal standard, cyclosporin C (CyC). The gradient mobile phase program successfully eluted CyA as a sharp peak and therefore improved resolution between the analyte and possible interfering materials compared with previously reported analytical approaches, in which CyA eluted as a broad peak due to the rapid conversion between different conformers. The sharp peak resulting from this method facilitated quantitative calculation, as multiple smoothing and large bunching factors were not necessary. The chromatography in the new method was performed at 30 degrees C instead of the 65-70 degrees C reported previously. Other advantages of the method include simple and fast sample extraction by protein precipitation, direct injection of the extraction supernatant onto the column for analysis, and elimination of the evaporation and reconstitution steps needed in previously reported solid-phase or liquid-liquid extraction procedures. This method is amenable to high-throughput analysis with a total chromatographic run time of 3 min. This approach has been verified as sensitive, linear (0.977-4000 ng/mL), accurate and precise for the quantitation of CyA in monkey and rat plasma. However, compared with conventional mobile phases, the only drawback of this approach was the reduced detection response from the mass spectrometer, possibly caused by poor desolvation in the ionization source. This is the first report to demonstrate the advantages of using THF-water mobile phases to elute CyA in liquid chromatography.

  17. Effect of Minimalist Footwear on Running Efficiency

    PubMed Central

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  18. Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document

    NASA Technical Reports Server (NTRS)

    Taylor, B. N.; Loscutoff, A. V.

    1972-01-01

    Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.

  19. Turbulent Collapse of Gravitationally Bound Clouds

    NASA Astrophysics Data System (ADS)

    Murray, Daniel W.

    In this dissertation, I explore the time-variable rate of star formation, using both numerical and analytic techniques. I discuss the dynamics of collapsing regions, the effect of protostellar jets, and development of software for use in the hydrodynamic code RAMSES. I perform high-resolution adaptive mesh refinement simulations of star formation in self-gravitating, turbulently driven gas. I have run simulations including hydrodynamics (HD), and HD with protostellar jet feedback. Accretion begins when the turbulent fluctuations on large scales, near the driving scale, produce a converging flow. I find that the character of the collapse changes at two radii, the disk radius rd, and the radius r* where the enclosed gas mass exceeds the stellar mass. This is the first numerical work to show that the density evolves to a fixed attractor, rho(r, t) → rho(r), for rd < r < r*; mass flows through this structure onto a sporadically gravitationally unstable disk, and from thence onto the star. The total stellar mass M*(t) ∝ (t - t*)^2, where t - t* is the time elapsed since the formation of the first star. This is in agreement with previous numerical and analytic work that suggests a linear rate of star formation. I show that protostellar jets change the normalization of the stellar mass accretion rate, but do not strongly affect the dynamics of star formation in hydrodynamics runs. In particular, M*(t) ∝ (1 - fjet)^2 (t - t*)^2, where fjet is the fraction of mass ejected by the jet rather than accreted onto the protostar. For typical values of fjet ≈ 0.1-0.3 the accretion rate onto the star can be reduced by a factor of two or three. However, I find that jets have only a small effect (of order 25%) on the accretion rate onto the protostellar disk (the "raw" accretion rate). In other words, jets do not affect the dynamics of the infall, but rather simply eject mass before it reaches the star. Finally, I show that the small-scale structure--the radial density, velocity, and mass accretion profiles--are very similar in the jet and no-jet cases.

  20. 5K Run: 7-Week Training Schedule for Beginners

    MedlinePlus

    ... This 5K training schedule incorporates a mix of running, walking and resting. This combination helps reduce the ... you'll gradually increase the amount of time running and reduce the amount of time walking. If ...

  1. Effects of a minimalist shoe on running economy and 5-km running performance.

    PubMed

    Fuller, Joel T; Thewlis, Dominic; Tsiros, Margarita D; Brown, Nicholas A T; Buckley, Jonathan D

    2016-09-01

    The purpose of this study was to determine if minimalist shoes improve time trial performance of trained distance runners and if changes in running economy, shoe mass, stride length, stride rate and footfall pattern were related to any difference in performance. Twenty-six trained runners performed three 6-min sub-maximal treadmill runs at 11, 13 and 15 km·h(-1) in minimalist and conventional shoes while running economy, stride length, stride rate and footfall pattern were assessed. They then performed a 5-km time trial. In the minimalist shoe, runners completed the trial in less time (effect size 0.20 ± 0.12), were more economical during sub-maximal running (effect size 0.33 ± 0.14) and decreased stride length (effect size 0.22 ± 0.10) and increased stride rate (effect size 0.22 ± 0.11). All but one runner ran with a rearfoot footfall in the minimalist shoe. Improvements in time trial performance were associated with improvements in running economy at 15 km·h(-1) (r = 0.58), with 79% of the improved economy accounted for by reduced shoe mass (P < 0.05). The results suggest that running in minimalist shoes improves running economy and 5-km running performance.

  2. Sex-related differences in the wheel-running activity of mice decline with increasing age.

    PubMed

    Bartling, Babett; Al-Robaiy, Samiya; Lehnich, Holger; Binder, Leonore; Hiebl, Bernhard; Simm, Andreas

    2017-01-01

    Laboratory mice of both sexes having free access to running wheels are commonly used to study mechanisms underlying the beneficial effects of physical exercise on health and aging in humans. However, comparative wheel-running activity profiles of male and female mice over a long period of time, in which increasing age plays an additional role, are unknown. Therefore, we permanently recorded the wheel-running activity (i.e., total distance, median velocity, time of breaks) of female and male mice until 9 months of age. Our records indicated higher wheel-running distances for females than for males, which were highest in 2-month-old mice. This was mainly achieved by higher running velocities of the females and not by longer running times. However, the sex-related differences declined in parallel with the age-associated reduction in wheel-running activity. Female mice also showed more variance between the weekly running distances than males, which was most pronounced in females 4-6 months old but not older. Additional records of 24-month-old mice of both sexes indicated highly reduced wheel-running activity at old age. Surprisingly, this reduction at old age resulted mainly from lower running velocities and not from shorter running times. Old mice also differed in their course of night activity, which peaked later compared to younger mice. In summary, we demonstrated the influence of sex on the age-dependent activity profile of mice, which contrasts somewhat with humans; this has to be considered when transferring exercise-mediated mechanisms from mouse to human. Copyright © 2016. Published by Elsevier Inc.

  3. Ketamine metabolites with antidepressant effects: Fast, economical, and eco-friendly enantioselective separation based on supercritical-fluid chromatography (SFC) and single quadrupole MS detection.

    PubMed

    Fassauer, Georg M; Hofstetter, Robert; Hasan, Mahmoud; Oswald, Stefan; Modeß, Christina; Siegmund, Werner; Link, Andreas

    2017-11-30

    Increasing evidence indicates that metabolites of the dissociative anesthetic ketamine contribute considerably to the biological effects of this drug and could be developed as next-generation antidepressants, especially for acute treatment of patients with therapy-refractory major depression. Analytical methods for the simultaneous determination of the plethora of hydroxylated, dehydrogenated and/or demethylated compounds formed after administration of ketamine hydrochloride are a prerequisite for future clinical investigations and a deeper understanding of the individual role of the isomers of these metabolites. In this study, we present the development and validation of a method based on supercritical-fluid chromatography (SFC) coupled to single quadrupole MS detection that allows the separation of ketamine as well as all of its relevant metabolites detected in urine of healthy volunteers. Owing to the inherent speed of SFC, the run times of the novel protocol are four times shorter than those of a comparable HPLC method and the use of organic solvents is reduced. We were able to demonstrate and validate the successful enantioselective separation and quantification of R- and S-ketamine, R- and S-norketamine, R- and S-dehydronorketamine and (2R,6R)- and (2S,6S)-hydroxynorketamine isomers, differing in either constitution, stereochemistry, or both, in one run. The developed method may be useful in investigating the antidepressant efficacy of ketamine in clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Determination of artificial sweeteners by capillary electrophoresis with contactless conductivity detection optimized by hydrodynamic pumping.

    PubMed

    Stojkovic, Marko; Mai, Thanh Duc; Hauser, Peter C

    2013-07-17

    The common sweeteners aspartame, cyclamate, saccharin and acesulfame K were determined by capillary electrophoresis with contactless conductivity detection. In order to obtain the best compromise between separation efficiency and analysis time, hydrodynamic pumping was imposed during the electrophoresis run employing a sequential injection manifold based on a syringe pump. Band broadening was avoided by using capillaries of a narrow 10 μm internal diameter. The analyses were carried out in an aqueous running buffer consisting of 150 mM 2-(cyclohexylamino)ethanesulfonic acid and 400 mM tris(hydroxymethyl)aminomethane at pH 9.1 in order to render all analytes in the fully deprotonated anionic form. The use of surface modification to eliminate or reverse the electroosmotic flow was not necessary due to the superimposed bulk flow. The use of hydrodynamic pumping allowed easy optimization, either for fast separations (80 s) or low detection limits (6.5 μmol L(-1), 5.0 μmol L(-1), 4.0 μmol L(-1) and 3.8 μmol L(-1) for aspartame, cyclamate, saccharin and acesulfame K respectively, at a separation time of 190 s). The conditions for fast separations not only led to higher limits of detection but also to a narrower dynamic range. However, the settings can be changed readily between separations if needed. The four compounds were determined successfully in food samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Sownak; Li, Baojiu; He, Jian-hua

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  6. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
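
    For reference, the Gauss-Seidel relaxation named above updates each grid unknown in place from its neighbours' latest values; its slow convergence on fine grids is the bottleneck the new method sidesteps. A minimal 1D Poisson sketch of the iterative scheme (an illustration only, not the actual f(R) solver):

        import numpy as np

        def gauss_seidel_poisson_1d(rho, h, n_iter=5000):
            """Solve u'' = rho on a 1D grid with u=0 at both ends by
            Gauss-Seidel sweeps: each unknown is updated in place using
            its neighbours' latest values. Convergence degrades as the
            grid is refined, which is why high resolution is costly."""
            u = np.zeros_like(rho)
            for _ in range(n_iter):
                for i in range(1, len(u) - 1):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * rho[i])
            return u

        n = 65
        x = np.linspace(0.0, 1.0, n)
        rho = np.sin(np.pi * x)                     # source term
        u = gauss_seidel_poisson_1d(rho, x[1] - x[0])
        # Analytic solution of u'' = sin(pi x) with u(0) = u(1) = 0
        exact = -np.sin(np.pi * x) / np.pi**2
        print(f"max error: {np.max(np.abs(u - exact)):.2e}")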

  7. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    ERIC Educational Resources Information Center

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
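
    In the same spirit, the run time of the bubble sort algorithm can be derived empirically by timing it at doubling input sizes; a quadratic algorithm should take roughly four times longer per doubling. A minimal Python sketch:

        import random
        import time

        def bubble_sort(a):
            """Classic bubble sort: repeatedly sweep the list, swapping
            adjacent out-of-order pairs; O(n^2) comparisons on average."""
            n = len(a)
            for i in range(n - 1):
                for j in range(n - 1 - i):
                    if a[j] > a[j + 1]:
                        a[j], a[j + 1] = a[j + 1], a[j]

        # Empirical run-time derivation: time the sort at doubling sizes
        for n in (500, 1000, 2000, 4000):
            data = [random.random() for _ in range(n)]
            t0 = time.perf_counter()
            bubble_sort(data)
            print(f"n={n:5d}: {time.perf_counter() - t0:.3f} s")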

  8. The relationship between aerobic fitness and recovery from high-intensity exercise in infantry soldiers.

    PubMed

    Hoffman, J R

    1997-07-01

    The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 +/- 2.4%) and groups 3 (2.6 +/- 1.7%), 4 (2.3 +/- 1.6%), and 5 (2.3 +/- 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.
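
    The fatigue index described above is straightforward to compute; expressed as the percent slow-down of the mean sprint relative to the fastest sprint, it matches the magnitude of the percentage values reported. A one-function sketch with hypothetical sprint times:

        def fatigue_index(sprint_times):
            """Fatigue index as defined in the study: mean sprint time
            divided by the fastest sprint time, expressed here as the
            percent slow-down relative to the best effort."""
            mean_t = sum(sprint_times) / len(sprint_times)
            return (mean_t / min(sprint_times) - 1.0) * 100.0

        # Hypothetical 140-m sprint times (seconds) for one soldier
        print(f"Fatigue index: {fatigue_index([24.1, 24.8, 25.3]):.1f}%")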

  9. A rapid and sensitive LC-MS/MS assay for the determination of saxagliptin and its active metabolite 5-hydroxy saxagliptin in human plasma and its application to a pharmacokinetic study.

    PubMed

    Batta, N; Pilli, N R; Derangula, V R; Vurimindi, H B; Damaramadugu, R; Yejella, R P

    2015-03-01

    The authors proposed a simple, rapid and sensitive liquid chromatography-tandem mass spectrometric (LC-MS/MS) assay method for the simultaneous determination of saxagliptin and its active metabolite 5-hydroxy saxagliptin in human plasma. The developed method was fully validated as per the US FDA guidelines. The method utilized stable labeled isotopes saxagliptin-15 N d2 (IS1) and 5-hydroxy saxagliptin-15 N-d2 (IS2) as internal standards for the quantification of saxagliptin and 5-hydroxy saxagliptin, respectively. Analytes and the internal standards were extracted from human plasma by a single step solid-phase extraction technique without drying, evaporation and reconstitution steps. The optimized mobile phase was composed of 0.1% acetic acid in 5 mM ammonium acetate and acetonitrile (30:70, v/v) and delivered at a flow rate of 0.85 mL/min. The method exhibits the linear calibration range of 0.05-100 ng/mL for both the analytes. The precision and accuracy results for both the analytes were well within the acceptance limits. The different stability experiments conducted in aqueous samples and in matrix samples are meeting the acceptance criteria. The chromatographic run time was set at 1.8 min; hence more than 400 samples can be analyzed in a single day. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Determination of hydroxytyrosol and tyrosol by liquid chromatography for the quality control of cosmetic products based on olive extracts.

    PubMed

    Miralles, Pablo; Chisvert, Alberto; Salvador, Amparo

    2015-01-01

    An analytical method for the simultaneous determination of hydroxytyrosol and tyrosol in different types of olive extract raw materials and cosmetic cream samples has been developed. The determination was performed by liquid chromatography with UV spectrophotometric detection. Different chromatographic parameters, such as mobile phase pH and composition, oven temperature and different sample preparation variables were studied. The best chromatographic separation was obtained under the following conditions: C18 column set at 35°C and isocratic elution of a mixture ethanol: 1% acetic acid solution at pH 5 (5:95, v/v) as mobile phase pumped at 1 mL min(-1). The detection wavelength was set at 280 nm and the total run time required for the chromatographic analysis was 10 min, except for cosmetic cream samples where 20 min runtime was required (including a cleaning step). The method was satisfactorily applied to 23 samples including solid, water-soluble and fat-soluble olive extracts and cosmetic cream samples containing hydroxytyrosol and tyrosol. Good recoveries (95-107%) and repeatability (1.1-3.6%) were obtained, besides of limits of detection values below the μg mL(-1) level. These good analytical features, as well as its environmentally-friendly characteristics, make the presented method suitable to carry out both the control of the whole manufacture process of raw materials containing the target analytes and the quality control of the finished cosmetic products. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. PCB congener analysis with Hall electrolytic conductivity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edstrom, R.D.

    1989-01-01

    This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the fish samples collected from the lower James River and lower Chesapeake Bay during the fall of 1985.
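
    The multiple-retention-marker system described is consistent with computing a relative retention index by interpolating between the two markers that bracket an analyte's retention time. The record does not give the exact marker set or interpolation, so the sketch below is an assumption with hypothetical markers:

        import bisect

        def retention_index(rt, marker_rts, marker_indices):
            """Relative retention index by linear interpolation between
            the two retention markers bracketing the analyte's retention
            time rt. marker_rts must be sorted in ascending order."""
            i = bisect.bisect_right(marker_rts, rt)
            if i == 0 or i == len(marker_rts):
                raise ValueError("retention time outside marker range")
            t0, t1 = marker_rts[i - 1], marker_rts[i]
            n0, n1 = marker_indices[i - 1], marker_indices[i]
            return n0 + (n1 - n0) * (rt - t0) / (t1 - t0)

        # Hypothetical markers (retention time in min -> assigned index)
        markers_rt = [5.2, 12.7, 21.4, 33.9, 47.5]
        markers_ix = [100, 200, 300, 400, 500]
        print(f"RI = {retention_index(18.0, markers_rt, markers_ix):.1f}")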

  12. Development of a multi-matrix LC-MS/MS method for urea quantitation and its application in human respiratory disease studies.

    PubMed

    Wang, Jianshuang; Gao, Yang; Dorshorst, Drew W; Cai, Fang; Bremer, Meire; Milanowski, Dennis; Staton, Tracy L; Cape, Stephanie S; Dean, Brian; Ding, Xiao

    2017-01-30

    In human respiratory disease studies, liquid samples such as nasal secretion (NS), lung epithelial lining fluid (ELF), or upper airway mucosal lining fluid (MLF) are frequently collected, but their volumes often remain unknown. The lack of volume information makes it hard to estimate the actual concentration of recovered active pharmaceutical ingredient or biomarkers. Urea has been proposed to serve as a sample volume marker because it can freely diffuse through most body compartments and is less affected by disease states. Here, we report an easy and reliable LC-MS/MS method for cross-matrix measurement of urea in serum, plasma, universal transfer medium (UTM), synthetic absorptive matrix elution buffer 1 (SAMe1) and synthetic absorptive matrix elution buffer 2 (SAMe2), which are commonly sampled in human respiratory disease studies. The method uses two stable-isotope-labeled urea isotopologues, [15N2]-urea and [13C,15N2]-urea, as the surrogate analyte and the internal standard, respectively. This approach provides the best measurement consistency across different matrices. The analyte extraction was individually optimized in each matrix. Specifically, in UTM, SAMe1 and SAMe2, the unique salting-out assisted liquid-liquid extraction (SALLE) not only dramatically reduces the matrix interferences but also improves the assay recovery. The use of an HILIC column largely increases the analyte retention. The typical run time is 3.6 min, which allows for high-throughput analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Sewage-based epidemiology in monitoring the use of new psychoactive substances: Validation and application of an analytical method using LC-MS/MS.

    PubMed

    Kinyua, Juliet; Covaci, Adrian; Maho, Walid; McCall, Ann-Kathrin; Neels, Hugo; van Nuijs, Alexander L N

    2015-09-01

    Sewage-based epidemiology (SBE) employs the analysis of sewage to detect and quantify drug use within a community. While SBE has been applied repeatedly for the estimation of classical illicit drug use, only a few studies have investigated new psychoactive substances (NPS). These compounds mimic the effects of illicit drugs by introducing slight modifications to the chemical structures of controlled illicit drugs. We describe the optimization, validation, and application of an analytical method using liquid chromatography coupled to positive electrospray tandem mass spectrometry (LC-ESI-MS/MS) for the determination of seven NPS in sewage: methoxetamine (MXE), butylone, ethylone, methylone, methiopropamine (MPA), 4-methoxymethamphetamine (PMMA), and 4-methoxyamphetamine (PMA). Sample preparation was performed using solid-phase extraction (SPE) with Oasis MCX cartridges. The LC separation was done with a HILIC (150 x 3 mm, 5 µm) column, which ensured good resolution of the analytes within a total run time of 19 min. The lower limit of quantification (LLOQ) was between 0.5 and 5 ng/L for all compounds. The method was validated by evaluating the following parameters: sensitivity, selectivity, linearity, accuracy, precision, recoveries and matrix effects. The method was applied to sewage samples collected from sewage treatment plants in Belgium and Switzerland, in which all investigated compounds were detected except MPA and PMA. Furthermore, a consistent presence of MXE was observed in most of the sewage samples at levels higher than the LLOQ. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Liquid chromatographic tandem mass spectrometric assay for quantification of 97/78 and its metabolite 97/63: a promising trioxane antimalarial in monkey plasma.

    PubMed

    Singh, R P; Sabarinath, S; Gautam, N; Gupta, R C; Singh, S K

    2009-07-15

    The present manuscript describes the development and validation of an LC-MS/MS assay for the simultaneous quantitation of 97/78 and its active in-vivo metabolite 97/63 in monkey plasma, using alpha-arteether as internal standard (IS). The method involves single-step protein precipitation with acetonitrile for extraction. The analytes were separated on a Columbus C18 (50 mm x 2 mm i.d., 5 microm particle size) column by isocratic elution with acetonitrile:ammonium acetate buffer (pH 4, 10 mM) (80:20 v/v) at a flow rate of 0.45 mL/min, and analyzed by mass spectrometry in multiple reaction monitoring (MRM) positive ion mode. The chromatographic run time was 4.0 min and the weighted (1/x²) calibration curves were linear over a range of 1.56-200 ng/mL. The method was linear for both analytes with correlation coefficients >0.995. The intra-day and inter-day accuracy (% bias) and precision (% RSD) of the assay were less than 6.27%. Both analytes were stable after three freeze-thaw cycles (% deviation <8.2) and also for 30 days in plasma (% deviation <6.7). The absolute recoveries of 97/78, 97/63 and the internal standard (IS) from spiked plasma samples were >90%. The validated assay method described here was successfully applied to a pharmacokinetic study of 97/78 and its active in-vivo metabolite 97/63 in Rhesus monkeys.
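
    The weighted (1/x²) calibration mentioned above down-weights the high end of the curve, the usual choice when assay error grows roughly in proportion to concentration. A minimal sketch with hypothetical standards over the assay's 1.56-200 ng/mL range:

        import numpy as np

        def weighted_linear_calibration(x, y):
            """Weighted least-squares calibration line with 1/x^2 weights.
            np.polyfit applies w to the residuals before squaring, so
            passing w = 1/x yields effective weights of 1/x^2."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)
            return slope, intercept

        # Hypothetical calibration standards (concentration, response)
        conc = [1.56, 3.13, 6.25, 12.5, 25, 50, 100, 200]
        resp = [0.020, 0.041, 0.079, 0.161, 0.320, 0.655, 1.290, 2.610]
        m, b = weighted_linear_calibration(conc, resp)
        print(f"response = {m:.5f} * conc + {b:.5f}")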

  15. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
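
    A minimal sketch of the scaling step follows. In the paper, the layer time fractions come from the closed-form average-path expression; here they are simply assumed as inputs, and the zero-absorption curve is a toy placeholder:

        import numpy as np

        C_VACUUM = 299.8   # speed of light in vacuum, mm/ns

        def scale_reflectance(t, r0, mua_layers, time_fractions, n_tissue=1.4):
            """Scale a zero-absorption time-resolved reflectance curve r0(t)
            by a weighted Beer-Lambert factor: layer i contributes
            exp(-mua_i * v * f_i * t), where f_i is the fraction of its
            flight time the photon spends in layer i."""
            v = C_VACUUM / n_tissue                   # speed in tissue, mm/ns
            mua = np.asarray(mua_layers)              # absorption, 1/mm
            f = np.asarray(time_fractions)            # fractions, sum to 1
            return r0 * np.exp(-np.sum(mua * f) * v * t)

        t = np.linspace(0.01, 2.0, 200)               # ns
        r0 = t**-1.5 * np.exp(-0.5 / t)               # toy zero-absorption curve
        r = scale_reflectance(t, r0, mua_layers=[0.01, 0.002],
                              time_fractions=[0.3, 0.7])
        print(f"attenuation at t = 1 ns: {r[100] / r0[100]:.3f}")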

  16. Target screening and confirmation of 35 licit and illicit drugs and metabolites in hair by LC-MSMS.

    PubMed

    Lendoiro, Elena; Quintela, Oscar; de Castro, Ana; Cruz, Angelines; López-Rivadulla, Manuel; Concheiro, Marta

    2012-04-10

    A liquid chromatography-tandem mass spectrometry (LC-MSMS) target screening in 50 mg hair was developed and fully validated for 35 analytes (Δ9-tetrahydrocannabinol (THC), morphine, 6-acetylmorphine, codeine, methadone, fentanyl, amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine, 3,4-methylenedioxymethamphetamine, benzoylecgonine, cocaine, lysergic acid diethylamide, ketamine, scopolamine, alprazolam, bromazepam, clonazepam, diazepam, flunitrazepam, 7-aminoflunitrazepam, lorazepam, lormetazepam, nordiazepam, oxazepam, tetrazepam, triazolam, zolpidem, zopiclone, amitriptyline, citalopram, clomipramine, fluoxetine, paroxetine and venlafaxine). Hair decontamination was performed with dichloromethane, followed by incubation in 2 mL of acetonitrile at 50°C overnight. The extraction procedure was performed in 2 steps: first liquid-liquid extraction with hexane:ethyl acetate (55:45, v:v) at pH 9, followed by solid-phase extraction (Strata-X cartridges). Chromatographic separation was performed on an Atlantis T3 (2.1 mm × 100 mm, 3 μm) column, with acetonitrile and ammonium formate pH 3 as mobile phase, and a 32 min total run time. One transition per analyte was monitored in MRM mode. To confirm a positive result, a second injection monitoring 2 transitions was performed. The method was specific (no endogenous interferences, n=9); LOD was 0.2-50 pg/mg and LOQ 0.5-100 pg/mg; linearity ranged from 0.5-100 to 2000-20,000 pg/mg; imprecision was <15%; analytical recovery 85-115%; extraction efficiency 4.1-85.6%; and process efficiency 2.5-207.7%; 27 analytes showed ion suppression (up to -86.2%), 4 ion enhancement (up to 647.1%), and 4 no matrix effect; compounds showed good stability over 24-48 h in the autosampler. The method was applied to 17 forensic cases. In conclusion, a sensitive and specific target screening of 35 analytes in 50 mg hair, including drugs of abuse (THC, cocaine, opiates, amphetamines) and medicines (benzodiazepines, antidepressants), was developed and validated, achieving lower cut-offs than the Society of Hair Testing recommendations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  18. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  19. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  20. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  1. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...

  2. Rollout and Turnoff (ROTO) Guidance and Information Displays: Effect on Runway Occupancy Time in Simulated Low-Visibility Landings

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith

    2001-01-01

    This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.

    In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. In a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of the mixture components (MCs) of the microemulsion and on the values of the process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to an MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The resulting data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.
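
    As an illustration of what the terms of such a model look like, the sketch below assembles a reduced subset of the blending and process-variable terms named in the abstract (special-cubic blending of three MCs, PV quadratics, and MC-PV cross terms); it is not the authors' 46-term model, and everything beyond the abstract's wording is assumed.

```python
# Illustrative model-matrix builder for a mixture-process variable model:
# special-cubic blending of three mixture components (MCs), quadratic
# process-variable (PV) effects, and MC-PV interactions. A reduced,
# illustrative subset of terms only.
import itertools
import numpy as np

def mpv_model_matrix(mc, pv):
    """mc: (n, 3) mixture proportions summing to 1; pv: (n, 3) process vars."""
    cols = [mc[:, i] for i in range(3)]                        # linear blending
    cols += [mc[:, i] * mc[:, j]
             for i, j in itertools.combinations(range(3), 2)]  # binary blending
    cols += [mc[:, 0] * mc[:, 1] * mc[:, 2]]                   # special cubic
    cols += [pv[:, k] for k in range(3)]                       # PV main effects
    cols += [pv[:, k] ** 2 for k in range(3)]                  # PV quadratics
    cols += [mc[:, i] * pv[:, k]                               # MC-PV terms
             for i in range(3) for k in range(3)]
    return np.column_stack(cols)
```

    The peak-efficiency responses would then be regressed on these columns (with no intercept, as is usual for mixture models) to obtain the fitted MPV model.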

  4. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
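
    The core alignment idea, nonlinear retention-time correction between runs, can be sketched in a few lines. This is not the TRIC code; it is a hedged stand-in that fits a LOWESS trend through anchor analytes shared by two runs and transfers query retention times onto the reference time scale.

```python
# Minimal sketch of nonlinear retention-time (RT) alignment between two
# LC-MS/MS runs using a LOWESS smoother over shared anchor analytes.
# Illustrative only; TRIC's graph-based strategy is more elaborate.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def align_rt(rt_run, rt_ref, rt_query, frac=0.3):
    """rt_run, rt_ref: anchor RTs observed in both runs;
    rt_query: RTs in the query run to map onto the reference scale."""
    trend = lowess(rt_ref, rt_run, frac=frac, return_sorted=True)
    return np.interp(rt_query, trend[:, 0], trend[:, 1])
```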

  5. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    PubMed

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sampling frequencies, statistical interpretations, and degrees of immediacy. Both processes have evolved, absorbing new understandings of the concepts of laboratory error, sample matrix, and assay capability. However, at the coalface, we do not believe that either process has led to much recent improvement in patient outcomes. It is the increasing reliability and automation of analytical platforms, along with the improved stability of reagents, that has reduced systematic and random error, which in turn has reduced the risk of running IQC less frequently. We suggest that it is time to rethink the role of both processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker identification of, and response to, out-of-control situations.
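
    To make the proposed model concrete, here is a hedged sketch of an Average of Normals control rule; the reference interval, window size, and control limits are illustrative choices, not values from the paper.

```python
# Hedged Average of Normals (AoN) sketch: flag a possible analytical shift
# when the mean of recent in-range patient results drifts too far from a
# stable-period target. All numeric settings are illustrative.
import numpy as np

def aon_flag(results, lo=3.5, hi=5.3, window=20, target=4.4, sd=0.15, k=3.0):
    """Return True when the windowed mean of in-range results sits more
    than k standard errors from the stable-period target."""
    in_range = [x for x in results if lo <= x <= hi][-window:]
    if len(in_range) < window:
        return False                       # not enough data yet
    se = sd / np.sqrt(window)
    return abs(np.mean(in_range) - target) > k * se
```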

  6. Adaptive real-time dual-comb spectroscopy.

    PubMed

    Ideguchi, Takuro; Poisson, Antonin; Guelachvili, Guy; Picqué, Nathalie; Hänsch, Theodor W

    2014-02-27

    The spectrum of a laser frequency comb consists of several hundred thousand equally spaced lines over a broad spectral bandwidth. Such frequency combs have revolutionized optical frequency metrology and they now hold much promise for significant advances in a growing number of applications including molecular spectroscopy. Despite an intriguing potential for the measurement of molecular spectra spanning tens of nanometres within tens of microseconds at Doppler-limited resolution, the development of dual-comb spectroscopy is hindered by the demanding stability requirements of the laser combs. Here we overcome this difficulty and experimentally demonstrate a concept of real-time dual-comb spectroscopy, which compensates for laser instabilities by electronic signal processing. It only uses free-running mode-locked lasers without any phase-lock electronics. We record spectra spanning the full bandwidth of near-infrared fibre lasers with Doppler-limited line profiles highly suitable for measurements of concentrations or line intensities. Our new technique of adaptive dual-comb spectroscopy offers a powerful transdisciplinary instrument for analytical sciences.
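
    The compensation step can be pictured as resampling. The sketch below assumes one records the unwrapped phase of a reference beat note and resamples the interferogram so that this phase advances uniformly; the paper performs the correction with analog electronics, so this digital version is only a conceptual stand-in.

```python
# Conceptual sketch of adaptive resampling for dual-comb interferograms:
# map samples onto a grid where the reference beat phase is uniform,
# cancelling relative timing jitter of the free-running combs.
import numpy as np

def adaptive_resample(interferogram, ref_phase):
    """interferogram: measured samples; ref_phase: unwrapped reference
    beat phase at the same sample instants."""
    uniform_phase = np.linspace(ref_phase[0], ref_phase[-1], ref_phase.size)
    return np.interp(uniform_phase, ref_phase, interferogram)
```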

  7. Adaptive real-time dual-comb spectroscopy

    NASA Astrophysics Data System (ADS)

    Ideguchi, Takuro; Poisson, Antonin; Guelachvili, Guy; Picqué, Nathalie; Hänsch, Theodor W.

    2014-02-01

    The spectrum of a laser frequency comb consists of several hundred thousand equally spaced lines over a broad spectral bandwidth. Such frequency combs have revolutionized optical frequency metrology and they now hold much promise for significant advances in a growing number of applications including molecular spectroscopy. Despite an intriguing potential for the measurement of molecular spectra spanning tens of nanometres within tens of microseconds at Doppler-limited resolution, the development of dual-comb spectroscopy is hindered by the demanding stability requirements of the laser combs. Here we overcome this difficulty and experimentally demonstrate a concept of real-time dual-comb spectroscopy, which compensates for laser instabilities by electronic signal processing. It only uses free-running mode-locked lasers without any phase-lock electronics. We record spectra spanning the full bandwidth of near-infrared fibre lasers with Doppler-limited line profiles highly suitable for measurements of concentrations or line intensities. Our new technique of adaptive dual-comb spectroscopy offers a powerful transdisciplinary instrument for analytical sciences.

  8. Advanced ceramic coating development for industrial/utility gas turbine applications

    NASA Technical Reports Server (NTRS)

    Andersson, C. A.; Lau, S. K.; Bratton, R. J.; Lee, S. Y.; Rieke, K. L.; Allen, J.; Munson, K. E.

    1982-01-01

    The effects of ceramic coatings on the lifetimes of metal turbine components and on the performance of a utility turbine, as well as the effects of the turbine operational cycle on the ceramic coatings, were determined. When operating the turbine under conditions of constant cooling flow, the first-row blades run 55 K cooler and, as a result, have 10 times the creep rupture life, 10 times the low-cycle fatigue life, and twice the corrosion life, with only slight decreases in both specific power and efficiency. When operating the turbine at constant metal temperature and reduced cooling flow, both specific power and efficiency increase, with no change in component lifetime. The most severe thermal transient of the turbine causes the coating bond stresses to approach 60% of the bond strengths. Ceramic coating failure was also studied. Analytic models based on fracture mechanics theories, combined with measured properties, quantitatively assessed both single and multiple thermal cycle failures, which allowed the prediction of coating lifetime. Qualitative models for corrosion failures are also presented.

  9. Adaptive real-time dual-comb spectroscopy

    PubMed Central

    Ideguchi, Takuro; Poisson, Antonin; Guelachvili, Guy; Picqué, Nathalie; Hänsch, Theodor W.

    2014-01-01

    The spectrum of a laser frequency comb consists of several hundred thousand equally spaced lines over a broad spectral bandwidth. Such frequency combs have revolutionized optical frequency metrology and they now hold much promise for significant advances in a growing number of applications including molecular spectroscopy. Despite an intriguing potential for the measurement of molecular spectra spanning tens of nanometres within tens of microseconds at Doppler-limited resolution, the development of dual-comb spectroscopy is hindered by the demanding stability requirements of the laser combs. Here we overcome this difficulty and experimentally demonstrate a concept of real-time dual-comb spectroscopy, which compensates for laser instabilities by electronic signal processing. It only uses free-running mode-locked lasers without any phase-lock electronics. We record spectra spanning the full bandwidth of near-infrared fibre lasers with Doppler-limited line profiles highly suitable for measurements of concentrations or line intensities. Our new technique of adaptive dual-comb spectroscopy offers a powerful transdisciplinary instrument for analytical sciences. PMID:24572636

  10. Static Stretching Alters Neuromuscular Function and Pacing Strategy, but Not Performance during a 3-Km Running Time-Trial

    PubMed Central

    Damasceno, Mayara V.; Duarte, Marcos; Pasqua, Leonardo A.; Lima-Silva, Adriano E.; MacIntosh, Brian R.; Bertuzzi, Rômulo

    2014-01-01

    Purpose Previous studies report that static stretching (SS) impairs running economy. Assuming that pacing strategy relies on the rate of energy use, this study aimed to determine whether SS would modify pacing strategy and performance in a 3-km running time-trial. Methods Eleven recreational distance runners performed a) a constant-speed running test without previous SS and a maximal incremental treadmill test; b) an anthropometric assessment and a constant-speed running test with previous SS; c) a 3-km time-trial familiarization on an outdoor 400-m track; d and e) two 3-km time-trials, one with SS (experimental situation) and another without previous static stretching (control situation). The order of sessions d and e was randomized in a counterbalanced fashion. Sit-and-reach and drop jump tests were performed before the 3-km running time-trial in the control situation, and before and after the stretching exercises in the SS situation. Running economy, stride parameters, and electromyographic activity (EMG) of the vastus medialis (VM), biceps femoris (BF) and gastrocnemius medialis (GA) were measured during the constant-speed tests. Results The overall running time did not change with condition (SS 11:35 ± 00:31; control 11:28 ± 00:41 min:s, p = 0.304), but the first 100 m was completed at a significantly lower velocity after SS. Surprisingly, SS did not modify running economy, but the iEMG for the BF (+22.6%, p = 0.031), stride duration (+2.1%, p = 0.053) and range of motion (+11.1%, p = 0.0001) were significantly modified. Drop jump height decreased following SS (−9.2%, p = 0.001). Conclusion Static stretching impaired neuromuscular function, resulting in a slow start during a 3-km running time-trial, thus demonstrating the fundamental role of the neuromuscular system in the self-selected speed during the initial phase of the race. PMID:24905918

  11. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
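
    A toy version of the inspector/executor pattern described above helps fix ideas: the inspector discovers dependences at run time and groups iterations into wavefronts; the executor then runs each wavefront, conceptually in parallel. This sketch tracks only flow (writer-to-reader) dependences and is illustrative, not the paper's implementation.

```python
# Toy inspector/executor: build wavefronts of loop iterations from
# run-time dependence information, then execute wavefront by wavefront.
def inspector(reads, writes, n):
    """reads[i], writes[i]: locations touched by iteration i.
    Each iteration lands one wavefront after the latest writer it follows."""
    last_writer_wave = {}                   # location -> wavefront of writer
    wave = [0] * n
    for i in range(n):
        deps = [last_writer_wave.get(loc, -1)
                for loc in list(reads[i]) + list(writes[i])]
        wave[i] = (max(deps) + 1) if deps else 0
        for loc in writes[i]:
            last_writer_wave[loc] = wave[i]
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

def executor(fronts, body):
    for front in fronts:                    # iterations within a front are
        for i in front:                     # independent; a parallel loop
            body(i)                         # could dispatch them concurrently
```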

  12. An alternative approach to the Army Physical Fitness Test two-mile run using critical velocity and isoperformance curves.

    PubMed

    Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R

    2012-02-01

    The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean ± SE; age: 22.1 ± 0.34 years; VO2(MAX): 46.1 ± 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity, as illustrated in the sketch below. A VO2(MAX) prediction equation (coefficient of determination: 0.805; standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT two-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion, as opposed to the time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
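
    The CV model the abstract relies on is a straight line relating total distance to time-to-exhaustion, d = ARC + CV·t, with CV the slope and the anaerobic running capacity (ARC) the intercept. The four data points below are invented purely to illustrate the fit.

```python
# Worked sketch of extracting critical velocity (CV) and anaerobic running
# capacity (ARC) from distance vs. time-to-exhaustion; data are invented.
import numpy as np

t = np.array([180.0, 300.0, 480.0, 720.0])     # time to exhaustion (s)
d = np.array([880.0, 1350.0, 2020.0, 2900.0])  # distance covered (m)

cv, arc = np.polyfit(t, d, 1)                  # slope = CV, intercept = ARC
print(f"CV  = {cv:.2f} m/s")                   # sustainable aerobic speed
print(f"ARC = {arc:.0f} m")                    # finite anaerobic distance
```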

  13. Matrix-assisted laser desorption/ionization mass spectrometry for the evaluation of the C-terminal lysine distribution of a recombinant monoclonal antibody.

    PubMed

    Lazar, Alexandru C; Kloczewiak, Marek A; Mazsaroff, Istvan

    2004-01-01

    Recombinant monoclonal antibodies produced using mammalian cell lines contain multiple chemical modifications. One specific modification resides on the C-terminus of the heavy chain. Enzymes inside the cell can cleave the C-terminal lysine from the heavy-chain molecules, and variants with and without C-terminal lysine can be produced. In order to fully characterize the protein, there is a need for analytical methods that are able to account for the different product variants. Conventional analytical methods used for the measurement of the distribution of the two different variants are based on chemical or enzymatic degradation of the protein followed by chromatographic separation of the degradation products. Chromatographic separations with gradient elution have long run times, and analyses of multiple samples are time-consuming. This paper reports development of a novel method for the determination of the relative amounts of the two C-terminal heavy-chain variants based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) measurements of the cyanogen bromide degraded recombinant monoclonal antibody products. The distribution of the variants is determined from the MALDI-TOF mass spectra by measuring the peak areas of the two C-terminal peptides. The assay was used for the assessment of the C-terminal lysine distribution in different development lots. The method was able to differentiate between the products obtained using the same cell line as well as between products obtained from different cell lines. Copyright 2004 John Wiley & Sons, Ltd.
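
    The distribution measurement described above reduces to a simple peak-area ratio between the two C-terminal peptide signals; the areas below are made-up numbers for illustration.

```python
# Hedged arithmetic sketch: C-terminal lysine distribution from MALDI-TOF
# peak areas of the two C-terminal peptide variants (invented values).
area_with_lys = 3.2e4    # peptide retaining the C-terminal lysine
area_des_lys = 1.28e5    # peptide lacking the C-terminal lysine

frac_lys = area_with_lys / (area_with_lys + area_des_lys)
print(f"{frac_lys:.1%} of heavy chains retain the C-terminal lysine")
```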

  14. Simultaneous quantification of naproxcinod and its active metabolite naproxen in rat plasma using LC-MS/MS: application to a pharmacokinetic study.

    PubMed

    Shi, Xiaowei; Shang, Weiding; Wang, Shuang; Xue, Na; Hao, Yanxia; Wang, Yabo; Sun, Mengmeng; Du, Yumin; Cao, Deying; Zhang, Kai; Shi, Qingwen

    2015-01-26

    In this study, a liquid chromatography-tandem mass spectrometry method was developed and validated to simultaneously determine naproxcinod and naproxen concentrations in rat plasma for the first time. Plasma samples were prepared by simple one-step extraction with methanol for protein precipitation, using only 50 μL of plasma. Separation was performed on a Synergi Fusion-RP C18 column with a run time of 4 min. Naproxcinod, naproxen and the internal standard were detected in positive ion mode using multiple reaction monitoring (MRM) of the transitions at m/z 348.2→302.2, 231.1→185.1 and 271.2→203.1, respectively. The calibration curves were linear (all correlation coefficients ≥0.9952) over the ranges 1.00-400 ng/mL for naproxcinod and 20.0-8000 ng/mL for naproxen. Accuracy was in the range of -8.1% to 8.7%, and the intra- and inter-day variations were ≤4.53%. The mean extraction recovery of all analytes exceeded 93.1%. Stability testing showed that naproxcinod and naproxen remained stable throughout the analytical procedure. After validation, the method was successfully applied to a pharmacokinetic study of naproxcinod and naproxen in rats. The AUC0-∞ of naproxen was 74.6 times larger than that of naproxcinod, indicating that naproxcinod is rapidly metabolized into naproxen in rats. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Steady state preparative multiple dual mode counter-current chromatography: Productivity and selectivity. Theory and experimental verification.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A

    2015-08-07

    In the steady state (SS) multiple dual mode (MDM) counter-current chromatography (CCC), at the beginning of the first step of every cycle the sample dissolved in one of the phases is continuously fed into a CCC device over a constant time, not exceeding the run time of the first step. After a certain number of cycles, the steady state regime is achieved, where concentrations vary over time during each cycle, however, the concentration profiles of solutes eluted with both phases remain constant in all subsequent cycles. The objective of this work was to develop analytical expressions to describe the SS MDM CCC separation processes, which can be helpful to simulate and design these processes and select a suitable compromise between the productivity and the selectivity in the preparative and production CCC separations. Experiments carried out using model mixtures of compounds from the GUESSmix with solvent system hexane/ethyl acetate/methanol/water demonstrated a reasonable agreement between the predictions of the theory and the experimental results. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Advanced Software V&V for Civil Aviation and Autonomy

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume P.

    2017-01-01

    With the advances in high-performance computing platforms (e.g., advanced graphics processing units or multi-core processors), computationally intensive software techniques such as those used in artificial intelligence or formal methods have provided us with an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building in safety at design time, as in aircraft engines or software verification and validation, and others can introduce safety benefits during operations, as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied earlier and earlier in the design life cycle and thus also help reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.

  17. Assessing the thermo-mechanical TaMeTirE model in offline vehicle simulation and driving simulator tests

    NASA Astrophysics Data System (ADS)

    Durand-Gasselin, Benoit; Dailliez, Thibault; Mössner-Beigel, Monika; Knorr, Stephanie; Rauh, Jochen

    2010-12-01

    This paper presents experiences using Michelin's thermo-mechanical TaMeTirE tyre model for real-time handling applications in the field of advanced passenger car simulation. Passenger car handling simulations were performed using the tyre model in a full-vehicle real-time environment in order to assess TaMeTirE's level of consistency with real on-track handling behaviour. To achieve this goal, a first offline comparison with a state-of-the-art handling tyre model was carried out on three handling manoeuvres. Then, online real-time simulations of steering-wheel steps and slaloms in a straight line were run on Daimler's driving simulator by skilled and unskilled drivers. Two analytical tyre temperature effects and two inflation pressure effects were investigated in order to assess their impact on the handling behaviour of the vehicle. This paper underlines the realism of the handling simulation results obtained with TaMeTirE, and shows the significant impact of a pressure or temperature effect on the handling behaviour of a car.

  18. High-throughput analysis of bergamot essential oil by fast solid-phase microextraction-capillary gas chromatography-flame ionization detection.

    PubMed

    Tranchida, Peter Quinto; Presti, Maria Lo; Costa, Rosaria; Dugo, Paola; Dugo, Giovanni; Mondello, Luigi

    2006-01-20

    The advantages of using a narrow-bore column in headspace solid-phase microextraction-gas chromatographic (HS-SPME-GC) analysis are investigated. An automated rapid HS-SPME-GC method for the determination of volatile compounds in a complex sample (bergamot essential oil) was developed. A low-capacity (7 μm) SPME fibre was employed, enabling a short equilibration time (15 min). The absorbed volatile compounds were then separated in 12.5 min on a 10 m × 0.1 mm I.D. capillary. The fast GC method was characterized by relatively moderate GC parameters (head pressure: 173 kPa; temperature program rate: 12°C/min). The low-capacity fibre also suited the reduced sample capacity of the capillary employed; hence column overloading was avoided. Analytical repeatability was determined in terms of retention times (maximum RSD: 0.32%) and peak areas (maximum RSD: 9.80%). The results were compared to those derived from a conventional HS-SPME-GC application on the same sample (using a 30 μm SPME fibre and a 0.25 mm I.D. capillary). A great reduction of analytical time was obtained with regard to both the SPME equilibration and the GC run time, each of which required 50 min in the conventional approach. Peak resolution was altogether comparable in both applications. Although a slight loss of sensitivity was observed in the rapid approach (generally within the 25-50% range), this did not impair the detection of the peaks of interest. Finally, the selectivities of the 30 and 7 μm fibres were evaluated and, as expected, were in good agreement.

  19. Development of Infants' Segmentation of Words from Native Speech: A Meta-Analytic Approach

    ERIC Educational Resources Information Center

    Bergmann, Christina; Cristia, Alejandrina

    2016-01-01

    Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their…

  20. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  1. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  2. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  3. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  4. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  5. Energy Efficiency of Induction Motors Running Off Frequency Converters with Pulse-Width Voltage Modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shvetsov, N. K., E-mail: elmash@em.ispu.ru

    2016-11-15

    The results of calculations of the increase in losses in an induction motor with frequency control and different forms of the supply voltage are presented. The calculations were performed by an analytic method based on harmonic analysis of the supply voltage as well as numerical calculation of the electromagnetic processes by the finite-element method.
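
    As a rough illustration of the harmonic-analysis route, the sketch below synthesizes a crude PWM phase voltage, takes its spectrum, and separates the fundamental from the residual harmonic content that largely ends up as extra loss. The waveform, frequencies, and the loss proxy are all illustrative assumptions, not the paper's model.

```python
# Hedged sketch: spectrum of a PWM-like voltage, splitting the fundamental
# from higher-harmonic content (a crude proxy for added motor losses).
import numpy as np
from scipy import signal

fs, f1, fc = 50_000, 50.0, 2_000.0     # sample rate, fundamental, carrier (Hz)
t = np.arange(0, 0.02, 1 / fs)         # one fundamental period
tri = signal.sawtooth(2 * np.pi * fc * t, width=0.5)        # triangle carrier
pwm = np.where(np.sin(2 * np.pi * f1 * t) > tri, 1.0, -1.0) # PWM comparison

spectrum = np.abs(np.fft.rfft(pwm)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f1))]
residual = np.sum(spectrum**2) - fund**2   # non-fundamental spectral content
print(f"fundamental ≈ {fund:.3f}, residual harmonic power ≈ {residual:.3f}")
```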

  6. Level 1 environmental assessment performance evaluation. Final report jun 77-oct 78

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, E.D.; Smith, F.; Wagoner, D.E.

    1979-02-01

    The report gives results of a two-phased evaluation of Level 1 environmental assessment procedures. Results from Phase I, a field evaluation of the Source Assessment Sampling System (SASS), showed that the SASS train performed well within the desired factor-of-3 Level 1 accuracy limit. Three sample runs were made with two SASS trains sampling simultaneously and from approximately the same sampling point in a horizontal duct. A Method 5 train was used to estimate the 'true' particulate loading. The sampling systems were upstream of the control devices to ensure collection of sufficient material for comparison of total particulate, particle size distribution, organic classes, and trace elements. Phase II consisted of providing each of three organizations with three types of control samples to challenge the spectrum of Level 1 analytical procedures: an artificial sample in methylene chloride, an artificial sample on a flyash matrix, and a real sample composed of the combined XAD-2 resin extracts from all Phase I runs. Phase II results showed that when the Level 1 analytical procedures are carefully applied, data of acceptable accuracy are obtained. Estimates of intralaboratory and interlaboratory precision are made.

  7. Comparison of Sprint and Run Times with Performance on the Wingate Anaerobic Test.

    ERIC Educational Resources Information Center

    Tharp, Gerald D.; And Others

    1985-01-01

    Male volunteers were studied to examine the relationship between the Wingate Anaerobic Test (WAnT) and sprint-run times and to determine the influence of age and weight. Results indicate the WAnT is a moderate predictor of dash and run times but becomes a stronger predictor when adjusted for body weight. (Author/MT)

  8. 12 CFR 1102.306 - Procedures for requesting records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... section; (B) Where the running of such time is suspended for the calculation of a cost estimate for the... section; (C) Where the running of such time is suspended for the payment of fees pursuant to the paragraph... of the invoice. (ix) The time limit for the ASC to respond to a request will not begin to run until...

  9. Acute differences in foot strike and spatiotemporal variables for shod, barefoot or minimalist male runners.

    PubMed

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-05-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running.

  10. Acute Differences in Foot Strike and Spatiotemporal Variables for Shod, Barefoot or Minimalist Male Runners

    PubMed Central

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-01-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480

  11. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    PubMed

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury have been identified in novice runners; however, not much is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency, and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis (sketched below). The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend could be observed towards running three times per week being more hazardous than twice. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
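
    For readers unfamiliar with extended Cox models with time-varying covariates, here is a hedged sketch using the lifelines library (not the authors' code); the long-format file and column names, one row per runner per week with the previous week's training variables, are illustrative assumptions.

```python
# Hedged sketch of an extended Cox regression with time-varying training
# covariates using lifelines. File and column names are hypothetical.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per runner-week, covering [week_start, week_stop),
# with last week's volume (min), frequency, and mean RPE as covariates.
df = pd.read_csv("training_weeks.csv")

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="runner_id", event_col="injured",
        start_col="week_start", stop_col="week_stop")
ctv.print_summary()   # hazard ratios for volume, frequency, intensity
```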

  12. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    2018-03-01

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of each test are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.

  13. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel Jose

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results of each test are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.

  14. Utility perspective on USEPA analytical methods program redirection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, B.; Davis, M.K.; Krasner, S.W.

    1996-11-01

    The Metropolitan Water District of Southern California (Metropolitan) is a public, municipal corporation, created by the State of California, which wholesales supplemental water through 27 member agencies (cities and water districts). Metropolitan serves nearly 16 million people in an area along the coastal plain of Southern California that covers approximately 5200 square miles. Water deliveries have averaged up to 2.5 million acre-feet per year. Metropolitan's Water Quality Laboratory (WQL) conducts compliance monitoring of its source and finished drinking waters for chemical and microbial constituents. The laboratory maintains certification for a large number and variety of analytical procedures. The WQL operates in a 17,000-square-foot facility. The equipment is state-of-the-art analytical instrumentation. The staff consists of 40 professional chemists and microbiologists whose experience and expertise are extensive and often highly specialized. Staff turnover is very low, and the laboratory is consistently, efficiently, and expertly run.

  15. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, which yield the areas inland flooded by the tsunami wave. These simulations require input data such as high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters, and their computational cost is still excessive. An important restriction on the elaboration of maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with offshore numerical simulations that do not include coastal inundation. In this case the numerical simulations are faster, but limitations are introduced because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles were parameterized using five parameters (two depths and three slopes). In addition, tsunami waves were parameterized by their height and period. As an application of the numerical-flume methodology, the parameterized coastal profiles and tsunami waves were combined to build a populated database of run-up calculations; the combinations were computed by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated quickly by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale (regional or national) domain.
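
    For the empirical-analytical branch mentioned above, the classic Synolakis (1987) run-up law for a non-breaking solitary wave on a plane beach is easy to state and compute. It applies only to idealized plane-beach profiles and is shown here as context for the hybrid approach; the example numbers are invented.

```python
# Synolakis (1987) run-up law for a non-breaking solitary wave on a plane
# beach: R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4). Example values are
# illustrative; validity requires the wave to remain non-breaking.
import math

def solitary_runup(H, d, beta_rad):
    """H: offshore wave height (m); d: offshore depth (m);
    beta_rad: beach slope angle (rad). Returns run-up R in metres."""
    return 2.831 * math.sqrt(1.0 / math.tan(beta_rad)) * (H / d) ** 1.25 * d

# A 2 m wave in 20 m depth on a fairly steep 1:5 beach slope
print(f"R = {solitary_runup(2.0, 20.0, math.atan(1 / 5)):.1f} m")  # ~7.1 m
```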

  16. Improving Causal Inferences in Meta-analyses of Longitudinal Studies: Spanking as an Illustration.

    PubMed

    Larzelere, Robert E; Gunnoe, Marjorie Lindner; Ferguson, Christopher J

    2018-05-24

    To evaluate and improve the validity of causal inferences from meta-analyses of longitudinal studies, two adjustments for Time-1 outcome scores and a temporally backwards test are demonstrated. Causal inferences would be supported by robust results across both adjustment methods, distinct from results run backwards. A systematic strategy for evaluating potential confounds is also introduced. The methods are illustrated by assessing the impact of spanking on subsequent externalizing problems (child age: 18 months to 11 years). Significant results indicated a small risk or a small benefit of spanking, depending on the adjustment method. These meta-analytic methods are applicable for research on alternatives to spanking and other developmental science topics. The underlying principles can also improve causal inferences in individual studies. © 2018 Society for Research in Child Development.

  17. Simulating a High-Spin Black Hole-Neutron Star Binary

    NASA Astrophysics Data System (ADS)

    Derby, John; Lovelace, Geoffrey; Duez, Matt; Foucart, Francois; Simulating Extreme Spacetimes (SXS) Collaboration

    2017-01-01

    During their first observing run (fall 2015), Advanced LIGO detected gravitational waves from merging black holes. In its future observations, LIGO could detect black hole-neutron star (BHNS) binaries. It is important to have numerical simulations to predict these waves, to help find as many of them as possible and to estimate the sources' properties, because analytic approximations fail at times near merger. Numerical models of the disk formed when the black hole tears apart the neutron star can also help us learn about these systems' potential electromagnetic counterparts. One area of the parameter space for BHNS systems that is particularly challenging is simulations with high black hole spin. I will present results from a new BHNS simulation with a black hole spin of 90% of the theoretical maximum. Some of us are part of the SXS Collaboration, but not all.

  18. Investigating rhodamine B-labeled peptoids: scopes and limitations of its applications.

    PubMed

    Birtalan, Esther; Rudat, Birgit; Kölmel, Dominik K; Fritz, Daniel; Vollrath, Sidonie B L; Schepers, Ute; Bräse, Stefan

    2011-01-01

    The fluorophore rhodamine B is often used in biological assays. It is inexpensive, robust under a variety of reaction conditions, can be covalently linked to bioactive molecules, and has suitable spectral properties in terms of absorption and fluorescence wavelength. Nonetheless, there are some drawbacks: it can readily form a spirolactam, which is nonfluorescent, and therefore may not be the dye of choice for all fluorescence microscopy applications. Herein, this spirolactam formation was observed while purifying such a labeled peptoid by high-performance liquid chromatography (HPLC) and was monitored in detail through a series of analytical HPLC runs over time. Additionally, a small library of eight peptoids labeled with rhodamine B was synthesized. Analysis of the absorption properties of these molecules demonstrated that the problem of fluorescence loss can be overcome by coupling secondary amines with rhodamine B.

  19. Stochastic Modelling of Wireless Energy Transfer

    NASA Technical Reports Server (NTRS)

    Veilleux, Shaun; Almaghasilah, Ahmed; Abedi, Ali; Wilkerson, DeLisa

    2017-01-01

    This study investigates the efficiency of a new method of powering remote sensors by means of wireless energy transfer. The increased use of sensors for data collection comes with the inherent cost of supplying power from sources such as power cables or batteries. Wireless energy transfer technology eliminates the need for power cables or periodic battery replacement. The time and cost of setting up or expanding a sensor network are reduced, while sensors can be placed in areas where running power cables or replacing batteries is not feasible. This paper models the wireless channels for power and data separately. Smart scheduling is proposed for the data channel to avoid transmitting on a noisy channel, where the probability of data loss is high, in order to improve power efficiency; a sketch of the idea follows. Analytical models have been developed and verified using simulations.
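
    Here is a hedged sketch of the scheduling idea: hold packets while the estimated channel quality is poor, since retransmissions waste harvested energy, and flush the queue once the channel clears a threshold. The SNR estimator and threshold are illustrative stand-ins, not the paper's channel model.

```python
# Toy "smart scheduling" for the data channel: queue packets while the
# estimated SNR is below a threshold, transmit when the channel is good.
def run_schedule(packets, snr_trace_db, threshold_db=10.0):
    """packets and snr_trace_db are aligned per time slot; returns the
    packets delivered and those still queued when the trace ends."""
    queue, sent = [], []
    for packet, snr_db in zip(packets, snr_trace_db):
        queue.append(packet)
        if snr_db >= threshold_db:       # channel good: flush the queue
            sent.extend(queue)
            queue.clear()
    return sent, queue
```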

  20. Experimental and analytical study of thermal acoustic oscillations. [in the transfer and storage of cryogens

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Dean, W. G.; Karu, Z. S.

    1976-01-01

    The thermal acoustic oscillations (TAO) data base was expanded by running a large number of tubes over a wide range of parameters known to affect the TAO phenomenon. These parameters include tube length, wall thickness, diameter, material, insertion length, and length-to-diameter ratio. Emphasis was placed on obtaining good boiloff data. A large quantity of data was obtained, reduced, correlated, and analyzed, and is presented along with comparisons to previous types of correlations. These comparisons show that the boiloff data did not correlate with intensity. The data did correlate in the form used by Rott, that is, boiloff versus TAO pressure squared times frequency to the one-half power. However, this latter correlation required a different set of correlation constants (slope and intercept) for each tube tested.
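
    The Rott-form correlation named above can be written as boiloff ≈ a·(p²·√f) + b, with the slope a and intercept b fitted per tube; the sketch below fits invented data points only to show the form.

```python
# Hedged sketch of fitting the Rott-form correlation: boiloff versus
# TAO pressure squared times the square root of frequency. Data invented.
import numpy as np

p = np.array([2.0, 3.1, 4.4, 5.2])         # TAO pressure amplitude (kPa)
f = np.array([40.0, 55.0, 70.0, 85.0])     # oscillation frequency (Hz)
boiloff = np.array([0.8, 1.9, 4.1, 6.0])   # boiloff rate (arbitrary units)

x = p**2 * np.sqrt(f)                      # correlation variable
slope, intercept = np.polyfit(x, boiloff, 1)
print(f"boiloff ≈ {slope:.3g} * p²√f {intercept:+.3g}")
```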

  1. Detection of algorithmic trading

    NASA Astrophysics Data System (ADS)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to capture the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. The quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time, whereas the price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
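
    One plausible reading of the quote volatility ratio is sketched below: the fraction of quote updates that reverse the direction of the midpoint's movement, since rapid oscillation of the best bid and ask is the pattern the abstract associates with algorithmic quoting. This is our illustrative reading, not the authors' exact definition.

```python
# Illustrative quote-volatility-style ratio: share of midpoint moves that
# reverse direction (rapid oscillation suggests algorithmic quoting).
import numpy as np

def quote_volatility_ratio(best_ask, best_bid):
    mid = (np.asarray(best_ask) + np.asarray(best_bid)) / 2.0
    moves = np.sign(np.diff(mid))
    moves = moves[moves != 0]              # drop ticks with unchanged mid
    if moves.size < 2:
        return 0.0
    reversals = np.sum(moves[1:] != moves[:-1])
    return reversals / (moves.size - 1)
```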

  2. An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.

    NASA Technical Reports Server (NTRS)

    Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.

    1973-01-01

    A method for economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite-width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan, by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as to the analytical results of other investigators.

  3. A Semi-Empirical Noise Modeling Method for Helicopter Maneuvering Flight Operations

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric; Schmitz, Fredric; Sickenberger, Richard D.

    2012-01-01

    A new model for Blade-Vortex Interaction noise generation during maneuvering flight is developed in this paper. Acoustic and performance data from both flight and wind tunnels are used to derive a non-dimensional and analytical performance/acoustic model that describes BVI noise in steady flight. The model is extended to transient maneuvering flight (pure pitch and roll transients) by using quasisteady assumptions throughout the prescribed maneuvers. Ground noise measurements, taken during maneuvering flight of a Bell 206B helicopter, show that many of the noise radiation details are captured. The result is a computationally efficient Blade-Vortex Interaction noise model with sufficient accuracy to account for transient maneuvering flight. The code can be run in real time to predict transient maneuver noise and is suitable for use in an acoustic mission-planning tool.

  4. A Primer on Infectious Disease Bacterial Genomics

    PubMed Central

    Petkau, Aaron; Knox, Natalie; Graham, Morag; Van Domselaar, Gary

    2016-01-01

    SUMMARY The number of large-scale genomics projects is increasing due to the availability of affordable high-throughput sequencing (HTS) technologies. The use of HTS for bacterial infectious disease research is attractive because one whole-genome sequencing (WGS) run can replace multiple assays for bacterial typing, molecular epidemiology investigations, and more in-depth pathogenomic studies. The computational resources and bioinformatics expertise required to accommodate and analyze the large amounts of data pose new challenges for researchers embarking on genomics projects for the first time. Here, we present a comprehensive overview of a bacterial genomics project from beginning to end, with a particular focus on the planning and computational requirements for HTS data, and provide a general understanding of the analytical concepts needed to develop a workflow that will meet the objectives and goals of HTS projects. PMID:28590251

  5. Reference manual for generation and analysis of Habitat Time Series: version II

    USGS Publications Warehouse

    Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.

    1990-01-01

    The selection of an instream flow requirement for water resource management often requires a review of how the physical habitat changes through time. This review is referred to as "time series analysis." The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual to TSLIB and is intended to be a guide to the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment in which the user has a brief on-line description of each TSLIB program along with the capability to run the TSLIB programs from within the interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)," offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. The other chapters cover the different aspects of using the models: (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of monthly time series data; and (9) generation, analysis, and display of annual time series data. Each section includes documentation for the programs therein, with at least one page of information for each program, including a program description, instructions for running the program, and sample output. The appendixes contain the following: (A) sample file formats; (B) descriptions of default filenames; (C) an alphabetical summary of batch-procedure files; (D) installing and running TSLIB on a microcomputer; (E) running TSLIB on a CDC Cyber computer; (F) using the TSLIB user interface program (RTSM); and (G) running WATSTORE on the USGS Amdahl mainframe computer.
The number for this version of TSLIB--Version II--is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago, but operators tended to use and manage them as individual programs. Therefore, we consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.

  6. Commercial Earth Observation

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Through the Earth Observation Commercial Applications Program (EOCAP) at Stennis Space Center, Applied Analysis, Inc. developed a new tool for analyzing remotely sensed data. The Applied Analysis Spectral Analytical Process (AASAP) detects or classifies objects smaller than a pixel and removes the background, significantly enhancing the discrimination among surface features in imagery. ERDAS, Inc. offers the system as a modular addition to its ERDAS IMAGINE software package for remote sensing applications. Also through EOCAP, Ocean and Coastal Environmental Sensing (OCENS) developed SeaStation for marine users. SeaStation is a low-cost, portable, shipboard satellite groundstation integrated with vessel catch and product monitoring software. Linked to the Global Positioning System, SeaStation provides real-time relationships between vessel position and data such as sea surface temperature, weather conditions, and ice edge location, allowing the user to increase fishing productivity and improve vessel safety. EOCAP is a government/industry cooperative program designed to encourage commercial applications of remote sensing; projects can run three years or more, and funding is shared by NASA and the private sector participant.

  7. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions, and the outputs are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters, so the distinct parameter settings from runs at different resolutions constitute a multi-resolution, high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. This multi-resolution, high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition, and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.
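
    NPCP itself is a bespoke design, but the baseline it extends is an ordinary parallel coordinates plot of per-run parameter settings. A minimal sketch with standard tooling follows; the parameter names, values, and resolution labels are invented for illustration:

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Hypothetical ensemble: one row per simulation run, one column per
    # convective parameter, plus the resolution the run was performed at.
    runs = pd.DataFrame({
        "entrainment":       [0.08, 0.10, 0.12, 0.09, 0.11, 0.13],
        "cape_timescale_s":  [1800, 2700, 3600, 2400, 3000, 3300],
        "trigger_threshold": [0.65, 0.70, 0.75, 0.68, 0.72, 0.74],
        "resolution":        ["25km"] * 3 + ["100km"] * 3,
    })

    # Rescale each parameter to [0, 1] so all axes share a comparable range.
    params = ["entrainment", "cape_timescale_s", "trigger_threshold"]
    runs[params] = (runs[params] - runs[params].min()) / (runs[params].max() - runs[params].min())

    # One vertical axis per parameter; line color keyed to resolution, so both
    # intra-resolution and inter-resolution correlations can be compared.
    parallel_coordinates(runs, class_column="resolution", colormap="viridis")
    plt.title("Convective parameter settings per simulation run")
    plt.show()
    ```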

  8. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can easily be made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules; their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
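
    For readers unfamiliar with the rule notation, here is a minimal sketch of the two rule families, assuming their common textbook formulations (the paper's nomograms and exact parameterizations are not reproduced here):

    ```python
    from scipy.stats import chi2

    def violates_1ks(controls, target, sd, k=3.0):
        """1ks rule: reject the analytical run if any single control result
        deviates from its target by more than k standard deviations."""
        return any(abs(x - target) > k * sd for x in controls)

    def violates_xbar_chi2(controls, target, sd, alpha=0.01):
        """One common chi-square check on within-run imprecision: reject if
        the sum of squared z-scores exceeds the upper critical value."""
        stat = sum(((x - target) / sd) ** 2 for x in controls)
        return stat > chi2.ppf(1 - alpha, df=len(controls))

    # Example QC event with three control measurements:
    print(violates_1ks([101.2, 99.0, 104.8], target=100.0, sd=1.5))
    print(violates_xbar_chi2([101.2, 99.0, 104.8], target=100.0, sd=1.5))
    ```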

  9. A Monotonic Degradation Assessment Index of Rolling Bearings Using Fuzzy Support Vector Data Description and Running Time

    PubMed Central

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime, and saving maintenance costs. Performance degradation, however, is strongly fuzzy, and the dynamic information is random and fuzzy as well, which makes assessing bearing performance degradation a challenge. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and increases stably as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the oscillating trend of ε̄ disagrees with the irreversible nature of damage development. The running time is therefore introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all the advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI faithfully reflects the growth of damage with running time. PMID:23112591

  10. A monotonic degradation assessment index of rolling bearings using fuzzy support vector data description and running time.

    PubMed

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime, and saving maintenance costs. Performance degradation, however, is strongly fuzzy, and the dynamic information is random and fuzzy as well, which makes assessing bearing performance degradation a challenge. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and increases stably as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the oscillating trend of ε̄ disagrees with the irreversible nature of damage development. The running time is therefore introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all the advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI faithfully reflects the growth of damage with running time.

  11. A miniaturized capacitively coupled plasma microtorch optical emission spectrometer and a Rh coiled-filament as small-sized electrothermal vaporization device for simultaneous determination of volatile elements from liquid microsamples: spectral and analytical characterization.

    PubMed

    Frentiu, Tiberiu; Darvasi, Eugen; Butaciu, Sinziana; Ponta, Michaela; Petreus, Dorin; Mihaltan, Alin I; Frentiu, Maria

    2014-11-01

    A low-power, low-argon-consumption (13.56 MHz, 15 W, 150 ml min⁻¹) capacitively coupled plasma microtorch, interfaced with a low-resolution microspectrometer and using a small-sized electrothermal vaporization Rh coiled-filament device to introduce liquid microsamples into the plasma, was investigated for the simultaneous determination of several volatile elements of environmental interest. Constructive details, spectral and analytical characteristics, and optimum operating conditions of the laboratory equipment for the simultaneous determination of Ag, Cd, Cu, Pb, and Zn, which require low vaporization power, are provided. The method involves drying a 10 μl sample at 100°C, vaporization at 1500°C, and emission measurement by capturing 20 successive spectral episodes, each at an integration time of 500 ms. Experiments showed that the emission of the elements and the plasma background were disturbed by the presence of a complex matrix and by the hot Ar flow transporting the microsample into the plasma. The emission spectrum of the elements is simple, dominated by the resonance lines. The analytical system provided detection limits in the ng ml⁻¹ range: 0.5 (Ag), 1.5 (Cd), 5.6 (Cu), 20 (Pb), and 3 (Zn), and absolute detection limits of the order of pg: 5 (Ag), 15 (Cd), 56 (Cu), 200 (Pb), and 30 (Zn). The utility and capability of the miniaturized analytical system were demonstrated in the simultaneous determination of elements in soil and water sediment using the standard addition method to compensate for the non-spectral effects of alkali and alkaline earth elements. The analysis of eight certified reference materials yielded reliable results, with recovery in the range of 95-108% and precision of 0.5-9.0% for the five examined elements. The proposed miniaturized analytical system is attractive due to the simple construction of the electrothermal vaporization device and microtorch, the low costs associated with plasma generation, its high analytical sensitivity, and its ease of use for simultaneous multielemental analysis of liquid microsamples. Copyright © 2014. Published by Elsevier B.V.

  12. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope, donated from elsewhere in the federal government, for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time; the analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, the impact of removing small radiation couplings on run time and accuracy was investigated. Together, these techniques allowed the models to produce meaningful results within run times that met project schedule deadlines.
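
    The cutoff criteria for the last technique are not detailed in this abstract; as a hedged sketch, pruning a radiation-coupling network by a relative threshold might look like the following (the data layout and tolerance are assumptions):

    ```python
    def prune_radiation_couplings(radks, rel_tol=1e-4):
        """Drop radiation couplings that are negligible next to the strongest
        coupling seen by either endpoint node.

        radks: dict mapping node pairs (i, j) to radiation conductances.
        Returns a reduced dict; fewer couplings means fewer terms per
        time-step in the thermal solver, at some cost in accuracy.
        """
        strongest = {}  # node -> largest coupling attached to it
        for (i, j), g in radks.items():
            strongest[i] = max(strongest.get(i, 0.0), g)
            strongest[j] = max(strongest.get(j, 0.0), g)
        return {(i, j): g for (i, j), g in radks.items()
                if g >= rel_tol * min(strongest[i], strongest[j])}

    # Example: the weak 1e-9 coupling is dropped, the other two survive.
    reduced = prune_radiation_couplings({(1, 2): 2e-3, (2, 3): 5e-4, (1, 3): 1e-9})
    ```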

  13. 77 FR 50198 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...

  14. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
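
    The abstract describes the monitor-and-revert pattern often called a simplex architecture. A minimal, hypothetical sketch of that switching logic follows; the controllers and envelope check are placeholders, not NASA's engine-control design:

    ```python
    class RunTimeAssurance:
        """Use the advanced controller while the monitored state stays inside
        a certified safe envelope; on any violation, latch over to the
        simpler, certified fallback controller."""

        def __init__(self, advanced, baseline, in_envelope):
            self.advanced = advanced        # high-performance, uncertified law
            self.baseline = baseline        # simple, certified law
            self.in_envelope = in_envelope  # predicate: state -> bool
            self.reverted = False

        def command(self, state):
            if self.reverted or not self.in_envelope(state):
                self.reverted = True        # anomaly detected: stay on fallback
                return self.baseline(state)
            return self.advanced(state)

    # Toy example: fall back when the speed-tracking error leaves +/-5%.
    rta = RunTimeAssurance(
        advanced=lambda s: 1.2 * s["error"],
        baseline=lambda s: 0.5 * s["error"],
        in_envelope=lambda s: abs(s["error"]) < 0.05,
    )
    u = rta.command({"error": 0.02})   # inside the envelope: advanced law
    ```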

  15. An analytical and experimental study of injection-locked two-port oscillators

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.; Downey, Alan N.

    1987-01-01

    A Ku-band IMPATT oscillator with two distinct output power ports was injection-locked alternately at each port. The transmission locking bandwidth was nearly the same for either port. The port with the lower free-running power had a reflection locking bandwidth narrower than its transmission locking bandwidth; just the opposite was found at the other port. A detailed analytical model for two-port injection-locked oscillators is presented, and its results agree quite well with the experiments. A critique of the literature on this topic is included to clear up misconceptions and errors. It is concluded that two-port injection-locked oscillators may prove useful in certain communication systems.
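
    For orientation, the classical single-port estimate against which such locking bandwidths are usually compared is Adler's relation (contextual background only, not the two-port model developed in the paper):

    ```latex
    % Adler's estimate of the one-sided locking range of an injection-locked
    % oscillator with resonator quality factor Q and free-running frequency w0:
    \[
      \Delta\omega_{\mathrm{lock}} \;\approx\; \frac{\omega_0}{2Q}\,
      \frac{V_{\mathrm{inj}}}{V_{\mathrm{osc}}},
      \qquad V_{\mathrm{inj}} \ll V_{\mathrm{osc}},
    \]
    % where V_inj/V_osc is the ratio of injected to free-running signal
    % amplitude; stronger injection or lower Q widens the locking bandwidth.
    ```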

  16. 40 CFR Table 1b to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...

  17. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform the execution-time preprocessing, and executors, transformed versions of the source code loop structures that carry out the calculations planned by the inspectors. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
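
    A stripped-down, hypothetical rendering of the inspector/executor split follows; it treats any shared location as a dependence (so even read-read sharing serializes), which is more conservative than a real implementation:

    ```python
    from collections import defaultdict

    def inspector(accesses):
        """Inspector: assign each iteration to a wavefront so that iterations
        in the same wavefront touch disjoint locations and may run in parallel.
        accesses[i] is the set of array locations iteration i reads or writes."""
        last_wave = {}                 # location -> wavefront of its last user
        schedule = defaultdict(list)   # wavefront -> iterations
        for i, locs in enumerate(accesses):
            w = max((last_wave[l] + 1 for l in locs if l in last_wave), default=0)
            for l in locs:
                last_wave[l] = w
            schedule[w].append(i)
        return [schedule[w] for w in sorted(schedule)]

    def executor(schedule, body):
        """Executor: run wavefronts in order; iterations within one wavefront
        are independent and could be dispatched to parallel workers."""
        for wave in schedule:
            for i in wave:
                body(i)

    # Loop with indirect indexing a[idx[i]]: dependences known only at run time.
    idx = [0, 1, 0, 2, 1]
    waves = inspector([{j} for j in idx])   # -> [[0, 1, 3], [2, 4]]
    executor(waves, lambda i: print("iteration", i))
    ```

    Because the inspector's output depends only on the dependency structure, its cost is amortized whenever the same schedule is reused across repeated executions of the loop, as the abstract notes.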

  18. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
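
    The core trick behind such event-based implementations is replacing per-time-step integration with the closed-form solution of the trace dynamics, applied only when a spike arrives. A minimal sketch of that general technique (not the full BCPNN state machine, which couples several such traces):

    ```python
    import math

    def decay_trace(z, t_last, t_now, tau_ms):
        """Closed-form update of dz/dt = -z/tau between events: no work is
        done during the silent interval, only at the moment a spike arrives."""
        return z * math.exp(-(t_now - t_last) / tau_ms)

    # Trace driven by spikes at 5 ms and 12 ms (times and tau are illustrative).
    z, t_last = 0.0, 0.0
    for t_spike in (5.0, 12.0):
        z = decay_trace(z, t_last, t_spike, tau_ms=20.0) + 1.0  # decay, then impulse
        t_last = t_spike
    ```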

  19. Blades Forced Vibration Under Aero-Elastic Excitation Modeled by Van der Pol

    NASA Astrophysics Data System (ADS)

    Pust, Ladislav; Pesek, Ludek

    This paper employs a new analytical approach to model the influence of aerodynamic excitation on the dynamics of a bladed cascade in the flutter state. Flutter is an aero-elastic phenomenon linked to the interaction of the flow and the traveling deformation wave in the cascade when only the damping of the cascade changes. As a case study, the dynamic properties of a five-blade bunch excited by running harmonic external forces and aerodynamic self-excited forces are investigated. The blade bunch is linked in the shroud by viscous-elastic damping elements. The external running excitation depends on the ratio of stator and rotor blade numbers and corresponds to the real type of excitation in a steam turbine. The aerodynamic self-excited forces are modeled by two types of Van der Pol nonlinear models. The influence of the interaction of both types of self-excitation with the external running excitation is investigated through the response curves.
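
    For reference, the classical Van der Pol oscillator underlying such self-excitation models has the standard form below (the paper's two specific variants are not reproduced here):

    ```latex
    % Van der Pol oscillator: negative damping at small amplitude pumps
    % energy in, positive damping at large amplitude dissipates it, yielding
    % a stable limit cycle -- a convenient surrogate for aerodynamic
    % self-excitation.
    \[
      \ddot{x} \;-\; \varepsilon \left( 1 - x^{2} \right) \dot{x}
      \;+\; \omega_0^{2}\, x \;=\; 0 ,
      \qquad \varepsilon > 0 .
    \]
    ```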

  20. Modeling of Aerodynamic Force Acting in Tunnel for Analysis of Riding Comfort in a Train

    NASA Astrophysics Data System (ADS)

    Kikko, Satoshi; Tanifuji, Katsuya; Sakanoue, Kei; Nanba, Kouichiro

    In this paper, we model the aerodynamic force that acts on a train running at high speed in a tunnel. An analytical model of the aerodynamic force is developed from pressure data measured on the car-body sides of a test train running at the maximum revenue operation speed. Simulating an 8-car train subjected to the modeled aerodynamic force gives the following results. The simulated car-body vibration corresponds to the actual vibration, both qualitatively and quantitatively, for the cars at the rear of the train. The separation of the airflow at the tail end of the train increases the yawing vibration of the tail-end car, while it has little effect on the car-body vibration of the adjoining car. The effect of the moving velocity of the aerodynamic force on the car-body vibration is also clarified: simulating under the assumption of a stationary aerodynamic force can markedly increase the predicted car-body vibration.
