Parvin, C A
1993-03-01
The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
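The closed-form character of within-run QC rules mentioned above can be illustrated directly: for a mean rule on Gaussian control observations, the probability of run rejection under a systematic shift follows from the normal CDF. A minimal sketch, assuming two control observations and a control limit of k standard deviations of the run mean (the specific limit and rule here are illustrative assumptions, not the paper's exact rules):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_rule_power(shift_sd, k=3.0, n=2):
    """Probability that the mean rule rejects a run: the standardized mean of
    n control observations falls outside +/- k, given a systematic shift
    expressed in SD units."""
    m = math.sqrt(n) * shift_sd  # mean of the standardized run mean
    return phi(-k - m) + (1.0 - phi(k - m))

# false-rejection probability with no error, and power at a 2 SD systematic shift
p_false = mean_rule_power(0.0)
p_shift2 = mean_rule_power(2.0)
```

The same construction extends to other within-run rules by replacing the run-mean statistic with the relevant test statistic.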
SPIRE Data Evaluation and Nuclear IR Fluorescence Processes.
1982-11-30
so that all isotopes can be dealt with in a single run rather than a number of separate runs. At lower altitudes the radiance calculation needs to be... approximation can be inferred from the work of Neuendorffer (1982) on developing an analytic expression for the absorption of a single non-overlapping line... personnel by using prominent atmospheric infrared features such as the OH maximum, the HNO3 maximum, the CO2 4.3 μm knee, etc. The azimuth however
SMC Standard: Evaluation and Test Requirements for Liquid Rocket Engines
2017-07-26
Run-Time Trends... 7.2.4 Steady State Analytical... Administration, 2008. 22. M. Singh, J. Vargo, D. Schiffer and J. Dello, "Safe Diagram – A Design and Reliability Tool for Turbine Blading," Dresser-Rand... allowed starts and run-time including ground acceptance testing, on-pad firings/aborts, and flight exposure. Part: A single piece (or two or more
Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.
Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A
2017-04-01
Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma, and it entailed a single sample-preparation method, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% at the lower limit of quantification and <14.3% at the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single, simple sample-preparation step followed by an LC-MS method with a short run time. This analytical method is therefore useful for both clinical and research purposes.
Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R; Seliskar, Carl J; Limbach, Patrick A; Heineman, William R
2010-08-01
Parallel separations using CE on a multilane microchip with multiplexed LIF detection is demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be determined in parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pK(a) determination of small molecule analytes is demonstrated with the multilane microchip.
Determination of residual cell culture media components by MEKC.
Zhang, Junge; Chakraborty, Utpal; Foley, Joe P
2009-11-01
Folic acid, hypoxanthine, mycophenolic acid, nicotinic acid, riboflavin, and xanthine are widely used as cell culture media components in monoclonal antibody manufacturing. These components are subsequently removed during the downstream purification processes. This article describes a single MEKC method that can simultaneously determine all the listed compounds with acceptable LOD and LOQ. All the analytes were successfully separated by MEKC using running buffer containing 40 mM SDS, 20 mM sodium phosphate, and 20 mM sodium borate at pH 9.0. The MEKC method was compared to the corresponding CZE method using the same running buffer containing no SDS. The effect of SDS concentration on separation, the pH of the running buffer, and the detection wavelength were studied and optimal MEKC conditions were established. Good linearity was obtained with correlation coefficients of more than 0.99 for all analytes. Specificity, accuracy, and precision were also evaluated. The recovery was in the range of 89-112%. The precision results were in the range of 1.7-4.8%. The experimentally determined data demonstrated that the MEKC method is applicable to the determination of the six analytes in in-process samples from monoclonal antibody manufacturing processes.
40 CFR Appendix B to Part 60 - Performance Specifications
Code of Federal Regulations, 2014 CFR
2014-07-01
... 6216-98 is the reference for design specifications, manufacturer's performance specifications, and test... representative of a group of monitors produced during a specified period or lot, for conformance with the design... technique and a single analytical program are used. One Run may include results for more than one test...
Comparison of SPHC Hydrocode Results with Penetration Equations and Results of Other Codes
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Stallworth, Roderick; Stellingwerf, Robert F.
2004-01-01
The SPHC hydrodynamic code was used to simulate impacts of spherical aluminum projectiles on a single-wall aluminum plate and on a generic Whipple shield. Simulations were carried out in two and three dimensions. Projectile speeds ranged from 2 kilometers per second to 10 kilometers per second for the single-wall runs, and from 3 kilometers per second to 40 kilometers per second for the Whipple shield runs. Spallation limit results of the single-wall simulations are compared with predictions from five standard penetration equations, and are shown to fall comfortably within the envelope of these analytical relations. Ballistic limit results of the Whipple shield simulations are compared with results from the AUTODYN-2D and PAM-SHOCK-3D codes presented in a paper at the Hypervelocity Impact Symposium 2000 and the Christiansen formulation of 2003.
Bichon, E; Guiffard, I; Vénisseau, A; Lesquin, E; Vaccher, V; Brosseaud, A; Marchand, P; Le Bizec, B
2016-08-12
A gas chromatography tandem mass spectrometry method using atmospheric pressure chemical ionisation was developed for the monitoring of 16 brominated flame retardants (7 routinely monitored polybromodiphenylethers (PBDEs), BDE #209, and 8 additional emerging and novel BFRs) in food and feed of animal origin. The developed analytical method decreased the run time threefold compared with conventional strategies, using a 2.5 m column (5% phenyl stationary phase, 0.1 mm i.d., 0.1 μm film thickness) and a pulsed split injection (1:5) with a helium carrier gas flow rate of 0.48 mL min(-1), in one run of 20 min. For most BFRs, analytical data were compared with the current analytical strategy relying on GC/EI/HRMS (double sector, R=10000 at 10% valley). Performances in terms of sensitivity were found to meet the Commission recommendation (118/2014/EC) for nBFRs. GC/APCI/MS/MS represents a promising alternative for multi-BFR analysis in complex matrices, in that it allows the monitoring of a wider list of contaminants in a single injection and a shorter run time. Copyright © 2016 Elsevier B.V. All rights reserved.
Herath, H M D R; Shaw, P N; Cabot, P; Hewavitharana, A K
2010-06-15
The high-performance liquid chromatography (HPLC) column is capable of enriching/pre-concentrating trace impurities from the mobile phase during column equilibration, prior to sample injection and elution. These impurities elute during gradient elution and result in significant chromatographic peaks. Three types of purified water were tested for their impurity levels, and hence their performance as mobile phase, in HPLC followed by total ion current (TIC) mode of MS. Two types of HPLC-grade water produced 3-4 significant peaks in solvent blanks while LC/MS-grade water produced no peaks (although LC/MS-grade water also produced peaks after a few days of standing). None of the three waters produced peaks in HPLC followed by UV-Vis detection. These peaks, if co-eluted with the analyte, are capable of suppressing or enhancing the analyte signal in an MS detector. As it is not common practice to run solvent blanks in TIC mode when quantification is carried out using single ion monitoring (SIM) or single or multiple reaction monitoring (SRM or MRM), the effect of co-eluting impurities on the analyte signal, and hence on the accuracy of the results, is often unknown to the analyst. Running solvent blanks in TIC mode, regardless of the MS mode used for quantification, is essential in order to detect this problem and to take subsequent precautions. Copyright (c) 2010 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
NASA Astrophysics Data System (ADS)
Wang, Aiming; Cheng, Xiaohan; Meng, Guoying; Xia, Yun; Wo, Lei; Wang, Ziyi
2017-03-01
Identification of rotor unbalance is critical for the normal operation of rotating machinery. The single-disc, single-span rotor, as the most fundamental rotor-bearing system, has long attracted research attention. In this paper, the continuous single-disc, single-span rotor is modeled as a homogeneous, elastic Euler-Bernoulli beam, and the forces applied by the bearings and disc on the shaft are treated as point forces. A fourth-order non-homogeneous partial differential equation set with homogeneous boundary conditions is solved for an analytical solution, which expresses the unbalance response as a function of position, rotor unbalance, and the stiffness and damping coefficients of the bearings. Based on this analytical method, a novel Measurement Point Vector Method (MPVM) is proposed to identify rotor unbalance during operation. The method requires only unbalance responses measured at four selected cross-sections of the rotor shaft under steady-state operating conditions. Numerical simulation shows that the detection error of the proposed method is very small when measurement error is negligible. The proposed method provides an efficient way to balance rotors without test runs or external excitations.
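The way unbalance response depends on speed, stiffness, and damping can be illustrated with the classical lumped-parameter Jeffcott rotor, a much simpler stand-in for the continuous Euler-Bernoulli model used in the paper. A minimal sketch; the formula and all parameter values are textbook assumptions, not the paper's model:

```python
import math

def unbalance_response(omega, m_unb, ecc, M, k, c):
    """Steady-state whirl amplitude of a Jeffcott rotor:
    |X| = m*e*omega^2 / sqrt((k - M*omega^2)^2 + (c*omega)^2),
    with unbalance mass m_unb at eccentricity ecc, rotor mass M,
    shaft stiffness k, damping c, spin speed omega (rad/s)."""
    num = m_unb * ecc * omega ** 2
    den = math.sqrt((k - M * omega ** 2) ** 2 + (c * omega) ** 2)
    return num / den
```

Well below resonance the amplitude approaches the quasi-static value m*e*omega^2/k, and far above it the amplitude saturates at m*e/M, which is the standard check on any such model.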
Švarc-Gajić, Jaroslava; Clavijo, Sabrina; Suárez, Ruth; Cvetanović, Aleksandra; Cerdà, Víctor
2018-03-01
Cherry stems have been used in traditional medicine, mostly for the treatment of urinary tract infections. Extraction with subcritical water differs substantially from conventional extraction techniques in its selectivity, efficiency and other respects. The complexity of plant subcritical water extracts is due to the ability of subcritical water to extract, in a single run, different chemical classes with different physico-chemical properties and polarities. In this paper, dispersive liquid-liquid microextraction (DLLME) with simultaneous derivatisation was optimised for the analysis of complex subcritical water extracts of cherry stems, to allow simple and rapid preparation prior to gas chromatography-mass spectrometry (GC-MS). After defining optimal extracting and dispersive solvents, the optimised method was used for the identification of compounds belonging to different chemical classes in a single analytical run. The developed sample preparation protocol enabled simultaneous extraction and derivatisation, as well as convenient coupling with GC-MS analysis, reducing the analysis time and the number of steps. The applied analytical protocol allowed simple and rapid chemical screening of subcritical water extracts and was used to compare subcritical water extracts of sweet and sour cherry stems. Graphical abstract: DLLME-GC-MS analysis of cherry stem extracts obtained by subcritical water.
Mechanisms and kinetics of cellulose fermentation for protein production
NASA Technical Reports Server (NTRS)
Dunlap, C. A.
1971-01-01
The development of a process (and ancillary processing and analytical techniques) to produce bacterial single-cell protein of good nutritional quality from waste cellulose is discussed. A fermentation pilot plant and laboratory were developed and have been in operation for about two years. Single-cell protein (SCP) can be produced from sugarcane bagasse--a typical agricultural cellulosic waste. The optimization and understanding of this process and its controlling variables are examined. Both batch and continuous fermentation runs have been made under controlled conditions in the 535 liter pilot plant vessel and in the laboratory 14-liter fermenters.
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer switched rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak, which was insufficient for quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer again switched rapidly among the four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed, with the mass spectrometer detecting in a single ionization mode. More than 20 data points were obtained for each LC peak, and quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced from 3.5 min (HPLC method) to 1.5 min (UPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
Study of tethered satellite active attitude control
NASA Technical Reports Server (NTRS)
Colombo, G.
1982-01-01
Existing software was adapted for the study of tethered subsatellite rotational dynamics; an analytic solution for a stable configuration of a tethered subsatellite was developed; the analytic and numerical integrator (computer) solutions for this "test case" were compared in a two-mass tether model program (DUMBEL); the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics; the analytic "test case" was verified; and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effects of all tether "inputs", including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether are included.
Evaluation of FUS-2000 urine analyzer: analytical properties and particle recognition.
Beňovská, Miroslava; Wiewiorka, Ondřej; Pinkavová, Jana
This study evaluates the performance of the microscopic part of the hybrid analyzer FUS-2000 (Dirui Industrial Co., Changchun, China), its analytical properties and particle recognition. The evaluation of trueness, repeatability, detection limit, carry-over, linearity range and analytical stability was performed according to Dirui protocol guidelines, designed by the Dirui Company to guarantee the quality of the instrument. Trueness for low, medium and high concentrations was calculated, with biases of 15.5, 4.7 and -6.6%, respectively. A detection limit of 5 Ery/μl was confirmed. Coefficients of variation of 11.0, 5.2 and 3.8% were measured for within-run repeatability at low, medium and high concentration. Between-run repeatability for daily quality control had a coefficient of variation of 3.0%. Carry-over did not exceed 0.05%. Linearity was confirmed over the range 0-16,000 particles/μl (R² = 0.9997). The analytical stability had a coefficient of variation of 4.3%. Of 1258 analyzed urine samples, 362 positive samples were subjected to light-microscopy urine sediment analysis and compared with the analyzer results. Cohen's kappa coefficients were calculated to express the concordance. Squared kappa coefficients were 0.927 (red blood cells), 0.888 (white blood cells), 0.908 (squamous epithelia), 0.634 (transitional epithelia), 0.628 (hyaline casts), 0.843 (granular casts) and 0.623 (bacteria). Single kappa coefficients were 0.885 (yeasts) and 0.756 (crystals), respectively. These results show good analytical performance of the analyzer and tight agreement with light microscopy of urine sediment.
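Cohen's kappa, used above to express concordance between the analyzer and microscopy, corrects observed agreement for the agreement expected by chance and is straightforward to compute from a confusion matrix. A minimal sketch; the 2x2 table in the example is invented for illustration, not the study's data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: rater 1 categories, columns: rater 2 categories)."""
    size = len(confusion)
    n = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(size)) / n        # observed agreement
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(col) for col in zip(*confusion)]
    p_exp = sum(row_totals[i] * col_totals[i] for i in range(size)) / n ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

# hypothetical 2x2 agreement table between two raters
kappa = cohens_kappa([[40, 5], [10, 45]])
```

Values near 1 indicate near-perfect agreement, which matches the interpretation of the coefficients reported above.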
Panetta, Robert J; Jahren, A Hope
2011-05-30
Gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) is increasingly applied to food and metabolic studies for stable isotope analysis (δ13C), with the quantification of analyte concentration often obtained via a second, alternative method. We describe a rapid direct transesterification of triacylglycerides (TAGs) for fatty acid methyl ester (FAME) analysis by GC-C-IRMS, demonstrating robust simultaneous quantification of amount of analyte (mean r² = 0.99, accuracy ±2% for 37 FAMEs) and δ13C (±0.13‰) in a single analytical run. The maximum FAME yield and optimal δ13C values are obtained by derivatizing with 10% (v/v) acetyl chloride in methanol for 1 h, while lower levels of acetyl chloride and shorter reaction times skewed the δ13C values by as much as 0.80‰. A Bland-Altman evaluation of the GC-C-IRMS measurements resulted in excellent agreement for pure oils (±0.08‰) and oils extracted from French fries (±0.49‰), demonstrating reliable simultaneous quantification of FAME concentration and δ13C values. Thus, we conclude that for studies requiring both the quantification of analyte and δ13C data, such as authentication or metabolic flux studies, GC-C-IRMS can be used as the sole analytical method. Copyright © 2011 John Wiley & Sons, Ltd.
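The Bland-Altman evaluation mentioned above assesses agreement between two methods from the mean and spread of their paired differences. A minimal sketch of the computation, with made-up example data rather than the paper's measurements:

```python
import math

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences) between two measurement methods a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired measurements from two methods
bias, lo, hi = bland_altman([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 2.9, 4.1])
```

A small bias with narrow limits of agreement is what justifies using one method in place of the other, as the paper concludes for GC-C-IRMS.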
Multiplexed electrokinetic sample fractionation, preconcentration and elution for proteomics.
Hua, Yujuan; Jemere, Abebaw B; Dragoljic, Jelena; Harrison, D Jed
2013-07-07
Both 6- and 8-channel integrated microfluidic sample pretreatment devices capable of performing "in space" sample fractionation, collection, preconcentration and elution of captured analytes via sheath-flow-assisted electrokinetic pumping are described. Coatings and monolithic polymer beds were developed for the glass devices to provide cationic surface charge and anodal electroosmotic flow for delivery to an electrospray emitter tip. A mixed cationic ([2-(methacryloyloxy)ethyl]trimethylammonium chloride, META) and hydrophobic butyl methacrylate-based monolithic porous polymer, photopolymerized in the 6- or 8-fractionation channels, was used to capture and preconcentrate samples. A 0.45 wt% META-loaded bed generated anodic electroosmotic flow comparable to that of the cationic polymer PolyE-323 coated channel segments in the device. The balanced electroosmotic flow allowed stable electrokinetic sheath flow to prevent cross contamination of separated protein fractions, while reducing protein/peptide adsorption on the channel walls. Sequential elution of analytes trapped in the SPE beds revealed that the monolithic columns could be efficiently used to provide sheath flow during elution of analytes, as demonstrated for neutral carboxy SNARF (residual signal, 0.08% RSD, n = 40) and charged fluorescein (residual signal, 2.5%, n = 40). Elution from monolithic columns showed reproducible performance, with peak area reproducibility of ~8% (n = 6 columns) in a single sequential elution and run-to-run reproducibility of 2.4-6.7% RSD (n = 4) for elution from the same bed. The demonstrated ability of this device design and operation to elute from multiple fractionation beds into a single exit channel for sample analysis by fluorescence or electrospray mass spectrometry is a crucial component of an integrated fractionation and assay system for proteomics.
NASA Technical Reports Server (NTRS)
1981-01-01
The process development continued, with a total of nine crystal growth runs. One of these was a 150 kg run of 5 crystals of approximately 30 kg each. Several machine and process problems were corrected, and the 150 kg run was as successful as previous long runs on CG2000 RCs. Accelerated recharge and growth will be attempted when the development program resumes at full capacity in FY '82. The automation controls (Automatic Grower Light Computer System) were integrated with the seed dip temperature, shoulder, and diameter sensors on the CG2000 RC development grower. Test growths included four crystals, which were grown by the computer/sensor system from seed dip through tail-off. This system will be integrated on the Mod CG2000 grower during the next quarter. The analytical task included the completion and preliminary testing of the gas chromatograph portion of the Furnace Atmosphere Analysis System. The system can detect CO concentrations and will be expanded to oxygen and water analysis in FY '82.
Structural safety of trams in case of misguidance in a switch
NASA Astrophysics Data System (ADS)
Schindler, Christian; Schwickert, Martin; Simonis, Andreas
2010-08-01
Tram vehicles mainly operate on street tracks, where misguidance in switches sometimes occurs due to unfavourable conditions. Generally, in this situation, the first running gear of the vehicle follows the bend track while the next running gears continue straight ahead. This leads to a constraint that can only be resolved if the vehicle's articulation is damaged or a wheel derails; the latter is less critical in terms of safety and costs. Five different tram types, one of them high-floor and the rest low-floor, were examined analytically. Numerical simulation was used to determine which wheel would be the first to derail and what level of force is needed in the articulation area between two carbodies to make a tram derail. It was shown that purely analytical simulation gives only an indication of which tram type behaves better or worse in such a situation, while a three-dimensional computational simulation gives more realistic values for the forces that arise. Three of the four low-floor tram types need much higher articulation forces to make a wheel derail in a switch misguidance situation. One particular three-car type with two single-axle running gears underneath the centre car must be designed to withstand nearly three times higher articulation forces than a conventional high-floor articulated tram. Tram designers must be aware of this and should design the carbody accordingly.
Carney, Randy P.; Kim, Jin Young; Qian, Huifeng; Jin, Rongchao; Mehenni, Hakim; Stellacci, Francesco; Bakr, Osman M.
2011-01-01
Nanoparticles are finding many research and industrial applications, yet their characterization remains a challenge. Their cores are often polydisperse and coated by a stabilizing shell that varies in size and composition. No single technique can characterize both the size distribution and the nature of the shell. Advances in analytical ultracentrifugation allow for the extraction of the sedimentation (s) and diffusion coefficients (D). Here we report an approach to transform the s and D distributions of nanoparticles in solution into precise molecular weight (M), density (ρP) and particle diameter (dp) distributions. M for mixtures of discrete nanocrystals is found within 4% of the known quantities. The accuracy and the density information we achieve on nanoparticles are unparalleled. A single experimental run is sufficient for full nanoparticle characterization, without the need for standards or other auxiliary measurements. We believe that our method is of general applicability and we discuss its limitations. PMID:21654635
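The transformation of sedimentation and diffusion coefficients into diameter, density and molecular weight described above rests, in its simplest form, on the Stokes-Einstein and Svedberg relations. A minimal sketch for a compact sphere in water; the sphere assumption and the solvent values are illustrative simplifications, not the authors' full treatment of shelled nanoparticles:

```python
import math

# physical constants (SI)
KB = 1.380649e-23   # Boltzmann constant, J/K
NA = 6.02214076e23  # Avogadro's number, 1/mol

def particle_from_s_and_D(s, D, T=293.15, eta=1.002e-3, rho_solvent=998.2):
    """Diameter (m), particle density (kg/m^3) and molar mass (kg/mol) of a
    compact sphere from its sedimentation coefficient s (seconds) and
    diffusion coefficient D (m^2/s) in a solvent of viscosity eta (Pa s)
    and density rho_solvent (kg/m^3) at temperature T (K)."""
    d = KB * T / (3 * math.pi * eta * D)           # Stokes-Einstein diameter
    rho_p = rho_solvent + 18 * eta * s / d ** 2    # from s = d^2 (rho_p - rho_s) / (18 eta)
    volume = math.pi * d ** 3 / 6
    M = rho_p * volume * NA                        # molar mass, kg/mol
    return d, rho_p, M
```

Because both relations are invertible, a consistency check is to generate s and D from a known sphere and confirm the function recovers its diameter and density.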
Streamflow variability and optimal capacity of run-of-river hydropower plants
NASA Astrophysics Data System (ADS)
Basso, S.; Botter, G.
2012-10-01
The identification of the capacity of a run-of-river plant which allows for the optimal utilization of the available water resources is a challenging task, mainly because of the inherent temporal variability of river flows. This paper proposes an analytical framework to describe the energy production and the economic profitability of small run-of-river power plants on the basis of the underlying streamflow regime. We provide analytical expressions for the capacity which maximize the produced energy as a function of the underlying flow duration curve and minimum environmental flow requirements downstream of the plant intake. Similar analytical expressions are derived for the capacity which maximize the economic return deriving from construction and operation of a new plant. The analytical approach is applied to a minihydro plant recently proposed in a small Alpine catchment in northeastern Italy, evidencing the potential of the method as a flexible and simple design tool for practical application. The analytical model provides useful insight on the major hydrologic and economic controls (e.g., streamflow variability, energy price, costs) on the optimal plant capacity and helps in identifying policy strategies to reduce the current gap between the economic and energy optimizations of run-of-river plants.
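The trade-off described above can be illustrated numerically: produced energy scales with the flow the plant can actually divert, which is capped by the installed capacity and reduced by the minimum environmental flow, while costs grow with capacity. A toy sketch; the linear cost term and the sample flow values are assumptions for illustration, not the paper's analytical expressions:

```python
def energy(capacity, flows, q_min_env=0.0):
    """Mean divertible flow (proportional to energy) for a run-of-river plant:
    flow above the environmental minimum, capped at plant capacity,
    averaged over a sampled flow record (a discrete flow duration curve)."""
    usable = [min(max(q - q_min_env, 0.0), capacity) for q in flows]
    return sum(usable) / len(usable)

def optimal_capacity(flows, candidates, q_min_env=0.0, unit_cost=0.0):
    """Candidate capacity maximizing energy minus a linear capacity cost
    (a toy stand-in for the economic objective)."""
    return max(candidates, key=lambda c: energy(c, flows, q_min_env) - unit_cost * c)
```

With zero cost the energy-optimal capacity is the largest one that still captures extra flow; adding a capacity cost pulls the optimum down, mirroring the gap between energy and economic optimizations noted above.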
Wannet, W J; Hermans, J H; van Der Drift, C; Op Den Camp, H J
2000-02-01
A convenient and sensitive method was developed to separate and detect various types of carbohydrates (polyols, mono- and disaccharides, and phosphorylated sugars) simultaneously using high-performance liquid chromatography (HPLC). The method consists of a chromatographic separation on a CarboPac PA1 anion-exchange analytical column followed by pulsed amperometric detection. In a single run (43 min), 13 carbohydrates were readily resolved. Calibration plots were linear over ranges from 5-25 μM up to 1.0-1.5 mM. The reliable and fast analysis technique, avoiding derivatization steps and long run times, was used to determine the levels of carbohydrates involved in mannitol and trehalose metabolism in the edible mushroom Agaricus bisporus. Moreover, the method was used to study the trehalose phosphorylase reaction.
Petruzziello, Filomena; Grand-Guillaume Perrenoud, Alexandre; Thorimbert, Anita; Fogwill, Michael; Rezzi, Serge
2017-07-18
Analytical solutions enabling the quantification of circulating levels of liposoluble micronutrients such as vitamins and carotenoids are currently limited to either a single analyte or a reduced panel of analytes. The need for multiple approaches hampers the investigation of biological variability in large numbers of samples in a time- and cost-efficient manner. With the goal of developing high-throughput, robust quantitative methods for the profiling of micronutrients in human plasma, we introduce a novel, validated workflow for the determination of 14 fat-soluble vitamins and carotenoids in a single run. Automated supported liquid extraction was optimized and implemented to process 48 samples in parallel in 1 h, and the analytes were measured using ultrahigh-performance supercritical fluid chromatography coupled to tandem mass spectrometry in less than 8 min. An improved mass spectrometry interface was built to minimize the post-decompression volume and to allow better control of the chromatographic effluent density on its route toward and into the ion source. In addition, a specific make-up solvent condition was developed to ensure the solubility of both analytes and matrix constituents after mobile phase decompression. The optimized interface resulted in improved spray plume stability and preserved matrix compound solubility, leading to enhanced hyphenation robustness while ensuring suitable analytical repeatability and improved detection sensitivity. The overall methodology gives recoveries within 85-115%, as well as within- and between-day coefficients of variation of 2% and 14%, respectively.
Single-frequency TEA CO2 laser with a bleaching spectral intracavity filter
NASA Astrophysics Data System (ADS)
Sorochenko, V. R.
2017-02-01
The regime of single-frequency operation is realised in a TEA CO2 laser with a spectral filter inside the cavity (a cell filled with SF6) on the P(12)-P(24) lines of the 10P band. The minimal scatter of the peak powers of the laser pulses in a series of ‘shots’ and the maximal ratio of the output energies in the single-frequency and free-running regimes (greater than 0.84) are obtained on the P(16) line at an optimal SF6 pressure in the cell. Experimental results qualitatively agree with the absorption spectrum of SF6 calculated from the SPECTRA information-analytical system. It is shown that the high ratio of energies in the two regimes is achieved due to gas bleaching in the cell.
IDSAC-IUCAA digital sampler array controller
NASA Astrophysics Data System (ADS)
Chattopadhyay, Sabyasachi; Chordia, Pravin; Ramaprakash, A. N.; Burse, Mahesh P.; Joshi, Bhushan; Chillal, Kalpesh
2016-07-01
In order to run the large-format detector arrays and mosaics required by most astronomical instruments, readout electronic controllers are needed that can process multiple CCD outputs simultaneously at high speed and low noise. These CCD controllers need to be modular and configurable, and should be able to run multiple detector types to cater to a wide variety of requirements. The IUCAA Digital Sampler Array Controller (IDSAC) is a generic CCD controller based on a fully scalable architecture that is flexible and powerful enough to control a wide variety of detectors used in ground-based astronomy. The controller has a modular backplane architecture consisting of Single Board Controller (SBC) cards and can control up to 5 CCDs (mosaic or independent). Each SBC has all the resources to run a single large-format CCD having up to four outputs. All SBCs are identical and are easily interchangeable without needing any reconfiguration. A four-channel video processor on each SBC can process up to four CCD outputs, with or without dummy outputs, at 0.5 megapixels/s/channel with 16-bit resolution. Each SBC has a USB 2.0 interface which can be connected to a host computer via optional USB-to-fibre converters. The SBC uses reconfigurable hardware (an FPGA) as the master controller. IDSAC offers Digital Correlated Double Sampling (DCDS) to eliminate thermal kTC noise. CDS performed in the digital domain (DCDS) has several advantages over its analog counterpart, such as less electronics, faster readout, and easier post-processing. It is also flexible in sampling rate and pixel throughput while keeping the core circuit topology intact. Noise characterization of the IDSAC CDS signal chain has been performed by analytical modelling and practical measurements. Various types of noise, such as white, pink, power supply, and bias, have been considered in an analytical noise model tool built to predict the noise of a controller system like IDSAC. Several tests were performed to measure the actual noise of IDSAC. The theoretical calculations match the practical measurements to within 10%.
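The DCDS operation described above can be illustrated with a short sketch. This is a generic illustration of digital correlated double sampling, not IDSAC's firmware; all names and numbers are hypothetical. Averaging N digitized samples of the pixel's reset level and N samples of its signal level, then subtracting, cancels the kTC reset offset common to both:

```python
import numpy as np

def dcds_pixel(reset_samples, signal_samples):
    """Digital correlated double sampling for one pixel: average N
    samples of the reset level and N samples of the signal level,
    then subtract to cancel the common kTC reset offset."""
    return np.mean(signal_samples) - np.mean(reset_samples)

# Synthetic pixel: a random kTC offset corrupts both levels equally,
# so the difference recovers the true signal despite the offset.
rng = np.random.default_rng(0)
ktc_offset = 12.5                                      # reset offset (ADU)
true_signal = 100.0                                    # photo-charge (ADU)
reset = ktc_offset + rng.normal(0, 1, 16)              # 16 reset samples
signal = ktc_offset + true_signal + rng.normal(0, 1, 16)  # 16 signal samples
print(round(dcds_pixel(reset, signal), 1))
```

Averaging more samples per level reduces the white-noise contribution, which is why digital CDS can trade sampling rate against read noise without changing the analog signal chain.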
Lin, Shu-Ling; Wang, Chih-Chieh; Fuh, Ming-Ren
2016-10-05
In this study, divinylbenzene (DVB) was used as the cross-linker to prepare alkyl methacrylate (AlMA) monoliths, incorporating π-π interactions between aromatic analytes and the AlMA-DVB monolithic stationary phases in capillary LC analysis. Various AlMA/DVB ratios were investigated to prepare a series of 30% AlMA-DVB monolithic stationary phases in fused-silica capillaries (250-μm i.d.). The physical properties (such as porosity, permeability, and column efficiency) of the synthesized AlMA-DVB monolithic columns were investigated for characterization. Isocratic elution of phenol derivatives was first employed to evaluate the suitability of the prepared AlMA-DVB columns for small-molecule separation. The run-to-run (0.16-1.20% RSD; n = 3) and column-to-column (0.26-2.95% RSD; n = 3) repeatability of retention times was also examined using the selected AlMA-DVB monolithic columns. The π-π interactions between the aromatic ring and the DVB-based stationary phase offered better recognition of polar analytes with aromatic moieties, which resulted in better separation resolution of aromatic analytes on the AlMA-DVB monolithic columns. To demonstrate the capability for potential environmental and/or food-safety applications, eight phenylurea herbicides with a single benzene ring and seven sulfonamide antibiotics with polyaromatic moieties were analyzed using the selected AlMA-DVB monolithic columns. Copyright © 2016. Published by Elsevier B.V.
Kruk, Tamara; Ratnam, Sam; Preiksaitis, Jutta; Lau, Allan; Hatchette, Todd; Horsman, Greg; Van Caeseele, Paul; Timmons, Brian; Tipples, Graham
2012-10-01
We conducted a multicenter trial in Canada to assess the value of using trueness controls (TC) for rubella virus IgG and hepatitis B virus surface antibody (anti-HBs) serology to determine test performance across laboratories over time. TC were obtained from a single source with known international units. Seven laboratories using different test systems and kit lots included the TC in routine assay runs of the analytes. TC measurements of 1,095 rubella virus IgG and 1,195 anti-HBs runs were plotted on Levey-Jennings control charts for individual laboratories and analyzed using a multirule quality control (MQC) scheme as well as a single three-standard-deviation (3-SD) rule. All rubella virus IgG TC results were "in control" in only one of the seven laboratories. Among the rest, "out-of-control" results ranged from 5.6% to 10% with an outlier at 20.3% by MQC and from 1.1% to 5.6% with an outlier at 13.4% by the 3-SD rule. All anti-HBs TC results were "in control" in only two laboratories. Among the rest, "out-of-control" results ranged from 3.3% to 7.9% with an outlier at 19.8% by MQC and from 0% to 3.3% with an outlier at 10.5% by the 3-SD rule. In conclusion, through the continuous monitoring of assay performance using TC and quality control rules, our trial detected significant intra- and interlaboratory, test system, and kit lot variations for both analytes. In most cases the assay rejections could be attributable to the laboratories rather than to kit lots. This has implications for routine diagnostic screening and clinical practice guidelines and underscores the value of using an approach as described above for continuous quality improvement in result reporting and harmonization for these analytes.
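The two rejection schemes compared in the trial can be sketched in a few lines. This is a generic illustration of Westgard-style QC checks (the single 1-3s rule and one common multirule ingredient, the 2-2s rule), not the trial's exact MQC scheme; the target, SD, and control values are hypothetical:

```python
def qc_flags(values, target, sd):
    """Flag QC results: the 1-3s rule (a single value beyond +/-3 SD)
    and a simple 2-2s multirule (two consecutive values beyond the
    same +/-2 SD limit), in the spirit of Westgard multirules."""
    z = [(v - target) / sd for v in values]
    rule_13s = [abs(x) > 3 for x in z]
    rule_22s = [False] + [
        (z[i] > 2 and z[i - 1] > 2) or (z[i] < -2 and z[i - 1] < -2)
        for i in range(1, len(z))
    ]
    return rule_13s, rule_22s

# Hypothetical trueness-control results: target 100 units, SD 5 units.
results = [100.5, 111.5, 112.0, 97.5, 118.5, 101.0]
r13, r22 = qc_flags(results, target=100.0, sd=5.0)
print([i for i, f in enumerate(r13) if f])   # runs rejected by 1-3s → [4]
print([i for i, f in enumerate(r22) if f])   # runs rejected by 2-2s → [2]
```

Note how the multirule catches the sustained +2 SD shift (runs 1-2) that the 3-SD rule alone misses, which mirrors the higher out-of-control rates the trial observed under MQC than under the 3-SD rule.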
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
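The rescaling idea underlying this acceleration (often called "white Monte Carlo") can be sketched independently of the GPU: a single absorption-free run stores each detected photon's path length, and the reflectance at any absorption coefficient then follows from Beer-Lambert reweighting rather than a new simulation. A minimal Python sketch, not the authors' MATLAB/CUDA code; the stored path lengths here are synthetic:

```python
import numpy as np

def rescale_reflectance(path_lengths, mu_a, n_launched):
    """Rescale one absorption-free Monte Carlo run to a new absorption
    coefficient mu_a (1/mm): each detected photon is reweighted by
    Beer-Lambert attenuation over its stored total path length."""
    return np.sum(np.exp(-mu_a * path_lengths)) / n_launched

# Hypothetical stored output of one baseline run: total path lengths
# (mm) of the photons that exited the tissue toward the detector.
rng = np.random.default_rng(1)
paths = rng.exponential(scale=5.0, size=100_000)   # detected photon paths
n_launched = 1_000_000                             # photons launched

for mu_a in (0.0, 0.01, 0.1):
    print(mu_a, rescale_reflectance(paths, mu_a, n_launched))
```

Because the reweighting is an elementwise exponential followed by a sum, it maps naturally onto GPU parallelism, which is the operation the paper accelerates.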
Robles-Molina, José; Gilbert-López, Bienvenida; García-Reyes, Juan F; Molina-Díaz, Antonio
2017-09-29
Pesticide testing of foodstuffs is usually accomplished with generic wide-scope multi-residue methods based on liquid chromatography tandem mass spectrometry (LC-MS/MS). However, this approach does not cover some special pesticides, the so-called "single-residue method" compounds, which are poorly compatible with standard reversed-phase (RP) separations due to their specific properties. In this article, we propose a comprehensive strategy for the integration of single-residue method compounds and standard multiresidue pesticides within a single run. It is based on the use of a parallel LC column assembly with two different LC gradients performing orthogonal hydrophilic interaction chromatography (HILIC) and reversed-phase chromatography (RPLC) within one analytical run. Two sample aliquots were simultaneously injected, one on each column, using different gradients, with the eluents merged post-column prior to mass spectrometric detection. The approach was tested with 41 multiclass pesticides covering a wide range of physicochemical properties spanning several orders of magnitude in log Kow (from -4 to +5.5). With this assembly, distinct separation from the void was attained for all the pesticides studied, while keeping sensitivity, peak-area reproducibility (<6% RSD in most cases), and retention-time stability (better than ±0.1 min) similar to standard single-column approaches. The application of the proposed approach using parallel HILIC/RPLC and RPLC/aqueous normal phase (Obelisc) columns was assessed in leek using LC-MS/MS. For this purpose, a hybrid QuEChERS (quick, easy, cheap, effective, rugged, and safe)/QuPPe (quick method for polar pesticides) method was evaluated, based on solvent extraction with MeOH and acetonitrile followed by dispersive solid-phase extraction, delivering appropriate recoveries for most of the pesticides included in the study within the log Kow range from -4 to +5.5.
The proposed strategy may be extended to other fields, such as sports drug testing or environmental analysis, where a similarly wide variety of analytes featuring poor retention in a single chromatographic separation occurs. Copyright © 2017 Elsevier B.V. All rights reserved.
Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana
2015-08-14
SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents, and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with strong matrix dilution, matrix effects are important and additive, with non-significant interaction effects, and that the presence of major volatile constituents is the dominant factor. A single internal standard provided a robust calibration for 15 of the 47 analytes. Two different multivariate calibration strategies based on Partial Least Squares regression were then run in order to build calibration functions based on 13 different internal standards able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single fully automated method with overall uncertainties better than 15%. Copyright © 2015 Elsevier B.V. All rights reserved.
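The MIS construction (a linear combination of normalized internal-standard signals that predicts the expected area of one unit of analyte in each sample) can be sketched with ordinary least squares standing in for the paper's Partial Least Squares step. This is an illustrative simplification with synthetic, noise-free data; the paper's actual regression, standards, and wines differ:

```python
import numpy as np

# Hypothetical training data: rows are calibration wines, columns the
# normalized areas of 13 internal standards; y holds the observed area
# per unit concentration of one target analyte in each wine.
rng = np.random.default_rng(2)
n_wines, n_is = 40, 13
X = rng.normal(1.0, 0.1, size=(n_wines, n_is))  # IS response varies with matrix
true_w = rng.uniform(0, 1, n_is)
true_w /= true_w.sum()
y = X @ true_w                                  # unit-analyte response tracks the ISs

# Fit the multivariate internal standard. The paper uses PLS; plain
# least squares stands in here purely for illustration.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# For a new sample, the MIS predicts the expected area of one unit of
# analyte, so concentration = measured_area / predicted_unit_area.
x_new = rng.normal(1.0, 0.1, size=n_is)
unit_area = x_new @ w
measured_area = 3.2 * (x_new @ true_w)          # sample truly contains 3.2 units
print(round(measured_area / unit_area, 2))
```

The point of the construction is that matrix effects shifting all internal-standard responses also shift the predicted unit area, so the ratio stays calibrated across matrices.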
Xiong, Yeping; Zhao, Yuan-Yuan; Goruk, Sue; Oilund, Kirsten; Field, Catherine J; Jacobs, René L; Curtis, Jonathan M
2012-12-12
A hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) method was developed and validated to simultaneously quantify six aqueous choline-related compounds and eight major phospholipid classes in a single run. HILIC chromatography was coupled to positive-ion electrospray mass spectrometry. A combination of multiple scan modes, including precursor ion scan, neutral loss scan, and multiple reaction monitoring, was optimized for the determination of each compound or class in a single LC/MS run. This work developed a simplified extraction scheme in which free choline and related compounds, along with phospholipids, were extracted into a homogenized phase using chloroform/methanol/water (1:2:0.8) and diluted into methanol for the analysis of target compounds in a variety of sample matrices. The analyte recoveries were evaluated by spiking tissues and food samples with two isotope-labeled internal standards, PC-d(3) and Cho-d(3). Recoveries of between 90% and 115% were obtained by spiking a range of sample matrices with authentic standards containing all 14 of the target analytes. The precision of the analysis ranged from 1.6% to 13%. Accuracy and precision were comparable to those obtained by quantification of selected phospholipid classes using (31)P NMR. A variety of sample matrices, including egg yolks, human diets, and animal tissues, were analyzed using the validated method. The measurements of total choline in selected foods were found to be in good agreement with values obtained from the USDA choline database. Copyright © 2012 Elsevier B.V. All rights reserved.
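The recovery figures quoted above follow from simple spike arithmetic; a sketch with hypothetical numbers, not values from the study:

```python
def percent_recovery(spiked_result, unspiked_result, amount_added):
    """Recovery (%) of an analyte spiked into a matrix: the measured
    increase relative to the amount actually added."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added

# Hypothetical sample: endogenous choline 12.0 nmol, spiked with
# 10.0 nmol, measured after spiking as 21.4 nmol.
print(percent_recovery(21.4, 12.0, 10.0))   # ~94%, inside the 90-115% window
```

A recovery near 100% indicates the extraction neither loses the analyte nor suffers matrix enhancement; values within a preset window (here 90-115%) pass validation.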
On the Modeling and Management of Cloud Data Analytics
NASA Astrophysics Data System (ADS)
Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni
A new era is dawning in which vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged in which seemingly limitless compute and storage resources are provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good run-time performance for data analytics workloads is an important issue in cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their patterns of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. All along, we use the Map-Reduce paradigm as an illustration of data analytics.
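Since the paper uses Map-Reduce as its running illustration, here is a minimal self-contained sketch of the paradigm: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a generic word-count illustration, not tied to any particular framework discussed in the paper:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    """Apply the mapper to each record; each call yields (key, value) pairs."""
    return chain.from_iterable(mapper(r) for r in records)

def reduce_phase(pairs, reducer):
    """Shuffle: group values by key, then reduce each group to one value."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Classic word count: map emits (word, 1); reduce sums the ones per word.
docs = ["big data in the cloud", "data analytics in the cloud"]
pairs = map_phase(docs, lambda doc: ((w, 1) for w in doc.split()))
counts = reduce_phase(pairs, lambda k, vs: sum(vs))
print(counts["data"], counts["cloud"])   # 2 2
```

In a real deployment the map and reduce calls run on different machines and the shuffle moves data across the network, which is precisely where the placement and migration questions studied in the paper arise.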
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
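The MLEM update at the heart of the port is a pair of projections per iteration, which is what maps naturally onto Spark/GraphX-style sparse linear-algebra primitives. A minimal NumPy sketch of generic MLEM on a toy system, not the paper's Spark/GraphX code; the system matrix and data are hypothetical:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood expectation maximization for emission
    tomography: x <- x * [A^T (y / A x)] / (A^T 1). Each iteration is
    one forward projection (A x) and one back projection (A^T ...)."""
    x = np.ones(A.shape[1])                   # positive initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative update
    return x

# Tiny hypothetical system: 4 detector bins viewing 3 voxels.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                                # noise-free measured data
print(np.round(mlem(A, y), 2))
```

The multiplicative form keeps the image nonnegative at every iteration, and with noise-free consistent data the iterates converge to the true activity.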
Werner, S.L.; Johnson, S.M.
1994-01-01
As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
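External-standard quantitation as described above can be sketched as a linear calibration fit followed by back-calculation of the sample concentration. Illustrative only; the standard concentrations and peak areas below are hypothetical, not values from the USGS method:

```python
import numpy as np

def fit_calibration(conc_std, response_std):
    """Least-squares line through external-standard responses:
    response = slope * concentration + intercept."""
    slope, intercept = np.polyfit(conc_std, response_std, 1)
    return slope, intercept

def quantify(response_sample, slope, intercept):
    """Back-calculate a sample concentration from the calibration line."""
    return (response_sample - intercept) / slope

# Hypothetical carbamate standards (ug/L) and integrated UV peak areas.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area = np.array([10.4, 20.1, 40.6, 100.9, 201.3])
m, b = fit_calibration(conc, area)
print(round(quantify(60.0, m, b), 2))   # sample concentration, ug/L
```

Running the standards within the same instrumental sequence as the samples, as the method specifies, keeps the fitted slope valid for the detector response on that day.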
Developing the role of big data and analytics in health professional education.
Ellaway, Rachel H; Pusic, Martin V; Galbraith, Robert M; Cameron, Terri
2014-03-01
As we capture more and more data about learners, their learning, and the organization of their learning, our ability to identify emerging patterns and to extract meaning grows exponentially. The insights gained from the analyses of these large amounts of data are only helpful to the extent that they can be the basis for positive action, such as knowledge discovery, improved capacity for prediction, and anomaly detection. Big Data involves the aggregation and melding of large and heterogeneous datasets, while education analytics involves looking for patterns in educational practice or performance in single or aggregate datasets. Although it seems likely that the use of education analytics and Big Data techniques will have a transformative impact on health professional education, there is much yet to be done before they can become part of mainstream health professional education practice. If health professional education is to be accountable for how its programs are run and developed, then health professional educators will need to be ready to deal with the complex and compelling dynamics of analytics and Big Data. This article provides an overview of these emerging techniques in the context of health professional education.
Drift and Behavior of E. coli Cells
NASA Astrophysics Data System (ADS)
Micali, Gabriele; Colin, Rémy; Sourjik, Victor; Endres, Robert G.
2017-12-01
Chemotaxis of the bacterium Escherichia coli is well understood in shallow chemical gradients, but its swimming behavior remains difficult to interpret in steep gradients. By focusing on single-cell trajectories from simulations, we investigated the dependence of the chemotactic drift velocity on attractant concentration in an exponential gradient. While maxima of the average drift velocity can be interpreted within analytical linear-response theory of chemotaxis in shallow gradients, limits on drift due to steep gradients and a finite number of receptor-methylation sites for adaptation go beyond perturbation theory. For instance, we found a surprising pinning of the cells to the concentration in the gradient at which cells run out of methylation sites. To validate the positions of maximal drift, we recorded single-cell trajectories in carefully designed chemical gradients using microfluidics.
Temova-Rakuša, Žane; Srečnik, Eva; Roškar, Robert
2017-09-01
A precise, accurate and rapid HPLC-UV method for simultaneous determination of fat-soluble vitamins (vitamin D3, E-acetate, K1, β-carotene, A-palmitate) and coenzyme Q10 was developed and validated according to ICH guidelines. Optimal chromatographic separation of the analytes in minimal analysis time (8 min) was achieved on a Luna C18 150 × 4.6 mm column using a mixture of acetonitrile, tetrahydrofuran and water (50:45:5, v/v/v). The described reversed phase HPLC method is the first published for quantification of these five fat-soluble vitamins and coenzyme Q10 within a single chromatographic run. The method was further applied for quantification of the analytes in selected liquid and solid dosage forms, registered as nutritional supplements and prescription medicines, which confirmed its suitability for routine analysis.
Design, fabrication and test of graphite/epoxy metering truss structure components, phase 3
NASA Technical Reports Server (NTRS)
1974-01-01
The design, materials, tooling, manufacturing processes, quality control, test procedures, and results associated with the fabrication and test of graphite/epoxy metering truss structure components exhibiting a near-zero coefficient of thermal expansion are described. Analytical methods were utilized, with the aid of a computer program, to define the most efficient laminate configurations in terms of thermal behavior and structural requirements. This was followed by an extensive material characterization and selection program, conducted for several graphite/epoxy and hybrid laminate systems to obtain experimental data in support of the analytical predictions. Mechanical property tests as well as coefficient of thermal expansion tests were run on each laminate under study, the results of which were used as the selection criteria for the single most promising laminate. Further coefficient of thermal expansion measurement was successfully performed on three subcomponent tubes utilizing the selected laminate.
Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroads
ERIC Educational Resources Information Center
Jones, Kyle M. L.; Salo, Dorothea
2018-01-01
In this paper, the authors address learning analytics and the ways academic libraries are beginning to participate in wider institutional learning analytics initiatives. Since there are moral issues associated with learning analytics, the authors consider how data mining practices run counter to ethical principles in the American Library…
Analytical performance of a bronchial genomic classifier.
Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean
2016-02-26
The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e. potentially interfering substances) as tested on blood and genomic DNA; and assay performance studies including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer positive and cancer negative samples mixed with either blood (up to 10% input mass) or genomic DNA (up to 10% input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a >6-unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.
NASA Technical Reports Server (NTRS)
Satyanarayana, T.; Klein, Harold P.
1976-01-01
A procedure for the purification of a stable acetyl-coenzyme A synthetase (ACS) from aerobic cells of Saccharomyces cerevisiae is presented. The steps include differential centrifugation, solubilization of the bound enzyme from the crude mitochondrial fraction, ammonium sulfate fractionation, crystallization to constant specific activity from ammonium sulfate solutions followed by Bio-Gel A-1.5m column chromatography. The resulting enzyme preparation is homogeneous as judged by chromatography on Bio-Gel columns, QAE-Sephadex A-50 anion exchange columns, analytical ultracentrifugal studies, and polyacrylamide gel electrophoresis. Sedimentation velocity runs revealed a single symmetric peak with an s(20,w) value of 10.6. The molecular weight of the native enzyme, as determined by gel filtration and analytical ultracentrifugation, is 250,000 +/- 500. In polyacrylamide gel electrophoresis in the presence of sodium dodecyl sulfate, the molecular weight of the single polypeptide chain is 83,000 +/- 500. The purified enzyme is inhibited by palmityl-coenzyme A with a Hill interaction coefficient, n, of 2.88. These studies indicate that the ACS of aerobic Saccharomyces cerevisiae is composed of three subunits of identical or nearly identical size.
Tsunami Wave Run-up on a Vertical Wall in Tidal Environment
NASA Astrophysics Data System (ADS)
Didenkulova, Ira; Pelinovsky, Efim
2018-04-01
We solve analytically the nonlinear problem of shallow-water theory for tsunami wave run-up on a vertical wall in a tidal environment. It is shown that the tide can be considered static during tsunami run-up. In this approximation, it is possible to obtain the exact solution for the run-up height as a function of the incident wave height. This allows us to investigate the influence of the tide on the run-up characteristics.
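The static-tide argument can be sketched in the linear limit. This is a hedged outline only: the paper's contribution is the exact nonlinear solution, which is not reproduced here.

```latex
% The tide period far exceeds the tsunami duration, so during run-up
% the tide acts as a static correction to the undisturbed depth:
T_{\text{tide}} \gg T_{\text{tsunami}}
\quad \Longrightarrow \quad
h_{\text{eff}} = h_0 + \eta_{\text{tide}}(t_0).

% In the linear shallow-water limit, full reflection from the vertical
% wall doubles the incident amplitude A, so the run-up height measured
% from the static tidal level is approximately
R_{\text{lin}} = 2A.
```

The exact nonlinear solution replaces the doubling relation with a run-up height that depends nonlinearly on the incident wave height, evaluated over the tide-corrected depth.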
Performance implications from sizing a VM on multi-core systems: A data analytic application's view
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Horey, James L; Begoli, Edmon
In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set is critical to performance relative to allocated memory. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
ERIC Educational Resources Information Center
Clark, Hewitt B.; Crosland, Kimberly A.; Geller, David; Cripe, Michael; Kenney, Terresa; Neff, Bryon; Dunlap, Glen
2008-01-01
Teenagers' running from foster placement is a significant problem in the field of child protection. This article describes a functional, behavior analytic approach to reducing running away through assessing the motivations for running, involving the youth in the assessment process, and implementing interventions to enhance the reinforcing value of…
NASA Astrophysics Data System (ADS)
Cao, Liang; Liu, Jiepeng; Li, Jiang; Zhang, Ruizhi
2018-04-01
An extensive experimental and theoretical research study was undertaken to study the vibration serviceability of a long-span prestressed concrete floor system to be used in the lounge of a major airport. Specifically, jumping impact tests were carried out to obtain the floor's modal parameters, followed by an analysis of the distribution of peak accelerations. Running tests were also performed to capture the acceleration responses. The prestressed concrete floor was found to have a low fundamental natural frequency (≈ 8.86 Hz) with an average modal damping ratio of ≈ 2.17%. A coefficient β_rp is proposed for convenient calculation of the maximum root-mean-square acceleration for running. In the theoretical analysis, the prestressed concrete floor under running excitation is treated as a two-span continuous anisotropic rectangular plate with simply supported edges. The calculated analytical results (natural frequencies and root-mean-square accelerations) agree well with the experimental ones. The analytical approach is thus validated.
High Resolution Nature Runs and the Big Data Challenge
NASA Technical Reports Server (NTRS)
Webster, W. Phillip; Duffy, Daniel Q.
2015-01-01
NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, with 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications.
These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.
2015-12-01
Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona
2018-05-01
The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore investigated the analytical and between-subject variation of CAT systematically. Moreover, we assessed the application of an internal standard for normalization to diminish variation. 20 healthy volunteers donated one blood sample, which was subsequently centrifuged, aliquoted, and stored at -80 °C prior to analysis. The analytical variation was determined on eight runs, where plasma from the same seven volunteers was processed in triplicates, and for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma drawn from a single donor was applied to all plates as an internal standard for each TG analysis and subsequently used for normalization. The total analytical variation for TG analysis performed with PPPlow reagent is 3-14%, and 9-13% for PPP reagent. This variation can be reduced only slightly by using an internal standard, mainly for ETP (endogenous thrombin potential). The between-subject variation is higher when using PPPlow than PPP, and this variation is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, but one considerably lower than the between-subject variation when using PPPlow as reagent.
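The variance statistics above reduce to coefficient-of-variation (CV) calculations; the following sketch, with invented ETP triplicates, shows how normalizing to a per-plate internal standard can shrink the between-run component:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation in percent (sample SD / mean * 100)."""
    v = np.asarray(values, dtype=float)
    return float(v.std(ddof=1) / v.mean() * 100.0)

# Invented ETP triplicates (nM*min) for one donor across three runs,
# plus the internal-standard ETP measured on each plate.
etp_runs = np.array([[1510.0, 1480.0, 1535.0],
                     [1620.0, 1595.0, 1640.0],
                     [1450.0, 1425.0, 1470.0]])
internal_std = np.array([1500.0, 1600.0, 1440.0])  # one IS value per plate

raw_cv = cv_percent(etp_runs.mean(axis=1))          # between-run CV, raw
normalized = etp_runs.mean(axis=1) / internal_std   # ratio to internal standard
norm_cv = cv_percent(normalized)
print(f"raw between-run CV: {raw_cv:.1f}%, normalized CV: {norm_cv:.1f}%")
```

In this toy case most of the run-to-run drift is shared with the internal standard, so the normalized CV is much smaller; the paper finds the real reduction to be modest.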
Batta, N; Pilli, N R; Derangula, V R; Vurimindi, H B; Damaramadugu, R; Yejella, R P
2015-03-01
The authors proposed a simple, rapid and sensitive liquid chromatography-tandem mass spectrometric (LC-MS/MS) assay method for the simultaneous determination of saxagliptin and its active metabolite 5-hydroxy saxagliptin in human plasma. The developed method was fully validated as per US FDA guidelines. The method utilized the stable isotope-labeled internal standards saxagliptin-15N-d2 (IS1) and 5-hydroxy saxagliptin-15N-d2 (IS2) for the quantification of saxagliptin and 5-hydroxy saxagliptin, respectively. Analytes and the internal standards were extracted from human plasma by a single-step solid-phase extraction technique without drying, evaporation or reconstitution steps. The optimized mobile phase was composed of 0.1% acetic acid in 5 mM ammonium acetate and acetonitrile (30:70, v/v), delivered at a flow rate of 0.85 mL/min. The method exhibited a linear calibration range of 0.05-100 ng/mL for both analytes. The precision and accuracy results for both analytes were well within the acceptance limits. The stability experiments conducted in aqueous and matrix samples met the acceptance criteria. The chromatographic run time was 1.8 min; hence more than 400 samples can be analyzed in a single day. © Georg Thieme Verlag KG Stuttgart · New York.
Chen, Jie; Tabatabaei, Ali; Zook, Doug; Wang, Yan; Danks, Anne; Stauber, Kathe
2017-11-30
A robust high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay was developed and qualified for the measurement of cyclic nucleotides (cNTs) in rat brain tissue. Stable isotopically labeled 3',5'-cyclic adenosine-13C5 monophosphate (13C5-cAMP) and 3',5'-cyclic guanosine-13C,15N2 monophosphate (13C,15N2-cGMP) were used as surrogate analytes to measure endogenous 3',5'-cyclic adenosine monophosphate (cAMP) and 3',5'-cyclic guanosine monophosphate (cGMP). Pre-weighed frozen rat brain samples were rapidly homogenized in 0.4 M perchloric acid at a ratio of 1:4 (w/v). Following internal standard addition and dilution, the resulting extracts were analyzed using negative ion mode electrospray ionization LC-MS/MS. The calibration curves for both analytes ranged from 5 to 2000 ng/g and showed excellent linearity (r2 > 0.996). Relative surrogate analyte-to-analyte LC-MS/MS responses were determined to correct concentrations derived from the surrogate curves. The intra-run precision (CV%) for 13C5-cAMP and 13C,15N2-cGMP was below 6.6% and 7.4%, respectively, while the inter-run precision (CV%) was 8.5% and 5.8%, respectively. The intra-run accuracy (Dev%) for 13C5-cAMP and 13C,15N2-cGMP was <11.9% and 10.3%, respectively, and the inter-run Dev% was <6.8% and 5.5%, respectively. Qualification experiments demonstrated high analyte recoveries, minimal matrix effects and low autosampler carryover. Acceptable frozen storage, freeze/thaw, benchtop, processed sample and autosampler stability were shown in brain sample homogenates as well as post-processed samples. The method was found to be suitable for the analysis of rat brain tissue cAMP and cGMP levels in preclinical biomarker development studies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Quantitative charge-tags for sterol and oxysterol analysis.
Crick, Peter J; William Bentley, T; Abdel-Khalik, Jonas; Matthews, Ian; Clayton, Peter T; Morris, Andrew A; Bigger, Brian W; Zerbinati, Chiara; Tritapepe, Luigi; Iuliano, Luigi; Wang, Yuqin; Griffiths, William J
2015-02-01
Global sterol analysis is challenging owing to the extreme diversity of sterol natural products, the tendency of cholesterol to dominate in abundance over all other sterols, and the lack of a strong chromophore or readily ionized functional group in the sterol structure. We developed a method to overcome these challenges by using different isotope-labeled versions of the Girard P reagent (GP) as quantitative charge-tags for the LC-MS analysis of sterols including oxysterols. Sterols/oxysterols in plasma were extracted in ethanol containing deuterated internal standards, separated by C18 solid-phase extraction, and derivatized with GP, with or without prior oxidation of 3β-hydroxy to 3-oxo groups. By use of different isotope-labeled GPs, it was possible to analyze in a single LC-MS analysis both sterols/oxysterols that naturally possess a 3-oxo group and those with a 3β-hydroxy group. Intra- and interassay CVs were <15%, and recoveries for representative oxysterols and cholestenoic acids were 85%-108%. By adopting a multiplex approach to isotope labeling, we analyzed up to 4 different samples in a single run. Using plasma samples, we could demonstrate the diagnosis of inborn errors of metabolism and also the export of oxysterols from brain via the jugular vein. This method allows the profiling of the widest range of sterols/oxysterols in a single analytical run and can be used to identify inborn errors of cholesterol synthesis and metabolism. © 2014 American Association for Clinical Chemistry.
Zeng, Shanshan; Wang, Lu; Chen, Teng; Wang, Yuefei; Mo, Huanbiao; Qu, Haibin
2012-07-06
The paper presents a novel strategy to identify analytical markers of traditional Chinese medicine preparation (TCMP) rapidly via direct analysis in real time mass spectrometry (DART-MS). A commonly used TCMP, Danshen injection, was employed as a model. The optimal analysis conditions were achieved by measuring the contribution of various experimental parameters to the mass spectra. Salvianolic acids and saccharides were simultaneously determined within a single 1-min DART-MS run. Furthermore, spectra of Danshen injections supplied by five manufacturers were processed with principal component analysis (PCA). Obvious clustering was observed in the PCA score plot, and candidate markers were recognized from the contribution plots of PCA. The suitability of potential markers was then confirmed by comparison with the results of traditional analysis methods. Using this strategy, fructose, glucose, sucrose, protocatechuic aldehyde and salvianolic acid A were rapidly identified as the markers of Danshen injections. The combination of DART-MS with PCA provides a reliable approach to the identification of analytical markers for quality control of TCMP. Copyright © 2012 Elsevier B.V. All rights reserved.
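The PCA clustering step can be sketched with a NumPy-only implementation. Here synthetic intensity matrices stand in for the DART-MS spectra; the "manufacturer" groups, marker dimension, and values are all hypothetical:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows = samples (spectra), cols = m/z bins
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # score matrix (samples x components)

rng = np.random.default_rng(0)
# Synthetic "spectra": two manufacturers differing in one marker intensity.
maker_a = rng.normal(loc=[10, 5, 1, 0.5], scale=0.2, size=(5, 4))
maker_b = rng.normal(loc=[10, 5, 4, 0.5], scale=0.2, size=(5, 4))
scores = pca_scores(np.vstack([maker_a, maker_b]))

# Samples from the two makers separate along the first component.
print(scores[:5, 0].mean(), scores[5:, 0].mean())
```

The loading (row of `Vt`) with the largest weight on the first component plays the role of the paper's contribution plot: it points at the variable responsible for the clustering.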
Dasgupta, Soma; Banerjee, Kaushik; Dhumal, Kondiba N; Adsule, Pandurang G
2011-01-01
This paper describes single-laboratory validation of a multiresidue method for the determination of 135 pesticides, 12 dioxin-like polychlorinated biphenyls, 12 polyaromatic hydrocarbons, and bisphenol A in grapes and wine by GC/time-of-flight MS in a total run time of 48 min. The method is based on extraction with ethyl acetate in a sample-to-solvent ratio of 1:1, followed by selective dispersive SPE cleanup for grapes and wine. The GC/MS conditions were optimized for the chromatographic separation and to achieve highest S/N for all 160 target analytes, including the temperature-sensitive compounds, like captan and captafol, that are prone to degradation during analysis. An average recovery of 80-120% with RSD < 10% could be attained for all analytes except 17, for which the average recoveries were 70-80%. LOQ ranged within 10-50 ng/g, with < 25% expanded uncertainties, for 155 compounds in grapes and 151 in wine. In the incurred grape and wine samples, the residues of buprofezin, chlorpyriphos, metalaxyl, and myclobutanil were detected, with an RSD of < 5% (n = 6); the results were statistically similar to previously reported validated methods.
Van Os, E C; McKinney, J A; Zins, B J; Mays, D C; Schriver, Z H; Sandborn, W J; Lipsky, J J
1996-04-26
A specific, sensitive, single-step solid-phase extraction and reversed-phase high-performance liquid chromatographic method for the simultaneous determination of plasma 6-mercaptopurine and azathioprine concentrations is reported. Following solid-phase extraction, analytes are separated on a C18 column with mobile phase consisting of 0.8% acetonitrile in 1 mM triethylamine, pH 3.2, run on a gradient system. Quantitation limits were 5 ng/ml and 2 ng/ml for azathioprine and 6-mercaptopurine, respectively. Peak heights correlated linearly to known extracted standards for 6-mercaptopurine and azathioprine (r = 0.999) over a range of 2-200 ng/ml. No chromatographic interferences were detected.
Effects of Physical Training in Military Populations: A Meta-Analytic Summary
2010-10-25
variation on standard training. The experiment introduced ability group runs, stretching, movement drills, and calisthenics. The calisthenics...advanced training. The new program combined progressive calisthenics with movement exercises, interval running, and ability-group endurance runs. The new...al. (2004) Modified Calisthenics Program in Advanced Training [table excerpt: outcome effect sizes (g, SE, ES, z, sig.) for sit-ups, by gender]
Ho, Sirikit; Lukacs, Zoltan; Hoffmann, Georg F; Lindner, Martin; Wetter, Thomas
2007-07-01
In newborn screening with tandem mass spectrometry, multiple intermediary metabolites are quantified in a single analytical run for the diagnosis of fatty-acid oxidation disorders, organic acidurias, and aminoacidurias. Published diagnostic criteria for these disorders normally incorporate a primary metabolic marker combined with secondary markers, often analyte ratios, for which the markers have been chosen to reflect metabolic pathway deviations. We applied a procedure to extract new markers and diagnostic criteria for newborn screening to the data of newborns with confirmed medium-chain acyl-CoA dehydrogenase deficiency (MCADD) and a control group from the newborn screening program, Heidelberg, Germany. We validated the results with external data of the screening center in Hamburg, Germany. We extracted new markers by performing a systematic search for analyte combinations (features) with high discriminatory performance for MCADD. To select feature thresholds, we applied automated procedures to separate controls and cases on the basis of the feature values. Finally, we built classifiers from these new markers to serve as diagnostic criteria in screening for MCADD. On the basis of chi-square scores, we identified approximately 800 of >628,000 new analyte combinations with superior discriminatory performance compared with the best published combinations. Classifiers built with the new features achieved diagnostic sensitivities and specificities approaching 100%. Feature construction methods provide ways to disclose information hidden in the set of measured analytes. Other diagnostic tasks based on high-dimensional metabolic data might also profit from this approach.
Customisation of the exome data analysis pipeline using a combinatorial approach.
Pattnaik, Swetansu; Vaidyanathan, Srividya; Pooja, Durgad G; Deepak, Sa; Panda, Binay
2012-01-01
The advent of next generation sequencing (NGS) technologies has revolutionised the way biologists produce, analyse and interpret data. Although NGS platforms provide a cost-effective way to discover genome-wide variants from a single experiment, variants discovered by NGS need follow-up validation due to the high error rates associated with various sequencing chemistries. Recently, whole exome sequencing has been proposed as an affordable option compared to whole genome runs, but it still requires follow-up validation of all the novel exomic variants. Customarily, a consensus approach is used to overcome the systematic errors inherent to the sequencing technology, alignment and post-alignment variant detection algorithms. However, this approach warrants the use of multiple sequencing chemistries, alignment tools and variant callers, which may not be viable in terms of time and money for individual investigators with limited informatics know-how. Biologists often lack the requisite training to deal with the huge amount of data produced by NGS runs and face difficulty in choosing from the list of freely available analytical tools for NGS data analysis. Hence, there is a need to customise the NGS data analysis pipeline so that it preferentially retains true variants by minimising the incidence of false positives, and to make the choice of the right analytical tools easier. To this end, we have sampled different freely available tools used at the alignment and post-alignment stages, suggesting the most suitable combination determined by a simple framework of pre-existing metrics to create significant datasets.
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.
2017-12-01
The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetric domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variation in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run a numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
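As one concrete instance of such an analytical estimate, the classical solitary-wave runup law of Synolakis (1987) for a plane beach can be evaluated in closed form; the paper's formulae, for fault-generated initial waveforms, generalize this idea:

```python
import math

def synolakis_runup(H_over_d, cot_beta):
    """Maximum runup of a non-breaking solitary wave on a plane beach
    (Synolakis 1987): R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4)."""
    return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25

# Example: offshore wave height 2% of depth, beach slope 1:20.
print(f"R/d = {synolakis_runup(0.02, 20.0):.4f}")
```

Because the estimate is a closed-form expression, it evaluates in microseconds, which is what makes analytical runup attractive for rapid early warning compared with a full nested-grid numerical run.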
Analytical characterization of wine and its precursors by capillary electrophoresis.
Gomez, Federico J V; Monasterio, Romina P; Vargas, Verónica Carolina Soto; Silva, María F
2012-08-01
The accurate determination of marker chemical species in grape, musts, and wines presents a unique analytical challenge with high impact on diverse areas of knowledge such as health, plant physiology, and economy. Capillary electromigration techniques have emerged as a powerful tool, allowing the separation and identification of highly polar compounds that cannot be easily separated by traditional HPLC methods, providing complementary information and permitting the simultaneous analysis of analytes with different nature in a single run. The main advantage of CE over traditional methods for wine analysis is that in most cases samples require no treatment other than filtration. The purpose of this article is to present a revision on capillary electromigration methods applied to the analysis of wine and its precursors over the last decade. The current state of the art of the topic is evaluated, with special emphasis on the natural compounds that have allowed wine to be considered as a functional food. The most representative revised compounds are phenolic compounds, amino acids, proteins, elemental species, mycotoxins, and organic acids. Finally, a discussion on future trends of the role of capillary electrophoresis in the field of analytical characterization of wines for routine analysis, wine classification, as well as multidisciplinary aspects of the so-called "from soil to glass" chain is presented. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min, with detection limits of 0.50 g/dl and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low-resource settings.
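The quoted detection limits (S/N = 3) follow the common three-sigma convention, LOD = 3·SD(blank)/slope. A minimal sketch with invented calibration and blank data:

```python
import numpy as np

def lod_3sigma(blank_signals, conc, signal):
    """Detection limit by the three-sigma convention:
    3 * SD(blank replicates) / calibration slope."""
    slope = np.polyfit(conc, signal, 1)[0]
    return float(3.0 * np.std(blank_signals, ddof=1) / slope)

# Invented albumin calibration (g/dl vs. colorimetric intensity)
# and replicate blank readings; none of these are the paper's data.
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.array([0.02, 0.21, 0.40, 0.61, 0.80])
blanks = np.array([0.020, 0.055, 0.030, 0.045, 0.025])

lod = lod_3sigma(blanks, conc, signal)
print(f"LOD ≈ {lod:.2f} g/dl")
```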
RunJumpCode: An Educational Game for Educating Programming
ERIC Educational Resources Information Center
Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon
2017-01-01
Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…
Orsatti, Laura; Speziale, Roberto; Orsale, Maria Vittoria; Caretti, Fulvia; Veneziano, Maria; Zini, Matteo; Monteagudo, Edith; Lyons, Kathryn; Beconi, Maria; Chan, Kelvin; Herbst, Todd; Toledo-Sherman, Leticia; Munoz-Sanjuan, Ignacio; Bonelli, Fabio; Dominguez, Celia
2015-03-25
Neuroactive metabolites in the kynurenine pathway of tryptophan catabolism are associated with neurodegenerative disorders. Tryptophan is transported across the blood-brain barrier and converted via the kynurenine pathway to N-formyl-L-kynurenine, which is further degraded to L-kynurenine. This metabolite can then generate a group of metabolites called kynurenines, most of which have neuroactive properties. The association of tryptophan catabolic pathway alterations with various central nervous system (CNS) pathologies has raised interest in analytical methods to accurately quantify kynurenines in body fluids. We here describe a rapid and sensitive reverse-phase HPLC-MS/MS method to quantify L-kynurenine (KYN), kynurenic acid (KYNA), 3-hydroxy-L-kynurenine (3HK) and anthranilic acid (AA) in rat plasma. Our goal was to quantify these metabolites in a single run; given their different physico-chemical properties, major efforts were devoted to develop a chromatography suitable for all metabolites that involves plasma protein precipitation with acetonitrile followed by chromatographic separation by C18 RP chromatography, detected by electrospray mass spectrometry. Quantitation range was 0.098-100 ng/ml for 3HK, 9.8-20,000 ng/ml for KYN, 0.49-1000 ng/ml for KYNA and AA. The method was linear (r>0.9963) and validation parameters were within acceptance range (calibration standards and QC accuracy within ±30%). Copyright © 2015 Elsevier B.V. All rights reserved.
Passive Nosetip Technology (PANT) Program. Volume X. Summary of Experimental and Analytical Results
1975-01-01
Scallop Calorimeter Data with Sandgrain Type Calorimeter Data (3-22); 4-1 Geometry for 1.5-Inch Nose Radius Camphor Model (4-3); 4-2 Shape Profile History for... camphor model tested at Re. - 5.104/ft and t - 5 in the NOL hypersonic wind Tunnel Number S. (a) Run 007, Sting 2 - Graphite; (b) PANT Run 204 - Camphor...Laminar region. (a) Run 006, Sting 2 - Graphite; (b) PANT Run 216 - Camphor low temperature ablator. Figure 2-2. Comparison of Transitional Shapes. The
Reciprocity relationships in vector acoustics and their application to vector field calculations.
Deal, Thomas J; Smith, Kevin B
2017-08-01
The reciprocity equation commonly stated in underwater acoustics relates pressure fields and monopole sources. It is often used to predict the pressure measured by a hydrophone for multiple source locations by placing a source at the hydrophone location and calculating the field everywhere for that source. A similar equation that governs the orthogonal components of the particle velocity field is needed to enable this computational method to be used for acoustic vector sensors. This paper derives a general reciprocity equation that accounts for both monopole and dipole sources. This vector-scalar reciprocity equation can be used to calculate individual components of the received vector field by altering the source type used in the propagation calculation. This enables a propagation model to calculate the received vector field components for an arbitrary number of source locations with a single model run for each vector field component instead of requiring one model run for each source location. Application of the vector-scalar reciprocity principle is demonstrated with analytic solutions for a range-independent environment and with numerical solutions for a range-dependent environment using a parabolic equation model.
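The source-receiver symmetry underlying the method can be checked directly for the free-space one-dimensional Helmholtz Green's function (a minimal sketch; the paper treats the far harder range-dependent ocean case with a parabolic equation model):

```python
import cmath

def green_1d(x, x0, k):
    """Free-space 1-D Helmholtz Green's function:
    G(x, x0) = (i / (2k)) * exp(i * k * |x - x0|)."""
    return 1j / (2 * k) * cmath.exp(1j * k * abs(x - x0))

k = 2.0
src, rcv = 0.3, 5.7
# Reciprocity: exchanging source and receiver positions leaves the field unchanged.
assert green_1d(rcv, src, k) == green_1d(src, rcv, k)
print("reciprocal field:", green_1d(rcv, src, k))
```

The computational payoff described in the abstract is run-count arithmetic: with N source locations and three vector components, the reciprocal formulation needs 3 model runs instead of N.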
Damm, Irina; Enger, Eileen; Chrubasik-Hausmann, Sigrun; Schieber, Andreas; Zimmermann, Benno F
2016-08-01
Fast methods for the extraction and analysis of various secondary metabolites from cocoa products were developed and optimized regarding speed and separation efficiency. Extraction by pressurized liquid extraction is automated and the extracts are analyzed by rapid reversed-phase ultra high-performance liquid chromatography and normal-phase high-performance liquid chromatography methods. After extraction, no further sample treatment is required before chromatographic analysis. The analytes comprise monomeric and oligomeric flavanols, flavonols, methylxanthines, N-phenylpropenoyl amino acids, and phenolic acids. Polyphenols and N-phenylpropenoyl amino acids are separated in a single run of 33 min, procyanidins are analyzed by normal-phase high-performance liquid chromatography within 16 min, and methylxanthines require only 6 min total run time. A fourth method is suitable for phenolic acids, but only protocatechuic acid was found in relevant quantities. The optimized methods were validated and applied to 27 dark chocolates, one milk chocolate, two cocoa powders and two food supplements based on cocoa extract. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Single cell multiplexed assay for proteolytic activity using droplet microfluidics.
Ng, Ee Xien; Miller, Miles A; Jing, Tengyang; Chen, Chia-Hung
2016-07-15
Cellular enzymes interact in a post-translationally regulated fashion to govern individual cell behaviors, yet current platform technologies are limited in their ability to measure multiple enzyme activities simultaneously in single cells. Here, we developed multi-color Förster resonance energy transfer (FRET)-based enzymatic substrates and use them in a microfluidics platform to simultaneously measure multiple specific protease activities from water-in-oil droplets that contain single cells. By integrating the microfluidic platform with a computational analytical method, Proteolytic Activity Matrix Analysis (PrAMA), we are able to infer six different protease activity signals from individual cells in a high throughput manner (~100 cells/experimental run). We characterized protease activity profiles at single cell resolution for several cancer cell lines including breast cancer cell line MDA-MB-231, lung cancer cell line PC-9, and leukemia cell line K-562 using both live-cell and in-situ cell lysis assay formats, with special focus on metalloproteinases important in metastasis. The ability to measure multiple proteases secreted from or expressed in individual cells allows us to characterize cell heterogeneity and has potential applications including systems biology, pharmacology, cancer diagnosis and stem cell biology. Copyright © 2016 Elsevier B.V. All rights reserved.
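At its core, inferring several protease activities from multiplexed substrate cleavage rates is a linear unmixing problem. The sketch below uses an invented specificity matrix and plain least squares; the actual PrAMA method additionally enforces nonnegativity and handles measurement noise:

```python
import numpy as np

# Invented substrate-by-protease specificity matrix: cleavage rate of each
# FRET substrate per unit activity of each protease (not published PrAMA values).
S = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.5, 0.5, 0.2]])
true_activity = np.array([2.0, 0.5, 1.0])   # per-cell protease activities
observed_rates = S @ true_activity          # noiseless measured cleavage rates

# Infer activities by least squares; PrAMA additionally constrains them >= 0.
inferred, *_ = np.linalg.lstsq(S, observed_rates, rcond=None)
print(np.round(inferred, 3))
```

Four substrates resolving three proteases works here because the invented specificity matrix has full column rank; degenerate (too-similar) substrates would make the inference ill-posed, which is why substrate panel design matters.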
Flint, Robert B; Bahmany, Soma; van der Nagel, Bart C H; Koch, Birgit C P
2018-05-16
A simple and specific UPLC-MS/MS method was developed and validated for the simultaneous quantification of fentanyl, sufentanil, cefazolin, doxapram and its active metabolite keto-doxapram. The internal standard was fentanyl-d5 for all analytes. Chromatographic separation was achieved with a reversed-phase Acquity UPLC HSS T3 column with a run time of only 5.0 minutes per injected sample. Gradient elution was performed with a mobile phase consisting of ammonium acetate and formic acid in Milli-Q ultrapure water or in methanol, at a total flow rate of 0.4 mL/minute. A plasma volume of only 50 μL was required to achieve both adequate accuracy and precision. Calibration curves of all 5 analytes were linear. All analytes were stable for at least 48 hours in the autosampler. The method was validated according to US Food and Drug Administration guidelines. This method allows quantification of fentanyl, sufentanil, cefazolin, doxapram and keto-doxapram, which serves research purposes as well as therapeutic drug monitoring, where applicable. The strength of this method is the combination of a small sample volume, a short run time, a deuterated internal standard, an easy sample preparation method and the ability to simultaneously quantify all analytes in one run. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Jie; Thiel, Walter
2018-04-01
We present an efficient implementation of configuration interaction with single excitations (CIS) for semiempirical orthogonalization-corrected OMx methods and standard modified neglect of diatomic overlap (MNDO)-type methods for the computation of vertical excitation energies as well as analytical gradients and nonadiabatic couplings. This CIS implementation is combined with Tully's fewest switches algorithm to enable surface hopping simulations of excited-state nonadiabatic dynamics. We introduce an accurate and efficient expression for the semiempirical evaluation of nonadiabatic couplings, which offers a significant speedup for medium-size molecules and is suitable for use in long nonadiabatic dynamics runs. As a pilot application, the semiempirical CIS implementation is employed to investigate ultrafast energy transfer processes in a phenylene ethynylene dendrimer model.
Gao, Hui; Yang, Minli; Wang, Minglin; Zhao, Yansheng; Cao, Ya; Chu, Xiaogang
2013-01-01
A method combining SPE with HPLC/electrospray ionization-MS/MS was developed for simultaneous determination of 30 synthetic food additives, including synthetic colorants, preservatives, and sweeteners in soft drinks. All targets were efficiently separated using the optimized chromatographic and MS conditions and parameters in a single run within 18 min. The LOD of the analytes ranged from 0.01 to 20 microg/kg, and the method was validated with recoveries in the 80.8 to 106.4% range. This multisynthetic additive method was found to be accurate and reliable and will be useful to ensure the safety of food products, such as the labeling and proper use of synthetic food additives in soft drinks.
Principles and Applications of Liquid Chromatography-Mass Spectrometry in Clinical Biochemistry
Pitt, James J
2009-01-01
Liquid chromatography-mass spectrometry (LC-MS) is now a routine technique with the development of electrospray ionisation (ESI) providing a simple and robust interface. It can be applied to a wide range of biological molecules and the use of tandem MS and stable isotope internal standards allows highly sensitive and accurate assays to be developed although some method optimisation is required to minimise ion suppression effects. Fast scanning speeds allow a high degree of multiplexing and many compounds can be measured in a single analytical run. With the development of more affordable and reliable instruments, LC-MS is starting to play an important role in several areas of clinical biochemistry and compete with conventional liquid chromatography and other techniques such as immunoassay. PMID:19224008
You can run, you can hide: The epidemiology and statistical mechanics of zombies
NASA Astrophysics Data System (ADS)
Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.
2015-11-01
We use a popular fictional disease, zombies, in order to introduce techniques used in modern epidemiology modeling, and ideas and techniques used in the numerical study of critical phenomena. We consider variants of zombie models, from fully connected continuous time dynamics to a full scale exact stochastic dynamic simulation of a zombie outbreak on the continental United States. Along the way, we offer a closed form analytical expression for the fully connected differential equation, and demonstrate that the single person per site two dimensional square lattice version of zombies lies in the percolation universality class. We end with a quantitative study of the full scale US outbreak, including the average susceptibility of different geographical regions.
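The fully connected continuous-time dynamics described above can be sketched as a small ODE integration. This is a minimal sketch of a standard SZR-type mean-field model with illustrative bite and kill rates; the parameter values and the paper's closed-form expression are not reproduced here.

```python
from scipy.integrate import solve_ivp

def szr_rhs(t, y, beta, kappa):
    """Mean-field SZR dynamics: susceptibles S are bitten at rate
    beta*S*Z and become zombies Z; zombies are destroyed (removed, R)
    at rate kappa*S*Z."""
    S, Z, R = y
    bites = beta * S * Z
    kills = kappa * S * Z
    return [-bites, bites - kills, kills]

# Illustrative parameters and initial condition (not from the paper):
# one zombie in a population of 1000, bites slightly outpacing kills.
beta, kappa = 1.1e-3, 0.8e-3
y0 = [999.0, 1.0, 0.0]
sol = solve_ivp(szr_rhs, (0.0, 50.0), y0, args=(beta, kappa))

S, Z, R = sol.y[:, -1]
print(f"t=50: S={S:.1f}, Z={Z:.1f}, R={R:.1f}")
```

Because the right-hand sides sum to zero, total population is conserved, which makes a convenient sanity check on the integration.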
A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.
2015-12-01
Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: Runs the computation locally on the WPS server or remotely using tools such as Celery or SLURM. Compute Engine Manager: Runs the computation serially or distributed over nodes using a parallelization framework such as Celery or Spark. Decomposition Manager: Manages strategies for distributing the data over nodes. Data Manager: Handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: A kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in Python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits.
It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and explore variability across them.
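As a rough illustration of the "direct web service calls" access path, a WPS Execute request can be assembled as a key-value-pair GET URL. The endpoint, process identifier, and input names below are hypothetical placeholders for illustration, not the actual CDAS API.

```python
from urllib.parse import urlencode

def build_wps_execute_url(base_url, process_id, inputs):
    """Build a WPS 1.0.0 Execute request as a key-value-pair GET URL.
    The process identifier and data inputs are caller-supplied; here
    they are illustrative, not real CDAS operation names."""
    data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process_id,
        "datainputs": data_inputs,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical request: average a temperature variable over a domain.
url = build_wps_execute_url(
    "https://example.nasa.gov/cdas/wps",   # placeholder endpoint
    "CDAT.average",                        # hypothetical kernel name
    {"variable": "tas", "domain": "d0", "axes": "xy"},
)
print(url)
```

A client package would wrap exactly this kind of request construction, whether called from a Python script or a JavaScript web application.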
Svanström, Camilla; Hansson, Gunnar P; Svensson, Leif D; Sennbro, Carl Johan
2012-01-25
The metabolic conversion of midazolam (MDZ) to its main metabolite 1'-hydroxy-midazolam (1-OH-MDZ) can be used as a probe drug for cytochrome P450 3A (CYP3A) activity. A sensitive method for the simultaneous determination of MDZ and its metabolite 1-OH-MDZ in human plasma using supported liquid extraction (SLE) in combination with liquid chromatography-tandem mass spectrometry (LC-MS/MS) detection was developed and validated. Plasma samples (100 μL) were diluted with 0.5 M NH(3) (aq) containing deuterated internal standards. The samples were extracted with ethyl acetate on a 96-well SLE plate. Separation was performed on a Symmetry Shield RP18 column using an acidic gradient running from 2% to 95% methanol in 3 min. Detection was performed using a triple quadrupole mass spectrometer running in positive electrospray selected reaction monitoring (SRM) mode. The validated dynamic range was 0.2-100 nmol/L for both analytes. In the concentration range 0.6-75 nmol/L the extraction recoveries were in the ranges 91.2-98.6% and 94.5-98.3% for MDZ and 1-OH-MDZ, respectively. Matrix effects were more pronounced for MDZ than for 1-OH-MDZ, but the response was still 75.4% or higher compared with a reference. The overall repeatability was within 2.2-7.6% for both analytes, the overall reproducibility was within 3.1-10.2% for both analytes and the overall accuracy bias was within -1.1 to 7.5% for both analytes. The method was successfully applied to determine the plasma concentrations of MDZ and 1-OH-MDZ in 14 healthy volunteers up to 24 h after administration of a single oral 2 mg dose of MDZ. The SLE technology was found to be convenient and suitable for sample preparation, and the developed method was found to be rapid, selective and reproducible for the simultaneous determination of MDZ and 1-OH-MDZ in human plasma. Copyright © 2011 Elsevier B.V. All rights reserved.
Laurito, Tiago L; Santagada, Vincenzo; Caliendo, Giuseppe; Oliveira, Celso H; Barrientos-Astigarraga, Rafael E; De Nucci, Gilberto
2002-04-01
A rapid, sensitive and specific method to quantify nevirapine in human plasma using dibenzepine as the internal standard (IS) was developed and validated. The method employed a liquid-liquid extraction. The analyte and the IS were chromatographed on a C(18) analytical column, (150 x 4.6 mm i.d. 4 microm) and analyzed by tandem mass spectrometry in the multiple reaction monitoring mode. The method had a chromatographic run time of 5.0 min and a linear calibration curve over the range 10-5000 ng ml(-1) (r(2) > 0.9970). The between-run precision, based on the relative standard deviation for replicate quality controls was 1.3% (30 ng ml(-1)), 2.8% (300 ng ml(-1)) and 3.6% (3000 ng ml(-1)). The between-run accuracy was 4.0, 7.0 and 6.2% for the above-mentioned concentrations, respectively. This method was employed in a bioequivalence study of two nevirapine tablet formulations (Nevirapina from Far-Manguinhos, Brazil, as a test formulation, and Viramune from Boehringer Ingelheim do Brasil Química e Farmacêutica, as a reference formulation) in 25 healthy volunteers of both sexes who received a single 200 mg dose of each formulation. The study was conducted using an open, randomized, two-period crossover design with a 3 week washout interval. The 90% confidence interval (CI) of the individual ratio geometric mean for Nevirapina/Viramune was 96.4-104.5% for AUC((0-last)), 91.4-105.1% for AUC((0-infinity)) and 95.3-111.6% for C(max) (AUC = area under the curve; C(max) = peak plasma concentration). Since both 90% CI for AUC((0-last)) and AUC((0-infinity)) and C(max) were included in the 80-125% interval proposed by the US Food and Drug Administration, Nevirapina was considered bioequivalent to Viramune according to both the rate and extent of absorption. Copyright 2002 John Wiley & Sons, Ltd.
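Between-run precision and accuracy figures like those quoted above are conventionally computed as the relative standard deviation and mean bias of replicate QC measurements. A minimal sketch with synthetic replicate values (illustrative numbers, not data from the study):

```python
import statistics

def precision_and_bias(measured, nominal):
    """Between-run precision as %RSD (sample standard deviation over
    the mean) and accuracy as %bias (mean deviation from nominal),
    as bioanalytical validation reports usually define them."""
    mean = statistics.mean(measured)
    rsd = 100.0 * statistics.stdev(measured) / mean   # precision, %
    bias = 100.0 * (mean - nominal) / nominal          # accuracy, %
    return rsd, bias

# Synthetic QC replicates at a nominal 300 ng/mL (made-up values)
qc_300 = [305.1, 312.4, 298.7, 321.0, 309.8, 315.2]
rsd, bias = precision_and_bias(qc_300, 300.0)
print(f"%RSD = {rsd:.1f}, %bias = {bias:+.1f}")
```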
Pleil, Joachim D; Angrish, Michelle M; Madden, Michael C
2015-12-11
Immunochemistry is an important clinical tool for indicating biological pathways leading towards disease. Standard enzyme-linked immunosorbent assays (ELISA) are labor intensive and lack sensitivity at low-level concentrations. Here we report on emerging technology implementing fully-automated ELISA capable of molecular level detection and describe application to exhaled breath condensate (EBC) samples. The Quanterix SIMOA HD-1 analyzer was evaluated for analytical performance for inflammatory cytokines (IL-6, TNF-α, IL-1β and IL-8). The system was challenged with human EBC representing the most dilute and analytically difficult of the biological media. Calibrations from synthetic samples and spiked EBC showed excellent linearity at trace levels (r(2) > 0.99). Sensitivities varied by analyte, but were robust from ~0.006 (IL-6) to ~0.01 (TNF-α) pg ml(-1). All analytes demonstrated response suppression when diluted with deionized water and so assay buffer diluent was found to be a better choice. Analytical runs required ~45 min setup time for loading samples, reagents, calibrants, etc., after which the instrument performs without further intervention for up to 288 separate samples. Currently, available kits are limited to single-plex analyses and so sample volumes require adjustments. Sample dilutions should be made with assay diluent to avoid response suppression. Automation performs seamlessly and data are automatically analyzed and reported in spreadsheet format. The internal 5-parameter logistic (pl) calibration model should be supplemented with a linear regression spline at the very lowest analyte levels, (<1.3 pg ml(-1)). The implementation of the automated Quanterix platform was successfully demonstrated using EBC, which poses the greatest challenge to ELISA due to limited sample volumes and low protein levels.
Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A
2000-06-01
After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.
Zarzycki, Paweł K; Slączka, Magdalena M; Zarzycka, Magdalena B; Bartoszuk, Małgorzata A; Włodarczyk, Elżbieta; Baran, Michał J
2011-11-01
This paper is a continuation of our previous research focusing on the development of micro-TLC methodology under temperature-controlled conditions. The main goal of the present paper is to demonstrate the separation and detection capability of the micro-TLC technique using simple analytical protocols without multi-step sample pre-purification. One of the advantages of planar chromatography over its column counterpart is that each TLC run can be performed on a previously unused stationary phase. Therefore, it is possible to fractionate or separate complex samples characterized by heavy biological matrix loading. In the present studies, components of interest, mainly steroids, were isolated from biological samples such as fish bile using single pre-treatment steps involving direct organic liquid extraction and/or deproteinization by a freeze-drying method. Low-molecular-mass compounds with polarity ranging from estetrol to progesterone derived from environmental samples (lake water, untreated and treated sewage waters) were concentrated using optimized solid-phase extraction (SPE). Specific band patterns for samples derived from surface water of Middle Pomerania in the northern part of Poland can easily be observed on the resulting micro-TLC chromatograms. This approach can be useful as a simple and inexpensive complementary method for fast control and screening of treated sewage water discharged by municipal wastewater treatment plants. Moreover, our experimental results show the potential of micro-TLC as an efficient tool for retention measurements of a wide range of steroids under reversed-phase (RP) chromatographic conditions. These data can be used for further optimization of SPE or HPLC systems working under RP conditions. Furthermore, we also demonstrated that the micro-TLC-based analytical approach can be applied as an effective method for the search for internal standard (IS) substances.
Generally, the described methodology can be applied for fast fractionation or screening of a whole range of target substances as well as for chemo-taxonomic studies and fingerprinting of complex mixtures present in biological or environmental samples. Due to the low consumption of eluent (usually 0.3-1 mL/run), mainly composed of water-alcohol binary mixtures, this method can be considered an environmentally friendly, green-chemistry-focused analytical tool, supplementary to analytical protocols involving column chromatography or planar micro-fluidic devices. Copyright © 2011 Elsevier Ltd. All rights reserved.
Crew appliance computer program manual, volume 1
NASA Technical Reports Server (NTRS)
Russell, D. J.
1975-01-01
Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.
Konda, Ravi Kumar; Chandu, Babu Rao; Challa, B.R.; Kothapalli, Chandrasekhar B.
2012-01-01
The most suitable bio-analytical method based on liquid–liquid extraction has been developed and validated for quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. Zorbax Eclipse Plus C18 (2.1 mm×50 mm, 3.5 μm) column provided chromatographic separation of analyte followed by detection with mass spectrometry. The method involved simple isocratic chromatographic condition and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with the linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2% respectively for Rasagiline. The overall recovery for Rasagiline and Rasagiline-13C3 mesylate analog was 96.9% and 96.7% respectively. This validated method was successfully applied to the bioequivalence and pharmacokinetic study of human volunteers under fasting condition. PMID:29403764
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived for the linear run-up of any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is obtained directly in terms of the initial wave. This analytical result supports not only the maximum run-up invariance between linear and non-linear theories, but also yields the time evolution of the shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than from a theoretical model. It is also shown that the real case studied is consistent with the field observations.
Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding
NASA Technical Reports Server (NTRS)
Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.
2014-01-01
The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.
Influence of Number of Contact Efforts on Running Performance During Game-Based Activities.
Johnston, Rich D; Gabbett, Tim J; Jenkins, David G
2015-09-01
To determine the influence the number of contact efforts during a single bout has on running intensity during game-based activities and assess relationships between physical qualities and distances covered in each game. Eighteen semiprofessional rugby league players (age 23.6 ± 2.8 y) competed in 3 off-side small-sided games (2 × 10-min halves) with a contact bout performed every 2 min. The rules of each game were identical except for the number of contact efforts performed in each bout. Players performed 1, 2, or 3 × 5-s wrestles in the single-, double-, and triple-contact game, respectively. The movement demands (including distance covered and intensity of exercise) in each game were monitored using global positioning system units. Bench-press and back-squat 1-repetition maximum and the 30-15 Intermittent Fitness Test (30-15IFT) assessed muscle strength and high-intensity-running ability, respectively. There was little change in distance covered during the single-contact game (ES = -0.16 to -0.61), whereas there were larger reductions in the double- (ES = -0.52 to -0.81) and triple-contact (ES = -0.50 to -1.15) games. Significant relationships (P < .05) were observed between 30-15IFT and high-speed running during the single- (r = .72) and double- (r = .75), but not triple-contact (r = .20) game. There is little change in running intensity when only single contacts are performed each bout; however, when multiple contacts are performed, greater reductions in running intensity result. In addition, high-intensity-running ability is only associated with running performance when contact demands are low.
Vass, Andrea; Robles-Molina, José; Pérez-Ortega, Patricia; Gilbert-López, Bienvenida; Dernovics, Mihaly; Molina-Díaz, Antonio; García-Reyes, Juan F
2016-07-01
The aim of the study was to evaluate the performance of different chromatographic approaches for the liquid chromatography/mass spectrometry (LC-MS(/MS)) determination of 24 highly polar pesticides. The studied compounds, which are in most cases unsuitable for conventional LC-MS(/MS) multiresidue methods were tested with nine different chromatographic conditions, including two different hydrophilic interaction liquid chromatography (HILIC) columns, two zwitterionic-type mixed-mode columns, three normal-phase columns operated in HILIC-mode (bare silica and two silica-based chemically bonded columns (cyano and amino)), and two standard reversed-phase C18 columns. Different sets of chromatographic parameters in positive (for 17 analytes) and negative ionization modes (for nine analytes) were examined. In order to compare the different approaches, a semi-quantitative classification was proposed, calculated as the percentage of an empirical performance value, which consisted of three main features: (i) capacity factor (k) to characterize analyte separation from the void, (ii) relative response factor, and (iii) peak shape based on analytes' peak width. While no single method was able to provide appropriate detection of all the 24 studied species in a single run, the best suited approach for the compounds ionized in positive mode was based on a UHPLC HILIC column with 1.8 μm particle size, providing appropriate results for 22 out of the 24 species tested. In contrast, the detection of glyphosate and aminomethylphosphonic acid could only be achieved with a zwitterionic-type mixed-mode column, which proved to be suitable only for the pesticides detected in negative ion mode. Finally, the selected approach (UHPLC HILIC) was found to be useful for the determination of multiple pesticides in oranges using HILIC-ESI-MS/MS, with limits of quantitation in the low microgram per kilogram in most cases. Graphical Abstract HILIC improves separation of multiclass polar pesticides.
Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Graß, Tobias
2012-03-01
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap of arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations.
Program summary: Program title: Strongdeco. Catalogue identifier: AELA_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5475. No. of bytes in distributed program, including test data, etc.: 31 071. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any computer on which Mathematica can be installed. Operating system: Linux, Windows, Mac. Classification: 2.9. Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.
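The core idea, expanding a first-quantized polynomial wave function into monomials whose exponents give the single-particle angular momenta, can be sketched in pure Python. The published program itself is in Mathematica; this is a minimal illustrative re-implementation of the smallest case, not the distributed code.

```python
from itertools import product

def poly_mul(p, q):
    """Multiply two polynomials given as dicts mapping exponent tuples
    (one entry per particle) to integer coefficients."""
    out = {}
    for (ea, ca), (eb, cb) in product(p.items(), q.items()):
        e = tuple(a + b for a, b in zip(ea, eb))
        out[e] = out.get(e, 0) + ca * cb
    return {e: c for e, c in out.items() if c != 0}

def laughlin_expansion(n, m):
    """Expand the (unnormalized) Laughlin polynomial
    prod_{i<j} (z_i - z_j)**m into monomials; each exponent tuple lists
    the single-particle angular momenta (nodeless lowest-level orbitals)."""
    poly = {(0,) * n: 1}
    for i in range(n):
        for j in range(i + 1, n):
            zi = tuple(int(k == i) for k in range(n))
            zj = tuple(int(k == j) for k in range(n))
            for _ in range(m):
                poly = poly_mul(poly, {zi: 1, zj: -1})
    return poly

# Two bosons at m = 2, i.e. (z1 - z2)**2; group monomials that differ
# only by particle ordering into unnormalized Fock-state amplitudes.
fock = {}
for exps, coeff in laughlin_expansion(2, 2).items():
    key = tuple(sorted(exps))
    fock[key] = fock.get(key, 0) + coeff
print(fock)
```

Every Fock component carries the same total angular momentum (here 2), which is the property that lets the angular momentum distribution be read off directly from the decomposition.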
Scalability Analysis and Use of Compression at the Goddard DAAC and End-to-End MODIS Transfers
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.
1998-01-01
The goal of this task is to analyze the performance of single and multiple FTP transfers between SCFs and the Goddard DAAC. We developed an analytic model to compute the performance of FTP sessions as a function of various key parameters, implemented the model as a program called FTP Analyzer, and carried out validations with real data obtained by running single and multiple FTP transfers between GSFC and the Miami SCF. The input parameters to the model include the mix of FTP sessions (scenario) and, for each FTP session, the file size. The network parameters include the round trip time, packet loss rate, the limiting bandwidth of the network connecting the SCF to a DAAC, TCP's basic timeout, TCP's Maximum Segment Size, and TCP's Maximum Receiver's Window Size. The modeling approach consisted of modeling TCP's overall throughput, computing TCP's delay per FTP transfer, and then solving a queuing network model that includes the FTP clients and servers.
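The throughput-modeling step can be illustrated with the well-known Mathis et al. steady-state TCP approximation, capped by the receiver-window and bottleneck-bandwidth limits. This is a generic sketch with illustrative parameters, not the FTP Analyzer model itself.

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate, rwnd_bytes, link_bps):
    """Steady-state TCP throughput estimate in bits/s: the Mathis
    approximation MSS/(RTT*sqrt(2p/3)), capped by the receiver-window
    limit rwnd/RTT and by the bottleneck link bandwidth."""
    window_limited = 8.0 * rwnd_bytes / rtt_s
    if loss_rate <= 0.0:
        return min(window_limited, link_bps)
    loss_limited = 8.0 * mss_bytes / (rtt_s * math.sqrt(2.0 * loss_rate / 3.0))
    return min(loss_limited, window_limited, link_bps)

# Illustrative path values (not measured GSFC/Miami data): 1460-byte MSS,
# 40 ms RTT, 1% loss, 64 KiB receiver window, 45 Mbit/s bottleneck.
rate = tcp_throughput_bps(1460, 0.040, 0.01, 65536, 45e6)
print(f"{rate / 1e6:.2f} Mbit/s")
```

With these numbers the loss term dominates; with zero loss the estimate collapses to the receiver-window limit, which is exactly the kind of regime distinction an analytic FTP model must capture.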
An evaluation of four single element airfoil analytic methods
NASA Technical Reports Server (NTRS)
Freuler, R. J.; Gregorek, G. M.
1979-01-01
A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.
Phyx: phylogenetic tools for unix.
Brown, Joseph W; Walker, Joseph F; Smith, Stephen A
2017-06-15
The ease with which phylogenomic data can be generated has drastically escalated the computational burden for even routine phylogenetic investigations. To address this, we present phyx: a collection of programs written in C++ to explore, manipulate, analyze and simulate phylogenetic objects (alignments, trees and MCMC logs). Modelled after Unix/GNU/Linux command line tools, individual programs perform a single task and operate on standard I/O streams that can be piped to quickly and easily form complex analytical pipelines. Because of the stream-centric paradigm, memory requirements are minimized (often only a single tree or sequence in memory at any instance), and hence phyx is capable of efficiently processing very large datasets. phyx runs on POSIX-compliant operating systems. Source code, installation instructions, documentation and example files are freely available under the GNU General Public License at https://github.com/FePhyFoFum/phyx. Contact: eebsmith@umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
Device with Functions of Linear Motor and Non-contact Power Collector for Wireless Drive
NASA Astrophysics Data System (ADS)
Fujii, Nobuo; Mizuma, Tsuyoshi
The authors propose a new apparatus with the functions of propulsion and non-contact power collection for a future vehicle that can run like an electric vehicle, supplied from the onboard battery source over most of the route except near stations. The batteries or power capacitors are charged without contact from a winding connected to the commercial power supply on the ground at stations and similar locations. The apparatus has both linear motor and transformer functions, and the basic configuration is a wound-secondary type linear induction motor (LIM). In the paper, the wound-type LIM with a concentrated single-phase winding for the primary member on the ground is treated from the viewpoint of a low-cost arrangement. The secondary winding is switched to a single-phase connection for zero thrust in transformer operation, and to a two-phase connection for linear motor operation. The change of connection is performed by a special on-board converter for charging and linear drive. The characteristics are studied analytically.
Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study
NASA Astrophysics Data System (ADS)
Gaiginschi, L.; Agape, I.; Talif, S.
2017-10-01
Accident reconstruction is important in the general context of increasing road traffic safety. Among traffic accidents, those caused by tire explosions are critical in the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequence of the road event backwards. The authors also aimed to validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.
Guillard, Olivier; Fauconneau, Bernard; Favreau, Frédéric; Marrauld, Annie; Pineau, Alain
2012-04-01
A local case report of hyperaluminemia (aluminum concentration: 3.88 µmol/L) in a woman using an aluminum-containing antiperspirant for 4 years raises the question of possible transdermal uptake of aluminum salts as a future public health problem. Prior to studying the transdermal uptake of three commercialized cosmetic formulas, an analytical assay of aluminum (Al) in chlorohydrate form (ACH) by Zeeman Electrothermal Atomic Absorption Spectrophotometry (ZEAAS) in a clean room was optimized and validated. This analysis was performed with different media on human skin using a Franz(™) diffusion cell. The detection and quantification limits were set at ≤ 3 µg/L. Precision analysis, both within-run (n = 12) and between-run (n = 15-68 days), yielded CVs ≤ 6%. The high analytical sensitivity (2-3 µg/L) and low variability should allow an in vitro study of the transdermal uptake of ACH.
Wang, Yuanyuan; Li, Xiaowei; Zhang, Zhiwen; Ding, Shuangyang; Jiang, Haiyang; Li, Jiancheng; Shen, Jianzhong; Xia, Xi
2016-02-01
A sensitive, confirmatory ultra-high performance liquid chromatography-tandem mass spectrometric method was developed and validated to detect 23 veterinary drugs and metabolites (nitroimidazoles, benzimidazoles, and chloramphenicol components) in bovine milk. Compounds of interest were sequentially extracted from milk with acetonitrile and basified acetonitrile using sodium chloride to induce liquid-liquid partition. The extract was purified on a mixed mode solid-phase extraction cartridge. Using rapid polarity switching in electrospray ionization, a single injection was capable of detecting both positively and negatively charged analytes in a 9 min chromatography run time. Recoveries based on matrix-matched calibrations and isotope labeled internal standards for milk ranged from 51.7% to 101.8%. The detection limits and quantitation limits of the analytical method were found to be within the range of 2-20 ng/kg and 5-50 ng/kg, respectively. The recommended method is simple, specific, and reliable for the routine monitoring of nitroimidazoles, benzimidazoles, and chloramphenicol components in bovine milk samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gayda, J.; Srolovitz, D. J.
1987-01-01
A specialized microstructural lattice model, termed MCFET (combined Monte Carlo Finite Element Technique), was developed to simulate microstructural evolution in material systems where modulated phases occur and the directionality of the modulation is influenced by internal and external stresses. In this approach, the microstructure is discretized onto a fine lattice. Each element in the lattice is labelled in accordance with its microstructural identity. Diffusion of material at elevated temperatures is simulated by allowing exchanges of neighboring elements if the exchange lowers the total energy of the system. A Monte Carlo approach is used to select the exchange site, while the change in energy associated with stress fields is computed using a finite element technique. The MCFET analysis was validated by comparing this approach with a closed-form analytical method for stress-assisted shape changes of a single particle in an infinite matrix. Sample MCFET analyses for multiparticle problems were also run; in general, the resulting microstructural changes associated with the application of an external stress are similar to those observed in Ni-Al-Cr alloys at elevated temperature.
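The exchange-and-accept step described above is essentially a Metropolis Monte Carlo move. A minimal sketch follows, with a toy one-dimensional interface energy standing in for the paper's finite-element stress energy:

```python
import math
import random

# Toy Metropolis exchange move in the spirit of MCFET. The energy here is a
# simple count of unlike nearest-neighbor pairs on a 1D ring -- a stand-in
# for the finite-element stress energy used in the paper.

def interface_energy(lattice):
    """Number of unlike nearest-neighbor pairs on a ring."""
    n = len(lattice)
    return sum(lattice[i] != lattice[(i + 1) % n] for i in range(n))

def metropolis_exchange(lattice, i, j, kT=1.0, rng=random):
    """Swap elements i and j; keep the swap if it lowers the energy,
    otherwise accept it only with Boltzmann probability exp(-dE/kT)."""
    e0 = interface_energy(lattice)
    lattice[i], lattice[j] = lattice[j], lattice[i]
    dE = interface_energy(lattice) - e0
    if dE > 0 and rng.random() >= math.exp(-dE / kT):
        lattice[i], lattice[j] = lattice[j], lattice[i]  # reject: swap back
    return lattice
```

An energy-lowering exchange, such as swapping the middle pair of [0, 1, 0, 1] to give [0, 0, 1, 1], is always accepted, mirroring the "exchange if it lowers the total energy" rule in the abstract.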
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to the rapid development of advanced analytical techniques delivering much information in a single measurement run. This especially concerns spectra, which are frequently the subject of comparative analysis in, e.g., forensic sciences. In the presented study, microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry, and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio (LR) approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
What Do We Teach and How Do We Teach It?
ERIC Educational Resources Information Center
Shapiro, Marilyn
Considering that some feminist critics have recently been approaching composition theory from a preconceived feminist perspective, the issue of maintaining an analytical bias while conducting research is once more emerging. By imposing an analytical model on a body of data, scholars run the risk of ignoring conclusions or focusing on those which…
Fassauer, Georg M; Hofstetter, Robert; Hasan, Mahmoud; Oswald, Stefan; Modeß, Christina; Siegmund, Werner; Link, Andreas
2017-11-30
Increasing evidence accumulates that metabolites of the dissociative anesthetic ketamine contribute considerably to the biological effects of this drug and could be developed as next generation antidepressants, especially for acute treatment of patients with therapy-refractory major depression. Analytical methods for the simultaneous determination of the plethora of hydroxylated, dehydrogenated and/or demethylated compounds formed after administration of ketamine hydrochloride are a prerequisite for future clinical investigations and a deeper understanding of the individual role of the isomers of these metabolites. In this study, we present development and validation of a method based on supercritical-fluid chromatography (SFC) coupled to single quadrupole MS detection that allows the separation of ketamine as well as all of its relevant metabolites detected in urine of healthy volunteers. Inherently to SFC methods, the run times of the novel protocol are four times shorter than in a comparable HPLC method, the use of organic solvents is reduced and we were able to demonstrate and validate the successful enantioselective separation and quantification of R- and S-ketamine, R- and S-norketamine, R- and S-dehydronorketamine and (2R,6R)- and (2S,6S)-hydroxynorketamine isomers differing in either constitution, stereochemistry, or both, in one run. The developed method may be useful in investigating the antidepressant efficacy of ketamine in clinical trials. Copyright © 2017 Elsevier B.V. All rights reserved.
Fasoula, S; Zisi, Ch; Sampsonidis, I; Virgiliou, Ch; Theodoridis, G; Gika, H; Nikitas, P; Pappa-Louisi, A
2015-03-27
In the present study, a series of 45 metabolite standards belonging to four chemically similar metabolite classes (sugars, amino acids, nucleosides and nucleobases, and amines) was subjected to LC analysis on three HILIC columns under 21 different gradient conditions, with the aim of exploring whether the retention properties of these analytes are determined by the chemical group to which they belong. Two multivariate techniques, principal component analysis (PCA) and discriminant analysis (DA), were used for statistical evaluation of the chromatographic data and extraction of similarities between chemically related compounds. The total variance explained by the first two principal components of PCA was found to be about 98%, whereas both statistical analyses indicated that all analytes are successfully grouped into four clusters of chemical structure based on the retention obtained in four, or at least three, chromatographic runs, which, however, should be performed on two different HILIC columns. Moreover, leave-one-out cross-validation of the above retention data set showed that the chemical group to which an analyte belongs can be 95.6% correctly predicted when the analyte is subjected to LC analysis under the same four or three experimental conditions under which the whole set of analytes was run beforehand. That, in turn, may assist with disambiguation of analyte identification in complex biological extracts. Copyright © 2015 Elsevier B.V. All rights reserved.
Measurement and Modeling of Fugitive Dust from Off Road DoD Activities
2017-12-08
each soil and vehicle type (see Table 2). Note, no tracked vehicles were run at YTC. CT is the curve track sampling location, CR is the curve ridge...Soil is SL = sandy loam...Figure 35. Single-event Wind Erosion Evaluation Program (SWEEP) Run example results...Figure 36. Single-event Wind Erosion Evaluation Program (SWEEP) Threshold Run example results screen
Zhao, Jianxing
2015-03-01
A high-performance liquid chromatography with ultraviolet detection method has been developed for the simultaneous determination of a set of reliable markers of renal function, including creatinine, uric acid, kynurenine and tryptophan in plasma. Separation was achieved by an Agilent HC-C18 (2) analytical column. Gradient elution and programmed wavelength detection allowed the method to be used to analyze these compounds by just one injection. The total run time was 25 min with all peaks of interest being eluted within 13 min. Good linear responses were found with correlation coefficient >0.999 for all analytes within the concentration range of the relevant levels. The recovery was: creatinine, 101 ± 1%; uric acid, 94.9 ± 3.7%; kynurenine, 100 ± 2%; and tryptophan, 92.6 ± 2.9%. Coefficients of variation within-run and between-run of all analytes were ≤2.4%. The limit of detection of the method was: creatinine, 0.1 µmol/L; uric acid, 0.05 µmol/L; kynurenine, 0.02 µmol/L; and tryptophan, 1 µmol/L. The developed method could be employed as a useful tool for the detection of chronic kidney disease, even at an early stage. Copyright © 2014 John Wiley & Sons, Ltd.
Characteristics of process oils from HTI coal/plastics co-liquefaction runs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robbins, G.A.; Brandes, S.D.; Winschel, R.A.
1995-12-31
The objective of this project is to provide timely analytical support to DOE's liquefaction development effort. Specific objectives of the work reported here are presented. During a few operating periods of Run POC-2, HTI co-liquefied mixed plastics with coal, and tire rubber with coal. Although steady-state operation was not achieved during these brief test periods, the results indicated that a liquefaction plant could operate with these waste materials as feedstocks. CONSOL analyzed 65 process stream samples from the coal-only and coal/waste portions of the run. Some results obtained from characterization of samples from the Run POC-2 coal/plastics operation are presented.
Towards Synthesis and Usage of Actinide-Bearing REE Phosphate age Standards: A Progress Report
NASA Astrophysics Data System (ADS)
Pyle, J. M.; Cherniak, D. J.
2006-05-01
Electron microprobe (EMP) dates result from a concentration-time unit conversion, so use of a concentration-based (rather than isotope-ratio-based) fictive age standard is warranted. This observation has motivated our mineral synthesis program, aimed at producing actinide-doped REE phosphate EMP dating standards that meet the following criteria: 1) known concentrations of U, Th, and Pb; 2) homogeneous intragrain distribution of all components; 3) of suitable size, either as a single crystal or a polycrystalline sintered ceramic. Single-crystal synthesis of actinide-doped LaPO4 by flux-growth methods results in disproportionation of lanthanide and flux, alkali, and actinide components into phosphate and oxide phases, respectively, and flux-growth methods were abandoned. Actinide-doped La phosphate is successfully prepared by high-T annealing and hydrothermal processing of microcrystalline phosphate; both homogeneity and charge balance of (Ca, Th, Pb)-bearing LaPO4 increase with increasing solvent acidity during cold-seal hydrothermal synthesis. A combination of pressing and high-T (1400 °C) sintering transforms fine-grained (0.1-10 μm) run products to ceramic pellets with 90-95% theoretical density. Our most recent runs focused on a target composition of La80(CaTh)17(CaU)2(PbTh)1PO4 processed with 6% 2M HCl at 820 °C, 0.75 kbar for 1 week. The run products are 0.1-2 μm crystals identified by XRD as La-actinide phosphate solid solution. 2 μm grains (N=16) give a composition (mean±2 sd) of La79.77(1.26)(CaTh)17.87(1.00)(CaU)1.53(0.42)(PbTh)0.82(0.09)PO4. Th (8.07-9.13 wt. %) is homogeneous at the level of analytical precision, and the Pb concentration range (3500-4350 ppm) is restricted relative to untreated precipitate. Uranium concentration values are more variable (6500-10000 ppm). This run yields a fictive age of 702±4 Ma (mean±2 se), compared to the fictive age of 794 Ma for the target composition.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter to within a desired accuracy, given a target value for the performance function. Two different problems, the what-if and goal-seeking problems, are explained and defined using an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single-run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
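The goal-seeking idea above, adjusting a controllable input so that a noisy performance function hits a target, can be illustrated with a Robbins-Monro stochastic approximation. The linear noisy response below is a made-up stand-in for a simulation run, not the paper's model:

```python
import random

# Robbins-Monro style goal-seeking: adjust the input x so that a noisy
# response function hits a target value. The linear noisy_response below is
# a hypothetical stand-in for a single simulation run.

def noisy_response(x, rng):
    return 2.0 * x + 1.0 + rng.gauss(0.0, 0.1)  # unknown "simulation"

def goal_seek(target, x0=0.0, steps=2000, seed=1):
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        a_k = 0.5 / k  # diminishing step sizes; their sum still diverges
        x -= a_k * (noisy_response(x, rng) - target)
    return x
```

With target 7 the scheme converges near the true solution of 2x + 1 = 7, i.e. x = 3, despite never seeing the response function's form.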
Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo
2011-01-01
In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high volume in real time, arises in a variety of contexts including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message-passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
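Of the three graph algorithms named, connected-component finding illustrates the streaming model most compactly: each edge datum is processed and then discarded, with only union-find state retained. The single-process Python sketch below mirrors that model; PHISH itself distributes such computations across processes.

```python
# Streaming connected-component finding: each edge datum is consumed and
# discarded, with only union-find state kept in memory. A single-process
# sketch of the streaming model, not the distributed PHISH implementation.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def stream_components(edge_stream):
    """Count connected components over a stream of (u, v) edge datums."""
    uf = UnionFind()
    for a, b in edge_stream:  # one datum at a time; the edge is not stored
        uf.union(a, b)
    return len({uf.find(x) for x in uf.parent})
```

State grows with the number of vertices seen, not with the number of edges streamed, which is what makes the algorithm suitable for high-volume edge streams.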
Single Common Powertrain Lubricant Development
2012-01-01
2 2.2 ENGINE DURABILITY TESTING...Page Figure 1 – General Engine Products 6.5L(T) Test Cell Installation ............................................... 9 Figure 2 ... 2 Run 3 Repeatability Run - 1 Repeatability Run - 2 Repeatability Run - 3 3-Run Average Engine Oil Consumption [lb/hr] 0.061 0.082 0.086 0.076
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction.
Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
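The closed-form propagation at the heart of APM can be illustrated on a single pencil beam: if the beam profile is Gaussian and its setup position is itself Gaussian-uncertain, the expected dose is again a Gaussian whose variance is the sum of the two variances, so no scenario sampling is needed. The sketch below checks that identity against naive sampling; all numbers are illustrative, not clinical.

```python
import math
import random

# One-pencil-beam illustration of APM's closed-form idea: a Gaussian beam
# profile with Gaussian setup uncertainty yields an expected dose that is
# again Gaussian with summed variances. Numbers are illustrative only.

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expected_dose_analytic(x, mu, sigma_beam, sigma_setup):
    """Closed form: the convolution of two Gaussians is Gaussian."""
    return gauss(x, mu, math.sqrt(sigma_beam ** 2 + sigma_setup ** 2))

def expected_dose_sampled(x, mu, sigma_beam, sigma_setup, n=100000, seed=3):
    """The scenario-sampling benchmark that the closed form replaces."""
    rng = random.Random(seed)
    return sum(gauss(x, mu + rng.gauss(0.0, sigma_setup), sigma_beam)
               for _ in range(n)) / n
```

The analytic value is exact in constant time, while the sampled estimate only approaches it as the number of scenarios grows, which is the speed-accuracy trade-off the abstract describes.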
Long-Term Stability of Volatile Nitrosamines in Human Urine.
Hodgson, James A; Seyler, Tiffany H; Wang, Lanqing
2016-07-01
Volatile nitrosamines (VNAs) are established teratogens and carcinogens in animals and classified as probable (group 2A) and possible (group 2B) carcinogens in humans by the IARC. High levels of VNAs have been detected in tobacco products and in both mainstream and sidestream smoke. VNA exposure may lead to lipid peroxidation and oxidative stress (e.g., inflammation), chronic diseases (e.g., diabetes) and neurodegenerative diseases (e.g., Alzheimer's disease). To conduct epidemiological studies on the effects of VNA exposure, short-term and long-term stabilities of VNAs in the urine matrix are needed. In this report, the stability of six VNAs (N-nitrosodimethylamine, N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitrosopiperidine, N-nitrosopyrrolidine and N-nitrosomorpholine) in human urine is analyzed for the first time using in vitro blank urine pools fortified with a standard mixture of all six VNAs. Over a 24-day period, analytes were monitored in samples stored at ∼20°C (collection temperature), 4-10°C (transit temperature) and -20 and -70°C (long-term storage temperatures). All six analytes were stable for 24 days at all temperatures (n = 15). The analytes were then analyzed over a longer time period at -70°C; all analytes were stable for up to 1 year (n = 62). A subset of 44 samples was prepared as a single batch and stored at -20°C, the temperature at which prepared samples are stored. These prepared samples were run in duplicate weekly over 10 weeks, and all six analytes were stable over the entire period (n = 22). Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
NASA CF6 jet engine diagnostics program: Long-term CF6-6D low-pressure turbine deterioration
NASA Technical Reports Server (NTRS)
Smith, J. J.
1979-01-01
Back-to-back performance tests were run on seven airline low pressure turbine (LPT) modules and four new CF6-6D modules. Back-to-back test cell runs, in which an airline LPT module was directly compared to a new production module, were included. The resulting change, measured in fuel burn, equaled the level of LPT module deterioration. Three of the LPT modules were analytically inspected followed by a back-to-back test cell run to evaluate current refurbishment techniques.
Auditing of chromatographic data.
Mabie, J T
1998-01-01
During a data audit, it is important to ensure that there is clear documentation and an audit trail. The Quality Assurance Unit should review all areas, including the laboratory, during the conduct of the sample analyses. The analytical methodology that is developed should be documented prior to sample analyses. This is an important document for the auditor, as it is the instrumental piece used by the laboratory personnel to maintain integrity throughout the process. It is expected that this document will give insight into the sample analysis, run controls, run sequencing, instrument parameters, and acceptance criteria for the samples. The sample analysis and all supporting documentation should be audited in conjunction with this written analytical method and any supporting Standard Operating Procedures to ensure the quality and integrity of the data.
Hassell, Kerry M; LeBlanc, Yves; McLuckey, Scott A
2009-11-01
Charge inversion ion/ion reactions can convert several cation types associated with a single analyte molecule to a single anion type for subsequent mass analysis. Specifically, analyte ions present with one of a variety of cationizing agents, such as an excess proton, excess sodium ion, or excess potassium ion, can all be converted to the deprotonated molecule, provided that a stable anion can be generated for the analyte. Multiply deprotonated species that are capable of exchanging a proton for a metal ion serve as the reagent anions for the reaction. This process is demonstrated here for warfarin and for a glutathione conjugate. Examples for several other glutathione conjugates are provided as supplementary material to demonstrate the generality of the reaction. In the case of glutathione conjugates, multiple metal ions can be associated with the singly-charged analyte due to the presence of two carboxylate groups. The charge inversion reaction involves the removal of the excess cationizing agent, as well as any metal ions associated with anionic groups to yield a singly deprotonated analyte molecule. The ability to convert multiple cation types to a single anion type is analytically desirable in cases in which the analyte signal is distributed among several cation types, as is common in the electrospray ionization of solutions with relatively high salt contents. For analyte species that undergo efficient charge inversion, such as glutathione conjugates, there is the additional potential advantage for significantly improved signal-to-noise ratios when species that give rise to 'chemical noise' in the positive ion spectrum do not undergo efficient charge inversion.
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
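The four-point standard curve idea can be sketched numerically: four label channels carry known standard amounts, a fifth carries the sample, and a least-squares line through the knowns converts the sample's signal into a concentration. All values below are invented for illustration.

```python
# Four-point standard curve in one run: four channels carry known standard
# concentrations plus one sample channel. A least-squares calibration line
# maps the sample signal to a concentration. Hypothetical numbers only.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def quantify(standard_conc, standard_signal, sample_signal):
    """Invert the calibration line to recover the sample concentration."""
    slope, intercept = fit_line(standard_conc, standard_signal)
    return (sample_signal - intercept) / slope
```

Because all five channels are measured in the same run, run-to-run variation cancels out of the calibration, which is the throughput and accuracy advantage the abstract claims for iDiLeu.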
Ates, E; Mittendorf, K; Stroka, J; Senyuva, H
2013-01-01
An automated method involving on-line clean-up and analytical separation in a single run using TurboFlow™ reversed-phase liquid chromatography coupled to a high-resolution mass spectrometer has been developed for the simultaneous determination of deoxynivalenol, T2 toxin, HT2 toxin, zearalenone and fumonisins B1 and B2 in maize, wheat and animal feed. Detection was performed in full scan mode at a resolution of R = 100,000 full width at half maximum, with high-energy collision cell dissociation for the determination of fragment ions with a mass accuracy below 5 ppm. The extract from homogenised samples, after blending with a mixture of 0.1% aqueous formic acid/acetonitrile (43:57) for 45 min, was injected directly onto the TurboFlow™ (TLX) column for automated on-line clean-up followed by analytical separation and accurate mass detection. The TurboFlow™ column enabled specific binding of target mycotoxins, whereas higher molecular weight compounds, like fats, proteins and other interferences with different chemical properties, were removed to waste. Single-laboratory method validation was performed by spiking blank materials with mycotoxin standards. The recovery and repeatability were determined by spiking at three concentration levels (50, 100 and 200% of legislative limits) with six replicates. Average recovery, relative standard deviation and intermediate precision values were 71 to 120%, 1 to 19% and 4 to 19%, respectively. The method accuracy was confirmed with certified reference materials and participation in proficiency testing.
Optimal chemotaxis in intermittent migration of animal cells
NASA Astrophysics Data System (ADS)
Romanczuk, P.; Salbreux, G.
2015-04-01
Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases, and performing persistent migration during run phases. We show that the chaser average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
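The run-and-tumble search described above can be sketched numerically. The following is an illustrative simulation only (all parameter values are invented; the paper's results are analytical):

```python
import math
import random

def catch_time(run_time, tumble_time, speed=1.0, noise=0.6,
               target=(50.0, 0.0), capture_radius=2.0, dt=0.1,
               max_time=1e5, rng=None):
    """Chaser alternating tumbles (noisy reorientation toward the target)
    and runs (persistent motion); returns the time to reach the target."""
    rng = rng or random.Random(0)
    x = y = t = 0.0
    while t < max_time:
        # Tumble: reorient toward the target, with angular noise modelling
        # the noisy information on the target position.
        theta = math.atan2(target[1] - y, target[0] - x) + rng.gauss(0, noise)
        t += tumble_time
        # Run: move persistently along the chosen direction.
        for _ in range(int(run_time / dt)):
            x += speed * math.cos(theta) * dt
            y += speed * math.sin(theta) * dt
            t += dt
            if math.hypot(x - target[0], y - target[1]) < capture_radius:
                return t
    return math.inf

t_catch = catch_time(run_time=5.0, tumble_time=1.0)
```

Sweeping `run_time` in such a simulation is one way to probe numerically for the optimum the paper derives analytically.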
Rossum, Huub H van; Kemperman, Hans
2017-07-26
General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed for lack of a simple method of optimizing MAs. A new method was applied to optimize MAs for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied to 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; the causes were ion-selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (a significant difference from the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, when an MA alarm required follow-up, a manageable number of alarms was generated, and these proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
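The core of such an MA procedure can be sketched as follows. This is a hypothetical example: the window length, target, and control limit are invented, and a real optimization would tune them per assay via the bias detection simulation described above:

```python
from collections import deque

def moving_average_qc(results, window=20, target=140.0, limit=3.0):
    """Return indices at which the moving average of consecutive patient
    results drifts beyond target +/- limit (illustrative settings)."""
    buf = deque(maxlen=window)
    alarms = []
    for i, r in enumerate(results):
        buf.append(r)
        if len(buf) == window and abs(sum(buf) / window - target) > limit:
            alarms.append(i)
    return alarms

# Simulated assay: stable at 140, then a +8 bias appears at result 30.
alarm_idx = moving_average_qc([140.0] * 30 + [148.0] * 30)
```

The trade-off that MA optimization addresses is visible here: a wider window smooths patient-to-patient variation but delays detection of a genuine shift.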
Ifa, D R; Moraes, M E; Moraes, M O; Santagada, V; Caliendo, G; de Nucci, G
2000-03-01
A liquid chromatographic atmospheric pressure chemical ionization tandem mass spectrometric method is described for the determination of 21-hydroxydeflazacort in human plasma, using dexamethasone 21-acetate as an internal standard. The procedure requires a single diethyl ether extraction. After evaporation of the solvent under a nitrogen flow, the analytes are reconstituted in the mobile phase, chromatographed on a C18 reversed-phase column and analyzed by mass spectrometry via a heated nebulizer interface, where they are detected by multiple reaction monitoring. The method has a chromatographic run time of less than 5 min and a linear calibration curve over the range 1-400 ng ml(-1) (r>0.999). The between-run precision, based on the relative standard deviation for replicate quality controls, was ≤5.5% (10 ng ml(-1)), 1.0% (50 ng ml(-1)) and 2.7% (200 ng ml(-1)). The between-run accuracy was ±7.1, 3.8 and 4.8% for the above concentrations, respectively. This method was employed in a bioequivalence study of two deflazacort (DFZ) tablet formulations (Denacen from Marjan Industria e Comercio, Brazil, as the test formulation, and Calcort from Merrell Lepetit, Brazil, as the reference formulation) in 24 healthy volunteers of both sexes who received a single 30 mg dose of each formulation. The study was conducted using an open, randomized, two-period crossover design with a 7-day washout interval. The 90% confidence interval (CI) of the individual geometric mean ratio for Denacen/Calcort was 89.8-109.5% for the area under the curve, AUC(0-24 h), and 80.7-98.5% for Cmax. Since the 90% CIs for both AUC(0-24 h) and Cmax fell within the 80-125% interval proposed by the US Food and Drug Administration, Denacen was considered bioequivalent to Calcort with respect to both the rate and extent of absorption.
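The 90% CI on the geometric mean ratio can be sketched as follows. This is a simplified paired-subject version with an approximate hard-coded t-quantile; the actual study would use a crossover ANOVA on log-transformed data, and the toy values below are invented:

```python
import math

def gmr_90ci(test_vals, ref_vals, t_crit=1.714):
    """90% CI of the geometric mean ratio from paired log-differences.
    t_crit defaults to roughly the 0.95 t-quantile for df = 23 (n = 24)."""
    n = len(test_vals)
    d = [math.log(t) - math.log(r) for t, r in zip(test_vals, ref_vals)]
    mean = sum(d) / n
    se = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1) / n)
    return math.exp(mean - t_crit * se), math.exp(mean + t_crit * se)

# Toy data: test AUCs uniformly 5% above reference, so the interval
# collapses onto a point ratio of 1.05.
ref = [80.0 + i for i in range(24)]
lo, hi = gmr_90ci([1.05 * r for r in ref], ref)
```

Bioequivalence is concluded when the whole interval (lo, hi) lies inside 0.80-1.25.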
Emerging Cyber Infrastructure for NASA's Large-Scale Climate Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Spear, C.; Bowen, M. K.; Thompson, J. H.; Hu, F.; Yang, C. P.; Pierce, D.
2016-12-01
The resolution of NASA climate and weather simulations has grown dramatically over the past few years, with the highest-fidelity models reaching down to 1.5 km global resolution. With each doubling of the resolution, the resulting data sets grow by a factor of eight in size. As the climate and weather models push the envelope even further, a new infrastructure to store data and provide large-scale data analytics is necessary. The NASA Center for Climate Simulation (NCCS) has deployed the Data Analytics Storage Service (DASS), which combines scalable storage with the ability to perform in situ analytics. Within this system, large, commonly used data sets are stored in a POSIX file system (write once/read many); examples of stored data include Landsat, MERRA2, observing system simulation experiments, and high-resolution downscaled reanalysis. The total size of this repository is on the order of 15 petabytes. In addition to the POSIX file system, the NCCS has deployed file system connectors that enable emerging analytics built on top of the Hadoop Distributed File System (HDFS) to run on the same storage servers within the DASS. Coupled with a custom spatiotemporal indexing approach, users can now run emerging analytical operations built on MapReduce and Spark on the same data files stored within the POSIX file system without having to make additional copies. This presentation will discuss the architecture of this system and present benchmark performance measurements ranging from traditional TeraSort and Wordcount to large-scale climate analytical operations on NetCDF data.
FAPT: A Mathematica package for calculations in QCD Fractional Analytic Perturbation Theory
NASA Astrophysics Data System (ADS)
Bakulev, Alexander P.; Khandramai, Vyacheslav L.
2013-01-01
We provide here all the procedures in Mathematica which are needed for the computation of the analytic images of the strong coupling constant powers in the Minkowski (A(s;nf) and Aνglob(s)) and Euclidean (A(Q2;nf) and Aνglob(Q2)) domains at arbitrary energy scales (s and Q2, correspondingly), for both schemes: with a fixed number of active flavours nf=3,4,5,6, and the global one taking into account all heavy-quark thresholds. These singularity-free couplings are inevitable elements of Analytic Perturbation Theory (APT) in QCD, proposed in [10,69,70], and of its generalization, Fractional APT, suggested in [42,46,43], needed to apply the APT imperative to renormalization-group improved hadronic observables. Program summary: Program title: FAPT. Catalogue identifier: AENJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1985. No. of bytes in distributed program, including test data, etc.: 1895776. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any workstation or PC where Mathematica is running. Operating system: Windows XP, Mathematica (versions 5 and 7). Classification: 11.5. Nature of problem: The values of the analytic images A(Q2) and A(s) of the QCD running coupling powers αsν(Q2) in the Euclidean and Minkowski regions, correspondingly, are determined through the spectral representation in QCD Analytic Perturbation Theory (APT). In the program FAPT we collect all relevant formulas and various procedures which allow for a convenient evaluation of A(Q2) and A(s) using numerical integration of the relevant spectral densities.
Solution method: FAPT uses Mathematica functions to calculate the different spectral densities and then performs numerical integration of the corresponding spectral integrals to obtain analytic images of different objects. Restrictions: for an unphysical choice of the input parameters the results may be meaningless. Running time: for all operations the run time does not exceed a few seconds. Since numerical integration is usually not fast, we advise precalculating arrays of data and then applying the routine Interpolate (as shown in the supplied example of program usage, the notebook FAPT_Interp.nb).
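The precalculate-then-interpolate advice is generic and can be sketched outside Mathematica. In this Python stand-in, `math.sqrt` plays the role of a slow spectral integral such as an analytic image A(Q2); the grid bounds and size are arbitrary:

```python
import math
from bisect import bisect_left

def tabulate(f, lo, hi, n):
    """Precompute an expensive function on a uniform grid, as one would
    precompute a spectral integral once instead of re-integrating per call."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return xs, [f(x) for x in xs]

def interpolate(xs, ys, x):
    """Piecewise-linear interpolation in the precomputed table."""
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1 = xs[i - 1], xs[i]
    return ys[i - 1] + (ys[i] - ys[i - 1]) * (x - x0) / (x1 - x0)

# One-time cost up front; every subsequent lookup is cheap.
grid_x, grid_y = tabulate(math.sqrt, 1.0, 4.0, 301)
```

The grid density controls the interpolation error, so it should be chosen against the accuracy actually needed downstream.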
The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.
2015-12-01
The Ophidia project is a research effort on big data analytics addressing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, offering a common way to run distributed tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project builds on a strong background in high-performance database management and On-Line Analytical Processing (OLAP) systems for managing large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework in Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented.
Such workflows will be distributed across multiple sites, according to the distribution of the datasets, and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse-grained for distributed task orchestration, and fine-grained at the level of a single data analytics cluster instance) will be presented and discussed.
Consumer Search, Rationing Rules, and the Consequence for Competition
NASA Astrophysics Data System (ADS)
Ruebeck, Christopher S.
Firms' conjectures about demand are consequential in oligopoly games. Through agent-based modeling of consumers' search for products, we can study the rationing of demand between capacity-constrained firms offering homogeneous products and explore the robustness of analytically solvable models' results. After algorithmically formalizing short-run search behavior rather than assuming a long-run average, this study predicts stronger competition in a two-stage capacity-price game.
Additive Manufacturing Thermal Performance Testing of Single Channel GRCop-84 SLM Components
NASA Technical Reports Server (NTRS)
Garcia, Chance P.; Cross, Matthew
2014-01-01
The surface finish found on components manufactured by selective laser melting (SLM) is rougher (0.0006-0.013 inches) than that of parts made using traditional fabrication methods. Internal features and passages built into SLM components do not readily allow for roughness-reduction processes. Alternatively, engineering literature suggests that surface roughness can enhance thermal performance within a given pressure drop regime. To further investigate the thermal performance of SLM-fabricated pieces, several GRCop-84 SLM single-channel components were tested using a thermal conduction rig at MSFC. A 20 kW power source running at 25% duty cycle and 25% power level applied heat to each component while water flow rates were varied between 2.1-6.2 gallons/min (GPM) at a supply pressure of 550 to 700 psi. Each test was allowed to reach quasi-steady-state conditions, where pressure, temperature, and thermal imaging data were recorded. Presented in this work are the heat transfer responses compared to a traditionally machined OFHC copper test section. An analytical thermal model was constructed to anchor theoretical models with the empirical data.
DOT National Transportation Integrated Search
2012-09-01
The Center for Health and Safety Culture conducted research for the Idaho Transportation Department to develop media messages and tools to reduce fatalities and serious injuries related to Run-Off-the-Road, single-vehicle crashes in Idaho using the P...
Matsutomo, Toshiaki; Kodera, Yukihiro
2016-02-01
Garlic and its processed preparations contain numerous sulfur compounds that are difficult to analyze in a single run using HPLC. The aim of this study was to develop a rapid and convenient sulfur-specific HPLC method to analyze sulfur compounds in aged garlic extract (AGE). We modified a conventional postcolumn HPLC method by employing a hexaiodoplatinate reagent. Identification and structural analysis of sulfur compounds were conducted by LC-mass spectrometry (LC-MS) and nuclear magnetic resonance. The production mechanisms of cis-S-1-propenylcysteine (cis-S1PC) and S-allylmercaptocysteine (SAMC) were examined by model reactions. Our method has the following advantages: less interference from nonsulfur compounds, high sensitivity, good correlation coefficients (r > 0.98), and high resolution that can separate >20 sulfur compounds, including several isomers, in garlic preparations in a single run. The method was also adapted for LC-MS analysis. We identified cis-S1PC and γ-glutamyl-S-allyl-mercaptocysteine in AGE. The results of the model reactions suggest that cis-S1PC is produced from trans-S1PC through an isomerization reaction and that SAMC is produced by a reaction involving S-allylcysteine/S1PC and diallyldisulfide during the aging period. We developed a rapid postcolumn HPLC method for both qualitative and quantitative analyses of sulfur compounds, and this method helped elucidate a potential production mechanism of cis-S1PC and SAMC in AGE. © 2016 American Society for Nutrition.
Ciofi, L; Ancillotti, C; Chiuminatto, U; Fibbi, D; Checchini, L; Orlandini, S; Del Bubba, M
2014-10-03
Four different pellicular stationary phases (i.e. octadecylsilane, octylsilane, Phenyl-Hexyl and pentafluorophenyl) were investigated for the chromatographic resolution of alkylphenols (APs), alkylphenol polyethoxylates (APnEOs) and alkylphenoxy carboxylates (APECs), using mixtures of water and organic solvents (i.e. methanol, acetonitrile and tetrahydrofuran) as eluents, in order to determine them in a single LC-MS/MS run. Alkylphenols and alkylphenoxy carboxylates must be analysed in negative ion mode, whereas alkylphenol polyethoxylates ionise only in positive ion mode; therefore, two distinct LC-MS/MS analyses are commonly adopted. The best resolution among the target analytes was achieved on the pentafluorophenyl column, eluting with an acidified water-acetonitrile-tetrahydrofuran mixture and using post-column addition of an ammonia solution in methanol for the detection of positively ionisable compounds. Under these optimized chromatographic conditions, the investigated compounds were determined in a single chromatographic run, with only one polarity switch, in 15 min, achieving the following instrumental detection limits: 600 pg for AP1EOs, 0.8-14 pg for AP2EOs, 10.4-150 pg for APs and 4.4-4.8 pg for APECs. The chromatographic method was coupled with solid-phase extraction and clean-up procedures and successfully applied to the analysis of wastewater and surface water samples, revealing mean concentrations ranging from 6 ng/L for 4-t-OP1EC to 1434 ng/L for 4-NP1121EC, depending on the sample analysed. Copyright © 2014 Elsevier B.V. All rights reserved.
Abd El-Hady, D; Albishri, H M
2015-07-01
Two novel sensors, based on human serum albumin (HSA)-ionic liquid (IL) and bovine serum albumin (BSA)-ionic liquid (IL) composites modifying a glassy carbon electrode (GCE), were produced for the simultaneous determination of the water-soluble vitamins B2, B6 and C in human plasma, following analyte focusing by IL micelle collapse (AFILMC). For selective and efficient extraction, the vitamins were dissolved in a 3.0 mol L(-1) micellar solution of 1-octyl-3-methylimidazolium bromide IL. The extracted vitamins were hydrodynamically injected at 25 mbar for 20 s into a running buffer of 12.5 mmol L(-1) phosphate at pH 6.0, followed by electrochemical detection (ECD) on protein/1-octyl-3-methylimidazolium hexafluorophosphate IL/GC sensors. The chemical stability of the proposed sensors was maintained for up to 7 days without any decomposition of the PF6-based IL/protein or adsorption of interfering ions. In the current work, a sensitivity enhancement factor (SEF) of up to 5000-fold was achieved using the AFILMC/ECD setup compared to conventional CE/UV. Under optimal conditions, linear calibration graphs were obtained from 0.5, 0.5 and 1.0 to 1500.0 µg mL(-1) for vitamins B2, B6 and C, respectively. Detection limits of the analytes ranged from 180.0 to 520.0 ng mL(-1). The proposed AFILMC/ECD setup was successfully applied to the assay of trace-level quantification of the vitamins in human plasma samples, and their binding constants with HSA and BSA were also determined. The concurrent use of IL micelles for the proposed separation and detection processes offered several advantages, such as reduced use of toxic solvents, efficient extraction, and direct injection of samples in a short single run. Furthermore, IL micelles, with their variable possibilities of interaction, facilitated the successful application of the AFILMC/ECD setup to the quantification of vitamins in plasma matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
Chu, Zhu-Yin; Li, Chao-Feng; Chen, Zhi; Xu, Jun-Jie; Di, Yan-Kun; Guo, Jing-Hui
2015-09-01
We present a novel method for high-precision measurement of (186)Os/(188)Os and (187)Os/(188)Os ratios, applying isobaric oxide interference corrections based on in-run measurements of oxygen isotopic ratios. For this purpose, we set up a static data collection routine that measures the main Os(16)O3(-) ion beams with Faraday cups connected to conventional 10(11) amplifiers, and the (192)Os(16)O2(17)O(-) and (192)Os(16)O2(18)O(-) ion beams with Faraday cups connected to 10(12) amplifiers. Because of the limited number of Faraday cups, we did not measure (184)Os(16)O3(-) and (189)Os(16)O3(-) simultaneously in-run, but this analytical setup had no significant influence on the final (186)Os/(188)Os and (187)Os/(188)Os data. By analyzing UMd, DROsS, an in-house Os solution standard, and several rock reference materials, including WPR-1, WMS-1a, and Gpt-5, the in-run measured oxygen isotopic ratios were shown to yield accurate Os isotopic data. However, the (186)Os/(188)Os and (187)Os/(188)Os data obtained with in-run O isotopic compositions for the solution standards and rock reference materials show minimal improvement in internal and external precision compared to the conventional oxygen correction method. We concluded that the small variations of oxygen isotopes during OsO3(-) analytical sessions are probably not the main source of error for high-precision Os isotopic analysis. Nevertheless, the use of run-specific O isotopic compositions is still a better choice for Os isotopic data reduction and eliminates the need for separate measurements of the oxygen isotopic ratios.
Analytical solutions for efficient interpretation of single-well push-pull tracer tests
Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations descr...
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for the distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work-flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and thoroughly tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 over large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
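The work-ticket pattern itself is language-agnostic and can be sketched compactly. In this Python sketch, in-process threads stand in for networked cluster nodes, and the names and ticket format are invented, not the g4DistributedRunManager API:

```python
import queue
import threading

def serve_tickets(tickets, n_workers, run_simulation):
    """Client-server sketch: workers pull parameter 'tickets' from a
    shared queue and run one simulation per ticket until none remain."""
    q = queue.Queue()
    for t in tickets:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                ticket = q.get_nowait()
            except queue.Empty:
                return  # no more work: this node shuts down
            r = run_simulation(ticket)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# e.g. one 'run' per projection angle of a tomography scan
angles = list(range(0, 180, 10))
out = serve_tickets(angles, n_workers=4,
                    run_simulation=lambda a: (a, a * a))
```

Pulling tickets from a shared queue gives natural load balancing: faster nodes simply take more tickets.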
Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.
Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas
2016-01-01
A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may make it a cost-effective option for low-throughput in-house biochemical analyses in first-opinion practice. The objectives of our study were to determine the imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for the pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than the total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributable to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transference of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
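The sigma-metric and total-observed-error calculations behind such assessments can be sketched with the conventional formulas; the example numbers below are invented, not taken from the study:

```python
def total_observed_error(bias_pct, cv_pct, k=1.65):
    """TEobs = |bias| + k * CV, with k = 1.65 a common one-sided
    95% multiplier."""
    return abs(bias_pct) + k * cv_pct

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV: how many analytical SDs fit between
    the observed bias and the total allowable error."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Invented example: TEa 10%, bias 2%, CV 2%.
sigma = sigma_metric(10.0, 2.0, 2.0)
te_obs = total_observed_error(2.0, 2.0)
```

With these numbers the assay sits at sigma = 4 (good by the study's criterion) and TEobs = 5.3%, comfortably below the 10% allowable error.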
History of Satellite Orbit Determination at NSWCDD
2018-01-31
run. Segment 40 did pass editing and its use was optional after Segment 20. Segment 30 needed to be run before Segment 80. Segment 70 was run as...control cards required to run the program. These included a CHARGE card related to usage charges and various REQUEST, ATTACH, and CATALOG cards...each) could be done in a single run after the long-arc solution had converged. These short arcs used the pass matrices from the long-arc run in their
AmO2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO2 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent
Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content, including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that there are no suitable AmO2 reference materials currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.
NASA Astrophysics Data System (ADS)
Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon
Power train system design is one of the key R&D areas in the development of a new automobile, because an optimally sized engine with an adaptable power transmission that meets the design requirements of the new vehicle can be obtained through system design. For electric vehicle design in particular, a highly reliable design algorithm for the power train system is required to achieve energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory behind the simulation algorithm is conservation of energy, combined with several analytical and experimental inputs such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculations, the running resistance of a designed vehicle is obtained as operating conditions such as road inclination and vehicle speed change. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, which can then be used to evaluate the suitability of the designed power train system for the vehicle.
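The road-load and tractive-force relations used in such an algorithm can be sketched as follows; the coefficient values are illustrative defaults, not the paper's data:

```python
import math

def running_resistance(mass_kg, v_ms, grade_rad, c_rr=0.012,
                       rho=1.2, cd=0.30, area_m2=2.2, g=9.81):
    """Road-load force: rolling resistance + grade force + aerodynamic
    drag (coefficients are illustrative, not from the paper)."""
    rolling = c_rr * mass_kg * g * math.cos(grade_rad)
    grade = mass_kg * g * math.sin(grade_rad)
    aero = 0.5 * rho * cd * area_m2 * v_ms ** 2
    return rolling + grade + aero

def tractive_force(motor_torque_nm, gear_ratio, wheel_radius_m, eff=0.92):
    """Wheel tractive force for a given gear ratio and drivetrain
    efficiency."""
    return motor_torque_nm * gear_ratio * eff / wheel_radius_m
```

The vehicle can sustain a given speed and grade in a given gear whenever the tractive force exceeds the running resistance; comparing the two curves over speed reproduces the paper's performance estimate.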
Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.
Rudašová, Marína; Masár, Marián
2016-01-01
A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. Both external calibration and internal standard methods were used to evaluate the results. The internal standard method effectively eliminated variations in various working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values <0.7 and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals further qualifies the proposed method for accurate analysis. This work, in general, demonstrates the analytical possibilities of microchip isotachophoresis for the quantitative analysis of simplified samples, such as pharmaceuticals, that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
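The injection-volume correction achieved by the internal standard method can be sketched numerically; all peak values and the calibration slope below are invented:

```python
def is_corrected_amount(analyte_peak, is_peak, slope, intercept=0.0):
    """Internal-standard quantitation: calibrate on the analyte/IS peak
    ratio so run-to-run injection-volume variation cancels out."""
    return (analyte_peak / is_peak - intercept) / slope

# Two runs of the same sample: the second injects ~10% more volume, so
# both peaks scale together and the corrected amount is unchanged.
run1 = is_corrected_amount(100.0, 50.0, slope=0.5)
run2 = is_corrected_amount(110.0, 55.0, slope=0.5)
```

Because the analyte and the internal standard experience the same injected volume, their ratio is invariant to it, which is precisely why the IS method outperforms external calibration here.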
CMB constraints on running non-Gaussianity
NASA Astrophysics Data System (ADS)
Oppizzi, F.; Liguori, M.; Renzi, A.; Arroja, F.; Bartolo, N.
2018-05-01
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the fNL running spectral index, nNG, using WMAP 9-year data. Our final bounds (68% C.L.) read -0.6 < nNG < 1.4, -0.3 < nNG < 1.2, -1.1
Microbial Fuel Cell-Based Biosensor for Toxicity Detection: A Review
Zhou, Tuoyu; Han, Huawen; Liu, Pu; Xiong, Jian; Tian, Fake; Li, Xiangkai
2017-01-01
With the unprecedented deterioration of environmental quality, rapid recognition of toxic compounds is paramount for performing in situ real-time monitoring. Although several analytical techniques based on electrochemistry or biosensors have been developed for the detection of toxic compounds, most of them are time-consuming, inaccurate, or cumbersome for practical applications. More recently, microbial fuel cell (MFC)-based biosensors have drawn increasing interest due to their sustainability and cost-effectiveness, with applications ranging from the monitoring of anaerobic digestion process parameters (VFA) to water quality detection (e.g., COD, BOD). When an MFC runs under appropriate conditions, the voltage generated is correlated with the amount of a given substrate. Based on this linear relationship, several studies have demonstrated that MFC-based biosensors can detect heavy metals such as copper, chromium, or zinc, as well as organic compounds, including p-nitrophenol (PNP), formaldehyde and levofloxacin. Both bacterial consortia and single strains can be used to develop MFC-based biosensors. Biosensors with single strains show several advantages over systems integrating bacterial consortia, such as selectivity and stability. One limitation of such sensors is that the detection range often exceeds the actual pollution level; improving their sensitivity is therefore the most important requirement for widespread application. Nonetheless, MFC-based biosensors represent a promising approach towards single-pollutant detection. PMID:28956857
Zhang, Dan; Park, Jin-A; Kim, Seong-Kwan; Cho, Sang-Hyun; Jeong, Daun; Cho, Soo-Min; Yi, Hee; Shim, Jae-Han; Kim, Jin-Suk; Abd El-Aty, A M; Shin, Ho-Chul
2016-02-15
A simple analytical method based on liquid chromatography coupled with triple-quadrupole mass spectrometry was developed for detection of the veterinary drugs flumethasone, dl-methylephedrine, and 2-hydroxy-4,6-dimethylpyrimidine in porcine muscle and pasteurized cow milk. The target drugs were extracted from samples using 10 mM ammonium formate in acetonitrile followed by clean-up with n-hexane and primary secondary amine sorbent (PSA). The analytes were separated on an XBridge™ hydrophilic interaction liquid chromatography (HILIC) column using 10 mM ammonium formate in ultrapure water and acetonitrile. Good linearity was achieved over the tested concentrations in matrix-fortified calibrations with correlation coefficients (R²) ≥ 0.9686. Recovery at two spiking levels ranged between 73.62% and 112.70% with intra- and inter-day precisions of ≤20.33%. The limits of quantification ranged from 2 to 10 ng/g in porcine muscle and pasteurized cow milk. A survey of market samples showed that none of them contained any of the target analytes. Liquid-liquid purification using n-hexane in combination with PSA efficiently removed the interferences during porcine and milk sample extraction. The developed method is sensitive and reliable for detection of the three target drugs in a single chromatographic run. Furthermore, it exhibits high selectivity and low quantification limits for animal-derived food products destined for human consumption. Copyright © 2016 Elsevier B.V. All rights reserved.
Singh, R P; Sabarinath, S; Gautam, N; Gupta, R C; Singh, S K
2009-07-15
The present manuscript describes the development and validation of an LC-MS/MS assay for the simultaneous quantitation of 97/78 and its active in-vivo metabolite 97/63 in monkey plasma using alpha-arteether as internal standard (IS). The method involves a single-step protein precipitation with acetonitrile as the extraction method. The analytes were separated on a Columbus C(18) (50 mm x 2 mm i.d., 5 microm particle size) column by isocratic elution with acetonitrile:ammonium acetate buffer (pH 4, 10 mM) (80:20 v/v) at a flow rate of 0.45 mL/min, and analyzed by mass spectrometry in multiple reaction-monitoring (MRM) positive ion mode. The chromatographic run time was 4.0 min and the weighted (1/x(2)) calibration curves were linear over a range of 1.56-200 ng/mL. The method was linear for both analytes with correlation coefficients >0.995. The intra-day and inter-day accuracy (% bias) and precisions (% RSD) of the assay were less than 6.27%. Both analytes were stable after three freeze-thaw cycles (% deviation <8.2) and also for 30 days in plasma (% deviation <6.7). The absolute recoveries of 97/78, 97/63 and internal standard (IS) from spiked plasma samples were >90%. The validated assay method described here was successfully applied to the pharmacokinetic study of 97/78 and its active in-vivo metabolite 97/63 in Rhesus monkeys.
Wang, Tianming; Ding, Liqing; Jin, Huajia; Shi, Rong; Li, Yuanyuan; Wu, Jiasheng; Li, Yifei; Zhu, Li; Ma, Yueming
2016-08-01
A sensitive, specific, accurate HPLC-MS/MS method was developed and validated for the simultaneous quantification of catechin, epicatechin, liquiritin, isoliquiritin, liquiritigenin, isoliquiritigenin, piperine and glycyrrhetinic acid from Longhu Rendan pills in rat plasma. Chromatographic separation was performed with a Hypersil Gold C18 column using a gradient of methanol and 0.01% acetic acid containing 0.2 mM ammonium acetate as mobile phase. The analytes were quantified on a triple quadrupole mass spectrometer, operating in selected reaction monitoring mode and switching the electrospray ion source polarity between positive and negative modes in a single run. The calibration curves of catechin, epicatechin, liquiritin, isoliquiritin, liquiritigenin, isoliquiritigenin, piperine and glycyrrhetinic acid were linear over the concentration ranges of 5-2000, 5-2000, 0.5-200, 0.5-200, 0.25-100, 0.25-100, 0.025-10 and 0.50-200 ng mL(-1), respectively. The intra- and inter-assay precisions and accuracies were <11.6 and 91.9-108.2%, respectively, for all analytes. Matrix effects for all analytes were between 88.2 and 114.2%. Stability testing showed that all analytes were stable in plasma at 24 °C for 3 h, at 4 °C for 24 h, after three freeze-thaw cycles, and at -80 °C for 15 days. The method was successfully applied to an in vivo study evaluating the pharmacokinetics of multiple nonvolatile compounds following intragastric administration of Longhu Rendan pills to rats. Copyright © 2016 John Wiley & Sons, Ltd.
Modelling vortex-induced fluid-structure interaction.
Benaroya, Haym; Gabbai, Rene D
2008-04-13
The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize the Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physics-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators, one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a Hamilton's principle-based variational approach. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails: formulating generalized equations of motion, as a superset of the flow-oscillator models; and developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations. This will allow modelling of multiple degree-of-freedom systems. The extensions derived generalize the Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model. Based on different assumptions, one can derive a variety of flow-oscillator models.
SolarPILOT | Concentrating Solar Power | NREL
Unlike exclusively ray-tracing tools, SolarPILOT runs an analytical simulation engine alongside a ray-tracing core (the SolTrace simulation engine) for more detailed simulations.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
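The scaled root-mean-square difference metric used above to compare analytical and Monte Carlo scatter estimates can be sketched as follows. The exact normalization used in the paper is not stated here; normalizing by the reference mean is one common convention, and the arrays are illustrative:

```python
import numpy as np

# Scaled RMS difference between an analytical scatter estimate and a
# Monte Carlo reference, normalized by the mean of the reference.
def scaled_rmsd(estimate, reference):
    rms = np.sqrt(np.mean((estimate - reference) ** 2))
    return rms / np.mean(reference)

# Illustrative profile values (arbitrary units).
ref = np.array([10.0, 12.0, 11.0, 9.0])
est = np.array([10.5, 11.5, 11.2, 9.3])
print(round(scaled_rmsd(est, ref), 4))
```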
Suzuki, Yoshihiro; Endo, Yoko; Ogawa, Masanori; Yamamoto, Shinobu; Takeuchi, Akito; Nakagawa, Tomoo; Onda, Nobuhiko
2009-11-01
N-methyl-2-pyrrolidone (NMP) has been used in many industries and biological monitoring of NMP exposure is preferred to atmospheric monitoring in occupational health. We developed an analytical method that did not include solid phase extraction (SPE) but utilized deuterium-labeled compounds as internal standards for high-performance liquid chromatography-electrospray ionization-mass spectrometry using a C30 column. Urinary concentrations of NMP and its known metabolites 5-hydroxy-N-methyl-2-pyrrolidone (5-HNMP), N-methyl-succinimide (MSI), and 2-hydroxy-N-methylsuccinimide (2-HMSI) were determined in a single run. The method provided baseline separation of these compounds. Their limits of detection in 10-fold diluted urine were 0.0001, 0.006, 0.008, and 0.03 mg/L, respectively. Linear calibration covered a biological exposure index (BEI) for urinary concentration. The within-run and total precisions (CV, %) were 5.6% and 9.2% for NMP, 3.4% and 4.2% for 5-HNMP, 3.7% and 6.0% for MSI, and 6.5% and 6.9% for 2-HMSI. The method was evaluated using international external quality assessment samples, and urine samples from workers exposed to NMP in an occupational area.
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Observational constraints on loop quantum cosmology.
Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji
2011-11-18
In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.
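The role of higher-order runnings in the power spectrum can be illustrated with the standard log-expansion of the spectrum about a pivot scale. The parameter values below are illustrative placeholders, not the bounds derived in the paper:

```python
import math

# Primordial scalar power spectrum expanded to include the spectral index,
# its running, and the running-of-the-running:
#   ln P(k) = ln A_s + (n_s - 1) x + (alpha_s/2) x^2 + (beta_s/6) x^3,
# where x = ln(k/k0). All parameter values here are illustrative.
def ln_power(k, k0=0.002, A_s=2.1e-9, n_s=0.96, alpha_s=-0.01, beta_s=0.001):
    x = math.log(k / k0)
    return (math.log(A_s) + (n_s - 1.0) * x
            + 0.5 * alpha_s * x**2 + beta_s * x**3 / 6.0)

# At the pivot scale (x = 0) the spectrum reduces to the amplitude A_s.
print(math.exp(ln_power(0.002)))
```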
Sierra/Aria 4.48 Verification Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal Fluid Development Team
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.
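Checking a result under mesh refinement against an analytic solution typically amounts to estimating the observed order of accuracy from the errors at successive resolutions. A minimal sketch, with hypothetical error values for a second-order scheme:

```python
import math

# Observed order of accuracy from errors at two mesh resolutions, with the
# grid spacing halved between runs. A second-order scheme should show the
# error shrinking by roughly a factor of 4 per refinement.
def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical errors from a second-order code under one refinement.
print(round(observed_order(1.0e-3, 2.5e-4), 2))
```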
$ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe
McGee, James J.
1983-01-01
$ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.
Karamanidis, Kiros; Arampatzis, Adamantios
2007-01-01
The goals of this study were to investigate whether the lower muscle-tendon unit (MTU) capacities of older adults affect their ability to recover balance with a single step after a fall, and to examine whether running experience enhances and protects this motor skill in young and old adults. The investigation was conducted on 30 older and 19 younger adults, divided into two subgroups: runners versus non-active. In previous studies we documented that the older adults had lower leg extensor muscle strength and tendon stiffness, while running had no effect on MTU capacities. The current study examined recovery mechanics of the same individuals after an induced forward fall. Younger adults were better able to recover balance with a single step compared to older adults (P < 0.001); this ability was associated with a more effective body configuration at touchdown (more posterior COM position relative to the recovery foot, P < 0.001). MTU capacities classified 88.6% of the subjects into single- or multiple-steppers. Runners showed a superior ability to recover balance with a single step (P < 0.001) compared to non-active subjects due to a more effective mechanical response during the stance phase (greater knee joint flexion, P < 0.05). We concluded that the age-related degeneration of the MTUs significantly diminished the older adults' ability to restore balance with a single step. Running seems to enhance and protect this motor skill. We suggest that runners, owing to their running experience, could update the internal representation of mechanisms responsible for the control of dynamic stability during a forward fall and thus were able to restore balance more often with a single step than the non-active subjects.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
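The split-integration idea, treating the fast harmonic motion analytically (an exact rotation in phase space) while applying the remaining slow force numerically as a kick, can be sketched for a one-dimensional model problem. The potential (harmonic plus a weak quartic term) and all parameters are illustrative, not the SISM force field:

```python
import math

omega = 2.0   # fast harmonic frequency, propagated analytically
eps = 0.01    # strength of the slow quartic perturbation eps*q^4

def harmonic_step(q, p, dt):
    # Exact propagation under H = p^2/2 + omega^2 q^2 / 2.
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return c * q + (s / omega) * p, -omega * s * q + c * p

def step(q, p, dt):
    # Kick-drift-kick splitting: half kick from the slow force,
    # analytic harmonic drift, then another half kick.
    p += -4.0 * eps * q**3 * (dt / 2)
    q, p = harmonic_step(q, p, dt)
    p += -4.0 * eps * q**3 * (dt / 2)
    return q, p

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = step(q, p, 0.01)

# The symplectic splitting keeps the total energy close to its
# initial value of 0.5*omega^2 + eps = 2.01.
energy = 0.5 * p**2 + 0.5 * omega**2 * q**2 + eps * q**4
print(round(energy, 3))
```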
Avila, Mónica; Zougagh, Mohammed; Escarpa, Alberto; Ríos, Angel
2009-10-01
A new strategy based on the fast separation of the fingerprint markers of Vanilla planifolia extracts and vanilla-related samples on a microfluidic-electrochemistry chip is proposed. This methodology allowed the detection of all markers required for confirmation of common frauds in this field. The elution order was strategically connected with sequential sample screening and analyte confirmation steps: first, ethyl vanillin was detected to distinguish natural from adulterated samples; second, vanillin, a prominent marker in V. planifolia but frequently added in its synthetic form; and third, the final detection of the fingerprint markers (p-hydroxybenzaldehyde, vanillic acid, and p-hydroxybenzoic acid) of V. planifolia for confirmation purposes. The reliability of the proposed methodology was demonstrated in confirming the natural or non-natural origin of vanilla in samples using V. planifolia extracts and other selected food samples containing this flavor.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is computed in a similar manner. Generally, a small number of arithmetic operations, and hence a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek
2016-01-15
In this study we rank analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces by the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios - metrological, economic and environmental - by applying different weights to the decision-making criteria. All three scenarios indicate the capillary electrophoresis-based procedure as the most preferable, although the details of the ranking results differ among them. A second run of rankings was performed for scenarios that each included only the metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic one, while there is no correlation with the metrological one. This implies that green analytical chemistry can be brought into laboratories without analytical performance costs and is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.
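A PROMETHEE-style net-flow ranking under a set of criterion weights can be sketched as follows. The decision matrix, weights, and the usual (0/1) preference function are illustrative, not the study's actual data; all criteria are assumed "higher is better" after sign adjustment:

```python
import numpy as np

# Rows are alternatives (analytical procedures), columns are criteria.
# Scores and weights are invented for illustration.
scores = np.array([
    [0.9, 0.4, 0.7],   # procedure A
    [0.6, 0.8, 0.5],   # procedure B
    [0.3, 0.9, 0.9],   # procedure C
])
weights = np.array([0.5, 0.3, 0.2])  # e.g. metrological, economic, environmental

n = scores.shape[0]
pi = np.zeros((n, n))  # aggregated preference of i over j
for i in range(n):
    for j in range(n):
        if i != j:
            # usual criterion: preference 1 where i strictly beats j
            pi[i, j] = weights @ (scores[i] > scores[j]).astype(float)

phi_plus = pi.sum(axis=1) / (n - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative outranking flow
net = phi_plus - phi_minus            # net flow used for the final ranking
ranking = np.argsort(-net)            # best alternative first
print(ranking)
```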
Macwan, Joyce S; Ionita, Ileana A; Akhlaghi, Fatemeh
2012-01-01
A simple and sensitive assay was developed and validated for the simultaneous quantification of rosuvastatin acid (RST), rosuvastatin-5S-lactone (RST-LAC), and N-desmethyl rosuvastatin (DM-RST), in buffered human plasma using liquid chromatography-tandem mass spectrometry (LC-MS/MS). All the three analytes and the corresponding deuterium-labeled (d6) internal standards were extracted from 50 μL of buffered human plasma by protein precipitation. The analytes were chromatographically separated using a Zorbax-SB Phenyl column (2.1 mm × 100 mm, 3.5 μm). The mobile phase comprised of a gradient mixture of 0.1% v/v glacial acetic acid in 10% v/v methanol in water (solvent A) and 40% v/v methanol in acetonitrile (solvent B). The analytes were separated at baseline within 6.0 min using a flow rate of 0.35 mL/min. Mass spectrometry detection was carried out in positive electrospray ionization mode. The calibration curves for all three analytes were linear (R ≥ 0.9964, n = 3) over the concentration range of 0.1-100 ng/mL for RST and RST-LAC, and 0.5-100 ng/mL for DM-RST. Mean extraction recoveries ranged within 88.0-106%. Intra- and inter-run mean percent accuracy were within 91.8-111% and percent imprecision was ≤15%. Stability studies revealed that all the analytes were stable in matrix during bench-top (6 h on ice-water slurry), at the end of three successive freeze and thaw cycles and at -80°C for 1 month. The method was successfully applied in a clinical study to determine the concentrations of RST and the lactone metabolite over 12-h post-dose in patients who received a single dose of rosuvastatin.
The Convergence of High Performance Computing and Large Scale Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.
2015-12-01
As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
Use of multiple colorimetric indicators for paper-based microfluidic devices.
Dungchai, Wijitar; Chailapakul, Orawon; Henry, Charles S
2010-08-03
We report here the use of multiple indicators for a single analyte for paper-based microfluidic devices (microPAD) in an effort to improve the ability to visually discriminate between analyte concentrations. In existing microPADs, a single dye system is used for the measurement of a single analyte. In our approach, devices are designed to simultaneously quantify analytes using multiple indicators for each analyte improving the accuracy of the assay. The use of multiple indicators for a single analyte allows for different indicator colors to be generated at different analyte concentration ranges as well as increasing the ability to better visually discriminate colors. The principle of our devices is based on the oxidation of indicators by hydrogen peroxide produced by oxidase enzymes specific for each analyte. Each indicator reacts at different peroxide concentrations and therefore analyte concentrations, giving an extended range of operation. To demonstrate the utility of our approach, the mixture of 4-aminoantipyrine and 3,5-dichloro-2-hydroxy-benzenesulfonic acid, o-dianisidine dihydrochloride, potassium iodide, acid black, and acid yellow were chosen as the indicators for simultaneous semi-quantitative measurement of glucose, lactate, and uric acid on a microPAD. Our approach was successfully applied to quantify glucose (0.5-20 mM), lactate (1-25 mM), and uric acid (0.1-7 mM) in clinically relevant ranges. The determination of glucose, lactate, and uric acid in control serum and urine samples was also performed to demonstrate the applicability of this device for biological sample analysis. Finally results for the multi-indicator and single indicator system were compared using untrained readers to demonstrate the improvements in accuracy achieved with the new system. 2010 Elsevier B.V. All rights reserved.
Müllerová, Ludmila; Dubský, Pavel; Gaš, Bohuslav
2015-03-06
Interactions among analyte forms that undergo simultaneous dissociation/protonation and complexation with multiple selectors take the shape of a highly interconnected multi-equilibrium scheme. This makes it difficult to express the effective mobility of the analyte in these systems, which are often encountered in electrophoretical separations, unless a generalized model is introduced. In the first part of this series, we presented the theory of electromigration of a multivalent weakly acidic/basic/amphoteric analyte undergoing complexation with a mixture of an arbitrary number of selectors. In this work we demonstrate the validity of this concept experimentally. The theory leads to three useful perspectives, each of which is closely related to the one originally formulated for simpler systems. If pH, IS and the selector mixture composition are all kept constant, the system is treated as if only a single analyte form interacted with a single selector. If the pH changes at constant IS and mixture composition, the already well-established models of a weakly acidic/basic analyte interacting with a single selector can be employed. Varying the mixture composition at constant IS and pH leads to a situation where virtually a single analyte form interacts with a mixture of selectors. We show how to switch between the three perspectives in practice and confirm that they can be employed interchangeably according to the specific needs by measurements performed in single- and dual-selector systems at a pH where the analyte is fully dissociated, partly dissociated or fully protonated. Weak monoprotic analyte (R-flurbiprofen) and two selectors (native β-cyclodextrin and monovalent positively charged 6-monodeoxy-6-monoamino-β-cyclodextrin) serve as a model system. Copyright © 2015 Elsevier B.V. All rights reserved.
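In the fast-exchange limit, the effective mobility of a fully dissociated analyte interacting with a selector mixture is the equilibrium-weighted average of the free and complexed mobilities. A minimal sketch with invented binding constants and mobilities (not the measured R-flurbiprofen/cyclodextrin parameters):

```python
# Effective mobility of a single analyte form complexing 1:1 with each
# selector in a mixture: mu_eff = (mu_free + sum K_i c_i mu_i) / (1 + sum K_i c_i),
# where K_i are binding constants, c_i free selector concentrations, and
# mu_i the mobilities of the complexes. All numbers are illustrative.
def effective_mobility(mu_free, K, c, mu_complex):
    num = mu_free + sum(Ki * ci * mui for Ki, ci, mui in zip(K, c, mu_complex))
    den = 1.0 + sum(Ki * ci for Ki, ci in zip(K, c))
    return num / den

# Anionic analyte with two selectors (e.g. a neutral and a charged cyclodextrin):
mu = effective_mobility(
    mu_free=-20.0e-9,        # m^2 V^-1 s^-1, free analyte
    K=[150.0, 400.0],        # L/mol
    c=[0.005, 0.002],        # mol/L
    mu_complex=[-8.0e-9, 5.0e-9],
)
print(mu)
```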
Xu, Liang; Cui, Pengfei; Wang, Dongmei; Tang, Cheng; Dong, Linyi; Zhang, Can; Duan, Hongquan; Yang, Victor C
2014-01-03
In this study, poly(glycidyl methacrylate) (PGMA) nanoparticles (NPs) were prepared and chemically immobilized for the first time onto a capillary inner wall for open tubular capillary electrochromatography (OTCEC). The immobilization of PGMA NPs onto the capillary was attained by a ring-opening reaction between the NPs and an amino-silylated fused capillary inner surface. Scanning electron micrographs clearly demonstrated that the NPs were bound to the capillary inner surface in a dense monolayer. The PGMA NP-coated column was then functionalized with lysine (Lys). After functionalization, the capillary can afford strong anodic electroosmotic flow, especially in acidic running buffers. Separations of three amino acids (tryptophan, tyrosine and phenylalanine) were performed in NP-modified, monolayer Lys-functionalized and bare uncoated capillaries. Results indicated that the NP-coated column can provide more retention and higher resolution for analytes due to the hydrophobic interaction between the analytes and the NP coating. Run-to-run and column-to-column reproducibilities in the separation of the amino acids using the NP-modified column were also demonstrated. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
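The trade-off examined in the report can be seen by comparing standard Taylor-based stencils of different order; an optimized operator would instead tune these coefficients to minimize error over a target bandwidth rather than in the infinitesimal-h limit. A minimal comparison of first-derivative stencils:

```python
import math

# Standard Taylor-series central-difference first-derivative stencils.
def d1_order2(f, x, h):
    # O(h^2) two-point stencil
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_order4(f, x, h):
    # O(h^4) four-point stencil
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

# Differentiate sin(x) at x = 1, where the exact derivative is cos(1).
h = 0.1
exact = math.cos(1.0)
err2 = abs(d1_order2(math.sin, 1.0, h) - exact)
err4 = abs(d1_order4(math.sin, 1.0, h) - exact)
print(err2 > err4)
```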
Du, Yingfeng; Liu, Pengwei; Zhu, Hong; Shi, Xiaowei; Zhao, Chengcheng; Wang, Na; Zhang, Lantong
2011-10-28
A simple and sensitive LC-MS/MS method has been developed and validated for the identification and quantification of epinodosin, epinodosinol, nodosin, oridonin, lasiokariurinol, lasiokaurin and rabdoternin A in rat plasma using sulfamethoxazole as the internal standard. The plasma sample pre-treatment consisted of a liquid-liquid extraction. Chromatographic separation was achieved on a C18 column with linear gradient elution using water and methanol, which were both acidified with 0.1% formic acid, at a flow rate of 0.7 mL/min. A tandem mass spectrometric detection was conducted using multiple reaction monitoring (MRM) via an electrospray ionization (ESI) source. A novel multi-determination-periods program was executed to achieve a higher sensitivity by setting five scanning periods. The method presented here utilizes a novel determination strategy, enabling the application of positive and negative ESI-MS in a single run. The optimized mass transition ion-pairs (m/z) for quantitation were 361.2/287.1 for epinodosin, 382.3/347.3 for epinodosinol, 363.3/281.2 for nodosin, 365.3/347.3 for oridonin, 407.3/329.1 for lasiokariurinol, 405.2/59.0 for lasiokaurin, 363.2/283.1 for rabdoternin A and 254.1/156.0 for IS. The total run time was 20.50 min (including 5 min equilibration time) between injections. The specificity, linearity, accuracy, precision, recovery, matrix effect and several validation results demonstrate that this method is sensitive, specific and reliable. The proposed method was further applied to investigate the pharmacokinetics of all analytes after a single oral administration of Isodon serra extract to rats. Copyright © 2011 Elsevier B.V. All rights reserved.
Kawai, Takayuki; Sueyoshi, Kenji; Kitagawa, Fumihiko; Otsuka, Koji
2010-08-01
The applicability of an online preconcentration technique, large-volume sample stacking with an electroosmotic flow pump (LVSEP), to microchip zone electrophoresis (MCZE) for the analysis of oligosaccharides was investigated. Since the sample stacking and separation proceeded continuously without polarity switching in LVSEP, a single "straight" channel microchip could be employed. In the MCZE analysis of oligosaccharides, sample adsorption onto the channel surface should be suppressed, so the straight microchannel was modified with poly(vinyl alcohol) (PVA). So far, the mechanism of LVSEP in the polymer-coated capillary or microchannel has not been reported, and thus, the LVSEP process in the PVA-coated channel was investigated by fluorescence imaging. Although it is well-known that the PVA coating can suppress the electroosmotic flow (EOF), an enhanced EOF with a mobility of 4.4 x 10(-4) cm(2)/(V x s) was observed in a low ionic strength sample solution. It was revealed that such temporarily enhanced EOF in the sample zone worked as the driving force to remove the sample matrix in LVSEP. To evaluate the analytical performance of LVSEP-MCZE, oligosaccharides were analyzed in the PVA-coated straight channel. As a result, both the glucose ladder and oligosaccharides obtained from bovine ribonuclease B were well enriched and separated with up to 2200-2900-fold sensitivity enhancement compared to those in a conventional MCZE analysis. The run-to-run repeatabilities of the migration time and peak height were good with relative standard deviations of 1.1% and 7.2%, respectively, which were better than those of normal MCZE. By applying the LVSEP technique to MCZE, a complicated voltage program for fluidic control could be simplified from four channels for two steps to two channels for one step.
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates the probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
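The report's barrage model itself is not reproduced here, but the building block it extends — the single-shot damage probability for a cookie-cutter damage function under circular-normal aiming error — has a standard closed form, Pk = 1 − 2^−(R/CEP)². A sketch with a Monte Carlo check (parameter names illustrative):

```python
import math
import random

def pk_cookie_cutter(lethal_radius, cep):
    """Analytic single-shot kill probability: cookie-cutter damage
    function under circular-normal aiming error with the given CEP."""
    return 1.0 - 0.5 ** ((lethal_radius / cep) ** 2)

def pk_monte_carlo(lethal_radius, cep, n=200_000, seed=1):
    """Monte Carlo check: sample impact points, count those inside
    the lethal radius."""
    sigma = cep / math.sqrt(2.0 * math.log(2.0))  # CEP -> per-axis sigma
    rng = random.Random(seed)
    hits = sum(
        rng.gauss(0.0, sigma) ** 2 + rng.gauss(0.0, sigma) ** 2
        <= lethal_radius ** 2
        for _ in range(n)
    )
    return hits / n
```

With lethal radius equal to the CEP, both give Pk = 0.5 by construction, which is the defining property of the CEP.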
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady state analytically. Finally, we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
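As a hedged numerical sketch (not the paper's own analysis), the diffusive long-time limit on the infinite line can be checked by direct simulation: a telegraph-type RTP with speed v and direction-reversal rate γ has effective diffusion constant D = v²/(2γ), so the position variance approaches 2Dt. All parameter values below are illustrative.

```python
import numpy as np

def simulate_rtp(v=1.0, gamma=1.0, t_final=25.0, dt=0.01, n=10_000, seed=7):
    """Run-and-tumble particles on a line: constant speed v, direction
    reversed at rate gamma (Euler scheme).  Returns final positions."""
    rng = np.random.default_rng(seed)
    steps = int(t_final / dt)
    s = rng.choice([-1.0, 1.0], size=n)      # initial directions
    x = np.zeros(n)
    for _ in range(steps):
        flip = rng.random(n) < gamma * dt    # tumble events this step
        s = np.where(flip, -s, s)
        x += s * v * dt
    return x

x = simulate_rtp()
```

For v = γ = 1 and t = 25, the variance should be close to 2Dt = 25 (the exact telegraph-process result, 2Dt − (v²/2γ²)(1 − e^{−2γt}), differs by only 0.5 here).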
Whelan, Michelle; Kinsella, Brian; Furey, Ambrose; Moloney, Mary; Cantwell, Helen; Lehotay, Steven J; Danaher, Martin
2010-07-02
A new UHPLC-MS/MS (ultra high performance liquid chromatography coupled to tandem mass spectrometry) method was developed and validated to detect 38 anthelmintic drug residues, consisting of benzimidazoles, avermectins and flukicides. A modified QuEChERS-type extraction method was developed with an added concentration step to detect most of the analytes at <1 microg kg(-1) levels in milk. Anthelmintic residues were extracted into acetonitrile using magnesium sulphate and sodium chloride to induce liquid-liquid partitioning followed by dispersive solid phase extraction for cleanup. The extract was concentrated into dimethyl sulphoxide, which was used as a keeper to ensure analytes remain in solution. Using rapid polarity switching in electrospray ionisation, a single injection was capable of detecting both positively and negatively charged ions in a 13 min run time. The method was validated at two levels: the unapproved use level and at the maximum residue level (MRL) according to Commission Decision (CD) 2002/657/EC criteria. The decision limit (CCalpha) of the method was in the range of 0.14-1.9 and 11-123 microg kg(-1) for drugs validated at unapproved and MRL levels, respectively. The performance of the method was successfully verified for benzimidazoles and levamisole by participating in a proficiency study.
Bogialli, Sara; Bortolini, Claudio; Di Gangi, Iole Maria; Di Gregorio, Federica Nigro; Lucentini, Luca; Favaro, Gabriella; Pastore, Paolo
2017-08-01
Comprehensive risk management of human exposure to cyanotoxins, whose production is essentially unpredictable, is limited by the availability of reliable analytical tools for monitoring as many toxic algal metabolites as possible. Two analytical approaches based on an LC-QTOF system for target analysis and suspect screening of cyanotoxins in freshwater are presented. A database of 369 cyanobacterial metabolites was developed and used for retrospective data analysis based on high-resolution mass spectrometry (HRMS). HRMS fragmentation of the suspect cyanotoxin precursor ions was subsequently performed to correctly identify the specific variants. Alternatively, an automatic tandem HRMS analysis tailored for cyanotoxins was performed in a single chromatographic run, using the developed database as a preferred precursor-ion list. Twenty-five extracts of surface and drinking waters contaminated by cyanobacteria were processed. The identification of seven uncommon microcystins (M(O)R, MC-FR, MSer7-YR, D-Asp3,MSer7-LR, MSer7-LR, dmAdda-LR and dmAdda-YR) and six anabaenopeptins (A, B, F, MM850, MM864, oscillamide Y) is reported. Several isobaric variants, fully separated by chromatography, were pointed out. The developed methods are proposed for use by environmental and health agencies to strengthen the surveillance monitoring of cyanotoxins in water. Copyright © 2017 Elsevier B.V. All rights reserved.
29 CFR 1910.305 - Wiring methods, components, and equipment for general use.
Code of Federal Regulations, 2010 CFR
2010-07-01
... distribution center. (B) Conductors shall be run as multiconductor cord or cable assemblies. However, if... persons, feeders may be run as single insulated conductors. (v) The following requirements apply to branch... shall be multiconductor cord or cable assemblies or open conductors. If run as open conductors, they...
NASA Technical Reports Server (NTRS)
Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
Miller, Tyler M; Geraci, Lisa
2016-05-01
People may change their memory predictions after retrieval practice using naïve theories of memory and/or using subjective experience - analytic and non-analytic processes, respectively. The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggested that non-analytic processes play a key role in participants' reducing their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.
Renormalization Group scale-setting in astrophysical systems
NASA Astrophysics Data System (ADS)
Domazet, Silvije; Štefančić, Hrvoje
2011-09-01
A more general scale-setting procedure for General Relativity with Renormalization Group corrections is proposed. Theoretical aspects of the scale-setting procedure and the interpretation of the Renormalization Group running scale are discussed. The procedure is elaborated for several highly symmetric systems with matter in the form of an ideal fluid and for two models of running of the Newton coupling and the cosmological term. For a static spherically symmetric system with the matter obeying the polytropic equation of state the running scale-setting is performed analytically. The obtained result for the running scale matches the Ansatz introduced in a recent paper by Rodrigues, Letelier and Shapiro which provides an excellent explanation of rotation curves for a number of galaxies. A systematic explanation of the galaxy rotation curves using the scale-setting procedure introduced in this Letter is identified as an important future goal.
Pellegrino Vidal, Rocío B; Ibañez, Gabriela A; Escandar, Graciela M
2017-03-07
For the first time, liquid chromatography-diode array detection (LC-DAD) and liquid chromatography-fluorescence detection (LC-FLD) second-order data, collected in a single chromatographic run, were fused and chemometrically processed for the quantitation of coeluting analytes. Two different experimental mixtures composed of fluorescent and nonfluorescent endocrine disruptors were analyzed. Adequate pretreatment of the matrices before their fusion was crucial to attain reliable results. Multivariate curve resolution-alternating least-squares (MCR-ALS) was applied to LC-DAD, LC-FLD, and fused LC-DAD-FLD data. Although different degrees of improvement are observed when comparing the fused matrix results in relation to those obtained using a single detector, clear benefits of data fusion are demonstrated through: (1) the obtained limits of detection in the ranges 2.1-24 ng mL(-1) and 0.9-6.3 ng mL(-1) for the two evaluated systems and (2) the low relative prediction errors, below 7% in all cases, indicating good recoveries and precision. The feasibility of fusing data and its advantages in the analysis of real samples was successfully assessed through the study of spiked tap, underground, and river water samples.
Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.
Caro, J Jaime
2016-07-01
Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
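A toy sketch (not Caro's table-driven Excel specification) of the core DICE idea described above: conditions persist with levels, events fire instantaneously at discrete times and may update conditions or schedule further events, and valuations of conditions are integrated between events. All names and numbers here are illustrative.

```python
import heapq

def run_dice(initial_events, conditions, horizon):
    """Minimal DICE-style loop: pop events in time order, accrue the
    level of the 'utility' condition over elapsed time, then let the
    event mutate conditions and/or schedule further events."""
    queue = list(initial_events)          # entries: (time, name, effect_fn)
    heapq.heapify(queue)
    now, accrued = 0.0, 0.0
    while queue:
        when, _name, effect = heapq.heappop(queue)
        if when > horizon:
            break
        accrued += conditions["utility"] * (when - now)  # integrate condition
        now = when
        effect(conditions, queue)         # instantaneous event
    accrued += conditions["utility"] * (horizon - now)
    return accrued

def progression(conditions, queue):
    """Event: disease progresses, lowering the utility condition level."""
    conditions["utility"] = 0.6

# Utility 0.9 for 2 years, progression, then 0.6 for 3 years.
total_qaly = run_dice([(2.0, "progression", progression)], {"utility": 0.9}, 5.0)
```

Costs, willingness-to-pay, or any other valuation could be accrued concurrently in the same loop, matching the abstract's point that any number of valuations can be applied in a single run.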
Colloidal Mechanisms of Gold Nanoparticle Loss in Asymmetric Flow Field-Flow Fractionation.
Jochem, Aljosha-Rakim; Ankah, Genesis Ngwa; Meyer, Lars-Arne; Elsenberg, Stephan; Johann, Christoph; Kraus, Tobias
2016-10-07
Flow field-flow fractionation is a powerful method for the analysis of nanoparticle size distributions, but its widespread use has been hampered by large analyte losses, especially of metal nanoparticles. Here, we report on the colloidal mechanisms underlying the losses. We systematically studied gold nanoparticles (AuNPs) during asymmetrical flow field-flow fractionation (AF4) by varying the particle properties and the eluent composition. Recoveries of AuNPs (core diameter 12 nm) stabilized by citrate or polyethylene glycol (PEG) at different ionic strengths were determined. We used online UV-vis detection and off-line elemental analysis to follow particle losses during full analysis runs, runs without cross-flow, and runs with parts of the instrument bypassed. The combination allowed us to calculate relative and absolute analyte losses at different stages of the analytical protocol. We found different loss mechanisms depending on the ligand. Citrate-stabilized particles degraded during analysis and suffered large losses (up to 74%). PEG-stabilized particles had smaller relative losses at moderate ionic strengths (1-20%) that depended on PEG length. Long PEGs at higher ionic strengths (≥5 mM) caused particle loss due to bridging adsorption at the membrane. Bulk agglomeration was not a relevant loss mechanism at low ionic strengths ≤5 mM for any of the studied particles. An unexpectedly large fraction of particles was lost at tubing and other internal surfaces. We propose that the colloidal mechanisms observed here are relevant loss mechanisms in many particle analysis protocols and discuss strategies to avoid them.
Viidanoja, Jyrki
2015-02-27
A new method for quantification of short chain C1-C6 carboxylic acids in vegetable oils and fats by employing Liquid Chromatography Mass Spectrometry (LC-MS) has been developed. The method requires minor sample preparation and applies non-conventional Electrospray Ionization (ESI) liquid phase chemistry. Samples are first dissolved in chloroform and then extracted using water that has been spiked with stable isotope labeled internal standards that are used for signal normalization and absolute quantification of selected acids. The analytes are separated using Ion Exclusion Chromatography (IEC) and detected with Electrospray Ionization Mass Spectrometry (ESI-MS) as deprotonated molecules. Prior to ionization, the eluent that contains hydrochloric acid is modified post-column to ensure good ionization efficiency of the analytes. The average within-run and between-run precisions were generally lower than 8%. The accuracy was between 85 and 115% for most of the analytes. The Lower Limit of Quantification (LLOQ) ranged from 0.006 to 7 mg/kg. It is shown that this method offers good selectivity in cases where UV detection fails to produce reliable results. Copyright © 2015 Elsevier B.V. All rights reserved.
Eddhif, Balkis; Allavena, Audrey; Liu, Sylvie; Ribette, Thomas; Abou Mrad, Ninette; Chiavassa, Thierry; d'Hendecourt, Louis Le Sergeant; Sternberg, Robert; Danger, Gregoire; Geffroy-Rodier, Claude; Poinot, Pauline
2018-03-01
The present work aims at developing two LC-HRMS setups for the screening of organic matter in astrophysical samples. Their analytical development has been demonstrated on a 100-µg residue coming from the photo-thermo chemical processing of a cometary ice analog produced in the laboratory. The first 1D-LC-HRMS setup combines a serially coupled column configuration with HRMS detection. It allowed discrimination among different chemical families (amino acids, sugars, nucleobases and oligopeptides) in a single chromatographic run, without a priori acid hydrolysis or chemical derivatisation. The second setup is a dual-LC configuration which connects a series of trapping columns with analytical reverse-phase columns. By coupling these two distinct LC units on-line with HRMS detection, high mass compounds (350
[Automated analyzer of enzyme immunoassay].
Osawa, S
1995-09-01
Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe recent advances in automated analyzers, reviewing their labeled antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.
Xiao, Jie; Wang, Tianyang; Li, Pei; Liu, Ran; Li, Qing; Bi, Kaishun
2016-08-15
A sensitive, reliable and accurate UHPLC-MS/MS method has been established and validated for the first time for the simultaneous quantification of ginkgo flavonoids, terpene lactones and nimodipine in rat plasma after oral administration of Ginkgo biloba dispersible tablets, Nimodipine tablets and the combination of the two, respectively. The plasma samples were extracted by two-step liquid-liquid extraction: nimodipine was extracted with hexane-ether (3:1, v/v) in the first step, after which ginkgo flavonoids and terpene lactones were extracted with ethyl acetate. The analytes were then successfully separated by gradient elution with a mobile phase consisting of 0.1% formic acid in water and methanol at a flow rate of 0.6 mL/min. Detection of the analytes was performed on a UHPLC-MS/MS system with a turbo ion spray source in negative ion and multiple reaction monitoring (MRM) mode. The calibration curves for all the analytes showed good linearity (R(2)>0.99), and the lower limits of quantification were 0.50-4.00 ng/mL. Intra-day and inter-day precisions were in the range of 3.6%-9.2% and 3.2%-13.1% for all the analytes. The mean extraction recoveries of the analytes were within 69.82%-103.5%, and the matrix effects were within 82.8%-110.0%. The validated method was successfully applied to compare the pharmacokinetic parameters of ginkgo flavonoids, terpene lactones and nimodipine in rat plasma after oral administration of Ginkgo biloba dispersible tablets and Nimodipine tablets alone and in combination. There were no statistically significant differences in the pharmacokinetic behaviors of any of the analytes between the combined and single administration groups. The results showed that the combination of the two agents may avoid dosage adjustments in the clinic, and that the combination is convenient as well as efficient for the different pathogeneses of cerebral ischemia. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Long, Junjiajia; Zucker, Steven W.; Emonet, Thierry
The capability to navigate environmental gradients is of critical importance for survival. Countless organisms (microbes, human cells, worms, larvae, and insects) as well as human-made robots use a run-and-tumble strategy to do so. The classical drawback of this approach is that runs in the wrong direction are wasteful. We show analytically that organisms can overcome this fundamental limitation by exploiting the non-normal dynamics and intrinsic nonlinearities inherent to the positive feedback between motion and sensation. Most importantly, this nonlinear amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a "ratchet-like" gradient climbing behavior with drift speeds that can approach half the maximum run speed of the organism. By extending the theoretical study of run-and-tumble navigation into the non-mean-field, nonlinear, and non-normal domains, our results provide a new level of understanding about this basic strategy. We thank Yale HPC, NIGMS 1R01GM106189, and the Allen Distinguished Investigator Program through The Paul G. Allen Frontiers Group for support.
Experimental evaluation of tool run-out in micro milling
NASA Astrophysics Data System (ADS)
Attanasio, Aldo; Ceretti, Elisabetta
2018-05-01
This paper deals with the micro milling cutting process, focusing on tool run-out measurement. In fact, among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research is aimed at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving the production quality and the process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting-edge phase measurement is based on force signal analysis. The developed procedure has been tested on data coming from micro milling experimental tests performed on a Ti6Al4V sample. The results showed that the developed procedure can be successfully used for tool run-out estimation.
A power function profile of a ski jumping in-run hill.
Zanevskyy, Ihor
2011-01-01
The aim of the research was to find a function for the curvilinear segment profile that would make it possible to avoid an instantaneous increase in curvature and to replace the circle-arc segment on the in-run of a ski jump without any correction of the inclination angles or the lengths of the straight-line segments. The methods of analytical geometry and trigonometry were used to calculate an optimal in-run hill profile. There were two fundamental conditions in the model: smooth borders between the curvilinear segment and the straight-line segments of the in-run hill, and concavity of the curvilinear segment. Within the framework of this model, the problem has been solved with reasonable precision. Four functions for the curvilinear segment profile of the in-run hill were investigated: a circle arc, an inclined quadratic parabola, an inclined cubic parabola, and a power function. The application of a power function to the in-run profile satisfies the conditions for replacing a circle-arc segment. Geometrical parameters of 38 modern ski jumps were investigated using the proposed methods.
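As an illustrative sketch of the boundary-matching idea (the paper's exact parameterization is not reproduced here), a power profile y = c·x^p has two free constants, exactly enough to match the slopes of the upper and lower straight segments at the two junction abscissas; all angles and coordinates below are assumed values:

```python
import math

def fit_power_profile(x1, x2, gamma_deg, alpha_deg):
    """Choose c, p in y = c * x**p so that the slope dy/dx equals
    tan(gamma) at x1 and tan(alpha) at x2 (smooth junctions).
    From tan(gamma)/tan(alpha) = (x1/x2)**(p-1)."""
    tg = math.tan(math.radians(gamma_deg))
    ta = math.tan(math.radians(alpha_deg))
    p = 1.0 + math.log(tg / ta) / math.log(x1 / x2)
    c = tg / (p * x1 ** (p - 1.0))
    return c, p

def slope(c, p, x):
    """Derivative of the power profile y = c * x**p."""
    return c * p * x ** (p - 1.0)

# Illustrative geometry: transition from a 35-degree to an 11-degree segment.
c, p = fit_power_profile(10.0, 50.0, 35.0, 11.0)
```

Because the two slope conditions are satisfied exactly, the junctions are smooth without adjusting the straight segments, which is the property the abstract attributes to the power-function profile.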
Defensive Swarm: An Agent Based Modeling Analysis
2017-12-01
INITIAL ALGORITHM (SINGLE- RUN ) TESTING .........................43 1. Patrol Algorithm—Passive...scalability are therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see...changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations testing the model under different circumstances
Bressel, Eadric; Louder, Talin J; Hoover, James P; Roberts, Luke C; Dolny, Dennis G
2017-11-01
The aim of this study was to determine if selected kinematic measures (foot strike index [SI], knee contact angle and overstride angle) were different between aquatic treadmill (ATM) and land treadmill (LTM) running, and to determine if these measures were altered during LTM running as a result of 6 weeks of ATM training. Acute effects were tested using 15 competitive distance runners who completed 1 session of running on each treadmill type at 5 different running speeds. Subsequently, three recreational runners completed 6 weeks of ATM training following a single-subject baseline, intervention and withdrawal experiment. Kinematic measures were quantified from digitisation of video. Regardless of speed, SI values during ATM running (61.3 ± 17%) were significantly greater (P = 0.002) than LTM running (42.7 ± 23%). Training on the ATM did not change (pre/post) the SI (26 ± 3.2/27 ± 3.1), knee contact angle (165 ± 0.3/164 ± 0.8) or overstride angle (89 ± 0.4/89 ± 0.1) during LTM running. Although SI values were different between acute ATM and LTM running, 6 weeks of ATM training did not appear to alter LTM running kinematics as evidenced by no change in kinematic values from baseline to post intervention assessments.
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO(2) ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning is required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
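The near-equivalence of single-point and calibration-curve results reported above holds whenever the detector response is linear through the origin. A minimal sketch (synthetic responses; the 12 mg/L level mirrors the phosphate target, other values illustrative):

```python
import numpy as np

def conc_single_point(conc_std, resp_std, resp_sample):
    """Single-point calibration: assumes response linear through zero."""
    return resp_sample * conc_std / resp_std

def conc_curve(concs, resps, resp_sample):
    """Multi-level calibration: fit a least-squares line, then invert it."""
    slope, intercept = np.polyfit(concs, resps, 1)
    return (resp_sample - intercept) / slope

# 50-150% of a 12 mg/L target, perfectly linear detector (response = 2*conc):
levels = np.array([6.0, 12.0, 18.0])
responses = 2.0 * levels
```

With real data, a nonzero intercept or curvature would make the two estimates diverge, which is why the method's linearity over 50-150% of target had to be verified first.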
Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen
2006-10-16
A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that indicate all positive and negative results in a batch-wise manner for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and data-transfer errors, and is suitable for high-throughput laboratories operating batch-wise.
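The post-analysis comparison step — semi-quantitated concentrations checked against preset cut-offs to flag positives — can be sketched as follows (the cut-off values here are illustrative, not the system's actual presets):

```python
# Illustrative cut-off concentrations in ng/mL (not the system's actual presets).
CUTOFFS = {"morphine": 300.0, "codeine": 300.0, "ketamine": 100.0}

def screen(results, cutoffs=CUTOFFS):
    """Flag each analyte positive when its semi-quantitated
    concentration meets or exceeds the preset cut-off."""
    return {name: conc >= cutoffs[name] for name, conc in results.items()}

report = screen({"morphine": 350.0, "codeine": 12.0, "ketamine": 100.0})
```

The described macro package additionally checks MS matching scores and writes batch reports; this sketch covers only the cut-off comparison logic.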
Statistical error in simulations of Poisson processes: Example of diffusion in solids
NASA Astrophysics Data System (ADS)
Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.
2016-08-01
Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
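The paper's specific error expression is not reproduced here, but the generic scaling it formalizes — the standard error of a rate estimated from Poisson-distributed event counts falls as 1/sqrt(expected count) — can be sketched and checked numerically (all parameter values illustrative):

```python
import numpy as np

def rate_error(lam=4.0, T=100.0, trials=20_000, seed=3):
    """Empirical vs. theoretical std of lam_hat = N/T, N ~ Poisson(lam*T).

    Var[N] = lam*T, so std(lam_hat) = sqrt(lam*T)/T = sqrt(lam/T), and
    the relative error is 1/sqrt(lam*T), i.e. 1/sqrt(expected events)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam * T, size=trials)
    empirical = (counts / T).std()
    theoretical = np.sqrt(lam / T)
    return empirical, theoretical

emp, theo = rate_error()
```

This is why simulations with few recorded diffusion events carry large error bars: halving the relative error requires four times as many events.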
Infrared Spectroscopy as a Chemical Fingerprinting Tool
NASA Technical Reports Server (NTRS)
Huff, Timothy L.
2003-01-01
Infrared (IR) spectroscopy is a powerful analytical tool in the chemical fingerprinting of materials. Any sample material that will interact with infrared light produces a spectrum and, although normally associated with organic materials, inorganic compounds may also be infrared active. The technique is rapid, reproducible and usually non-invasive to the sample. Because it is non-invasive, it allows for additional characterization of the original material using other analytical techniques, including thermal analysis and Raman spectroscopy. With the appropriate accessories, the technique can be used to examine samples in liquid, solid or gas phase. Both aqueous and non-aqueous free-flowing solutions can be analyzed, as can viscous liquids such as heavy oils and greases. Solid samples of varying sizes and shapes may also be examined and, with the addition of microscopic IR (microspectroscopy) capabilities, minute materials such as single fibers and threads may be analyzed. With the addition of appropriate software, microspectroscopy can be used for automated discrete-point or compositional surface-area mapping, with the latter providing a means to record changes in the chemical composition of a material surface over a defined area. Because gaseous samples can also be characterized, IR spectroscopy can be coupled with thermal processes such as thermogravimetric (TG) analysis to provide both thermal and chemical data in a single run. In this configuration, solids (or liquids) heated in a TG analyzer undergo decomposition, with the evolving gases directed into the IR spectrometer. Thus, information is provided on the thermal properties of a material and the order in which its chemical constituents are broken down during incremental heating. Specific examples of these varied applications will be cited, with data interpretation and method limitations further discussed.
Meta-analytic framework for liquid association.
Wang, Lin; Liu, Silvia; Ding, Ying; Yuan, Shin-Sheng; Ho, Yen-Yi; Tseng, George C
2017-07-15
Although coexpression analysis via pair-wise expression correlation is popularly used to elucidate gene-gene interactions at the whole-genome scale, many complicated multi-gene regulations require more advanced detection methods. Liquid association (LA) is a powerful tool to detect the dynamic correlation of two gene variables depending on the expression level of a third variable (LA scouting gene). LA detection from a single transcriptomic study, however, is often unstable and not generalizable due to cohort bias, biological variation and limited sample size. With the rapid development of microarray and NGS technology, LA analysis combining multiple gene expression studies can provide more accurate and stable results. In this article, we proposed two meta-analytic approaches for LA analysis (MetaLA and MetaMLA) to combine multiple transcriptomic studies. To reduce the demanding computation, we also proposed a two-step fast screening algorithm for more efficient genome-wide screening: bootstrap filtering and sign filtering. We applied the methods to five Saccharomyces cerevisiae datasets related to environmental changes. The fast screening algorithm reduced running time by 98%. When compared with single-study analysis, MetaLA and MetaMLA provided stronger detection signal and more consistent and stable results. The top triplets are highly enriched in fundamental biological processes related to environmental changes. Our method can help biologists understand underlying regulatory mechanisms under different environmental exposure or disease states. A MetaLA R package, data and code for this article are available at http://tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online.
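For context, the basic liquid-association statistic that such meta-analytic methods build on is the three-way moment of Li (2002): with each gene's expression standardized, LA(X, Y | Z) = E[XYZ]. The sketch below is an assumed simplified form (not the MetaLA/MetaMLA implementation, which additionally involves normal-quantile transformation and bootstrap filtering), showing the score detecting a Z-dependent X-Y correlation:

```python
import numpy as np

# Basic liquid-association (LA) score: after standardizing each gene's
# expression profile, LA(X, Y | Z) = E[X*Y*Z], estimated by the sample
# mean of the triple product.

def standardize(v):
    return (v - v.mean()) / v.std()

def la_score(x, y, z):
    return float(np.mean(standardize(x) * standardize(y) * standardize(z)))

rng = np.random.default_rng(1)
n = 50_000
z = rng.standard_normal(n)
x = rng.standard_normal(n)
# The x-y correlation flips sign with z: a textbook LA pattern that
# pairwise correlation alone would miss (overall corr(x, y) ≈ 0).
y = np.sign(z) * 0.8 * x + 0.6 * rng.standard_normal(n)

la_dynamic = la_score(x, y, z)                      # clearly positive
la_null = la_score(x, rng.standard_normal(n), z)    # near zero
print(la_dynamic, la_null)
```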
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Because of the mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a single site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
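As a point of reference for the line-source decomposition the abstract mentions, here is a minimal sketch of the textbook Gaussian approach (not the paper's hypergeometric solutions): a ground-level infinite crosswind line source has a closed-form ground-level concentration, and an area source can be approximated by summing many such crosswind strips. The vertical dispersion coefficient σz(x) below is a hypothetical power law, and all parameter values are illustrative:

```python
import numpy as np

# Textbook sketch: an infinite crosswind line source of strength Q_L
# (mass per unit length per unit time) released at ground level gives a
# ground-level concentration C(x) = sqrt(2/pi) * Q_L / (u * sigma_z(x)),
# and an area source can be decomposed into many crosswind line strips.

def sigma_z(x, a=0.06, b=0.92):
    """Hypothetical power-law vertical dispersion coefficient, metres."""
    return a * x**b

def line_source_conc(x, q_line, u=5.0):
    """Ground-level concentration x metres downwind of a ground-level
    infinite crosswind line source (wind speed u in m/s)."""
    return np.sqrt(2.0 / np.pi) * q_line / (u * sigma_z(x))

def area_source_conc(x_receptor, x0, x1, q_area, n=2000, u=5.0):
    """Approximate an area source spanning [x0, x1] upwind of the
    receptor as n crosswind line strips and sum their contributions."""
    xs = np.linspace(x0, x1, n)     # strip positions
    dx = xs[1] - xs[0]              # strip width
    dist = x_receptor - xs          # downwind distance to each strip
    return float(np.sum(line_source_conc(dist, q_area * dx, u)))

c = area_source_conc(x_receptor=1000.0, x0=0.0, x1=500.0, q_area=1e-4)
print(c)
```

The time-series cost the abstract describes comes from repeating such sums (or their more rigorous equivalents) for every hour of meteorology, which is what the paper's closed-form solutions avoid.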
New operator assistance features in the CMS Run Control System
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
2017-10-01
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
New Operator Assistance Features in the CMS Run Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Dual Optical Comb LWIR Source and Sensor
2017-10-12
Figure 39. Locking loop only controls one parameter, whereas there are two free-running parameters to control...optical frequency, along with a 12-point running average (black) equivalent to a 4 cm-1 resolution...Figure 65...and processed on a single epitaxial substrate. Each OFC will be electrically driven and free-running (requiring no optical locking mechanisms). This
Schuck, Peter; Gillis, Richard B.; Besong, Tabot M.D.; Almutairi, Fahad; Adams, Gary G.; Rowe, Arthur J.; Harding, Stephen E.
2014-01-01
Sedimentation equilibrium (analytical ultracentrifugation) is one of the most inherently suitable methods for the determination of average molecular weights and molecular weight distributions of polymers, because of its absolute basis (no conformation assumptions) and inherent fractionation ability (without the need for columns or membranes and the associated assumptions over inertness). With modern instrumentation it is also possible to run up to 21 samples simultaneously in a single run. Its application has been severely hampered by difficulties in baseline determination (incorporating estimation of the concentration at the air/solution meniscus) and by the complexity of the analysis procedures. We describe a new method for baseline determination based on a smart-smoothing principle and built into the highly popular platform SEDFIT for the analysis of the sedimentation behavior of natural and synthetic polymer materials. The SEDFIT-MSTAR procedure, which takes only a few minutes to perform, is tested with four synthetic data sets (including a significantly non-ideal system), a naturally occurring protein (human IgG1) and two naturally occurring carbohydrate polymers (pullulan and λ-carrageenan) in terms of (i) the weight-average molecular weight for the whole distribution of species in the sample, (ii) the variation in "point" average molecular weight with local concentration in the ultracentrifuge cell, and (iii) the molecular weight distribution. PMID:24244936
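The core relation behind recovering a weight-average molecular weight from a sedimentation-equilibrium gradient is the standard ideal-solution result M_w = (2RT / ((1 − v̄ρ)ω²)) · d ln c / d(r²). A self-contained sketch with idealized single-species data and illustrative parameter values (this is the textbook relation, not the SEDFIT-MSTAR smart-smoothing procedure):

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
T = 293.15         # K
M = 150_000.0      # g/mol, IgG-like test value (illustrative)
vbar = 0.73e-3     # m^3/kg, partial specific volume (assumed)
rho = 1000.0       # kg/m^3, solvent density
omega = 2*np.pi*10_000/60   # 10,000 rpm in rad/s

# Ideal single-species equilibrium gradient:
#   c(r) = c(r0) * exp( M(1-vbar*rho)*omega^2 * (r^2 - r0^2) / (2RT) )
sigma = (M/1000) * (1 - vbar*rho) * omega**2 / (2*R*T)   # per m^2 (M in kg/mol)
r = np.linspace(6.9e-2, 7.1e-2, 200)                     # radial positions, m
c = 0.3 * np.exp(sigma * (r**2 - r[0]**2))               # concentration profile

# Recover M_w from the slope of ln c versus r^2:
slope = np.polyfit(r**2, np.log(c), 1)[0]
M_rec = slope * 2*R*T / ((1 - vbar*rho) * omega**2) * 1000   # back to g/mol
print(M_rec)   # ≈ 150000
```

In real data the baseline offset and meniscus concentration enter before this slope can be taken, which is exactly the step the abstract says SEDFIT-MSTAR automates.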
NASA Technical Reports Server (NTRS)
White, P. R.; Little, R. R.
1985-01-01
A research effort was undertaken to develop personal computer based software for vibrational analysis. The software was developed to analytically determine the natural frequencies and mode shapes for the uncoupled lateral vibrations of the blade and counterweight assemblies used in a single bladed wind turbine. The uncoupled vibration analysis was performed in both the flapwise and chordwise directions for static rotor conditions. The effects of rotation on the uncoupled flapwise vibration of the blade and counterweight assemblies were evaluated for various rotor speeds up to 90 rpm. The theory, used in the vibration analysis codes, is based on a lumped mass formulation for the blade and counterweight assemblies. The codes are general so that other designs can be readily analyzed. The input for the codes is generally interactive to facilitate usage. The output of the codes is both tabular and graphical. Listings of the codes are provided. Predicted natural frequencies of the first several modes show reasonable agreement with experimental results. The analysis codes were originally developed on a DEC PDP 11/34 minicomputer and then downloaded and modified to run on an ITT XTRA personal computer. Studies conducted to evaluate the efficiency of running the programs on a personal computer as compared with the minicomputer indicated that, with the proper combination of hardware and software options, the efficiency of using a personal computer exceeds that of a minicomputer.
NASA Astrophysics Data System (ADS)
Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere
2006-02-01
Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, and driving through production and fulfillment, to evaluating results is currently performed by experienced, highly trained staff. Presented is a solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user can easily run a successful campaign from the desktop. This paper presents the technologies, structure, and process flows used to bring this system together. We highlight how the complexity of running a targeted campaign is hidden from the user through these technologies, all while providing the benefits of a professionally managed campaign.
Analyte discrimination from chemiresistor response kinetics.
Read, Douglas H; Martin, James E
2010-08-15
Chemiresistors are polymer-based sensors that transduce the sorption of a volatile organic compound into a resistance change. Like other polymer-based gas sensors that function through sorption, chemiresistors can be selective for analytes on the basis of the affinity of the analyte for the polymer. However, a single sensor cannot, in and of itself, discriminate between analytes, since a small concentration of an analyte that has a high affinity for the polymer might give the same response as a high concentration of another analyte with a low affinity. In this paper we use a field-structured chemiresistor to demonstrate that its response kinetics can be used to discriminate between analytes, even between those that have identical chemical affinities for the polymer phase of the sensor. The response kinetics is shown to be independent of the analyte concentration, and thus the magnitude of the sensor response, but is found to vary inversely with the analyte's saturation vapor pressure. Saturation vapor pressures often vary greatly from analyte to analyte, so analysis of the response kinetics offers a powerful method for obtaining analyte discrimination from a single sensor.
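A minimal illustration of why response kinetics can discriminate where response magnitude cannot, under an assumed first-order sorption model (an illustrative sketch, not the paper's field-structured sensor analysis):

```python
import numpy as np

# Assumed first-order sorption model: if the resistance change follows
#     R(t) = R_inf * (1 - exp(-t/tau)),
# then the normalized response R(t)/R_inf, and hence the fitted time
# constant tau, is independent of analyte concentration, which sets
# only the magnitude R_inf. Matching magnitudes from two different
# analytes therefore cannot fool a kinetic fit.

def response(t, r_inf, tau):
    """Resistance change versus time for first-order sorption."""
    return r_inf * (1.0 - np.exp(-t / tau))

def fit_tau(t, r, r_inf):
    """Recover tau from a log-linear fit of 1 - R/R_inf."""
    slope = np.polyfit(t, np.log(1.0 - r / r_inf), 1)[0]
    return -1.0 / slope

t = np.linspace(0.0, 10.0, 200)   # seconds
tau_true = 2.0                    # hypothetical analyte time constant

# Two very different "concentrations" (response magnitudes), same analyte:
tau_lo = fit_tau(t, response(t, r_inf=0.1, tau=tau_true), 0.1)
tau_hi = fit_tau(t, response(t, r_inf=5.0, tau=tau_true), 5.0)
print(tau_lo, tau_hi)   # both ≈ 2.0
```

Per the abstract, tau-like kinetic parameters vary with the analyte's saturation vapor pressure, so two analytes with identical polymer affinities can still be told apart.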
Containment of composite fan blades
NASA Technical Reports Server (NTRS)
Stotler, C. L.; Coppa, A. P.
1979-01-01
A lightweight containment was developed for turbofan engine fan blades. Subscale ballistic-type tests were first run on a number of concepts. The most promising configuration was selected and further evaluated by larger-scale tests in a rotating test rig. Weight savings made possible by the use of this new containment system were determined and extrapolated to a CF6-size engine. An analytical technique was also developed to predict the motion of released blades during the blade/casing interaction process. Initial checkout of this procedure was accomplished using several of the tests run during the program.
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge for large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads. However, these tools do not scale efficiently, and they incur heavy run-time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
Active Control of Inlet Noise on the JT15D Turbofan Engine
NASA Technical Reports Server (NTRS)
Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.
1999-01-01
This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. 
The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
Kitanaka, Nobue; Kitanaka, Junichi; Hall, F. Scott; Uhl, George R.; Watabe, Kaname; Kubo, Hitoshi; Takahashi, Hitoshi; Tatsuta, Tomohiro; Morita, Yoshio; Takemura, Motohiko
2014-01-01
Repeated intermittent administration of amphetamines acutely increases appetitive and consummatory aspects of motivated behaviors as well as general activity and exploratory behavior, including voluntary running wheel activity. Subsequently, if the drug is withdrawn, the frequency of these behaviors decreases, which is thought to be indicative of dysphoric symptoms associated with amphetamine withdrawal. Such decreases may be observed after chronic treatment or even after single drug administrations. In the present study, the effect of acute methamphetamine (METH) on running wheel activity, horizontal locomotion, appetitive behavior (food access), and consummatory behavior (food and water intake) was investigated in mice. A multi-configuration behavior apparatus designed to monitor the five behaviors was developed, where combined measures were recorded simultaneously. In the first experiment, naïve male ICR mice showed gradually increasing running wheel activity over three consecutive days after exposure to a running wheel, while mice without a running wheel showed gradually decreasing horizontal locomotion, consistent with running wheel activity being a positively motivated form of natural motor activity. In experiment 2, increased horizontal locomotion and food access, and decreased food intake, were observed for the initial 3 h after acute METH challenge. Subsequently, during the dark-phase period, decreased running wheel activity and horizontal locomotion were observed. The reductions in running wheel activity and horizontal locomotion may be indicative of reduced dopaminergic function, although it remains to be seen if these changes may be more pronounced after more prolonged METH treatments. PMID:22079320
Robin J. Tausch
2015-01-01
A theoretically based analytic model of plant growth in single-species conifer communities, based on the species fully occupying a site and fully using the site resources, is introduced. Model derivations result in a single equation that simultaneously describes changes over both different site conditions (or resources available) and over time for each variable for each...
NASA Astrophysics Data System (ADS)
Mohaghegh, Shahab
2010-05-01
The Surrogate Reservoir Model (SRM) is a new solution for fast-track, comprehensive reservoir analysis (solving both direct and inverse problems) using existing reservoir simulation models. SRM is defined as a replica of the full field reservoir simulation model that runs and provides accurate results in real time (one simulation run takes only a fraction of a second). SRM mimics the capabilities of a full field model with high accuracy. Reservoir simulation is the industry standard for reservoir management. It is used in all phases of field development in the oil and gas industry. The routine of simulation studies calls for integration of static and dynamic measurements into the reservoir model. Full field reservoir simulation models have become the major source of information for analysis, prediction and decision making. Large prolific fields usually go through several versions (updates) of their model. Each new version usually is a major improvement over the previous version. The updated model includes the latest available information, incorporated along with adjustments that usually result from single-well or multi-well history matching. As the number of reservoir layers (thickness of the formations) increases, the number of cells representing the model approaches several million. As reservoir models grow in size, so does the time required for each run. Schemes such as grid computing and parallel processing help to a certain degree but do not provide the required speed for tasks such as: field development strategies using comprehensive reservoir analysis, solving the inverse problem for injection/production optimization, quantifying uncertainties associated with the geological model, and real-time optimization and decision making. These types of analyses require hundreds or thousands of runs.
Furthermore, with the new push for smart fields in the oil/gas industry, a natural outgrowth of smart completions and smart wells, the need for real-time reservoir modeling becomes more pronounced. SRM is developed using the state of the art in neural computing and fuzzy pattern recognition to address the ever-growing need in the oil and gas industry for accurate but high-speed simulation and modeling. Unlike conventional geo-statistical approaches (response surfaces, proxy models …) that require hundreds of simulation runs for development, SRM is developed with only a few (10 to 30) simulation runs. SRM can be developed regularly (as new versions of the full field model become available) off-line and can be put online for real-time processing to guide important decisions. SRM has proven its value in the field. An SRM was developed for a giant oil field in the Middle East. The model included about one million grid blocks with more than 165 horizontal wells and took ten hours for a single run on 12 parallel CPUs. Using only 10 simulation runs, an SRM was developed that was able to accurately mimic the behavior of the reservoir simulation model. Performing a comprehensive reservoir analysis that included making millions of SRM runs, wells in the field were divided into five clusters. It was predicted that wells in clusters 1 and 2 were the best candidates for rate relaxation with minimal long-term water production, while wells in clusters 4 and 5 were susceptible to high water cuts. Two and a half years and 20 wells later, rate relaxation results from the field proved that all the predictions made by the SRM analysis were correct. While incremental oil production increased in all wells (wells in cluster 1 produced the most, followed by wells in clusters 2, 3 …), the percent change in average monthly water cut for wells in each cluster clearly demonstrated the analytic power of SRM.
As correctly predicted, wells in clusters 1 and 2 actually experienced a reduction in water cut, while a substantial increase in water cut was observed in wells classified into clusters 4 and 5. Performing these analyses would have been impossible using the original full field simulation model.
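The surrogate idea itself, stripped of the neural/fuzzy machinery, can be sketched with a toy regression: fit a cheap model to a handful of expensive "simulator" runs, then evaluate it many thousands of times. Everything below (the stand-in simulator, the polynomial form, the sample counts) is an illustrative assumption, not SRM's implementation:

```python
import numpy as np

# Toy surrogate-modelling sketch: a cheap regression stands in for an
# expensive simulator after being trained on roughly a dozen runs.

def expensive_simulator(x):
    """Stand-in for a full-field simulation run (hypothetical response)."""
    return np.sin(3*x) + 0.5*x**2

x_train = np.linspace(0.0, 2.0, 12)     # ~10 affordable simulation runs
y_train = expensive_simulator(x_train)

coef = np.polyfit(x_train, y_train, 5)  # the "surrogate" model
surrogate = np.poly1d(coef)

# Millions of cheap surrogate evaluations would replace full runs here;
# we check fidelity on a dense grid of query points.
x_query = np.linspace(0.0, 2.0, 100_000)
err = float(np.max(np.abs(surrogate(x_query) - expensive_simulator(x_query))))
print(err)   # small relative to the O(1) range of the response
```

The design trade-off mirrors the abstract: accept a small, quantifiable loss of fidelity in exchange for per-query costs many orders of magnitude below a full simulation run.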
[The concept of the development of the state of chemical-analytical environmental monitoring].
Rakhmanin, Iu A; Malysheva, A G
2013-01-01
Chemical-analytical monitoring of environmental quality is based on accounting for trace amounts of substances. Given the multicomponent composition of the environment and the ongoing transformation processes of substances within it, assessing the danger posed by chemical pollution of the environment to population health requires an evaluation based on the simultaneous accounting of the complex of substances actually present in the environment and entering it from different sources. Therefore, analytical monitoring of environmental quality and safety must shift from an orientation toward investigating specific target substances to an assessment of the real complex of compounds.
Basic forensic identification of artificial leather for hit-and-run cases.
Sano, Tetsuya; Suzuki, Shinichi
2009-11-20
Single fibers retrieved from a victim's garments and adhering to a suspect's automobile have frequently been used to establish a link between the victim and the vehicle. Identification methods for discriminating single fibers have already been established, but a case was encountered that required discrimination of artificial leather fragments retrieved from the victim's bag and fused fibers from the bumper of the suspect's automobile. In this report, basic studies were conducted on the identification of artificial leathers and of single fibers from leather materials. Fiber morphology was observed using scanning electron microscopy (SEM), the color of the leather sheets was evaluated by microspectrophotometry (MSP), the leather components were measured by infrared microspectrometry (micro-FT-IR), and the inorganic contents were ascertained by micro-X-ray fluorescence spectrometry (micro-XRF); the latter two methods also complement established single-fiber analytical methods. The combination of these techniques showed high discriminating power in forensic examinations of these artificial leather samples. For smooth-surface artificial leather, a total of 182 sheets were obtained: 177 colored sheets supplied directly by 10 of the 24 manufacturers in Japan and five purchased as retail products. Nine samples of suede-like artificial leather were obtained: six supplied by two manufacturers and three purchased as retail products. Single fibers from the smooth-surface sheets showed characteristic surface markings, and XRF could effectively discriminate between the sheets. Combining the micro-FT-IR results, color evaluation by MSP, and the inorganic elements detected by XRF enabled discrimination in about 92% of the 15,576 pairwise comparisons.
The five smooth-surface retail samples were divided by chemical composition into four categories, and with the addition of color information they were clearly distinguished. Suede-like artificial leather sheets showed characteristic extra-fine surface fibers under SEM imaging, providing high discriminating ability; the suede-like sheets were divided into three categories by micro-FT-IR, and combining these results with color evaluation made it possible to discriminate all nine suede-like sheets examined.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. 
The program package also includes the hull program for the calculation of Voronoi volumes, the symmetry program for determining point group symmetry of a molecule, and seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes of the torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s. References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I. M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
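As a toy illustration of the torsional eigenvalue summation idea mentioned above (not MSTor's implementation; the rotational constant, barrier height and temperature below are assumed values), a one-dimensional hindered rotor can be diagonalized in a plane-wave basis and its partition function summed directly:

```python
import numpy as np

def torsional_partition_function(B=5.0, V0=400.0, n=3, kT=200.0, mmax=60):
    """1D hindered-rotor partition function by eigenvalue summation.

    B    : rotational constant hbar^2/(2I) in cm^-1 (assumed value)
    V0   : barrier height in cm^-1 for V(phi) = V0/2 * (1 - cos(n*phi))
    n    : torsional periodicity
    kT   : thermal energy in cm^-1 (about 208 cm^-1 at 300 K)
    mmax : plane-wave basis cutoff
    """
    # Plane-wave basis exp(i*m*phi), m = -mmax..mmax.
    m = np.arange(-mmax, mmax + 1)
    size = m.size
    H = np.zeros((size, size))
    # Kinetic energy B*m^2 is diagonal; the constant V0/2 comes from the potential.
    np.fill_diagonal(H, B * m**2 + V0 / 2.0)
    # The -V0/2 * cos(n*phi) term couples m with m +/- n, amplitude -V0/4 each.
    for i in range(size - n):
        H[i, i + n] = H[i + n, i] = -V0 / 4.0
    E = np.linalg.eigvalsh(H)
    E -= E[0]                       # measure energies from the ground state
    return float(np.sum(np.exp(-E / kT)))
```

For V0 = 0 the sum reduces to the free-rotor result, a convenient sanity check.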
Real-Time Analytics for the Healthcare Industry: Arrhythmia Detection.
Agneeswaran, Vijay Srinivas; Mukherjee, Joydeb; Gupta, Ashutosh; Tonpay, Pranay; Tiwari, Jayati; Agarwal, Nitin
2013-09-01
It is time for the healthcare industry to move from the era of "analyzing our health history" to the age of "managing the future of our health." In this article, we illustrate the importance of real-time analytics across the healthcare industry by providing a generic mechanism to reengineer traditional analytics expressed in the R programming language into Storm-based real-time analytics code. This is a powerful abstraction, since most data scientists use R to write their analytics but are often unclear about how to make those analytics work in real time on high-velocity data. Our paper focuses on the applications necessary to a healthcare analytics scenario, specifically the importance of electrocardiogram (ECG) monitoring. A physician can use our framework to compare ECG reports by categorization and consequently detect arrhythmia. The framework reads the ECG signals and uses a machine learning-based categorizer, running within a Storm environment, to compare different ECG signals. The paper also presents performance studies of the framework to illustrate the throughput and accuracy trade-off in real-time analytics.
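The R-to-Storm reengineering itself is not reproduced here, but the core idea, replacing a batch analysis with a tuple-at-a-time operator that keeps only bounded incremental state (as a Storm bolt would), can be sketched in Python. The peak detector and its thresholds are illustrative, not the paper's categorizer:

```python
from collections import deque

class StreamingRRDetector:
    """Tuple-at-a-time peak detector over an ECG-like sample stream.

    Keeps only a bounded window of state, the way a Storm bolt would,
    instead of loading the whole recording as a batch analysis does.
    (Illustrative window size and threshold; not the paper's categorizer.)
    """
    def __init__(self, window=4, threshold=0.5):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.peaks = []            # sample indices flagged as R-peaks
        self.index = 0

    def process(self, sample):
        # Flag a peak when the sample exceeds the threshold and is the
        # local maximum of the recent window.
        self.window.append(sample)
        if sample > self.threshold and sample == max(self.window):
            self.peaks.append(self.index)
        self.index += 1
```

Each `process` call is the streaming analogue of one tuple arriving at a bolt; the batch version would instead scan the complete signal array after the fact.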
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
NASA Astrophysics Data System (ADS)
Yang, Yao-Joe; Kuo, Wen-Cheng; Fan, Kuang-Chao
2006-01-01
In this work, we present a single-run single-mask (SRM) process for fabricating suspended high-aspect-ratio structures on standard silicon wafers using an inductively coupled plasma-reactive ion etching (ICP-RIE) etcher. This process eliminates extra fabrication steps which are required for structure release after trench etching. Released microstructures with 120 μm thickness are obtained by this process. The corresponding maximum aspect ratio of the trench is 28. The SRM process is an extended version of the standard process proposed by BOSCH GmbH (BOSCH process). The first step of the SRM process is a standard BOSCH process for trench etching, then a polymer layer is deposited on trench sidewalls as a protective layer for the subsequent structure-releasing step. The structure is released by dry isotropic etching after the polymer layer on the trench floor is removed. All the steps can be integrated into a single-run ICP process. Also, only one mask is required. Therefore, the process complexity and fabrication cost can be effectively reduced. Discussions on each SRM step and considerations for avoiding undesired etching of the silicon structures during the release process are also presented.
Psychophysical spectro-temporal receptive fields in an auditory task.
Shub, Daniel E; Richards, Virginia M
2009-05-01
Psychophysical relative weighting functions, which provide information about the importance of different regions of a stimulus in forming decisions, are traditionally estimated using trial-based procedures, where a single stimulus is presented and a single response is recorded. Everyday listening is much more "free-running" in that we often must detect randomly occurring signals in the presence of a continuous background. Psychophysical relative weighting functions have not been measured with free-running paradigms. Here, we combine a free-running paradigm with the reverse correlation technique used to estimate physiological spectro-temporal receptive fields (STRFs) to generate psychophysical relative weighting functions that are analogous to physiological STRFs. The psychophysical task required the detection of a fixed target signal (a sequence of spectro-temporally coherent tone pips with a known frequency) in the presence of a continuously presented informational masker (spectro-temporally random tone pips). A comparison of psychophysical relative weighting functions estimated with the current free-running paradigm and trial-based paradigms, suggests that in informational-masking tasks subjects' decision strategies are similar in both free-running and trial-based paradigms. For more cognitively challenging tasks there may be differences in the decision strategies with free-running and trial-based paradigms.
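The reverse-correlation estimate described above can be sketched as a response-triggered average of the spectro-temporal stimulus. This is a minimal illustration of the technique, not the authors' exact procedure:

```python
import numpy as np

def strf_by_reverse_correlation(stimulus, responses, n_lags):
    """Estimate a spectro-temporal weighting function by reverse correlation.

    stimulus  : array (n_times, n_freqs), e.g. tone-pip energy per bin
    responses : binary array (n_times,), 1 where the subject responded
    n_lags    : number of time bins before each response to include
    Returns the (n_lags, n_freqs) response-triggered average.
    """
    strf = np.zeros((n_lags, stimulus.shape[1]))
    times = np.flatnonzero(responses)
    times = times[times >= n_lags]            # need a full history window
    for t in times:
        strf += stimulus[t - n_lags:t]        # stimulus preceding response
    return strf / max(len(times), 1)
```

With random tone-pip stimuli, bins that reliably precede responses stand out in the average, which is the psychophysical analogue of a physiological STRF.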
VERTPAK1. Code Verification Analytic Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golis, M.J.
1983-04-01
VERTPAK1 is a package of analytical solutions used in verification of numerical codes that simulate fluid flow, rock deformation, and solute transport in fractured and unfractured porous media. VERTPAK1 contains the following: BAREN, an analytical solution developed by Barenblatt, Zheltov and Kochina (1960) for describing transient flow to a well penetrating a (double porosity) confined aquifer; GIBMAC, an analytical solution developed by McNamee and Gibson (1960) for describing consolidation of a semi-infinite soil medium subject to a strip (plane strain) or cylindrical (axisymmetric) loading; GRINRH, an analytical solution developed by Gringarten (1971) for describing transient flow to a partially penetrating well in a confined aquifer containing a single horizontal fracture; GRINRV, an analytical solution developed by Gringarten, Ramey, and Raghavan (1974) for describing transient flow to a fully penetrating well in a confined aquifer containing a single vertical fracture; HART, an analytical solution given by Nowacki (1962) and implemented by Hart (1981) for describing the elastic behavior of an infinite solid subject to a line heat source; LESTER, an analytical solution presented by Lester, Jansen, and Burkholder (1975) for describing one-dimensional transport of radionuclide chains through an adsorbing medium; STRELT, an analytical solution presented by Streltsova-Adams (1978) for describing transient flow to a fully penetrating well in a (double porosity) confined aquifer; and TANG, an analytical solution developed by Tang, Frind, and Sudicky (1981) for describing solute transport in a porous medium containing a single fracture.
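The package's purpose, checking a numerical code against a closed-form solution, can be illustrated with a simpler example than any of the solutions above: an explicit finite-difference solver for one-dimensional diffusion verified against its erfc solution (grid and parameter values are assumed for the demonstration):

```python
import math

def analytic(x, t, D):
    # Closed-form solution of u_t = D u_xx on x > 0 with u(0,t)=1, u(x,0)=0.
    return math.erfc(x / (2.0 * math.sqrt(D * t)))

def numeric(x_max=10.0, t_end=1.0, D=1.0, nx=400, dt=2e-4):
    # Explicit finite differences with u fixed at 1 on the left boundary.
    dx = x_max / nx
    u = [0.0] * (nx + 1)
    u[0] = 1.0
    r = D * dt / dx**2              # must stay below 0.5 for stability
    for _ in range(int(t_end / dt)):
        u = ([1.0] +
             [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
              for i in range(1, nx)] +
             [0.0])
    return u, dx
```

Agreement between the two at interior points is the verification: a discrepancy beyond truncation error would indicate a coding error in the numerical model, which is exactly the role the VERTPAK1 solutions play for flow, deformation, and transport codes.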
The Effects of a Duathlon Simulation on Ventilatory Threshold and Running Economy
Berry, Nathaniel T.; Wideman, Laurie; Shields, Edgar W.; Battaglini, Claudio L.
2016-01-01
Multisport events continue to grow in popularity among recreational, amateur, and professional athletes around the world. This study aimed to determine the compounding effects of the initial run and cycling legs of an International Triathlon Union (ITU) Duathlon simulation on maximal oxygen uptake (VO2max), ventilatory threshold (VT) and running economy (RE) within a thermoneutral, laboratory-controlled setting. Seven highly trained multisport athletes completed three trials; Trial-1 consisted of a speed-only VO2max treadmill protocol (SOVO2max) to determine VO2max, VT, and RE during a single-bout run; Trial-2 consisted of a 10 km run at 98% of VT followed by an incremental VO2max test on the cycle ergometer; Trial-3 consisted of a 10 km run and 30 km cycling bout at 98% of VT followed by a speed-only treadmill test to determine the compounding effects of the initial legs of a duathlon on VO2max, VT, and RE. A repeated measures ANOVA was performed to determine differences between variables across trials. No difference in VO2max, VT (%VO2max), maximal HR, or maximal RPE was observed across trials. Oxygen consumption at VT was significantly lower during Trial-3 compared to Trial-1 (p = 0.01). This decrease was coupled with a significant reduction in running speed at VT (p = 0.015). A significant interaction between trial and running speed indicates that RE was significantly altered during Trial-3 compared to Trial-1 (p < 0.001). The first two legs of a laboratory-based duathlon simulation negatively impact VT and RE. Our findings may provide a useful method to evaluate multisport athletes, since a single-bout incremental treadmill test fails to reveal important alterations in physiological thresholds. Key points: (1) a decrease in relative oxygen uptake at VT (ml·kg-1·min-1) during the final leg of the duathlon simulation, compared to a single-bout maximal run; (2) a decrease in running speed at VT during the final leg, resulting in an increase of more than 2 minutes to complete a 5 km run; (3) highly trained athletes were unable to complete the final 5 km run at the same intensity at which they completed the initial 10 km run (in a laboratory setting); (4) a better understanding and determination of training loads during multisport training may help to better periodize training programs; additional research is required. PMID:27274661
LABORATORY CAPACITY NEEDS ASSESSMENT OF DRINKING WATER UTILITIES: A GLOBAL PERSPECTIVE
Fully-functioning analytical laboratories capable of producing quality data are essential components of well-run drinking water utilities. In Europe and the US, drinking water laboratory performance is closely monitored and regulated; this is not always the case in the less indu...
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Internet-based interface for STRMDEPL08
Reeves, Howard W.; Asher, A. Jeremiah
2010-01-01
The core of the computer program STRMDEPL08 that estimates streamflow depletion by a pumping well with one of four analytical solutions was re-written in the Javascript software language and made available through an internet-based interface (web page). In the internet-based interface, the user enters data for one of the four analytical solutions, Glover and Balmer (1954), Hantush (1965), Hunt (1999), and Hunt (2003), and the solution is run for constant pumping for a desired number of simulation days. Results are returned in tabular form to the user. For intermittent pumping, the interface allows the user to request that the header information for an input file for the stand-alone executable STRMDEPL08 be created. The user would add the pumping information to this header information and run the STRMDEPL08 executable that is available for download through the U.S. Geological Survey. Results for the internet-based and stand-alone versions of STRMDEPL08 are shown to match.
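Of the four solutions, Glover and Balmer (1954) is the simplest: the depletion fraction is a single complementary error function of distance, storage, transmissivity, and time. A minimal sketch with assumed parameter values (this is the textbook form of the solution, not code taken from STRMDEPL08):

```python
import math

def glover_balmer_fraction(d, S, T, t):
    """Streamflow depletion as a fraction of the pumping rate
    (Glover and Balmer, 1954; idealized aquifer, fully penetrating stream).

    d : distance from the well to the stream (m)
    S : aquifer storage coefficient (dimensionless)
    T : transmissivity (m^2/day)
    t : time since pumping began (days)
    """
    return math.erfc(math.sqrt(d * d * S / (4.0 * T * t)))
```

The fraction grows monotonically with time and approaches 1, meaning that at steady state essentially all of the pumped water is captured from the stream.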
Rapid ultrasensitive single particle surface-enhanced Raman spectroscopy using metallic nanopores.
Cecchini, Michael P; Wiener, Aeneas; Turek, Vladimir A; Chon, Hyangh; Lee, Sangyeop; Ivanov, Aleksandar P; McComb, David W; Choo, Jaebum; Albrecht, Tim; Maier, Stefan A; Edel, Joshua B
2013-10-09
Nanopore sensors embedded within thin dielectric membranes have been gaining significant interest due to their single-molecule sensitivity and their ability to detect a large range of analytes, from DNA and proteins to small molecules and particles. Building on this concept, we utilize a metallic Au solid-state membrane to translocate and rapidly detect single Au nanoparticles (NPs) functionalized with 589 dye molecules using surface-enhanced resonance Raman spectroscopy (SERRS). We show that, due to the plasmonic coupling between the Au metallic nanopore surface and the NP, signal intensities are enhanced when probing analyte molecules bound to the NP surface. Although not single-molecule, this nanopore sensing scheme benefits from the ability of SERRS to provide rich vibrational information on the analyte, improving on current nanopore-based electrical and optical detection techniques. We show that the full vibrational spectrum of the analyte can be detected with ultrahigh spectral sensitivity and a rapid temporal resolution of 880 μs.
NASA Astrophysics Data System (ADS)
Vijayashree, M.; Uthayakumar, R.
2017-09-01
Lead time is one of the major constraints that affect planning at every stage of the supply chain system. In this paper, we study a continuous review inventory model in which ordering cost reductions depend on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed under two different types (linear and logarithmic) of lead-time-dependent ordering cost reduction. In both cases, we develop effective solution procedures for finding the optimal solution and give illustrative numerical examples. The solution procedure determines the optimal order quantity, ordering cost, lead time and number of deliveries from the single vendor to the single buyer in one production run, so that the integrated total cost is minimized. Ordering cost reduction is the central aspect of the proposed model. The mathematical model is solved analytically by minimizing the integrated total cost, and the numerical examples were computed using MATLAB software. A sensitivity analysis with respect to the major parameters of the system is also included. The results reveal that the proposed integrated inventory model is applicable to supply chain manufacturing systems. For each case, an algorithmic procedure for finding the optimal solution is developed; a graphical representation and a computer flowchart illustrate each model.
Long, Leroy L; Srinivasan, Manoj
2013-04-06
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
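The energy-minimization argument can be sketched numerically: with a non-convex energy-rate curve, a mixture of two speeds can beat any single speed near the gait transition. The cost curves below are illustrative stand-ins, not the measured human data from the paper:

```python
import numpy as np

def energy_rate(v):
    """Metabolic rate at speed v (illustrative units and curves,
    not the paper's measured data)."""
    walk = 2.0 + 1.5 * np.asarray(v) ** 2   # convex walking cost
    run = 4.0 + 1.0 * np.asarray(v)         # running cost, cheaper at high v
    return np.minimum(walk, run)            # choose the cheaper gait

def best_strategy(v_avg):
    """Minimal mean energy rate to hold average speed v_avg, allowing a
    mixture of two speeds held for time fractions f and 1 - f."""
    v = np.linspace(0.0, 4.0, 401)
    e = energy_rate(v)
    best = float(energy_rate(v_avg))        # pure single-speed strategy
    lo, hi = v <= v_avg, v >= v_avg
    for v1, e1 in zip(v[lo], e[lo]):
        for v2, e2 in zip(v[hi], e[hi]):
            if v2 - v1 < 1e-9:
                continue
            f = (v2 - v_avg) / (v2 - v1)    # time fraction spent at v1
            best = min(best, f * e1 + (1.0 - f) * e2)
    return best
```

Because the minimum of the two gait curves has a downward kink at the walk-run crossover, mixtures spanning the kink lie below the pure-strategy cost there, which is the non-convexity argument in the abstract.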
Sun, Qian; Chang, Lu; Ren, Yanping; Cao, Liang; Sun, Yingguang; Du, Yingfeng; Shi, Xiaowei; Wang, Qiao; Zhang, Lantong
2012-11-01
A novel method based on high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry was developed for simultaneous determination of the 11 major active components, including ten flavonoids and one phenolic acid, in Cirsium setosum. Separation was performed on a reversed-phase C18 column with gradient elution of methanol and 0.1‰ acetic acid (v/v). The identification and quantification of the analytes were achieved on a hybrid quadrupole linear ion trap mass spectrometer. Multiple-reaction monitoring scanning was employed for quantification, with the electrospray ion source polarity switched between positive and negative modes in a single run. Full validation of the assay was carried out, including linearity, precision, accuracy, stability, and limits of detection and quantification. The results demonstrated that the method developed was reliable, rapid, and specific. Twenty-five batches of C. setosum samples from different sources were then analyzed using the developed method; the total contents of the 11 analytes ranged from 1717.460 to 23028.258 μg/g. Among them, the content of linarin was highest, with a mean value of 7340.967 μg/g. Principal component analysis and hierarchical clustering analysis were performed to differentiate and classify the samples, which is helpful for comprehensive evaluation of the quality of C. setosum. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
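The chemometric classification step can be sketched with a minimal PCA via singular value decomposition. The content matrix below is made up for illustration, not the C. setosum data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples (rows = batches, columns = analyte contents)
    onto their first principal components."""
    Xc = X - X.mean(axis=0)                  # center each analyte column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # sample scores
```

Batches whose component profiles differ systematically separate into clusters in the score plot, which is how PCA is used to differentiate and classify the samples.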
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.
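One reason an analytic methodology can replace Monte Carlo simulation for aggregation is that means, and under independence variances, of play-level estimates add exactly. A sketch using moment-matched lognormals (an illustrative Fenton-Wilkinson-style approximation, not the actual USGS appraisal system):

```python
import math

def aggregate_lognormals(params):
    """Aggregate independent lognormal play estimates analytically.

    params: list of (mu, sigma) pairs of the underlying normal distributions.
    Returns (mean, variance, mu, sigma) of a single moment-matched
    lognormal for the aggregate resource.
    """
    mean = sum(math.exp(mu + 0.5 * s**2) for mu, s in params)
    var = sum((math.exp(s**2) - 1.0) * math.exp(2.0 * mu + s**2)
              for mu, s in params)
    # Moment-match one lognormal to the aggregate (Fenton-Wilkinson style).
    s2 = math.log(1.0 + var / mean**2)
    mu_agg = math.log(mean) - 0.5 * s2
    return mean, var, mu_agg, math.sqrt(s2)
```

No sampling is involved, which is what makes this kind of aggregation fast enough to run on the microcomputers mentioned in the abstract.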
Small field models with gravitational wave signature supported by CMB data
Brustein, Ramy
2018-01-01
We study scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor to scalar ratio. The inflaton potentials that we consider are degree 5 polynomials, for which we precisely calculate the power spectrum and extract the cosmological parameters: the scalar index ns, the running of the scalar index nrun and the tensor to scalar ratio r. We find that for non-vanishing nrun and for r as small as r = 0.001, the precisely calculated values of ns and nrun deviate significantly from what the standard analytic treatment predicts. We study these deviations in detail and discuss their probable causes. As such, all previously considered models of this kind are based upon inaccurate assumptions. We scan the possible values of the potential parameters for which the cosmological parameters are within the range allowed by observations. The five-parameter class is able to reproduce all of the allowed values of ns and nrun for values of r as high as 0.001. This study thus at once refutes previous such models built using the analytical Stewart-Lyth term, and revives the small-field class by building models that do yield an appreciable r while conforming to known CMB observables. PMID:29795608
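The "standard analytic treatment" referred to above expresses the observables through slow-roll derivatives of the potential. A sketch for polynomial potentials in reduced Planck units (illustrative only, since the abstract reports that these expressions can deviate from the precise calculation):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def slow_roll_observables(coeffs, phi):
    """Standard slow-roll ns, r and running for V(phi) given as polynomial
    coefficients (lowest order first), in reduced Planck units (M_p = 1)."""
    V = P.polyval(phi, coeffs)
    V1 = P.polyval(phi, P.polyder(coeffs, 1))
    V2 = P.polyval(phi, P.polyder(coeffs, 2))
    V3 = P.polyval(phi, P.polyder(coeffs, 3))
    eps = 0.5 * (V1 / V) ** 2              # first slow-roll parameter
    eta = V2 / V                           # second slow-roll parameter
    xi2 = V1 * V3 / V**2                   # third slow-roll parameter
    ns = 1.0 - 6.0 * eps + 2.0 * eta
    r = 16.0 * eps
    nrun = 16.0 * eps * eta - 24.0 * eps**2 - 2.0 * xi2
    return ns, r, nrun
```

For the quadratic potential V = phi^2/2 this reduces to the familiar eps = eta = 2/phi^2, a convenient check of the implementation.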
Van Dam, Debby; Vermeiren, Yannick; Aerts, Tony; De Deyn, Peter Paul
2014-08-01
A fast and simple RP-HPLC method with electrochemical detection (ECD) and ion-pair chromatography was developed, optimized and validated in order to simultaneously determine eight different biogenic amines and metabolites in post-mortem human brain tissue in a single-run analytical approach. The compounds of interest are the indolamine serotonin (5-hydroxytryptamine, 5-HT), the catecholamines dopamine (DA) and (nor)epinephrine ((N)E), as well as their respective metabolites, i.e. 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), 5-hydroxy-3-indoleacetic acid (5-HIAA) and 3-methoxy-4-hydroxyphenylglycol (MHPG). A two-level fractional factorial experimental design was applied to study the effect of five experimental factors (i.e. the ion-pair counter-ion concentration, the level of organic modifier, the pH of the mobile phase, the temperature of the column, and the voltage setting of the detector) on the chromatographic behaviour. The cross effects between the five quantitative factors and the capacity and separation factors of the analytes were then analysed using a Standard Least Squares model. The optimized method was fully validated according to the requirements of the SFSTP (Société Française des Sciences et Techniques Pharmaceutiques). Our human brain tissue sample preparation procedure is straightforward and relatively short, which allows samples to be loaded onto the HPLC system within approximately 4 h. Additionally, a high sample throughput was achieved after optimization, with a total runtime of at most 40 min per sample. The conditions and settings of the HPLC system were found to be accurate, with high intra- and inter-assay repeatability, recovery and accuracy rates. The robust analytical method results in very low detection limits and good separation for all eight biogenic amines and metabolites in this complex mixture of biological analytes. Copyright © 2014 Elsevier B.V. All rights reserved.
Street-running LRT may not affect a neighbour's sleep
NASA Astrophysics Data System (ADS)
Sarkar, S. K.; Wang, J.-N.
2003-10-01
A comprehensive dynamic finite difference model and analysis was conducted simulating LRT running at a speed of 24 km/h on a city street. The analysis predicted ground-borne vibration (GBV) to remain at or below the FTA criterion of an RMS velocity of 72 VdB (0.004 in/s) at the nearest residence. In the model, site-specific stratigraphy and dynamic soil and rock properties were used that were determined from in situ testing. The dynamic input load from an LRT vehicle running at 24 km/h was computed from actual measured data from Portland, Oregon's West Side LRT project, which used a low-floor vehicle similar to the one proposed for the NJ Transit project. During initial trial runs of the LRT system, vibration and noise measurements were taken at three street locations while the vehicles were running at about the 20-24 km/h operating speed. The measurements confirmed the predictions and satisfied FTA criteria for noise and vibration for frequent events. This paper presents the analytical model, GBV predictions, site measurement data and comparison with the FTA criterion.
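The FTA ground-borne vibration level is a decibel measure of RMS velocity relative to a reference of 1 micro-inch per second; a one-line conversion shows that the quoted 0.004 in/s corresponds to roughly 72 VdB:

```python
import math

def velocity_to_vdb(v_rms, v_ref=1e-6):
    """Convert RMS velocity (in/s) to VdB re 1 micro-inch/second,
    the reference used in FTA ground-borne vibration criteria."""
    return 20.0 * math.log10(v_rms / v_ref)
```

Here velocity_to_vdb(0.004) evaluates to about 72.04 VdB, matching the criterion cited in the abstract.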
Badrick, Tony; Graham, Peter
2018-03-28
Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of the concepts of laboratory error, sample material matrix and assay capability. However, we do not believe at the coalface that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with improved stability of reagents, that has reduced systematic and random error, which in turn has minimised the risk associated with running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker identification of and response to out-of-control situations.
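An Average of Normals monitor can be sketched as a moving mean of patient results within the reference interval, flagged when it drifts outside control limits. All parameters below are illustrative, not values from the paper:

```python
from collections import deque

class AverageOfNormals:
    """Flag analytical drift from the moving mean of 'normal' patient results.

    Results outside the reference interval are excluded; when the moving
    mean of the last n included results leaves target +/- limit, the run
    is flagged. All parameters are illustrative.
    """
    def __init__(self, low, high, target, limit, n=20):
        self.low, self.high = low, high        # reference interval
        self.target, self.limit = target, limit
        self.window = deque(maxlen=n)

    def add(self, result):
        if self.low <= result <= self.high:
            self.window.append(result)
        if len(self.window) < self.window.maxlen:
            return False                       # not enough data yet
        mean = sum(self.window) / len(self.window)
        return abs(mean - self.target) > self.limit
```

The appeal of the approach is that every patient result contributes to monitoring, so a systematic shift can surface between, or instead of, scheduled IQC runs.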
Chang, Yuwei; Zhao, Chunxia; Wu, Zeming; Zhou, Jia; Zhao, Sumin; Lu, Xin; Xu, Guowang
2012-08-01
In this work a chip-based nano-HPLC coupled to MS (HPLC-chip/MS) method with a simple sample preparation procedure was developed for the flavonoid profiling of soybean. The analytical properties of the method, including the linearity (R², 0.992-0.995), reproducibility (RSD, 1.50-7.66%), intraday precision (RSD, 1.41-5.14%) and interday precision (RSD, 2.76-16.90%), were satisfactory. Compared with the conventional HPLC/MS method, a fast extraction and analysis procedure was applied and more flavonoids were detected in a single run. Additionally, 13 flavonoids in soybean seed were identified for the first time. The method was then applied to the profiling of six varieties of soybean sown at the same place. A clear discrimination was observed among different cultivars; three isoflavones, accounting for nearly 80% of total flavonoid contents, were found to be increased in the spring soybeans compared with the summer cultivars. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
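DIY2 itself is a C++ library, but its block-structured pattern, decompose into blocks, iterate a computation over the blocks, then reduce, can be sketched in Python. The computation and the thread-pool scheduling are illustrative, not the DIY2 API:

```python
from concurrent.futures import ThreadPoolExecutor

def block_parallel_sum_of_squares(data, n_blocks=4):
    """Decompose data into blocks, process each block independently
    (here: a partial sum of squares), then reduce across blocks."""
    size = len(data)
    bounds = [(i * size // n_blocks, (i + 1) * size // n_blocks)
              for i in range(n_blocks)]
    blocks = [data[lo:hi] for lo, hi in bounds]

    def process(block):                  # per-block computation
        return sum(x * x for x in block)

    with ThreadPoolExecutor(max_workers=n_blocks) as pool:
        partials = list(pool.map(process, blocks))
    return sum(partials)                 # reduction across blocks
```

Because each block's computation is self-contained, the same structure could run the blocks serially, across threads, or out-of-core, which is the flexibility the abstract describes.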
Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds
NASA Astrophysics Data System (ADS)
Li, Rui; Chen, Lei; Li, Wen-Syan
Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Current Hadoop (and many others) requires users to configure cloud infrastructures via programs and APIs and such configuration is fixed during the runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job and a workload consisting of multiple jobs running concurrently, which aims at maximum throughput using a minimum set of processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R's flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper's contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.
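The schema-on-read idea is language-agnostic; the hedged Python sketch below (illustrative only, not the R package's API) shows the essential mechanism: recurse through a folder and dispatch a reader per file type, deferring all schema decisions until the data are actually used:

```python
# Hedged sketch of the schema-on-read mechanism (illustrative Python
# analogue of what SchemaOnRead provides in R): read files in native
# form, dispatching by extension, recursing through folders.
import csv
import json
import os

def read_text(path):
    with open(path) as f:
        return f.read()

def read_csv(path):
    with open(path, newline="") as f:
        return list(csv.reader(f))

def read_json(path):
    with open(path) as f:
        return json.load(f)

# Reader registry; a real tool chain would cover many more formats.
READERS = {".txt": read_text, ".csv": read_csv, ".json": read_json}

def schema_on_read(path):
    """Recursively read a folder (or a single file) with no upfront schema."""
    if os.path.isdir(path):
        return {name: schema_on_read(os.path.join(path, name))
                for name in sorted(os.listdir(path))}
    reader = READERS.get(os.path.splitext(path)[1].lower())
    return reader(path) if reader else None  # unknown types left unread
```

Customizing the tool chain then amounts to registering additional readers in the dispatch table, mirroring how the package's single function call can be adapted.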
Multielemental speciation analysis by advanced hyphenated technique - HPLC/ICP-MS: A review.
Marcinkowska, Monika; Barałkiewicz, Danuta
2016-12-01
Speciation analysis has become an invaluable tool in human health risk assessment, environmental monitoring and food quality control. The next step is to develop reliable multielemental speciation methodologies to reduce the costs, waste and time needed for analysis. Separation and detection of species of several elements in a single analytical run can be accomplished by high performance liquid chromatography hyphenated to inductively coupled plasma mass spectrometry (HPLC/ICP-MS). Our review assembles articles concerning multielemental speciation determination of As, Se, Cr, Sb, I, Br, Pb, Hg, V, Mo, Te, Tl, Cd and W in environmental, biological, food and clinical samples analyzed with HPLC/ICP-MS. It addresses the procedures in terms of the following issues: sample collection and pretreatment, selection of optimal conditions for element species separation by HPLC and determination using ICP-MS, as well as the metrological approach. The presented work is the first review article concerning multielemental speciation analysis by the advanced hyphenated technique HPLC/ICP-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
Advanced ceramic coating development for industrial/utility gas turbine applications
NASA Technical Reports Server (NTRS)
Andersson, C. A.; Lau, S. K.; Bratton, R. J.; Lee, S. Y.; Rieke, K. L.; Allen, J.; Munson, K. E.
1982-01-01
The effects of ceramic coatings on the lifetimes of metal turbine components and on the performance of a utility turbine, as well as the effects of the turbine operational cycle on the ceramic coatings, were determined. When operating the turbine under conditions of constant cooling flow, the first-row blades run 55 K cooler and, as a result, have 10 times the creep rupture life, 10 times the low cycle fatigue life, and twice the corrosion life, with only slight decreases in both specific power and efficiency. When operating the turbine at constant metal temperature and reduced cooling flow, both specific power and efficiency increase, with no change in component lifetime. The most severe thermal transient of the turbine causes the coating bond stresses to approach 60% of the bond strengths. Ceramic coating failure was also studied. Analytic models based on fracture mechanics theories, combined with measured properties, quantitatively assessed both single and multiple thermal cycle failures, which allowed the prediction of coating lifetime. Qualitative models for corrosion failures are also presented.
Effects of acceleration rate on Rayleigh-Taylor instability in elastic-plastic materials
NASA Astrophysics Data System (ADS)
Banerjee, Arindam; Polavarapu, Rinosh
2016-11-01
The effect of acceleration rate in the elastic-plastic transition stage of Rayleigh-Taylor instability in an accelerated non-Newtonian material is investigated experimentally using a rotating wheel experiment. A non-Newtonian material (mayonnaise) was accelerated at different rates by varying the angular acceleration of a rotating wheel, and the growth patterns of single-mode perturbations with different combinations of amplitude and wavelength were analyzed. Experiments were run at two different acceleration rates to compare with experiments presented in prior years at APS DFD meetings, and the peak amplitude responses were captured using a high-speed camera. Similar to the instability acceleration, the elastic-plastic transition acceleration is found to increase with acceleration rate for a given amplitude and wavelength. The experimental results will be compared to various analytical strength models and prior experimental studies using Newtonian fluids. The authors acknowledge funding support from Los Alamos National Lab subcontract (370333) and DOE-SSAA Grant (DE-NA0001975).
A sub-minute electrophoretic method for simultaneous determination of naphazoline and zinc.
Ribeiro, Michelle M A C; Oliveira, Thiago C; Batista, Alex D; Muñoz, Rodrigo A A; Richter, Eduardo M
2016-11-11
This paper reports, for the first time, a method for simultaneous determination of naphazoline (NPZ) and zinc (Zn) using an analytical separation technique (capillary electrophoresis with capacitively coupled contactless conductivity detection, CE-C⁴D). A single run is possible every 55 s (sampling rate = 65 h⁻¹). The separation by CE-C⁴D was achieved on a fused silica capillary (50 cm length, 10 cm effective, 50 μm i.d.) with a background electrolyte (BGE) composed of 20 mmol L⁻¹ 2-(morpholin-4-yl)ethane-1-sulfonic acid (MES) and 20 mmol L⁻¹ histidine (HIS) (pH 6.0). Detection limits were estimated at 20 and 30 μmol L⁻¹, and recovery values for spiked samples were 98 and 102%, for NPZ and Zn, respectively. The developed procedure was compared to HPLC (NPZ) and FAAS (Zn), and no statistically significant differences were observed (95% confidence level). Copyright © 2016 Elsevier B.V. All rights reserved.
The dance of the honeybee: how do honeybees dance to transfer food information effectively?
Okada, R; Ikeno, H; Sasayama, Noriko; Aonuma, H; Kurabayashi, D; Ito, E
2008-01-01
A honeybee informs her nestmates of the location of a flower she has visited by a unique behavior called a "waggle dance." On a vertical comb, the direction of the waggle run relative to gravity indicates the direction to the food source relative to the sun in the field, and the duration of the waggle run indicates the distance to the food source. To determine the detailed biological features of the waggle dance, we observed worker honeybee behavior in the field. Video analysis showed that the bee does not dance in a single or random place in the hive but waggles several times in one place and then several times in another. It also showed that the information in the waggle dance contains a substantial margin of error: the angle and duration of waggle runs varied from run to run, within ranges of ±15 degrees and ±15%, respectively, even in a series of waggle dances by a single individual. We also found that most dance followers that listen to the waggle dance left the dancer after one or two sessions of listening.
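The direction/distance encoding described above can be illustrated with a small decoding sketch. The duration-to-distance calibration constant below is an assumed round number for illustration only, since the actual calibration varies between colonies and landscapes:

```python
# Illustrative decoding of a waggle run, following the mapping in the text:
# run angle relative to gravity -> bearing relative to the sun's azimuth,
# run duration -> distance. The calibration factor is an assumption.
METERS_PER_SECOND_OF_WAGGLE = 1000.0  # assumed round-number calibration

def decode_waggle(run_angle_deg, run_duration_s, sun_azimuth_deg):
    """Return (bearing in degrees from north, distance in meters)."""
    bearing = (sun_azimuth_deg + run_angle_deg) % 360.0
    distance = run_duration_s * METERS_PER_SECOND_OF_WAGGLE
    return bearing, distance

# A 2 s run, 30 degrees right of vertical, with the sun at 120 degrees:
print(decode_waggle(30.0, 2.0, 120.0))  # (150.0, 2000.0)
```

The ±15 degree and ±15% run-to-run variation reported above would propagate directly into the decoded bearing and distance, which is why followers may need several runs to average over the noise.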
NASA Astrophysics Data System (ADS)
Escobar Gómez, J. D.; Torres-Verdín, C.
2018-03-01
Single-well pressure-diffusion simulators enable improved quantitative understanding of hydraulic-testing measurements in the presence of arbitrary spatial variations of rock properties. Simulators of this type implement robust numerical algorithms that are often computationally expensive, thereby making the solution of the forward modeling problem onerous and inefficient. We introduce a time-domain perturbation theory for anisotropic permeable media to efficiently and accurately approximate the transient pressure response of spatially complex aquifers. Although theoretically valid for any spatially dependent rock/fluid property, our single-phase flow study emphasizes arbitrary spatial variations of permeability and anisotropy, which constitute key objectives of hydraulic-testing operations. Contrary to time-honored techniques, the perturbation method invokes pressure-flow deconvolution to compute the background medium's permeability sensitivity function (PSF) with a single numerical simulation run. Subsequently, the first-order term of the perturbed solution is obtained by solving an integral equation that weighs the spatial variations of permeability with the spatially and temporally dependent PSF. Finally, discrete convolution transforms the constant-flow approximation to arbitrary multirate conditions. Multidimensional numerical simulation studies for a wide range of single-well field conditions indicate that perturbed solutions can be computed in less than a few CPU seconds with relative errors in pressure of <5%, corresponding to perturbations in background permeability of up to two orders of magnitude. Our work confirms that the proposed joint perturbation-convolution (JPC) method is an efficient alternative to analytical and numerical solutions for accurate modeling of pressure-diffusion phenomena induced by Neumann or Dirichlet boundary conditions.
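The final step of the method, extending a constant-rate solution to multirate conditions by discrete convolution (rate superposition), can be sketched as follows. The unit-rate response used here is a made-up log-like curve for illustration, not the paper's PSF-based perturbed solution:

```python
# Hedged sketch of rate superposition: given a unit-rate drawdown response
# p_u(t), an arbitrary piecewise-constant rate schedule is handled by
# superposing each rate *change* against p_u. The response below is an
# illustrative assumption, not the paper's simulator output.
import math

def unit_rate_response(t):
    """Assumed unit-rate drawdown; log-like growth typical of radial flow."""
    return math.log1p(t)

def multirate_pressure(t, schedule):
    """schedule: list of (start_time, rate), sorted by start time.
    Each rate supersedes the previous one; superpose the rate changes."""
    drawdown, prev_rate = 0.0, 0.0
    for start, rate in schedule:
        if t > start:
            drawdown += (rate - prev_rate) * unit_rate_response(t - start)
        prev_rate = rate
    return drawdown

# A single constant-rate period recovers rate * p_u(t):
print(multirate_pressure(5.0, [(0.0, 2.0)]))  # 2 * log(6)
```

A buildup (shut-in) test is just a second schedule entry with rate zero, which is how the same convolution idea covers injection, withdrawal, and recovery periods.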
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin
In this study, we have developed an analytical solution for thermal single-well injection-withdrawal tests in horizontally fractured reservoirs where fluid flow through the fracture is radial. The dimensionless forms of the governing equations and the initial and boundary conditions in the radial flow system can be written in a form identical to those in the linear flow system developed by Jung and Pruess [Jung, Y., and K. Pruess (2012), A Closed-Form Analytical Solution for Thermal Single-Well Injection-Withdrawal Tests, Water Resour. Res., 48, W03504, doi:10.1029/2011WR010979], and therefore the analytical solutions developed in Jung and Pruess (2012) can be applied to compute the time dependence of temperature recovery at the injection/withdrawal well in a horizontally oriented fracture with radial flow.
Volume 2: Compendium of Abstracts
2017-06-01
simulation work using a standard running model for legged systems, the Spring Loaded Inverted Pendulum (SLIP) Model. In this model, the dynamics of a single...bar SLIP model is analyzed using a basin of attraction analyses to determine the optimal configuration for running at different velocities and...acquisition, and the automatic target acquisition were then compared to each other. After running trials with the current system, it will be
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
Code of Federal Regulations, 2010 CFR
2010-01-01
... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...
Code of Federal Regulations, 2013 CFR
2013-01-01
... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...
Code of Federal Regulations, 2014 CFR
2014-01-01
... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...
Code of Federal Regulations, 2012 CFR
2012-01-01
... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...
Code of Federal Regulations, 2011 CFR
2011-01-01
... means a group of validity screening tests that were made from the same starting material. ... past 24 hours. Adulterated specimen means a urine specimen that has been altered, as evidenced by test... whole specimen. Analytical run means the process of testing a group of urine specimens for validity or...
40 CFR 799.9410 - TSCA chronic toxicity.
Code of Federal Regulations, 2012 CFR
2012-07-01
... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...
40 CFR 799.9410 - TSCA chronic toxicity.
Code of Federal Regulations, 2011 CFR
2011-07-01
... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...
40 CFR 799.9410 - TSCA chronic toxicity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...
40 CFR 799.9410 - TSCA chronic toxicity.
Code of Federal Regulations, 2014 CFR
2014-07-01
... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...
40 CFR 799.9410 - TSCA chronic toxicity.
Code of Federal Regulations, 2013 CFR
2013-07-01
... should be used, if possible, throughout the duration of the study, and the research sample should be... continuously or intermittently depending on the method of analysis. Chamber concentration may be measured using gravimetric or analytical methods, as appropriate. If trial run measurements are reasonably consistent (±10...
40 CFR 86.537-90 - Dynamometer test runs.
Code of Federal Regulations, 2014 CFR
2014-07-01
... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...
40 CFR 86.537-90 - Dynamometer test runs.
Code of Federal Regulations, 2012 CFR
2012-07-01
... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...
40 CFR 86.537-90 - Dynamometer test runs.
Code of Federal Regulations, 2013 CFR
2013-07-01
... “transient” formaldehyde exhaust sample, the “transient” dilution air sample bag, the “transient” methanol... start “transient” exhaust and dilution air bag samples to the analytical system and process the samples... Section 86.537-90 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard
2017-11-06
Participation in half-marathons has been increasing steeply during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runners' load capacity for adaptive tissue repair. Because load capacity increases along with adaptive running training, the runners' running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists of how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes.
ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries than other subgroups of runners. This will enable sports coaches, physiotherapists, and runners themselves to evaluate the injury risk of taking up a 14-week running schedule for half-marathon.
NASA Astrophysics Data System (ADS)
Liu, Junliang; Zhang, Tingfa; Li, Yongfu; Ding, Lei; Tao, Junchao; Wang, Ying; Wang, Qingpu; Fang, Jiaxiong
2017-07-01
A free-running single-photon detector for 1.06 μm wavelength based on an InGaAsP/InP single-photon avalanche diode is presented. The detector incorporates an ultra-fast active-quenching technique to greatly lessen the afterpulsing effects. An improved method for avalanche characterization using electroluminescence is proposed, and the performance of the detector is evaluated. The number of avalanche carriers is as low as 1.68 × 10⁶, resulting in a low total afterpulse probability of 4% at 233 K, 10% detection efficiency, and 1 μs hold-off time.
DOT National Transportation Integrated Search
1994-10-01
The Run-Off-Road Collision Avoidance Using LVHS Countermeasures program addresses the single-vehicle crash problem through the application of technology to prevent and/or reduce the severity of these crashes.
Donato, J L; Koizumi, F; Pereira, A S; Mendes, G D; De Nucci, G
2012-06-15
In the present study, a fast, sensitive and robust method to quantify dextromethorphan, dextrorphan and doxylamine in human plasma using deuterated internal standards (IS) is described. The analytes and the IS were extracted from plasma by liquid-liquid extraction (LLE) using diethyl-ether/hexane (80/20, v/v). Extracted samples were analyzed by high performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). Chromatographic separation was performed by pumping the mobile phase (acetonitrile/water/formic acid, 90/9/1, v/v/v) for 4.0 min at a flow rate of 1.5 mL min⁻¹ into a Phenomenex Gemini® C18, 5 μm analytical column (150 × 4.6 mm i.d.). The calibration curve was linear over the range from 0.2 to 200 ng mL⁻¹ for dextromethorphan and doxylamine and from 0.05 to 10 ng mL⁻¹ for dextrorphan. The intra-batch precision (%CV) and accuracy of the method ranged from 2.5 to 9.5% and from 88.9 to 105.1%, respectively. Inter-batch precision (%CV) and accuracy ranged from 6.7 to 10.3% and from 92.2 to 107.1%, respectively. The run time was 4 min. The analytical procedure herein described was used to assess the pharmacokinetics of dextromethorphan, dextrorphan and doxylamine in healthy volunteers after a single oral dose of a formulation containing 30 mg of dextromethorphan hydrobromide and 12.5 mg of doxylamine succinate. The method has the high sensitivity, specificity and throughput required for a pharmacokinetic study. Copyright © 2012 Elsevier B.V. All rights reserved.
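Two of the routine calculations behind a validation like this one, a least-squares calibration line and a precision figure expressed as %CV, can be sketched as follows. The standard concentrations and detector responses below are illustrative numbers, not the study's data:

```python
# Hedged sketch of calibration-curve fitting and %CV precision, the two
# quantities quoted in validations like the one above. All numeric data
# here are illustrative assumptions.
import statistics

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def percent_cv(values):
    """Coefficient of variation, the precision metric quoted as %CV."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

conc = [0.2, 2.0, 20.0, 200.0]        # ng/mL, illustrative standards
response = [0.41, 4.02, 39.8, 401.5]  # detector response, illustrative
slope, intercept = fit_line(conc, response)
print(round(slope, 3), round(percent_cv([9.5, 10.1, 9.8, 10.2]), 1))
```

In practice a weighted fit is often preferred over such a wide range (0.2 to 200 ng mL⁻¹), since the highest standard otherwise dominates the unweighted least-squares solution.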
D'Avolio, Antonio; Simiele, Marco; Siccardi, Marco; Baietto, Lorena; Sciandra, Mauro; Bonora, Stefano; Di Perri, Giovanni
2010-09-05
A bioanalytical method for the determination of the most commonly prescribed protease inhibitors (saquinavir, atazanavir, amprenavir, darunavir, lopinavir and ritonavir) and non-nucleoside reverse transcriptase inhibitors (etravirine, efavirenz and nevirapine) was developed by modifying our previous HPLC-MS chromatographic run; the method was validated, and a complete short- and long-term stability evaluation was carried out. One hundred microlitres of plasma were distributed on a collection glass paper filter (Glass-Microfibre from Sartorius); the filter then underwent thermal treatment, both for drying and for HIV inactivation, and was stored at room temperature, 4 degrees C and -20 degrees C. The analytes were extracted from the filter disc using tert-butylmethylether at basic pH, after the addition of the internal standard quinoxaline. The extract was dried and reconstituted, the chromatographic separation was performed on a reversed-phase C-18 column (150 mm x 2.0 mm), and the analytes were quantified using a single quadrupole mass spectrometer. The method was validated considering the concentration ranges encountered in clinical trials and routine clinical practice. The assay was linear over the concentration ranges tested. Accuracies ranged from 92.1% to 111.9%, and intra-day and inter-day relative standard deviations for all quality control levels ranged from 0.2 to 12.9 and from 3.1 to 14.4, respectively. Analytes in dried plasma spots were stable for a longer time when the drying/inactivation step was carried out before storage, compared with samples not dried/inactivated before analysis. The drying/inactivation step allows shipment of samples at room temperature without any risks; therefore, the developed and validated method enables easy and cheap sample shipment for therapeutic drug monitoring and pharmacokinetic studies. 2010 Elsevier B.V. All rights reserved.
Batrawi, Nidal; Wahdan, Shorouq; Abualhasan, Murad
2017-01-01
Medroxyprogesterone acetate is widely used in veterinary medicine as an intravaginal dosage form for the synchronization of the breeding cycle in ewes and goats. The main goal of this study was to develop a reverse-phase high-performance liquid chromatography method for the quantification of medroxyprogesterone acetate in veterinary vaginal sponges. A single high-performance liquid chromatography/UV isocratic run was used for the analytical assay of the active ingredient medroxyprogesterone. The chromatographic system consisted of a reverse-phase C18 column as the stationary phase and a mixture of 60% acetonitrile and 40% potassium dihydrogen phosphate buffer as the mobile phase; the pH was adjusted to 5.6. The method was validated according to the International Council for Harmonisation (ICH) guidelines. Forced degradation studies were also performed to evaluate the stability-indicating properties and specificity of the method. Medroxyprogesterone was eluted at 5.9 minutes. The linearity of the method was confirmed in the range of 0.0576 to 0.1134 mg/mL (R2 > 0.999). The limit of quantification was shown to be 3.9 µg/mL. Precision and accuracy ranges were found to be %RSD <0.2 and 98% to 102%, respectively. A capacity factor of 2.1, a tailing factor of 1.03, and a resolution of 3.9 were obtained for medroxyprogesterone, in accordance with ICH guidelines. Based on the obtained results, a rapid, precise, accurate, sensitive, and cost-effective analysis procedure was proposed for the quantitative determination of medroxyprogesterone in vaginal sponges. This analytical method is the only available method to analyse medroxyprogesterone in a veterinary intravaginal dosage form. PMID:28469407
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golis, M.J.
1983-04-01
VERTPAK1 is a package of analytical solutions used in verification of numerical codes that simulate fluid flow, rock deformation, and solute transport in fractured and unfractured porous media. VERTPAK1 contains the following: BAREN, an analytical solution developed by Barenblatt, Zheltov and Kochina (1960) for describing transient flow to a well penetrating a (double porosity) confined aquifer; GIBMAC, an analytical solution developed by McNamee and Gibson (1960) for describing consolidation of a semi-infinite soil medium subject to a strip (plane strain) or cylindrical (axisymmetric) loading; GRINRH, an analytical solution developed by Gringarten (1971) for describing transient flow to a partially penetrating well in a confined aquifer containing a single horizontal fracture; GRINRV, an analytical solution developed by Gringarten, Ramey, and Raghavan (1974) for describing transient flow to a fully penetrating well in a confined aquifer containing a single vertical fracture; HART, an analytical solution given by Nowacki (1962) and implemented by Hart (1981) for describing the elastic behavior of an infinite solid subject to a line heat source; LESTER, an analytical solution presented by Lester, Jansen, and Burkholder (1975) for describing one-dimensional transport of radionuclide chains through an adsorbing medium; STRELT, an analytical solution presented by Streltsova-Adams (1978) for describing transient flow to a fully penetrating well in a (double porosity) confined aquifer; and TANG, an analytical solution developed by Tang, Frind, and Sudicky (1981) for describing solute transport in a porous medium containing a single fracture.
NASA Astrophysics Data System (ADS)
Di Prima, Simone; Bagarello, Vincenzo; Iovino, Massimo
2017-04-01
Simple infiltration experiments carried out in the field allow an easy and inexpensive way of characterizing soil hydraulic behavior, maintaining the functional connection of the sampled soil volume with the surrounding soil. The beerkan method consists of a three-dimensional (3D) infiltration experiment at zero pressure head (Haverkamp et al., 1996). It uses a simple annular ring inserted to a depth of about 0.01 m to avoid lateral loss of the ponded water. Soil disturbance is minimized by the limited ring insertion depth. Infiltration times of small volumes of water repeatedly poured on the confined soil are measured to determine the cumulative infiltration. Different algorithms based on this methodology (the so-called BEST family of algorithms) were developed for the determination of soil hydraulic characteristic parameters (Bagarello et al., 2014a; Lassabatere et al., 2006; Yilmaz et al., 2010). Recently, Bagarello et al. (2014b) developed a Simplified method based on a Beerkan Infiltration run (SBI method) to determine saturated soil hydraulic conductivity, Ks, from only the transient phase of a beerkan infiltration run and an estimate of the α* parameter, which expresses the relative importance of gravity and capillary forces during an infiltration process (Reynolds and Elrick, 1990). However, several problems still arise with the existing BEST algorithms and the SBI method, including (i) the need for supplementary field and laboratory measurements (Bagarello et al., 2013); (ii) the difficulty of detecting a linear relationship between I / √t and √t in the early stage of the infiltration process (Bagarello et al., 2014b); and (iii) the estimation of negative Ks values for hydrophobic soils (Di Prima et al., 2016). In this investigation, a new Simplified method based on the analysis of the Steady-state Beerkan Infiltration run (SSBI method) was proposed and tested.
In particular, analytical data were generated to simulate beerkan infiltration experiments for six contrasting soils (sand, S; loamy sand, LS; sandy loam, SAL; loam, L; silt loam, SIL; and silty clay loam, SCL) from the UNSODA database and different initial water contents. Comparisons with other existing procedures were also carried out. The SSBI method allowed accurate estimation of the saturated soil hydraulic conductivity of both field and analytically generated data. For the analytically generated data, the most accurate predictions were obtained with method 2 of Wu et al. (1999) for the S and LS soils (prediction errors not exceeding 3.8%) and with the SSBI method for the other four soils (error < 3.7%). Therefore, this last method performed better than the other tested methods in most cases. The analysis of the field data supported the usability of the SSBI method in different environments and conditions to obtain an acceptable prediction of Ks, i.e. similar to the one that can be obtained with the BEST-steady algorithm (Bagarello et al., 2014a). Finally, this investigation yielded encouraging signs on the applicability of the SSBI method for a trustworthy estimation of Ks from the near steady-state phase of a beerkan infiltration run.
REFERENCES
Bagarello, V., Castellini, M., Di Prima, S., Giordano, G., Iovino, M., 2013. Testing a Simplified Approach to Determine Field Saturated Soil Hydraulic Conductivity. Procedia Environmental Sciences 19, 599-608. doi:10.1016/j.proenv.2013.06.068
Bagarello, V., Di Prima, S., Iovino, M., 2014a. Comparing Alternative Algorithms to Analyze the Beerkan Infiltration Experiment. Soil Science Society of America Journal 78, 724. doi:10.2136/sssaj2013.06.0231
Bagarello, V., Di Prima, S., Iovino, M., Provenzano, G., 2014b. Estimating field-saturated soil hydraulic conductivity by a simplified Beerkan infiltration experiment. Hydrological Processes 28, 1095-1103. doi:10.1002/hyp.9649
Di Prima, S., Lassabatere, L., Bagarello, V., Iovino, M., Angulo-Jaramillo, R., 2016. Testing a new automated single ring infiltrometer for Beerkan infiltration experiments. Geoderma 262, 20-34. doi:10.1016/j.geoderma.2015.08.006
Haverkamp, R., Arrúe, J., Vandervaere, J., Braud, I., Boulet, G., Laurent, J., Taha, A., Ross, P., Angulo-Jaramillo, R., 1996. Hydrological and thermal behaviour of the vadose zone in the area of Barrax and Tomelloso (Spain): Experimental study, analysis and modeling. Project UE n. EV5C-CT 92, 00-90.
Lassabatere, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments—BEST. Soil Science Society of America Journal 70, 521. doi:10.2136/sssaj2005.0026
Reynolds, W.D., Elrick, D.E., 1990. Ponded Infiltration From a Single Ring: I. Analysis of Steady Flow. Soil Science Society of America Journal 54, 1233. doi:10.2136/sssaj1990.03615995005400050006x
Wu, L., Pan, L., Mitchell, J., Sanden, B., 1999. Measuring Saturated Hydraulic Conductivity using a Generalized Solution for Single-Ring Infiltrometers. Soil Science Society of America Journal 63, 788. doi:10.2136/sssaj1999.634788x
Yilmaz, D., Lassabatere, L., Angulo-Jaramillo, R., Deneele, D., Legret, M., 2010. Hydrodynamic Characterization of Basic Oxygen Furnace Slag through an Adapted BEST Method. Vadose Zone Journal 9, 107. doi:10.2136/vzj2009.0039
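The core computation behind a steady-state single-ring analysis of this kind can be sketched with one common form of the steady ponded single-ring relation of Reynolds and Elrick (1990), cited above. This is a hedged illustration, not the paper's SSBI algorithm; the geometry and α* values are assumed numbers:

```python
# Hedged sketch: estimate Ks from a near steady-state single-ring run,
# using one common form of the Reynolds-Elrick (1990) steady relation
#   Q = Ks * [ a*(H + 1/alpha*)/G + pi*a^2 ],  G = 0.316*(d/a) + 0.184,
# where a = ring radius, H = ponded head, d = insertion depth.
# All numeric inputs below are illustrative assumptions.
import math

def ks_from_steady(flow_rate, radius, head, depth, alpha_star):
    """Invert the steady-flow relation above for Ks (SI units)."""
    G = 0.316 * (depth / radius) + 0.184  # shape factor
    denom = radius * (head + 1.0 / alpha_star) / G + math.pi * radius ** 2
    return flow_rate / denom

# Illustrative numbers: Q = 1e-6 m^3/s, a = 0.075 m, H = 0.01 m,
# d = 0.01 m insertion, alpha* = 12 1/m.
print(ks_from_steady(1e-6, 0.075, 0.01, 0.01, 12.0))
```

The α* term is where the relative importance of capillarity versus gravity enters, mirroring the role the parameter plays in the SBI and SSBI methods described above.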
Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes
NASA Astrophysics Data System (ADS)
Goncharov, K. A.; Denisov, I. A.
2018-03-01
Design issues of bridge-type crane trolleys connected with ensuring uniform load distribution between the running wheels are considered. The shortcomings of the existing methods for calculating support pressures are described. The results of the analytical calculation of the support wheel pressures are compared with the results of a numerical solution of this problem for various schemes of trolley supporting frames. Conclusions are given on the applicability of the various methods for calculating vertical pressures, depending on the type of metal structure used in the trolley.
Evaluation of a handheld point-of-care analyser for measurement of creatinine in cats.
Reeve, Jenny; Warman, Sheena; Lewis, Daniel; Watson, Natalie; Papasouliotis, Kostas
2017-02-01
Objectives The aim of the study was to evaluate whether a handheld creatinine analyser (StatSensor Xpress; SSXp), available for human patients, can be used to measure creatinine reliably in cats. Methods Analytical performance was evaluated by determining within- and between-run coefficients of variation (CV, %), total error observed (TEobs, %) and sigma metrics. Fifty client-owned cats presenting for investigation of clinical disease had creatinine measured simultaneously using the SSXp (whole blood and plasma) and a reference instrument (Konelab, serum); 48 paired samples were included in the study. Creatinine correlation between methodologies (SSXp vs Konelab) and sample types (SSXp whole blood vs SSXp plasma) was assessed by Spearman's correlation coefficient, and agreement was determined using Bland-Altman difference plots. Each creatinine value was assigned an IRIS stage (1-4); correlation and agreement between Konelab and SSXp IRIS stages were evaluated. Results Within-run CV (4.23-8.85%), between-run CV (8.95-11.72%), TEobs (22.15-34.92%) and sigma metrics (≤3) did not meet desired analytical requirements. Correlation between sample types was high (SSXp whole blood vs SSXp plasma; r = 0.89), and between instruments was high (SSXp whole blood vs Konelab serum; r = 0.85) to very high (SSXp plasma vs Konelab serum; r = 0.91). Konelab and SSXp whole blood IRIS scores exhibited high correlation (r = 0.76). Packed cell volume did not significantly affect SSXp determination of creatinine. Bland-Altman difference plots identified a positive bias for the SSXp (7.13 μmol/l for SSXp whole blood; 20.23 μmol/l for SSXp plasma) compared with the Konelab. Outliers (1/48 whole blood; 2/48 plasma) occurred exclusively at very high creatinine concentrations. The SSXp failed to identify 2/21 azotaemic cats. Conclusions and relevance The analytical performance of the SSXp in feline patients is not considered acceptable.
The SSXp exhibited high to very high correlation with the reference methodology, but the two instruments cannot be used interchangeably. Improvements in the SSXp analytical performance are needed before its use can be recommended in feline clinical practice.
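The sigma metric reported in the abstract is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV) as sigma = (TEa − |bias|) / CV. A minimal sketch of that calculation; the figures below are invented for illustration, not the study's data:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric for an assay: how many SDs of imprecision fit
    between the observed bias and the allowable total error (TEa).
    All arguments are percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical figures: with a TEa of 20%, a 5% bias and an 8% CV
# the assay falls well under a 3-sigma quality threshold.
value = sigma_metric(tea_pct=20.0, bias_pct=5.0, cv_pct=8.0)
print(round(value, 3))  # → 1.875
```

A sigma below 3, as found for the SSXp here, is generally taken to indicate inadequate analytical quality.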
Li, Xiaowei; Guo, Ping; Shan, Yawen; Ke, Yuebin; Li, Hui; Fu, Qin; Wang, Yingyu; Liu, Tianhe; Xia, Xi
2017-05-26
This work reports the development of a multi-residue method for the identification and quantification of 82 veterinary drugs belonging to different chemical classes in swine waste lagoons. The proposed method applies a solid-phase extraction procedure with Oasis PRiME HLB cartridges that combines isolation of the compounds and sample clean-up in a single step. Analysis is performed by ultra-high performance liquid chromatography-tandem mass spectrometry, in one single injection with a chromatographic run time of only 9.5 min. Linearity was studied in the range between 1 and 500 μg/kg using standards prepared both in pure solvent and in the presence of matrix, showing coefficients of determination higher than 0.99 for all the analytes except cefapirin in matrix. The average recoveries were in the range of 60-110% for most of the compounds tested, with inter-day relative standard deviations below 17%. More than 97% of the investigated compounds had quantitation limits of 5 μg/kg or lower in the studied matrix. Finally, the method was successfully used to detect and quantify veterinary drug residues in real samples, with sulfonamides, quinolones, and tetracyclines being the most frequently determined compound groups. Copyright © 2017 Elsevier B.V. All rights reserved.
Detection of ricin in food using electrochemiluminescence-based technology.
Garber, Eric A E; O'Brien, Thomas W
2008-01-01
Ricin is a toxic ribosome inactivating protein (RIP-II) present in beans of the castor plant, Ricinus communis. Its potential as a biodefense threat has made the rapid, sensitive detection of ricin in food important to the U.S. Food and Drug Administration. Samples of juice, dairy products, soda, vegetables, bakery products, chocolate, and condiments were spiked with varying concentrations of ricin and analyzed using a 96-well format, electrochemiluminescence (ECL) immunoassay. Assay configurations included the use of a monoclonal capture antibody coupled with either a polyclonal or monoclonal detector antibody. The samples and detector antibodies were either added sequentially or in combination during the capture step. Using the polyclonal antibody, 0.04 ng/mL ricin was detected in analytical samples prepared from several beverages. By simultaneously incubating the sample with detector antibody, it was possible to decrease the assay time to a single 20 min incubation step with a limit of detection <10 ng/mL. Assays run according to this single incubation step exhibited a hook effect (decrease in signal at high concentrations of ricin), but because of the large signal-to-noise ratio associated with the ECL assay, the response remained above background and detectable. Thus, the ECL assay was uniquely suited for the screening of samples for ricin.
Simultaneous identification of synthetic and natural dyes in different food samples by UPLC-MS
NASA Astrophysics Data System (ADS)
Mandal, Badal Kumar; Mathiyalagan, Siva; Dalavai, Ramesh; Ling, Yong-Chien
2017-11-01
Fast foods and other food items are popular among food lovers. To improve the appearance of food products in a hugely competitive market, synthetic or natural dyes are added to food items and beverages. Although regulatory bodies permit the addition of natural colorants because of their safe and nontoxic nature, synthetic dyes are stringently controlled in all food products due to their toxicity. Artificial colors need certification from the regulatory bodies for human consumption. Many analytical techniques are available to analyze food dyes in different food samples, such as high-pressure liquid chromatography (HPLC), thin-layer chromatography (TLC), and spectroscopic and gas chromatographic methods. However, all these reported methods analyze only synthetic dyes or only natural dyes; no single method has analyzed both synthetic and natural dyes in a single run. In this study, a robust ultra-performance liquid chromatographic method for the simultaneous identification of six synthetic dyes (tartrazine, indigo carmine, brilliant blue, fast green, malachite green, sunset yellow) and one natural dye (Na-Cu-chlorophyllin) was developed using an Acquity UPLC system equipped with a mass detector and an Acquity UPLC HSS T3 column (1.8 μm, 2.1 × 50 mm, 100 Å). All the dyes were separated and their masses determined through analyses of fragment masses.
Magalhães, Elisângela Jaqueline; Ribeiro de Queiroz, Maria Eliana Lopes; Penido, Marcus Luiz de Oliveira; Paiva, Marco Antônio Ribeiro; Teodoro, Janaína Aparecida Reis; Augusti, Rodinei; Nascentes, Clésia Cristina
2013-09-27
A simple and efficient method was developed for the determination of cocaine in post-mortem samples of human liver via solid-liquid extraction with low temperature partitioning (SLE-LTP) and analysis by gas chromatography coupled to mass spectrometry (GC-MS). The extraction procedure was optimized by evaluating the influence of the following variables: pH of the extract, volume and composition of the extractor solvent, and addition of a sorbent material (PSA: primary-secondary amine) and NaCl to clean up and increase the ionic strength of the extract. A bovine liver sample free of cocaine was used as a blank for the optimization of the SLE-LTP extraction procedure. The highest recovery was obtained when crushed bovine liver (2 g) was treated with 2 mL of ultrapure water plus 8 mL of acetonitrile at physiological pH (7.4). The results also indicated no need to use PSA and NaCl. The complete analytical procedure was validated for the following figures of merit: selectivity, lower limit of quantification (LLOQ), calibration curve, recovery, precision and accuracy (for within-run and between-run experiments), matrix effect, dilution integrity and stability. The within-run and between-run precision (at four levels) varied from 2.1% to 9.4% and from 4.0% to 17.0%, respectively. A maximum deviation of 11.62% from the nominal concentrations was observed for the within-run and between-run accuracies. Moreover, the LLOQ value for cocaine was 50.0 ng/g, whereas no significant effects were noticed in the assays of dilution integrity and stability. To assess its overall performance, the optimized method was applied to the analysis of eight human liver samples collected from individuals who died due to the abusive consumption of cocaine. Due to the existence of a significant matrix effect, a blank human liver was used to construct a matrix-matched analytical curve. The concentrations of cocaine found in these samples ranged from 333.5 to 5969 ng/g.
Copyright © 2013 Elsevier B.V. All rights reserved.
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation has been introduced to solve problems formulated as systems. This technique can overcome the following two kinds of problems. First, a problem that has an analytical solution, but for which the cost of running a real experiment would be too high in terms of money or lives. Second, a problem that has no analytical solution. In the field of statistical inference the second situation is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutation tests to form pseudo sampling distributions that lead to solutions of problems that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques. Misunderstandings about these two techniques are examined, and successful uses of both are explained.
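The bootstrap idea described above can be sketched in a few lines (not from the paper; the exponential sample and sample size are invented for illustration): resampling the data with replacement builds a pseudo sampling distribution for a statistic, here the median, whose exact sampling distribution is analytically inconvenient.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small sample whose median has no convenient closed-form
# sampling distribution.
sample = rng.exponential(scale=2.0, size=50)

# Bootstrap: resample with replacement many times and use the
# spread of the recomputed statistic as a pseudo sampling
# distribution.
B = 5000
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

# 95% percentile confidence interval for the median.
lo, hi = np.quantile(boot_medians, [0.025, 0.975])
print(f"sample median = {np.median(sample):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The same loop with label shuffling instead of resampling gives the permutation tests mentioned in the abstract.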
Peng, Youyuan; Chu, Qingcui; Liu, Fanghua; Ye, Jiannong
2004-01-28
A simultaneous determination of trans-resveratrol, (-)-epicatechin, and (+)-catechin in red wine by capillary electrophoresis with electrochemical detection (CE-ED) is reported. The effects of the potential of the working electrode, pH and concentration of the running buffer, separation voltage, and injection time on CE-ED were investigated. Under the optimum conditions, the analytes could be separated in a 100 mmol/L borate buffer (pH 9.2) within 20 min. A 300 μm diameter carbon disk electrode gave a good response at +0.85 V (vs SCE) for all analytes. The response was linear over 3 orders of magnitude, with detection limits (S/N = 3) ranging from 2 × 10⁻⁷ to 5 × 10⁻⁷ g/mL for all analytes. This method has been used for the determination of these analytes in red wine without enrichment, and the assay results were satisfactory.
Scaling exponents for ordered maxima
Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.
2015-12-22
We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≈ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
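The two-sequence ordering probability is easy to check numerically. A Monte Carlo sketch (not from the paper): for m = 2, direct enumeration of the 24 equally likely rank orderings of four i.i.d. draws gives S_2 = 3/8, and S_1 = 1/2 trivially; the simulation below should reproduce both.

```python
import numpy as np

def ordered_maxima_prob(n, trials=200_000, seed=1):
    """Monte Carlo estimate of S_N for m = 2 sequences: the probability
    that the running maximum of sequence x stays strictly above the
    running maximum of sequence y at every step 1..N."""
    rng = np.random.default_rng(seed)
    x = rng.random((trials, n))
    y = rng.random((trials, n))
    run_x = np.maximum.accumulate(x, axis=1)   # running maxima to date
    run_y = np.maximum.accumulate(y, axis=1)
    return float(np.mean(np.all(run_x > run_y, axis=1)))

print(ordered_maxima_prob(1))  # ≈ 0.5
print(ordered_maxima_prob(2))  # ≈ 0.375
```

Because the event depends only on the relative ranks of the draws, the estimate is the same for any continuous distribution, which is the universality stated in the abstract.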
Run-D.M.C.: A Mnemonic Aid for Explaining Mass Transfer in Electrochemical Systems
ERIC Educational Resources Information Center
Miles, Deon T.
2013-01-01
Electrochemistry is a significant area of analytical chemistry encompassing electrical measurements of chemical systems. The applications associated with electrochemistry appear in many aspects of everyday life: explaining how batteries work, how the human nervous system functions, and how metal corrosion occurs. The most common electrochemical…
Code of Federal Regulations, 2013 CFR
2013-07-01
... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...
Code of Federal Regulations, 2014 CFR
2014-07-01
... meters per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the...-8. Use GFAAS or ICP/MS for the analytical finish. Fugitive emissions from ash handling Visible...
Managing Offshore Branch Campuses: An Analytical Framework for Institutional Strategies
ERIC Educational Resources Information Center
Shams, Farshid; Huisman, Jeroen
2012-01-01
The aim of this article is to develop a framework that encapsulates the key managerial complexities of running offshore branch campuses. In the transnational higher education (TNHE) literature, several managerial ramifications and impediments have been addressed by scholars and practitioners. However, the strands of the literature are highly…
A high performance liquid chromatography (HPLC) method was developed to quantitatively determine phenolic compounds and their isomers in aqueous samples. The HPLC method can analyze a mixture of 15 contaminants in the same analytical run with an analysis time of 25 minutes. The...
Women Match Men when Learning a Spatial Skill
ERIC Educational Resources Information Center
Spence, Ian; Yu, Jingjie Jessica; Feng, Jing; Marshman, Jeff
2009-01-01
Meta-analytic studies have concluded that although training improves spatial cognition in both sexes, the male advantage generally persists. However, because some studies run counter to this pattern, a closer examination of the anomaly is warranted. The authors investigated the acquisition of a basic skill (spatial selective attention) using a…
A Performance-Based Method of Student Evaluation
ERIC Educational Resources Information Center
Nelson, G. E.; And Others
1976-01-01
The Problem Oriented Medical Record (which allows practical definition of the behavioral terms thoroughness, reliability, sound analytical sense, and efficiency as they apply to the identification and management of patient problems) provides a vehicle to use in performance based type evaluation. A test-run use of the record is reported. (JT)
A Characteristics Approach to the Evaluation of Economics Software Packages.
ERIC Educational Resources Information Center
Lumsden, Keith; Scott, Alex
1988-01-01
Utilizes Bloom's Taxonomy to identify elements of teacher and student interest. Depicts the way in which these interests are developed into characteristics for use in analytically evaluating software. Illustrates the use of this evaluating technique by appraising the much used software package "Running the British Economy." (KO)
Cash on Demand: A Framework for Managing a Cash Liquidity Position.
ERIC Educational Resources Information Center
Augustine, John H.
1995-01-01
A well-run college or university will seek to accumulate and maintain an appropriate cash reserve or liquidity position. A rigorous analytic process for estimating the size and cost of a liquidity position, based on judgments about the institution's operating risks and opportunities, is outlined. (MSE)
Visualising Disability in the Past
ERIC Educational Resources Information Center
Devlieger, Patrick; Grosvenor, Ian; Simon, Frank; Van Hove, Geert; Vanobbergen, Bruno
2008-01-01
In recent years there has been a growth in interdisciplinary work which has argued that disability is not an isolated, individual medical pathology but instead a key defining social category like "race", class and gender. Seen in this way disability provides researchers with another analytic tool for exploring the nature of power. Running almost…
2013-01-01
Background Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists’ capacity to use these immunoassays to evaluate human clinical trials. Results The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose–response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows. 
Conclusions Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. PMID:23631706
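Feature (viii), Levey-Jennings tracking, plots each run's QC value against control limits derived from the mean and standard deviation of historical runs. A sketch of the underlying flagging logic (illustrative only, not LabKey code; the rule names and data are assumptions):

```python
import statistics

def levey_jennings_flags(history, new_values):
    """Flag QC values against control limits derived from historical
    runs, as on a Levey-Jennings chart: warn outside mean +/- 2 SD,
    reject outside mean +/- 3 SD. Illustrative logic only."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    flags = []
    for v in new_values:
        z = (v - mean) / sd
        if abs(z) > 3:
            flags.append("reject")   # Westgard-style 1_3s violation
        elif abs(z) > 2:
            flags.append("warn")     # Westgard-style 1_2s violation
        else:
            flags.append("ok")
    return flags

history = [100, 98, 102, 101, 99, 100, 103, 97, 100, 100]  # prior runs
print(levey_jennings_flags(history, [101, 104.5, 110]))  # → ['ok', 'warn', 'reject']
```

Automatic outlier flagging against expected values, feature (ix), is the same comparison with limits supplied by the lab instead of computed from history.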
Eckels, Josh; Nathe, Cory; Nelson, Elizabeth K; Shoemaker, Sara G; Nostrand, Elizabeth Van; Yates, Nicole L; Ashley, Vicki C; Harris, Linda J; Bollenbeck, Mark; Fong, Youyi; Tomaras, Georgia D; Piehler, Britt
2013-04-30
The lepton+jets Selection and Determination of the Lepton Fake Rate with the Full RunIIb Data Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meister, Daniel
2013-01-01
This thesis presents the combined single top and $t\bar{t}$ lepton+jets selection for the full RunIIb dataset of the DØ detector. The selection uses the newest software versions, including all standard central object identifications and corrections, and has various additions and improvements compared to the previous 7.3 fb⁻¹ $t\bar{t}$ selection and the previous single top selection, in order to accommodate even more different analyses. The lepton fake rate $\epsilon_{\rm QCD}$ and the real lepton efficiency $\epsilon_{\rm sig}$ are estimated using the matrix method, and different variations are considered in order to determine the systematic errors. The calculation has to be done separately for each run period and every set of analysis cuts. In addition, the values for the exclusive jet bins and for the new single top analysis cuts have been derived, and the thesis shows numerous control plots to demonstrate the excellent agreement between data and Monte Carlo.
The baseline serum value of α-amylase is a significant predictor of distance running performance.
Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Tarperi, Cantor; La Torre, Antonio; Guidi, Gian Cesare; Schena, Federico
2015-02-01
This study was planned to investigate whether serum α-amylase concentration may be associated with running performance, physiological characteristics and other clinical chemistry analytes in a large sample of recreational athletes undergoing distance running. Forty-three amateur runners successfully concluded a 21.1 km half-marathon at 75%-85% of their maximal oxygen uptake (VO2max). Blood was drawn during warm up and 15 min after conclusion of the run. After correction for body weight change, significant post-run increases were observed for serum values of alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, bilirubin, creatine kinase (CK), iron, lactate dehydrogenase (LDH), triglycerides, urea and uric acid, whereas the values of body weight, glomerular filtration rate, total and low density lipoprotein-cholesterol were significantly decreased. The concentration of serum α-amylase was unchanged. In univariate analysis, significant associations with running performance were found for gender, VO2max, training regimen and pre-run serum values of α-amylase, CK, glucose, high density lipoprotein-cholesterol, LDH, urea and uric acid. In multivariate analysis, only VO2max (p=0.042) and baseline α-amylase (p=0.021) remained significant predictors of running performance. The combination of these two variables predicted 71% of variance in running performance. The baseline concentration of serum α-amylase was positively correlated with variation of serum glucose during the trial (r=0.345; p=0.025) and negatively with capillary blood lactate at the end of the run (r=-0.352; p=0.021). We showed that the baseline serum α-amylase concentration significantly and independently predicts distance running performance in recreational runners.
Passos, Heloisa Moretti; Cieslarova, Zuzana; Simionato, Ana Valéria Colnaghi
2016-07-01
A separation method was developed to quantify free amino acids in passion fruit juices using CE-UV. A selective derivatization reaction with FMOC followed by MEKC analysis was chosen because of the closely overlapping mobilities of the analytes, enabling the separation of 22 amino acids by lipophilicity differences, as is further discussed. To achieve these results, the method was optimized with respect to BGE composition (concentrations, pH, and addition of organic modifier) and running conditions (temperature and applied voltage). The optimized running conditions were: a BGE composed of 60 mmol/L borate buffer at pH 10.1, 30 mmol/L SDS and 5% methanol, run for 40 min at 23°C and 25 kV. The method was validated and applied to eight commercial brands plus one fresh natural juice, detecting 12 amino acids. Quantification of six analytes combined with principal component analysis was able to characterize different types of juices and showed potential to detect adulteration of industrial juices. Glutamic acid was found to be the most concentrated amino acid, exceeding 1 g/L in all samples, and was also crucial for the correct classification of a natural juice, which presented a concentration of 22 g/L. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Determination of total and polycyclic aromatic hydrocarbons in aviation jet fuel.
Bernabei, M; Reda, R; Galiero, R; Bocchinfuso, G
2003-01-24
The aviation jet fuel widely used in turbine engine aircraft is manufactured from straight-run kerosene. The combustion quality of jet fuel is largely related to the hydrocarbon composition of the fuel itself; paraffins have better burning properties than aromatic compounds, especially naphthalenes and light polycyclic aromatic hydrocarbons (PAHs), which are characterised as soot and smoke producers. For this reason the burning quality of fuel is generally measured as smoke formation. This evaluation is carried out with UV spectrophotometric determination of total naphthalene hydrocarbons and a chromatographic analysis to determine the total aromatic compounds. These methods can be considered insufficient to evaluate the human health impact of these compounds, owing to their inability to measure trace (ppm) amounts of each aromatic hydrocarbon and each PAH in accordance with the limitations imposed because of their toxicological properties. In this paper two analytical methods are presented. Both are based on a gas chromatographic technique with a mass detector operating in the selected ion monitoring mode. The first method was able to determine more than 60 aromatic hydrocarbons in a fuel sample in a 35-min chromatographic run, while the second was able to carry out the analysis of more than 30 PAHs in a 40-min chromatographic run. The linearity and sensitivity of the methods in measuring these analytes at trace levels are described.
NASA Astrophysics Data System (ADS)
Basso, Stefano; Lazzaro, Gianluca; Schirmer, Mario; Botter, Gianluca
2014-05-01
River flow withdrawals to supply small run-of-river hydropower plants have increased significantly in recent years, particularly in the Alpine area, as a consequence of public incentives aimed at enhancing energy production from renewable sources. This growth has further raised the anthropic pressure in areas traditionally characterized by an intense exploitation of water resources, thereby triggering social conflicts among local communities, hydropower investors and public authorities. This brought to the attention of scientists and the public the urgent need for novel, quantitative tools for assessing the hydrologic impact of this type of plant, and for trading off economic interests against ecological concerns. In this contribution we propose an analytical framework that allows for the estimation of the streamflow availability for hydropower production and the selection of the run-of-river plant capacity, as well as the assessment of the related profitability and environmental impacts. The method highlights the key role of streamflow variability in the design process, by showing the significant control exerted by the coefficient of variation of daily flows on the optimal capacity of small run-of-river plants. Moreover, the analysis evidences a gap between energy and economic optimizations, which may result in the under-exploitation of the available hydropower potential at large scales. The disturbances to the natural flow regime produced between the intake and the outflow of run-of-river power plants are also estimated within the proposed framework. The altered hydrologic regime, described through the probability distribution and the correlation function of streamflows, is analytically expressed as a function of the natural regime for different management strategies.
The deviations of a set of hydrologic statistics from pristine conditions are used, jointly with an economic index, to compare environmental and economic outcomes of alternative plant setups and management strategies. Benefits connected to ecosystem services provided by unimpaired riverine environments can also be included in the analysis, possibly accounting for the disruptive effect of multiple run-of-river power plants built in cascade along the same river. The application to case studies in the Alpine region shows the potential of the tool to assess different management strategies and design solutions, and to evaluate local and catchment-scale impacts of small run-of-river hydropower development.
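The energy-versus-economics gap described above can be illustrated with a toy capacity-selection calculation (not from the paper; the synthetic flow record, head, efficiency, price and cost figures below are all invented for illustration): a run-of-river plant with design capacity c can only divert min(q, c) of the daily flow q, so energy capture saturates as c grows while installed cost keeps rising.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily streamflow record (m^3/s); a lognormal gives the
# right-skewed shape typical of daily flows.
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)

rho, g, head, eta = 1000.0, 9.81, 25.0, 0.85   # assumed plant parameters
to_MWh = 86400 / 3.6e9                         # W over one day -> MWh

def annual_energy(capacity):
    """Energy captured when the plant diverts at most `capacity` m^3/s."""
    used = np.minimum(q, capacity)
    return np.sum(rho * g * head * eta * used) * to_MWh

# Sweep candidate capacities; energy capture saturates as the
# capacity approaches the largest flows in the record.
candidates = np.linspace(0.5, np.max(q), 200)
energies = np.array([annual_energy(c) for c in candidates])

# A crude economic optimum: revenue minus a cost growing linearly
# with installed capacity (hypothetical price and unit cost).
price, unit_cost = 50.0, 4000.0          # $/MWh, $ per (m^3/s) per year
net = price * energies - unit_cost * candidates
best = candidates[np.argmax(net)]
print(f"energy-maximizing capacity: {candidates[np.argmax(energies)]:.2f} m^3/s")
print(f"economically optimal capacity: {best:.2f} m^3/s")
```

The gap between the two printed capacities mirrors the abstract's point that economic and energy optimizations need not coincide.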
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, D. W.
This project will develop an analytical tool to calculate performance of HMX based PBXs in the skid test. The skid-test is used as a means to measure sensitivity for large charges in handling situations. Each series of skid tests requires dozens of drops of large billets. It is proposed that the reaction (or lack of one) of PBXs in the skid test is governed by the mechanical properties of the binder. If true, one might be able to develop an analytical tool to estimate skid test behavior for new PBX formulations. Others over the past 50 years have tried tomore » develop similar models. This project will research and summarize the works of others and couple the work of 3 into an analytical tool that can be run on a PC to calculate drop height of HMX based PBXs. Detonation due to dropping a billet is argued to be a dynamic thermal event. To avoid detonation, the heat created due to friction at impact, must be conducted into the charge or the target faster than the chemical kinetics can create additional energy. The methodology will involve numerically solving the Frank-Kamenetskii equation in one dimension. The analytical problem needs to be bounded in terms of how much heat is introduced to the billet and for how long. Assuming an inelastic collision with no rebound, the billet will be in contact with the target for a short duration determined by the equations of motion. For the purposes of the calculations, it will be assumed that if a detonation is to occur, it will transpire within that time. The surface temperature will be raised according to the friction created using the equations of motion of dropping the billet on a rigid surface. The study will connect the works of Charles Anderson, Alan Randolph, Larry Hatler, Alfonse Popolato, and Charles Mader into a single PC based analytic tool. 
Anderson's equations of motion will be used to calculate the temperature rise upon impact, the time this temperature is maintained (contact time) will be obtained from the work of Hatler et al., and the reactive temperature rise will be obtained from Mader's work. Finally, the assessment of when a detonation occurs will be derived from Bowden and Yoffe's thermal explosion theory (hot spot).
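The dynamic thermal picture described above can be sketched numerically. The following is a minimal one-dimensional finite-difference model of conduction with an Arrhenius self-heating term, in the spirit of the Frank-Kamenetskii approach: a surface held hot for the impact contact time, with runaway declared if the temperature exceeds a threshold. All material parameters, the quench boundary condition, and the runaway threshold are illustrative placeholders, not measured PBX properties.

```python
import numpy as np

def frank_kamenetskii_1d(T_surface, t_contact, n=100, L=0.01,
                         alpha=1e-7, q_gen=1e9, E_over_R=1.2e4,
                         dt=1e-4, t_end=0.5, T0=300.0, T_runaway=1500.0):
    """Explicit 1-D heat conduction with an Arrhenius self-heating source.

    Returns (ran_away, peak_T): whether thermal runaway occurred before
    t_end, and the maximum temperature in the slab when the simulation
    stops.  All parameter defaults are illustrative, not PBX data.
    """
    dx = L / n
    T = np.full(n, T0)
    for k in range(int(t_end / dt)):
        t = k * dt
        # Frictional heating holds the surface hot only during contact;
        # afterwards the surface is crudely quenched back to ambient.
        T[0] = T_surface if t < t_contact else T0
        T[-1] = T0
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
        source = q_gen * np.exp(-E_over_R / T)  # Arrhenius self-heating (K/s)
        T = T + dt * (alpha * lap + source)
        if T.max() > T_runaway:
            return True, float(T.max())
    return False, float(T.max())
```

A mild drop (low surface temperature, short contact) conducts its heat away faster than the kinetics can amplify it, so no runaway is predicted; raising the surface temperature or contact time tips the balance the other way.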
DOT National Transportation Integrated Search
1995-09-05
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report documents the RORSIM comput...
DOT National Transportation Integrated Search
1995-08-01
Intelligent Vehicle Initiative (IVI): The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. ...
Run-Off-Road Collision Avoidance Countermeasures Using IVHS Countermeasures: Task 3, Volume 1
DOT National Transportation Integrated Search
1995-08-23
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes the findings of the...
Run-Off-Road Collision Avoidance Countermeasures Using IVHS Countermeasures Task 3 - Volume 2
DOT National Transportation Integrated Search
1995-08-23
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes the findings of t...
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Stankovich, Joseph J; Gritti, Fabrice; Stevenson, Paul G; Beaver, Lois Ann; Guiochon, Georges
2014-01-10
Using a column packed with fully porous particles, four methods for controlling the flow rates at which gradient elution runs are conducted in very high pressure liquid chromatography (VHPLC) were tested to determine whether reproducible thermal conditions could be achieved, such that subsequent analyses would proceed at nearly the same initial temperature. In VHPLC high flow rates are achieved, producing fast analyses but requiring high inlet pressures. The combination of high flow rates and high inlet pressures generates local heat, leading to temperature changes in the column. Usually in this case a post-run time is input into the analytical method to allow the return of the column temperature to its initial state. An alternative strategy involves operating the column without a post-run equilibration period and maintaining constant temperature variations for subsequent analysis after conducting one or a few separations to bring the column to a reproducible starting temperature. A liquid chromatography instrument equipped with a pressure controller was used to perform constant pressure and constant flow rate VHPLC separations. Six replicate gradient separations of a nine component mixture consisting of acetophenone, propiophenone, butyrophenone, valerophenone, hexanophenone, heptanophenone, octanophenone, benzophenone, and acetanilide dissolved in water/acetonitrile (65:35, v/v) were performed under various experimental conditions: constant flow rate, two sets of constant pressure, and constant pressure operation with a programmed flow rate. The relative standard deviations of the response factors for all the analytes are lower than 5% across the methods. Programming the flow rate to maintain a fairly constant pressure instead of using instrument controlled constant pressure improves the reproducibility of the retention times by a factor of 5, when plotting the chromatograms in time. Copyright © 2013 Elsevier B.V. All rights reserved.
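The reproducibility criterion quoted above (response-factor RSDs below 5% across methods) reduces to a simple calculation over replicate runs. A minimal sketch; the peak areas and concentration below are hypothetical, not the study's data:

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: sample std dev / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical peak areas for one analyte over six replicate gradient runs
areas = [1520.0, 1498.0, 1533.0, 1511.0, 1527.0, 1505.0]
concentration = 10.0  # same injected concentration in every run (arbitrary units)
response_factors = [a / concentration for a in areas]
rsd = percent_rsd(response_factors)  # well under the 5% acceptance criterion here
```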
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
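The non-convexity argument above can be illustrated with a toy cost curve: when metabolic energy per unit distance has two separate "bowls" (a walking one and a running one), mixing two speeds beats steady travel at intermediate average speeds. The curve and speeds below are invented for illustration, not fitted human data.

```python
import numpy as np

def cost_per_distance(v):
    """Toy non-convex cost per unit distance (arbitrary units): a walking
    'bowl' cheapest near 1.3 m/s and a running 'bowl' cheapest near 3.5 m/s."""
    walk = 2.0 + 3.0 * (v - 1.3) ** 2
    run = 3.2 + 1.0 * (v - 3.5) ** 2
    return np.minimum(walk, run)

def best_two_speed_mix(v_avg):
    """Cheapest cost of covering a unit distance at average speed v_avg,
    allowing a mixture of a slow speed v1 and a fast speed v2.
    Returns (cost, v1, v2); v1 == v2 == v_avg means steady travel wins."""
    best = (float(cost_per_distance(v_avg)), v_avg, v_avg)
    grid = np.arange(0.2, 5.0, 0.05)
    for v1 in grid:
        for v2 in grid:
            if not (v1 < v_avg < v2):
                continue
            # Distance fractions d1 + d2 = 1, constrained so total travel
            # time matches the required average: d1/v1 + d2/v2 = 1/v_avg.
            d1 = (1.0 / v_avg - 1.0 / v2) / (1.0 / v1 - 1.0 / v2)
            cost = (d1 * float(cost_per_distance(v1))
                    + (1.0 - d1) * float(cost_per_distance(v2)))
            if cost < best[0]:
                best = (cost, float(v1), float(v2))
    return best
```

At an intermediate average speed the optimizer picks a slow walking speed and a fast running speed; at an average speed inside one bowl, steady locomotion is already optimal, mirroring the paper's prediction.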
Albrecht, Simone; Mittermayr, Stefan; Smith, Josh; Martín, Silvia Millán; Doherty, Margaret; Bones, Jonathan
2017-01-01
Quantitative glycomics represents an actively expanding research field ranging from the discovery of disease-associated glycan alterations to the quantitative characterization of N-glycans on therapeutic proteins. Commonly used analytical platforms for comparative relative quantitation of complex glycan samples include MALDI-TOF-MS or chromatographic glycan profiling with subsequent data alignment and statistical evaluation. Limitations of such approaches include run-to-run technical variation and the potential introduction of subjectivity during data processing. Here, we introduce an offline 2D LC-MSE workflow for the fractionation and relative quantitation of twoplex isotopically labeled N-linked oligosaccharides using neutral ¹²C₆ and ¹³C₆ aniline (Δmass = 6 Da). Additional linkage-specific derivatization of sialic acids using 4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methylmorpholinium chloride offered simultaneous and advanced in-depth structural characterization. The potential of the method was demonstrated for the differential analysis of structurally defined N-glycans released from serum proteins of patients diagnosed with various stages of colorectal cancer. The described twoplex ¹²C₆/¹³C₆ aniline 2D LC-MS platform is ideally suited for differential glycomic analysis of structurally complex N-glycan pools due to combination and analysis of samples in a single LC-MS injection and the associated minimization in technical variation. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Guo, Zhening; Chen, Yangsheng; Ding, Xiaoliang; Huang, Chenrong; Miao, Liyan
2016-11-01
A rapid, selective and sensitive liquid chromatography-tandem mass spectrometry assay method was developed for simultaneous determination of ambroxol and salbutamol in human plasma using citalopram hydrobromide as internal standard (IS). The sample was alkalinized with ammonia water (33:67, v/v) and extracted by single liquid-liquid extraction with ethyl acetate. Separation was achieved on a Waters Acquity UPLC BEH C18 column using a gradient program at a flow rate of 0.2 mL/min. Detection was performed using electrospray ionization in positive ion multiple reaction monitoring mode by monitoring the ion transitions m/z 378.9 → 263.6 (ambroxol), m/z 240.2 → 147.7 (salbutamol) and m/z 325.0 → 261.7 (IS). The total analytical run time was relatively short (3 min). Calibration curves were linear in the concentration range of 0.5-100.0 ng/mL for ambroxol and 0.2-20.0 ng/mL for salbutamol, with intra- and inter-run precision (relative standard deviation) <15% and accuracy (relative error) ranging from 97.7 to 112.1% for ambroxol and from 94.5 to 104.1% for salbutamol. The method was successfully applied in a clinical pharmacokinetic study of the compound ambroxol and salbutamol tablets. Copyright © 2016 John Wiley & Sons, Ltd.
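Calibration curves like the ones described are typically fit by weighted least squares and then used to back-calculate unknown concentrations. A sketch with idealized responses: the concentration levels mirror the ambroxol range, but the response values, slope, and intercept are invented for illustration.

```python
import numpy as np

# Hypothetical ambroxol calibration standards (ng/mL) and peak-area ratios
conc = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
ratio = 0.042 * conc + 0.003  # idealized analyte/IS detector response

# 1/x-weighted least squares (np.polyfit weights multiply the residuals,
# so sqrt(1/x) yields 1/x weighting of the squared residuals)
w = np.sqrt(1.0 / conc)
slope, intercept = np.polyfit(conc, ratio, 1, w=w)

def back_calc(peak_ratio):
    """Concentration back-calculated from the fitted calibration line."""
    return (peak_ratio - intercept) / slope
```

Weighting by 1/x is a common choice for bioanalytical ranges spanning two orders of magnitude, since it keeps the low standards from being swamped by the high ones.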
Parametric Study of Carbon Nanotube Production by Laser Ablation Process
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram; Nikolaev, Pavel; Holmes, William; Hadjiev, Victor; Scott, Carl
2002-01-01
Carbon nanotubes form a new class of nanomaterials that are presumed to have extraordinary mechanical, electrical and thermal properties. The single wall nanotubes (SWNTs) are estimated to be 100 times stronger than steel with 1/6th the weight; electrical carrying capacity better than copper and thermal conductivity better than diamond. Applications of these SWNTs include possible weight reduction of aerospace structures, multifunctional materials, nanosensors and nanoelectronics. The double-pulsed laser vaporization process produces SWNTs with the highest percentage of nanotubes in the output material. The normal operating conditions include a green laser pulse closely followed by an infrared laser pulse. Lasers ablate a metal-containing graphite target located in a flow tube maintained in an oven at 1473K with argon flow of 100 sccm at a 500 Torr pressure. In the present work a number of production runs were carried out, changing one operating condition at a time. We have studied the effects of nine parameters, including the sequencing of the laser pulses, pulse separation times, laser energy densities, the type of buffer gas used, oven temperature, operating pressure, flow rate and inner flow tube diameters. All runs were done using the same graphite target. The collected nanotube material was characterized by a variety of analytical techniques including scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman and thermogravimetric analysis (TGA). Results indicate trends that could be used to optimize the process and increase the efficiency of the production process.
Single-Cell Detection of Secreted Aβ and sAPPα from Human IPSC-Derived Neurons and Astrocytes.
Liao, Mei-Chen; Muratore, Christina R; Gierahn, Todd M; Sullivan, Sarah E; Srikanth, Priya; De Jager, Philip L; Love, J Christopher; Young-Pearse, Tracy L
2016-02-03
Secreted factors play a central role in normal and pathological processes in every tissue in the body. The brain is composed of a highly complex milieu of different cell types and few methods exist that can identify which individual cells in a complex mixture are secreting specific analytes. By identifying which cells are responsible, we can better understand neural physiology and pathophysiology, more readily identify the underlying pathways responsible for analyte production, and ultimately use this information to guide the development of novel therapeutic strategies that target the cell types of relevance. We present here a method for detecting analytes secreted from single human induced pluripotent stem cell (iPSC)-derived neural cells and have applied the method to measure amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα), analytes central to Alzheimer's disease pathogenesis. Through these studies, we have uncovered the dynamic range of secretion profiles of these analytes from single iPSC-derived neuronal and glial cells and have molecularly characterized subpopulations of these cells through immunostaining and gene expression analyses. In examining Aβ and sAPPα secretion from single cells, we were able to identify previously unappreciated complexities in the biology of APP cleavage that could not otherwise have been found by studying averaged responses over pools of cells. This technique can be readily adapted to the detection of other analytes secreted by neural cells, which would have the potential to open new perspectives into human CNS development and dysfunction. We have established a technology that, for the first time, detects secreted analytes from single human neurons and astrocytes. We examine secretion of the Alzheimer's disease-relevant factors amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα) and present novel findings that could not have been observed without a single-cell analytical platform. 
First, we identify a previously unappreciated subpopulation that secretes high levels of Aβ in the absence of detectable sAPPα. Further, we show that multiple cell types secrete high levels of Aβ and sAPPα, but cells expressing GABAergic neuronal markers are overrepresented. Finally, we show that astrocytes are competent to secrete high levels of Aβ and therefore may be a significant contributor to Aβ accumulation in the brain. Copyright © 2016 the authors 0270-6474/16/361730-17$15.00/0.
Effect of Footwear on Dynamic Stability during Single-leg Jump Landings.
Bowser, Bradley J; Rose, William C; McGrath, Robert; Salerno, Jilian; Wallace, Joshua; Davis, Irene S
2017-06-01
Barefoot and minimal footwear running has led to greater interest in the biomechanical effects of different types of footwear. The effect of running footwear on dynamic stability is not well understood. The purpose of this study was to compare dynamic stability and impact loading across three footwear conditions: barefoot, minimal footwear and standard running shoes. Twenty-five injury-free runners (21 male, 4 female) completed 5 single-leg jump landings in each footwear condition. Dynamic stability was assessed using the dynamic postural stability index and its directional components (mediolateral, anteroposterior, vertical). Peak vertical ground reaction force and vertical load rates were also compared across footwear conditions. Dynamic stability was dependent on footwear type for all stability indices (ANOVA, p<0.05). Post-hoc tests showed dynamic stability was greater when barefoot than in running shoes for each stability index (p<0.02) and greater than minimal footwear for the anteroposterior stability index (p<0.01). Peak vertical force and average load rates were both dependent on footwear (p≤0.05). Dynamic stability, peak vertical force, and average load rates during single-leg jump landings appear to be affected by footwear type. The results suggest greater dynamic stability and lower impact loading when landing barefoot or in minimal footwear. © Georg Thieme Verlag KG Stuttgart · New York.
Working Towards New Transformative Geoscience Analytics Enabled by Petascale Computing
NASA Astrophysics Data System (ADS)
Woodcock, R.; Wyborn, L.
2012-04-01
Currently the top 10 supercomputers in the world are petascale and already exascale computers are being planned. Cloud computing facilities are becoming mainstream either as private or commercial investments. These computational developments will provide abundant opportunities for the earth science community to tackle the data deluge which has resulted from new instrumentation enabling data to be gathered at a greater rate and at higher resolution. Combined, the new computational environments should enable the earth sciences to be transformed. However, experience in Australia and elsewhere has shown that it is not easy to scale existing earth science methods, software and analytics to take advantage of the increased computational capacity that is now available. It is not simply a matter of 'transferring' current work practices to the new facilities: they have to be extensively 'transformed'. In particular, new geoscientific methods will need to be developed using advanced data mining, assimilation, machine learning and integration algorithms. Software will have to be capable of operating in highly parallelised environments, and will also need to be able to scale as the compute systems grow. Data access will have to improve and the earth science community needs to move from the file discovery, display and then locally download paradigm to self-describing data cubes and data arrays that are available as online resources from either major data repositories or in the cloud. In the new transformed world, rather than analysing satellite data scene by scene, sensor-agnostic data cubes of calibrated earth observation data will enable researchers to move across data from multiple sensors at varying spatial data resolutions.
In using geophysics to characterise basement and cover, rather than analysing individual gridded airborne geophysical data sets and then combining the results, petascale computing will enable analysis of multiple data types, collected at varying resolutions, with integration and validation across data type boundaries. Increased capacity of storage and compute will mean that uncertainty and reliability of individual observations will consistently be taken into account and propagated throughout the processing chain. If these data access difficulties can be overcome, the increased compute capacity will also mean that larger-scale, more complex models can be run at higher resolution; instead of single-pass modelling runs, ensembles of models will be able to be run to simultaneously test multiple hypotheses. Petascale computing and high performance data offer more than "bigger, faster": it is an opportunity for a transformative change in the way in which geoscience research is routinely conducted.
Knowledge Data Base for Amorphous Metals
2007-07-26
not programmatic, updates. Over 100 custom SQL statements that maintain the domain specific data are attached to the workflow entries in a generic...for the form by populating the SQL and run generation tables. Application data may be prepared in different ways for two steps that invoke the same form...run generation mode). There is a single table of SQL commands. Each record has a user-definable ID, the SQL code, and a comment. The run generation
NASA Technical Reports Server (NTRS)
Whipple, R. D.
1980-01-01
The potential effectiveness of rockets as an auxiliary means for an aircraft to effect recovery from spins was investigated. The advances in rocket technology produced by the space effort suggested that currently available systems might obviate many of the problems encountered in earlier rocket systems. A modern fighter configuration known to exhibit a flat spin mode was selected. An analytical study was made of the thrust requirements for a rocket spin recovery system for the subject configuration. These results were then applied to a preliminary systems study of rocket components appropriate to the problem. Subsequent spin tunnel tests were run to evaluate the analytical results.
A Simplified Shuttle Payload Thermal Analyzer /SSPTA/ program
NASA Technical Reports Server (NTRS)
Bartoszek, J. T.; Huckins, B.; Coyle, M.
1979-01-01
A simple thermal analysis program for Space Shuttle payloads has been developed to accommodate the user who requires an easily understood but dependable analytical tool. The thermal analysis program includes several thermal subprograms traditionally employed in spacecraft thermal studies, a data management system for data generated by the subprograms, and a master program to coordinate the data files and thermal subprograms. The language and logic used to run the thermal analysis program are designed for the small user. In addition, analytical and storage techniques which conserve computer time and minimize core requirements are incorporated into the program.
DOT National Transportation Integrated Search
1994-10-28
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes and documents the a...
DOT National Transportation Integrated Search
1994-10-01
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes and documents ...
DOT National Transportation Integrated Search
1995-06-01
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes and documents ...
DOT National Transportation Integrated Search
1995-09-01
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report documents the RORSIM com...
DOT National Transportation Integrated Search
1994-10-28
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report contains a summary of data us...
NASA Astrophysics Data System (ADS)
Li, Chuan-Yao; Huang, Hai-Jun; Tang, Tie-Qiao
2017-03-01
This paper investigates the traffic flow dynamics under the social optimum (SO) principle in a single-entry traffic corridor with staggered shifts from the analytical and numerical perspectives. The LWR (Lighthill-Whitham and Richards) model and the Greenshields velocity-density function are utilized to describe the dynamic properties of traffic flow. The closed-form SO solution is analytically derived and some numerical examples are used to further verify the analytical solution. The optimum proportion of the numbers of commuters with different desired arrival times is further discussed, where the analytical and numerical results both indicate that the cumulative outflow curve under the SO principle is piecewise smooth.
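The LWR model with the Greenshields velocity-density relation referenced above can be integrated numerically with a standard Godunov scheme. A minimal sketch with illustrative free-flow speed and jam density values, not the paper's corridor parameters:

```python
import numpy as np

VF, RHO_MAX = 30.0, 0.2  # free-flow speed (m/s), jam density (veh/m): illustrative

def flux(rho):
    """Greenshields flux: q(rho) = v_f * rho * (1 - rho / rho_max)."""
    return VF * rho * (1.0 - rho / RHO_MAX)

def godunov_flux(rl, rr):
    """Godunov interface flux for the concave Greenshields fundamental diagram."""
    rc = RHO_MAX / 2.0            # density at capacity (flux maximum)
    if rl <= rr:                  # minimize flux over [rl, rr]
        return min(flux(rl), flux(rr))
    if rr <= rc <= rl:            # transonic rarefaction through capacity
        return flux(rc)
    return max(flux(rl), flux(rr))

def lwr_step(rho, dx, dt):
    """One conservative update; the two boundary cells are held fixed."""
    f = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)])
    new = rho.copy()
    new[1:-1] -= dt / dx * (f[1:] - f[:-1])
    return new
```

Because Godunov's scheme is monotone, densities stay between the initial extremes (a discrete maximum principle), provided the CFL condition VF·dt/dx ≤ 1 holds.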
USDA-ARS?s Scientific Manuscript database
Enzyme-linked immunosorbent assays (ELISAs) usually focus on the detection of a single analyte or a single group of analytes, e.g., fluoroquinolones or sulfonamides. However, it is often necessary to simultaneously monitor the two classes of antimicrobial residues in different food matrices. In th...
Quirino, J P; Terabe, S
2000-01-01
A simple and effective way to improve detection sensitivity of positively chargeable analytes in capillary zone electrophoresis more than 100-fold is described. Cationic species were made to migrate toward the cathode even under reversed electroosmotic flow caused by a cationic surfactant by using a low pH run buffer. For the first time, with such a configuration, large volume sample stacking of cationic analytes is achieved without a polarity-switching step and loss of efficiency. Samples are prepared in water or aqueous acetonitrile. Aromatic amines and a variety of drugs were concentrated using background solutions containing phosphoric acid and cetyltrimethylammonium bromide. Qualitative and quantitative aspects are also investigated.
EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen
With the prevalence of the World Wide Web and social networks, there has been a growing interest in high performance analytics for constantly-evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but the challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, due to the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods by first storing the updates and then repeatedly running static graph analytics on a sequence of versions or snapshots are deemed undesirable and computationally infeasible on GPU. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.
CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme
NASA Astrophysics Data System (ADS)
Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.
2017-10-01
LHC Run3 and Run4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run3 and Run4.
Google Analytics: Single Page Traffic Reports
These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 CFR part 60, appendix A-8). Use GFAAS or ICP/MS for the analytical finish. Lead 0.00062 milligrams... per run) Performance test (Method 29 at 40 CFR part 60, appendix A-8. Use GFAAS or ICP/MS for the...
Running into Trouble with the Time-Dependent Propagation of a Wavepacket
ERIC Educational Resources Information Center
Garriz, Abel E.; Sztrajman, Alejandro; Mitnik, Dario
2010-01-01
The propagation in time of a wavepacket is a conceptually rich problem suitable to be studied in any introductory quantum mechanics course. This subject is covered analytically in most of the standard textbooks. Computer simulations have become a widespread pedagogical tool, easily implemented in computer labs and in classroom demonstrations.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... criteria Analytic design plan includes: selecting sample based on criteria and running descriptive... to behavioral health treatment and recovery. The National Survey on Drug Use and Health estimates... ``Stay Covered Challenge'' calls for the development of a marketing/outreach campaign designed for use by...
Rethinking Intensive Quantities via Guided Mediated Abduction
ERIC Educational Resources Information Center
Abrahamson, Dor
2012-01-01
Some intensive quantities, such as slope, velocity, or likelihood, are perceptually privileged in the sense that they are experienced as holistic, irreducible sensations. However, the formal expression of these quantities uses "a/b" analytic metrics; for example, the slope of a line is the quotient of its rise and run. Thus, whereas students'…
An automated real-time free phenytoin assay to replace the obsolete Abbott TDx method.
Williams, Christopher; Jones, Richard; Akl, Pascale; Blick, Kenneth
2014-01-01
Phenytoin is a commonly used anticonvulsant that is highly protein bound with a narrow therapeutic range. The unbound fraction, free phenytoin (FP), is responsible for pharmacologic effects; therefore, it is essential to measure both FP and total serum phenytoin levels. Historically, the Abbott TDx method has been widely used for the measurement of FP and was the method used in our laboratory. However, the FP TDx assay was recently discontinued by the manufacturer, so we had to develop an alternative methodology. We evaluated the Beckman-Coulter DxC800-based FP method for linearity, analytical sensitivity, and precision. The analytical measurement range of the method was 0.41 to 5.30 μg/mL. Within-run and between-run precision studies yielded CVs of 3.8% and 5.5%, respectively. The method compared favorably with the TDx method, yielding the following regression equation: DxC800 = 0.9 × TDx + 0.10; r² = 0.97 (n = 97). The new FP assay appears to be an acceptable alternative to the TDx method.
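A method-comparison statement like the regression above reduces to an ordinary least-squares fit and a coefficient of determination over paired results. A sketch with invented paired values chosen to sit near the reported line, not the study's 97 specimens:

```python
import numpy as np

# Hypothetical paired free-phenytoin results (ug/mL) from the two analyzers
tdx = np.array([0.6, 1.0, 1.5, 2.1, 2.8, 3.4, 4.2, 5.0])
noise = np.array([0.02, -0.03, 0.01, 0.04, -0.02, 0.00, -0.04, 0.03])
dxc = 0.9 * tdx + 0.10 + noise   # new method tracking the old with small error

slope, intercept = np.polyfit(tdx, dxc, 1)   # ordinary least-squares line
r2 = np.corrcoef(tdx, dxc)[0, 1] ** 2        # coefficient of determination
print(f"DxC800 = {slope:.2f} * TDx + {intercept:.2f}; r^2 = {r2:.3f}")
```

For a real evaluation, a slope-and-intercept technique that allows for error in both methods (e.g. Deming or Passing-Bablok regression) is often preferred over OLS; the snippet above only mirrors the form of the reported equation.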
π-Extended triptycene-based material for capillary gas chromatographic separations.
Yang, Yinhui; Wang, Qinsi; Qi, Meiling; Huang, Xuebin
2017-10-02
Triptycene-based materials feature favorable physicochemical properties and unique molecular recognition ability that offer good potential as stationary phases for capillary gas chromatography (GC). Herein, we report the investigation of utilizing a π-extended triptycene material (denoted as TQPP) for GC separations. As a result, the TQPP capillary column exhibited high column efficiency of 4030 plates m⁻¹ and high-resolution performance for a wide range of analytes, especially structural and positional isomers. Interestingly, the TQPP stationary phase showed unique shape selectivity for alkane isomers and preferential retention for analytes with halogen atoms and H-bonding nature mainly through their halogen-bonding and H-bonding interactions. In addition, the TQPP column had good repeatability and reproducibility, with RSD values of 0.02-0.34% for run-to-run, 0.09-0.80% for day-to-day and 1.4-5.2% for column-to-column, respectively, and favorable thermal stability up to 280 °C. This work demonstrates the promising future of triptycene-based materials as a new class of stationary phases for GC separations. Copyright © 2017 Elsevier B.V. All rights reserved.
Is Single-Port Laparoscopy More Precise and Faster with the Robot?
Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D
2016-11-01
Single-port laparoscopy is a step forward toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in chopstick setting might overcome these problems. This study evaluated the outcome in time and errors of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform, in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks: peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings: SILS and a RSILS setting. The assessment scores (time and errors) were recorded. For the first task of peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 errors versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 errors versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly so (116 versus 157 seconds, P = .08). For the second task, a suturing task, only 3 participants of the SILS group were able to perform this task within the set time frame of 600 seconds. There was no significant difference in time in the three runs between SILS and RSILS for the 3 participants that fulfilled both tasks within the 600 seconds. This study suggests that robotic single-port surgery makes basic tasks of the Fundamentals of Laparoscopic Surgery easier, faster, and more precise to perform. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill this task within the set time frame.
Jin, Chunfen; Viidanoja, Jyrki
2017-01-15
An existing liquid chromatography-mass spectrometry (LC-MS) method for the analysis of short-chain carboxylic acids was expanded and validated to also cover the measurement of glycerol in oils and fats. The method employs chloride anion attachment and two ions, [glycerol+³⁵Cl]⁻ and [glycerol+³⁷Cl]⁻, as alternative quantifiers for improved selectivity of the glycerol measurement. The averaged within-run precision, between-run precision, and accuracy ranged between 0.3-7%, 0.4-6%, and 94-99%, respectively, depending on the analyte ion and sample matrix. Selected renewable diesel feedstocks were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Haxton, Wick; Lunardini, Cecilia
2008-09-01
Semi-leptonic electroweak interactions in nuclei—such as β decay, μ capture, charged- and neutral-current neutrino reactions, and electron scattering—are described by a set of multipole operators carrying definite parity and angular momentum, obtained by projection from the underlying nuclear charge and three-current operators. If these nuclear operators are approximated by their one-body forms and expanded in the nucleon velocity through order |p→|/M, where p→ and M are the nucleon momentum and mass, a set of seven multipole operators is obtained. Nuclear structure calculations are often performed in a basis of Slater determinants formed from harmonic oscillator orbitals, a choice that allows translational invariance to be preserved. Harmonic-oscillator single-particle matrix elements of the multipole operators can be evaluated analytically and expressed in terms of finite polynomials in q, where q is the magnitude of the three-momentum transfer. While results for such matrix elements are available in tabular form, with certain restrictions on quantum numbers, the task of determining the analytic form of a response function can still be quite tedious, requiring the folding of the tabulated matrix elements with the nuclear density matrix, and subsequent algebra to evaluate products of operators. Here we provide a Mathematica script for generating these matrix elements, which will allow users to carry out all such calculations by symbolic manipulation. This will eliminate the errors that may accompany hand calculations and speed the calculation of electroweak nuclear cross sections and rates. We illustrate the use of the new script by calculating the cross sections for charged- and neutral-current neutrino scattering in ¹²C.
Program summary
Program title: SevenOperators
Catalogue identifier: AEAY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2227
No. of bytes in distributed program, including test data, etc.: 19 382
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer running Mathematica; tested on Mac OS X PowerPC (32-bit) running Mathematica 6.0.0
Operating system: Any running Mathematica
RAM: Memory requirements determined by Mathematica; 512 MB or greater RAM and hard drive space of at least 3.0 GB recommended
Classification: 17.16, 17.19
Nature of problem: Algebraic evaluation of harmonic oscillator nuclear matrix elements of the one-body multipole operators governing semi-leptonic weak interactions, such as charged- or neutral-current neutrino scattering off nuclei.
Solution method: Mathematica evaluation of the associated angular momentum algebra and spherical Bessel function radial integrals.
Running time: Depends on the complexity of the one-body density matrix employed, but times of a few seconds are typical.
Dairy Analytics and Nutrient Analysis (DANA) Prototype System User Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sam Alessi; Dennis Keiser
2012-10-01
This document is a user manual for the Dairy Analytics and Nutrient Analysis (DANA) model. DANA provides an analysis of dairy anaerobic digestion technology and allows users to calculate biogas production, co-product valuation, capital costs, expenses, revenue, and financial metrics for user-customizable scenarios, dairy types, and digester types. The model provides results for three anaerobic digester types (covered lagoon, modified plug flow, and complete mix) and three main energy production technologies (electricity generation, renewable natural gas generation, and compressed natural gas generation). Additional options include different dairy types, bedding types, and backend treatment types, as well as numerous production and economic parameters. DANA's goal is to extend the National Market Value of Anaerobic Digester Products analysis (Informa Economics, 2012; Innovation Center, 2011) to include a greater and more flexible set of regional digester scenarios and to provide a modular framework for creation of a tool to support farmer and investor needs. Users can set up scenarios from combinations of existing parameters or add new parameters, run the model, and view a variety of reports, charts, and tables that are automatically produced and delivered over the web interface. DANA is based on INL's analysis architecture, the Generalized Environment for Modeling Systems (GEMS), which offers extensive collaboration, analysis, and integration opportunities and greatly speeds the ability to construct highly scalable, web-delivered, user-oriented decision tools. DANA's approach uses server-based data processing and web-based user interfaces rather than a client-based spreadsheet approach, which offers a number of benefits. Server processing and storage can scale up to handle a very large number of scenarios, so that analysis at the county, or even field, level across the whole U.S. can be performed. Server-based databases allow dairy and digester parameters to be held and managed in a single managed data repository while allowing users to customize standard values and perform individual analyses. Server-based calculations can be easily extended, versions and upgrades can be managed, and any changes are immediately available to all users. This user manual describes how to use and/or modify input database tables, run DANA, and view and modify reports.
Panos, Joseph A.; Hoffman, Joshua T.; Wordeman, Samuel C.; Hewett, Timothy E.
2016-01-01
Background: Correction of neuromuscular impairments after anterior cruciate ligament injury is vital to successful return to sport. Frontal-plane knee control during landing is a common measure of lower-extremity neuromuscular control, and asymmetries in neuromuscular control of the knee can predispose injured athletes to additional injury and associated morbidities. Therefore, this study investigated the effects of anterior cruciate ligament injury on knee biomechanics during landing. Methods: Two-dimensional frontal-plane video of single leg drop, crossover drop, and drop vertical jump dynamic movement trials was analyzed for twenty injured and reconstructed athletes. The position of the knee joint center was tracked in ImageJ software for 500 milliseconds after landing to calculate medio-lateral knee motion velocities and determine normal fluency, the number of times per second knee velocity changed direction. The inverse of this calculation, analytical fluency, was used to associate larger numerical values with fluent movement. Findings: Analytical fluency was decreased in involved limbs for single leg drop trials (P=0.0018). Importantly, analytical fluency for single leg drop differed compared to crossover drop trials for involved (P<0.001), but not uninvolved, limbs (P=0.5029). For involved limbs, analytical fluency values exhibited a stepwise trend in relative magnitudes. Interpretation: Decreased analytical fluency in involved limbs is consistent with previous studies. Fluency asymmetries observed during single leg drop tasks may be indicative of aberrant landing strategies in the involved limb. Analytical fluency differences in unilateral tasks for injured limbs may represent neuromuscular impairment as a result of injury. PMID:26895446
How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.
Wiczling, Paweł; Kaliszan, Roman
2016-01-05
In this work, we proposed and investigated a Bayesian inference procedure to find desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed-effects model was used to specify the prior information about a new analyte with known physicochemical properties. The prior (no preliminary data) and posterior (prior plus one experiment) predictive distributions were then determined sequentially to search for the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov chain Monte Carlo (MCMC) algorithm implemented in Matlab. Simulations with artificial analytes and experimental data for ketoprofen and papaverine were used to test the proposed methodology. The simulation experiment showed that for a single analyte and for two randomly selected analytes, there is a 97% and a 74% probability, respectively, of obtaining a successful chromatogram using no or one preliminary experiment. The desired separation for ketoprofen and papaverine was established based on a single experiment. This confirms that the search for a desired separation rarely requires a large number of chromatographic analyses, at least for a simple optimization problem. The proposed Bayesian optimization scheme is a powerful method for finding a desired chromatographic separation based on a small number of preliminary experiments.
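The slice-sampling MCMC step mentioned in this abstract can be illustrated with a generic univariate slice sampler (stepping-out to bracket the slice, then shrinkage). This is a sketch in Python, not the authors' Matlab implementation; the target log-density, step width `w`, and iteration counts are illustrative.

```python
import math
import random

def slice_sample(logpdf, x0, n, w=1.0, max_steps=50):
    """Univariate slice sampler: step out to bracket the slice at a random
    vertical level, then shrink the interval until a point is accepted."""
    xs, x = [], x0
    for _ in range(n):
        # Vertical level: log u + log f(x), with u ~ Uniform(0, 1).
        logy = logpdf(x) + math.log(random.random())
        # Step out to find an interval [l, r] that brackets the slice.
        l = x - w * random.random()
        r = l + w
        k = max_steps
        while k > 0 and logpdf(l) > logy:
            l -= w
            k -= 1
        k = max_steps
        while k > 0 and logpdf(r) > logy:
            r += w
            k -= 1
        # Shrink the interval until a proposal lands inside the slice.
        while True:
            x1 = l + (r - l) * random.random()
            if logpdf(x1) > logy:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        xs.append(x)
    return xs

# Example: draw from a standard normal via its unnormalized log-density.
random.seed(1)
draws = slice_sample(lambda x: -0.5 * x * x, 0.0, 5000)
```

Because each accepted point is guaranteed to lie under the density, no tuning of an acceptance rate is needed, which is one reason slice sampling is attractive for automated procedures like the one described.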
NASA Astrophysics Data System (ADS)
Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.
2016-12-01
Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster with 14 GB of RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
NASA Astrophysics Data System (ADS)
Chevalier, Paul; Piccardo, Marco; Anand, Sajant; Mejia, Enrique A.; Wang, Yongrui; Mansuripur, Tobias S.; Xie, Feng; Lascola, Kevin; Belyanin, Alexey; Capasso, Federico
2018-02-01
Free-running Fabry-Perot lasers normally operate in a single-mode regime until the pumping current is increased beyond the single-mode instability threshold, above which they evolve into a multimode state. As a result of this instability, the single-mode operation of these lasers is typically constrained to a few percent of their output power range, an undesired limitation in spectroscopy applications. In order to expand the span of single-mode operation, we use an optical injection seed generated by an external-cavity single-mode laser source to force the Fabry-Perot quantum cascade laser into a single-mode state in the high current range, where it would otherwise operate in a multimode regime. Utilizing this approach, we achieve single-mode emission at room temperature with a tuning range of 36 cm-1 and stable continuous-wave output power exceeding 1 W at 4.5 μm. Far-field measurements show that a single transverse mode is emitted up to the highest optical power, indicating that the beam properties of the seeded Fabry-Perot laser remain unchanged compared to free-running operation.
Nine-analyte detection using an array-based biosensor
NASA Technical Reports Server (NTRS)
Taitt, Chris Rowe; Anderson, George P.; Lingerfelt, Brian M.; Feldstein, Mark J.; Ligler, Frances S.
2002-01-01
A fluorescence-based multianalyte immunosensor has been developed for simultaneous analysis of multiple samples. While the standard 6 x 6 format of the array sensor has been used to analyze six samples for six different analytes, this same format has the potential to allow a single sample to be tested for 36 different agents. The method described herein demonstrates proof of principle that the number of analytes detectable using a single array can be increased simply by using complementary mixtures of capture and tracer antibodies. Mixtures were optimized to allow detection of closely related analytes without significant cross-reactivity. Following this facile modification of patterning and assay procedures, the following nine targets could be detected in a single 3 x 3 array: Staphylococcal enterotoxin B, ricin, cholera toxin, Bacillus anthracis Sterne, Bacillus globigii, Francisella tularensis LVS, Yersinia pestis F1 antigen, MS2 coliphage, and Salmonella typhimurium. This work maximizes the efficiency and utility of the described array technology, increasing only reagent usage and cost; production and fabrication costs are not affected.
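The capture/tracer mixture idea can be made concrete with a toy combinatorial layout: each row is patterned with a mixture of three capture antibodies and each column is probed with a mixture of three tracer antibodies, so each of the nine row-column intersections identifies exactly one analyte. The particular groupings below are hypothetical, not the pairings used in the paper.

```python
# Toy 3 x 3 mixture layout: 3 capture mixes (rows) and 3 tracer mixes
# (columns) resolve 9 analytes, one per cell. Groupings are illustrative.
analytes = ["SEB", "ricin", "cholera toxin",
            "B. anthracis", "B. globigii", "F. tularensis",
            "Y. pestis F1", "MS2", "S. typhimurium"]

# Row i captures analytes 3i..3i+2; column j traces analytes j, j+3, j+6.
capture_rows = [set(analytes[3 * i:3 * i + 3]) for i in range(3)]
tracer_cols = [set(analytes[j::3]) for j in range(3)]

# A cell can light up only for an analyte bound by BOTH its row capture
# mix and its column tracer mix, i.e. the intersection of the two sets.
grid = [[row & col for col in tracer_cols] for row in capture_rows]
```

With this arrangement, six antibody mixtures suffice to distinguish nine analytes, which is the efficiency gain the abstract describes for the 3 x 3 array.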
NASA Astrophysics Data System (ADS)
Garnache, Arnaud; Myara, Mikhaël; Laurain, A.; Bouchier, Aude; Perez, J. P.; Signoret, P.; Sagnes, I.; Romanini, D.
2017-11-01
We present a highly coherent semiconductor laser device formed by a ½-VCSEL structure and an external concave mirror in a millimetre-scale, high-finesse stable cavity. The quantum-well structure is diode-pumped by a commercial single-mode GaAs laser diode system. This free-running, low-noise, tunable single-frequency laser exhibits >50 mW output power in a low-divergence circular TEM00 beam with a spectral linewidth below 1 kHz and relative intensity noise close to the quantum limit. This approach ensures, with a compact design, homogeneous gain behaviour and a sufficiently long photon lifetime to reach the oscillation-relaxation-free class-A regime, with a cut-off frequency around 10 MHz.
ERIC Educational Resources Information Center
Field, Christopher Ryan
2009-01-01
Developments in analytical chemistry were made using acoustically levitated small volumes of liquid to study enzyme reaction kinetics and by detecting volatile organic compounds in the gas phase using single-walled carbon nanotubes. Experience gained in engineering, electronics, automation, and software development from the design and…
Hartmann, Anja; Becker, Kathrin; Karsten, Ulf; Remias, Daniel; Ganzera, Markus
2015-01-01
Mycosporine-like amino acids (MAAs), a group of small secondary metabolites found in algae, cyanobacteria, lichens and fungi, have become ecologically and pharmacologically relevant because of their pronounced UV-absorbing and photo-protective potential. Their analytical characterization is generally achieved by reversed phase HPLC and the compounds are often quantified based on molar extinction coefficients. As an alternative approach, in our study a fully validated hydrophilic interaction liquid chromatography (HILIC) method is presented. It enables the precise quantification of several analytes with adequate retention times in a single run, and can be coupled directly to MS. Excellent linear correlation coefficients (R² > 0.9991) were obtained, with limit of detection (LOD) values ranging from 0.16 to 0.43 µg/mL. Furthermore, the assay was found to be accurate (recovery rates from 89.8% to 104.1%) and precise (intra-day precision: 5.6%, inter-day precision ≤6.6%). Several algae were assayed for their content of known MAAs like porphyra-334, shinorine, and palythine. Liquid chromatography-mass spectrometry (LC-MS) data indicated a novel compound in some of them, which could be isolated from the marine species Catenella repens and structurally elucidated by nuclear magnetic resonance spectroscopy (NMR) as (E)-3-hydroxy-2-((5-hydroxy-5-(hydroxymethyl)-2-methoxy-3-((2-sulfoethyl)amino)cyclohex-2-en-1-ylidene)amino) propanoic acid, a novel MAA called catenelline. PMID:26473886
Hartmann, Anja; Becker, Kathrin; Karsten, Ulf; Remias, Daniel; Ganzera, Markus
2015-10-09
Mycosporine-like amino acids (MAAs), a group of small secondary metabolites found in algae, cyanobacteria, lichens and fungi, have become ecologically and pharmacologically relevant because of their pronounced UV-absorbing and photo-protective potential. Their analytical characterization is generally achieved by reversed phase HPLC and the compounds are often quantified based on molar extinction coefficients. As an alternative approach, in our study a fully validated hydrophilic interaction liquid chromatography (HILIC) method is presented. It enables the precise quantification of several analytes with adequate retention times in a single run, and can be coupled directly to MS. Excellent linear correlation coefficients (R² > 0.9991) were obtained, with limit of detection (LOD) values ranging from 0.16 to 0.43 µg/mL. Furthermore, the assay was found to be accurate (recovery rates from 89.8% to 104.1%) and precise (intra-day precision: 5.6%, inter-day precision ≤6.6%). Several algae were assayed for their content of known MAAs like porphyra-334, shinorine, and palythine. Liquid chromatography-mass spectrometry (LC-MS) data indicated a novel compound in some of them, which could be isolated from the marine species Catenella repens and structurally elucidated by nuclear magnetic resonance spectroscopy (NMR) as (E)-3-hydroxy-2-((5-hydroxy-5-(hydroxymethyl)-2-methoxy-3-((2-sulfoethyl)amino)cyclohex-2-en-1-ylidene)amino) propanoic acid, a novel MAA called catenelline.
Aeroelastic Optimization Study Based on the X-56A Model
NASA Technical Reports Server (NTRS)
Li, Wesley W.; Pak, Chan-Gi
2014-01-01
One way to increase aircraft fuel efficiency is to reduce structural weight while maintaining adequate structural airworthiness, both statically and aeroelastically. A design process which incorporates the object-oriented multidisciplinary design, analysis, and optimization (MDAO) tool and the aeroelastic effects of high-fidelity finite element models to characterize the design space was successfully developed and established. This paper presents two multidisciplinary design optimization studies using an object-oriented MDAO tool developed at NASA Armstrong Flight Research Center. The first study demonstrates the use of aeroelastic tailoring concepts to minimize the structural weight while meeting the design requirements, including strength, buckling, and flutter. Such an approach exploits the anisotropic capabilities of the fiber composite materials chosen for this analytical exercise with ply stacking sequence. A hybrid and discretization optimization approach improves the accuracy and computational efficiency of a global optimization algorithm. The second study presents a flutter mass balancing optimization study for the fabricated flexible wing of the X-56A model, since a desired flutter speed band is required for the active flutter suppression demonstration during flight testing. The results of the second study provide guidance to modify the wing design and move the design flutter speeds back into the flight envelope so that the original objective of the X-56A flight test can be accomplished successfully. The second case also demonstrates that the object-oriented MDAO tool can handle multiple analytical configurations in a single optimization run.
Noubarani, Maryam; Keyhanfar, Fariborz; Motevalian, Manijeh; Mahmoudian, Masoud
2010-01-01
To develop a simple and rapid HPLC method for measuring the concentrations of four proton-pump inhibitors (PPIs), omeprazole (OPZ), pantoprazole (PPZ), lansoprazole (LPZ), and rabeprazole (RPZ), in human plasma. Following a single-step liquid-liquid extraction, the analytes, along with an internal standard (IS), were separated using an isocratic mobile phase of phosphate buffer (10 mM)/acetonitrile (53/47, v/v, pH adjusted to 7.3 with triethylamine) at a flow rate of 1 mL/min on a reverse-phase TRACER EXCEL 120 ODS-A column at room temperature. The total analytical run time for the selected PPIs was 10 min. The assays exhibited good linearity (r² > 0.99) over the studied ranges of 20 to 2500 ng/mL for OPZ, 20 to 4000 ng/mL for PPZ, 20 to 3000 ng/mL for LPZ, and 20 to 1500 ng/mL for RPZ. The recovery of the method was equal to or greater than 80%, and the lower limit of quantification (LLOQ) was 20 ng/mL for all four PPIs. Coefficients of variation and error for all of the intra-day and inter-day assessments were less than 9.2% for all compounds. The results indicate that this method is a simple, rapid, precise, and accurate assay for the determination of the concentrations of the four PPIs in human plasma. This validated method is sensitive and reproducible enough to be used in pharmacokinetic studies and is also time- and cost-effective when the selected PPIs are to be analyzed.
Zhang, Xiaokai; Zhang, Song; Yang, Qian; Cao, Wei; Xie, Yanhua; Qiu, Pengcheng; Wang, Siwang
2015-01-01
Background: Yuanhu Zhitong prescription (YZP) is a famous traditional Chinese medicine formula, officially recorded in the Chinese Pharmacopoeia for the treatment of stomach pain, hypochondriac pain, headache, and dysmenorrhea caused by qi stagnancy and blood stasis. This is the first report of the simultaneous determination of 12 active components in YZP. Objective: A new, simple, accurate, and reliable method for the separation and determination of 12 active components (protopine, α-allocryptopine, coptisine, xanthotol, palmatine, dehydrocorydaline, glaucine, tetrahydropalmatine, tetrahydroberberine, imperatorin, corydaline, isoimperatorin) in YZP was developed and validated using HPLC-PAD. Materials and Methods: The analyses were performed on a Phenomenex Luna-C18 (2) column (250×4.6 mm, 5.0 μm) with a gradient elution program using a mixture of acetonitrile and 0.1% phosphoric acid in water (adjusted with triethylamine to pH 5.6) as the mobile phase, at 30°C with a flow rate of 1.0 mL/min. Results: The validated method was applied to analyze four major dosage forms of YZP from different manufacturers, with good linearity (r², 0.9981~0.9999), precision (RSD, 0.24~2.89%), repeatability (RSD, 0.15~3.34%), stability (RSD, 0.14~3.35%), and recovery (91.13~110.81%) for the 12 components. Conclusion: The proposed method enables the separation and determination of 12 active components in a single run for the quality control of YZP. PMID:25709212
Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y; Glascoe, L
The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information, with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
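The analytical-versus-numerical tradeoff described here can be illustrated on the simplest reactive-transport case: steady-state one-dimensional advection with first-order decay, where the closed-form solution C(x) = C0·exp(-λx/v) can be checked against an upwind finite-difference solution. This sketch is a generic illustration, not one of the models surveyed in the chapter, and the parameter values are arbitrary.

```python
import math

def analytical(x, c0, v, lam):
    """Closed-form steady-state concentration for 1-D advection with
    first-order decay: v dC/dx = -lam * C, with C(0) = c0."""
    return c0 * math.exp(-lam * x / v)

def numerical(length, n, c0, v, lam):
    """First-order upwind finite-difference solution of the same equation:
    v * (C[i] - C[i-1]) / dx = -lam * C[i], rearranged for C[i]."""
    dx = length / n
    c = [c0]
    for _ in range(n):
        c.append(c[-1] / (1.0 + lam * dx / v))
    return c

# Arbitrary test problem: 10 m domain, v = 1 m/d, decay rate 0.5 1/d.
c0, v, lam, length = 1.0, 1.0, 0.5, 10.0
exact = analytical(length, c0, v, lam)
approx = numerical(length, 1000, c0, v, lam)[-1]
rel_err = abs(approx - exact) / exact
```

The analytical solution is exact and essentially free to evaluate, while the numerical solution only approaches it as the grid is refined; that cost-versus-resolution gap is exactly the tradeoff the chapter discusses, and it widens as reaction networks and geometry become too complex for closed-form solutions.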
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, George W
1998-12-11
A modified analytical system was assembled and calibrated in preparation for a second run with a cesium (Cs)-promoted "zinc chromite" catalyst. A new column for the on-line gas chromatograph (GC) was purchased for the analysis of various light olefin and paraffin isomers. A run was carried out in the continuous stirred autoclave using the Cs-promoted catalyst. Decahydronaphthalene was used as the slurry liquid. Reaction conditions were 375°C, 2000 psig total pressure, 0.5 H₂/CO ratio, and 5000 sL/kg (cat.)-hr. Analysis of the data from this run is in progress. A manuscript on the thermal stability of potential slurry liquids was submitted to 'Industrial and Engineering Chemistry Research,' and a paper was presented at the 1997 Spring National Meeting of the American Institute of Chemical Engineers, Houston, Texas.
Surface Analysis of Nerve Agent Degradation Products by ...
This sampling and analytical procedure was developed and applied by a single laboratory to investigate nerve agent degradation products, which may persist at a contaminated site, via surface wiping followed by analytical characterization. The performance data presented demonstrate the fitness for purpose of the surface analysis in that single laboratory. Surfaces (laminate, glass, galvanized steel, vinyl tile, painted drywall, and treated wood) were wiped with cotton gauze wipes, sonicated, extracted with distilled water, and filtered. Samples were analyzed by direct-injection electrospray ionization liquid chromatography tandem mass spectrometry (ESI-LC/MS/MS) without derivatization. Detection limit data were generated for all analytes of interest on a laminate surface. Accuracy and precision data were generated from each surface fortified with these analytes.
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple networked PC hosts. The system uses Simple Object Access Protocol (SOAP)-driven web services offering the model run on remote hosts and a multi-threaded environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% was reached compared to a model run on the fastest single host.
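The pattern described here (remote model-run services plus a multi-threaded dispatcher) can be sketched compactly. This is a Python sketch, not the article's Perl/SOAP system: XML-RPC stands in for SOAP, the two services run in-process rather than on separate hosts, and the model itself is a stub.

```python
from concurrent.futures import ThreadPoolExecutor
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

def run_model(cell_id):
    """Stub standing in for one run of the one-dimensional model."""
    return {"cell": cell_id, "emission": cell_id * 0.1}  # placeholder result

def serve():
    """Start a 'remote' model service; in the article these would be SOAP
    services on separate Linux/Windows hosts."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
    server.register_function(run_model)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return "http://127.0.0.1:%d" % server.server_address[1]

endpoints = [serve(), serve()]

def call(url, cell_id):
    # One proxy per call keeps the underlying transports thread-safe.
    with xmlrpc.client.ServerProxy(url) as proxy:
        return proxy.run_model(cell_id)

def dispatch(cell_ids):
    """Round-robin the model runs over the services, several in flight at once."""
    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        futures = [pool.submit(call, endpoints[i % len(endpoints)], c)
                   for i, c in enumerate(cell_ids)]
        return [f.result() for f in futures]

results = dispatch(range(10))
```

Because each grid cell's model run is independent, the dispatcher can keep every service busy, which is why the article sees near-linear speedup as hosts are added.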
Method for compression of data using single pass LZSS and run-length encoding
Berlin, G.J.
1994-01-01
A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.
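The two-tier length encoding described in this abstract can be sketched in Python. Simplifications to note: a standalone flag byte stands in for the set/cleared offset field of the LZSS output stream, every string is emitted in flagged form (the patent writes bytes outside long runs directly to the output buffer), and only the 18-byte threshold and the factor-of-255 quotient/remainder split are taken from the text.

```python
def encode_runs(data: bytes) -> bytes:
    """Two-tier run-length encoding sketch: runs shorter than 18 bytes store
    the actual length; longer runs store quotient and remainder of the run
    length divided by 255. A flag byte (0/1) stands in for the offset bit."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        run = j - i
        if run < 18:                      # short run: flag 0, actual length
            out += bytes([0, run, data[i]])
        else:                             # long run: flag 1, length = q*255 + r
            q, r = divmod(run, 255)
            out += bytes([1, q, r, data[i]])
        i = j
    return bytes(out)

def decode_runs(enc: bytes) -> bytes:
    """Inverse of encode_runs."""
    out = bytearray()
    i = 0
    while i < len(enc):
        if enc[i] == 0:                   # short run: length is stored directly
            out += bytes([enc[i + 2]]) * enc[i + 1]
            i += 3
        else:                             # long run: reassemble q*255 + r
            out += bytes([enc[i + 3]]) * (enc[i + 1] * 255 + enc[i + 2])
            i += 4
    return bytes(out)

# Round trip on image-like data containing long runs of identical bytes.
sample = b"\x00" * 300 + b"ABC" + b"\xff" * 17
packed = encode_runs(sample)
```

The quotient/remainder split is what lets a run of arbitrary length (here 300 bytes) be recorded in two bytes rather than a chain of 255-byte maximal runs, which is the scheme's advantage on scanned-image data.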
Method for compression of data using single pass LZSS and run-length encoding
Berlin, Gary J.
1997-01-01
A method used preferably with LZSS-based compression methods for compressing a stream of digital data. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer.
Asia Pacific Research Initiative for Sustainable Energy Systems 2011 (APRISES11)
2017-09-29
created during a single run, highlighting rapid prototyping capabilities. NRL's overall goal was to evaluate whether 3D printed metallic bipolar plates...varying the air flow to evaluate the effect on peak power. These runs are displayed in Figure 2.1.17. The reactants were connected in co-flow with the...way valve allows the operator to either run the gas through a humidifier (PermaPure Model FCl 25-240-7) or a bypass loop. On the humidifier side of
Running of the spectrum of cosmological perturbations in string gas cosmology
NASA Astrophysics Data System (ADS)
Brandenberger, Robert; Franzmann, Guilherme; Liang, Qiuyue
2017-12-01
We compute the running of the spectrum of cosmological perturbations in string gas cosmology, making use of a smooth parametrization of the transition between the early Hagedorn phase and the later radiation phase. We find that the running has the same sign as in simple models of single-scalar-field inflation. Its magnitude is proportional to (1 - n_s) (n_s being the slope index of the spectrum), and it is thus parametrically larger than for inflationary cosmology, where it is proportional to (1 - n_s)^2.
Rapid Large Earthquake and Run-up Characterization in Quasi Real Time
NASA Astrophysics Data System (ADS)
Bravo, F. J.; Riquelme, S.; Koch, P.; Cararo, S.
2017-12-01
Several tests in quasi-real time have been conducted by the rapid response group at CSN (National Seismological Center) to characterize earthquakes in real time. These methods are known for their robustness and reliability in creating Finite Fault Models. The W-phase FFM inversion, the wavelet-domain FFM, and the body-wave FFM have been implemented in real time at CSN; all these algorithms run automatically, triggered by the W-phase point source inversion. Fault dimensions (length and width) are predefined by adopting scaling laws for earthquakes in subduction zones. We tested this scheme on the last four major earthquakes in Chile: the 2010 Mw 8.8 Maule earthquake, the 2014 Mw 8.2 Iquique earthquake, the 2015 Mw 8.3 Illapel earthquake, and the Mw 7.6 Melinka earthquake. We obtain multiple solutions as time elapses; for each one, we calculate the run-up using an analytical formula. Our results are in agreement with FFMs already accepted by the scientific community, as well as with run-up observations in the field.
NASA Astrophysics Data System (ADS)
Arab, M.; Khodam-Mohammadi, A.
2018-03-01
As a deformed matter bounce scenario with a dark energy component, we propose one with a running vacuum model (RVM) in which the dark energy density ρ_Λ is written as a power series of H^2 and \dot{H}, with a constant equation-of-state parameter, the same as the cosmological constant, w = -1. Our results, from both analytical and numerical points of view, show that in some cases, as in the ΛCDM bounce scenario, although the spectral index may achieve good consistency with observations, a positive value of the running of the spectral index (α_s) is obtained, which is not compatible with the inflationary paradigm, which predicts a small negative value for α_s. However, by extending the power series up to H^4, ρ_Λ = n_0 + n_2 H^2 + n_4 H^4, and estimating a set of consistent parameters, we obtain the spectral index n_s, a small negative value of the running α_s, and the tensor-to-scalar ratio r, revealing a degeneracy between the deformed matter bounce scenario with RVM-DE and inflationary cosmology.
Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.
2012-01-01
Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186
Marty, Michael T; Sloan, Courtney D Kuhnline; Bailey, Ryan C; Sligar, Stephen G
2012-07-03
Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes, and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics.
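The gradient-injection idea described in the two abstracts above can be sketched with the standard 1:1 Langmuir binding model driven by a time-dependent analyte concentration. The rate constants and the quadratic gradient below are hypothetical, chosen only for illustration:

```python
def simulate_binding(ka, kd, rmax, conc, t_end, dt=0.001):
    """Forward-Euler integration of the 1:1 Langmuir binding model
    dR/dt = ka*C(t)*(Rmax - R) - kd*R, with a time-varying analyte
    concentration C(t) supplied as a callable (the injection gradient)."""
    r, t, trace = 0.0, 0.0, []
    while t < t_end:
        r += dt * (ka * conc(t) * (rmax - r) - kd * r)
        t += dt
        trace.append((t, r))
    return trace

# Hypothetical values, for illustration only (not from the paper):
ka, kd, rmax = 1e5, 1e-3, 100.0            # 1/(M*s), 1/s, response units
gradient = lambda t: 1e-8 * (t / 60.0)**2  # nonlinear (quadratic) ramp to 10 nM over 60 s
trace = simulate_binding(ka, kd, rmax, gradient, t_end=60.0)
```

Fitting such a simulated response against candidate (ka, kd) pairs is, in essence, how a single nonlinear-gradient experiment can replace a stepwise titration series.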
Uludağ, Yildiz; Piletsky, Sergey A; Turner, Anthony P F; Cooper, Matthew A
2007-11-01
Biomimetic recognition elements employed for the detection of analytes are commonly based on proteinaceous affibodies, immunoglobulins, single-chain and single-domain antibody fragments or aptamers. The alternative supra-molecular approach using a molecularly imprinted polymer now has proven utility in numerous applications ranging from liquid chromatography to bioassays. Despite inherent advantages compared with biochemical/biological recognition (which include robustness, storage endurance and lower costs) there are few contributions that describe quantitative analytical applications of molecularly imprinted polymers for relevant small molecular mass compounds in real-world samples. There is, however, significant literature describing the use of low-power, portable piezoelectric transducers to detect analytes in environmental monitoring and other application areas. Here we review the combination of molecularly imprinted polymers as recognition elements with piezoelectric biosensors for quantitative detection of small molecules. Analytes are classified by type and sample matrix presentation and various molecularly imprinted polymer synthetic fabrication strategies are also reviewed.
Analytic theory of alternate multilayer gratings operating in single-order regime.
Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan
2017-07-10
Using the coupled wave approach (CWA), we introduce an analytical theory for the alternate multilayer grating (AMG) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies that analogize the AMG to a crystal, we conclude that a symmetrical structure, i.e., equal thickness of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of the AMG compared with other multilayer gratings are analyzed. The influence of the multilayer materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.
A Dual-Loop Opto-Electronic Oscillator
NASA Astrophysics Data System (ADS)
Yao, X. S.; Maleki, L.; Ji, Y.; Lutes, G.; Tu, M.
1998-07-01
We describe and demonstrate a multiloop technique for single-mode selection in an opto-electronic oscillator (OEO). We present experimental results of a dual-loop OEO free running at 10 GHz that has the lowest phase noise (-140 dBc/Hz at 10 kHz from the carrier) of all free-running room-temperature oscillators to date.
Opportunities for bead-based multiplex assays in veterinary diagnostic laboratories
USDA-ARS?s Scientific Manuscript database
Bead-based multiplex assays (BBMA), also referred to as Luminex, MultiAnalyte Profiling, or cytometric bead array (CBA) assays, are applicable for high-throughput, simultaneous detection of multiple analytes in solution (from several, up to 50-500 analytes within a single, small sample volume). Curren...
Sedimentary Geothermal Feasibility Study: October 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad; Zerpa, Luis
The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed, and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step and reservoir areal dimensions; the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
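The grid-sizing observation generalizes: low-order discretizations smear sharp fronts by an amount that scales with cell size. A minimal, stdlib-only illustration (not the STARS model) of how refining the grid sharpens a transported thermal front under first-order upwind differencing:

```python
def advect_front(n_cells, cfl=0.25, t_final=0.5, velocity=1.0, length=1.0):
    """First-order upwind advection of a sharp front on n_cells cells.
    Upwind differencing adds numerical diffusion ~ v*dx/2*(1-CFL),
    so coarser grids smear the front more."""
    dx = length / n_cells
    dt = cfl * dx / velocity
    # Sharp "thermal front": injected fluid occupies the first 10% of the domain.
    u = [1.0 if (i + 0.5) * dx < 0.1 else 0.0 for i in range(n_cells)]
    t = 0.0
    while t < t_final:
        new = [1.0]  # inlet boundary held at the injected value
        for i in range(1, n_cells):
            new.append(u[i] - velocity * dt / dx * (u[i] - u[i - 1]))
        u, t = new, t + dt
    return u

def smeared_width(u, length=1.0):
    """Width of the numerically smeared transition zone (5%-95%)."""
    return sum(1 for v in u if 0.05 < v < 0.95) * (length / len(u))
```

Running this at, say, 25 versus 400 cells shows the transition zone shrinking as the grid is refined, which is the convergence behavior the abstract reports for the breakthrough time.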
NASA Astrophysics Data System (ADS)
Penin, A. A.; Pivovarov, A. A.
2001-02-01
We present an analytical description of top-antitop pair production near threshold in e⁺e⁻ annihilation and γγ collisions. A set of basic observables considered includes the total cross sections, the forward-backward asymmetry, and top quark polarization. The threshold effects relevant for the basic observables are described by three universal functions related to S-wave production, P-wave production, and S-P interference. These functions are computed analytically up to next-to-next-to-leading order in NRQCD. The total e⁺e⁻ → t t̄ cross section near threshold is obtained at next-to-next-to-leading order in closed form, including the contribution due to the axial coupling of the top quark, mediated by the Z boson. The effects of the running of the strong coupling constant and of the finite top quark width are taken into account analytically for the P-wave production and S-P wave interference.
ERIC Educational Resources Information Center
Stahelin, Nicolas
2017-01-01
In this case study of an environmental education (EE) program run in public schools of Rio de Janeiro, I use a constructivist spatial analytic to interrogate notions of space, place, and territory in critical EE practices. I examine the connections between socioenvironmental relations, counter-hegemonic political activity, and education by delving…
Preparing Tutors to Hit the Ground Running: Lessons from New Tutors' Experiences
ERIC Educational Resources Information Center
Calma, Angelito
2013-01-01
Tutor development is an essential part of academic staff development, yet is comparatively under-researched. This article examines what tutors value as most and least important in a program. Using data from more than 300 participants over three years, and using the dimensions of worth, merit and success as an analytical framework, the article…
Production of Computer Animated Movies for Educational Purposes.
ERIC Educational Resources Information Center
Elberg, H. H.
A detailed account is given in this paper of the procedures and the equipment used in producing six computer-animated instructional movies. First, the sequence of events was described in a script, which, together with the analytical expressions that were dealt with, formed the basis of a program. Then, the program was run on a computer and the…
Daniels, Vijay John; Harley, Dwight
2017-07-01
Although previous research has compared checklists to rating scales for assessing communication, the purpose of this study was to compare the effect on reliability and sensitivity to level of training of an analytic, a holistic, and a combined analytic-holistic rating scale in assessing communication skills. The University of Alberta Internal Medicine Residency runs an OSCE for postgraduate year (PGY) 1 and 2 residents and another for PGY-4 residents. Communication stations were scored with an analytic scale (empathy, non-verbal skills, verbal skills, and coherence subscales) and a holistic scale. The authors analyzed the reliability of the individual and combined scales using generalizability theory and evaluated each scale's sensitivity to level of training. For the analytic, holistic, and combined scales, 12, 12, and 11 stations respectively yielded a Phi of 0.8 for the PGY-1,2 cohort, and 16, 16, and 14 stations yielded a Phi of 0.8 for the PGY-4 cohort. PGY-4 residents scored higher on the combined scale, the analytic rating scale, and the non-verbal and coherence subscales. A combined analytic-holistic rating scale increased score reliability and was sensitive to level of training. Given the increased validity evidence, OSCE developers should consider combining analytic and holistic scales when assessing communication skills. Copyright © 2017 Elsevier B.V. All rights reserved.
Using Modules with MPICH-G2 (and "Loose Ends")
NASA Technical Reports Server (NTRS)
Chang, Johnny; Thigpen, William W. (Technical Monitor)
2002-01-01
A new approach to running complex, distributed MPI jobs using the MPICH-G2 library is described. This approach allows the user to switch between different versions of compilers, system libraries, MPI libraries, etc. via the "module" command. The key idea is a departure from the prescribed "(jobtype=mpi)" approach to running distributed MPI jobs. The new method requires the user to provide a script that will be run as the "executable" with the "(jobtype=single)" RSL attribute. The major advantage of the proposed method is to enable users to decide in their own script what modules, environment, etc. they would like to have in running their job.
Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav
2015-03-06
The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte, and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
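For reference, the single-selector expression of Wren and Rowe, which the abstract states the overall (multi-selector, multivalent) parameters also obey, has the form below. The symbols are the standard ones in this literature, not the paper's own notation:

```latex
\mu_{A,\mathrm{eff}} \;=\; \frac{\mu_A \;+\; \mu_{AS}\, K\,[S]}{1 \;+\; K\,[S]}
```

where μ_A is the mobility of the free analyte, μ_AS the mobility of the analyte-selector complex, K the (overall) complexation constant, and [S] the selector concentration. The paper's contribution is that K, μ_A, and μ_AS can be taken as overall, pH- and mixture-aware quantities while the functional form stays the same.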
Initialization and Simulation of Three-Dimensional Aircraft Wake Vortices
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Zheng, Z. C.
1997-01-01
This paper studies the effects of axial velocity profiles on vortex decay, in order to properly initialize and simulate three-dimensional wake vortex flow. Analytical relationships are obtained based on a single vortex model and computational simulations are performed for a rather practical vortex wake, which show that the single vortex analytical relations can still be applicable at certain streamwise sections of three-dimensional wake vortices.
McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian
2014-05-01
This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP.
Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running.
McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian
2014-01-01
This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear.
Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480
An Introduction to MAMA (Meta-Analysis of MicroArray data) System.
Zhang, Zhe; Fenstermacher, David
2005-01-01
Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server side for the storage of microarray datasets collected from various resources. The client side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. The MAMA implementation will integrate several analytical methods, including meta-analysis, within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.
Use of XML and Java for collaborative petroleum reservoir modeling on the Internet
NASA Astrophysics Data System (ADS)
Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal
2005-11-01
The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
Infrared Spectroscopy as a Chemical Fingerprinting Tool
NASA Technical Reports Server (NTRS)
Huff, Tim; Munafo, Paul M. (Technical Monitor)
2002-01-01
Infrared (IR) spectroscopy is a powerful analytical tool in the chemical fingerprinting of materials. The technique is rapid, reproducible and usually non-invasive. With the appropriate accessories, the technique can be used to examine samples in either a solid, liquid or gas phase. Solid samples of varying sizes and shapes may be used, and with the addition of microscopic IR (microspectroscopy) capabilities, minute materials such as single fibers and threads may be examined. With the addition of appropriate software, microspectroscopy can be used for automated discrete point or compositional surface area mapping, with the latter providing a means to record changes in the chemical composition of a material surface over a defined area. Both aqueous and non-aqueous free-flowing solutions can be analyzed using appropriate IR techniques, as can viscous liquids such as heavy oils and greases. Due to the ability to characterize gaseous samples, IR spectroscopy can also be coupled with thermal processes such as thermogravimetric (TG) analyses to provide both thermal and chemical data in a single run. In this configuration, solids (or liquids) heated in a TG analyzer undergo decomposition, with the evolving gases directed into the IR spectrometer. Thus, information is provided on the thermal properties of a material and the order in which its chemical constituents are broken down during incremental heating. Specific examples of these varied applications will be cited, with data interpretation and method limitations further discussed.
Use of XML and Java for collaborative petroleum reservoir modeling on the Internet
Victorine, J.; Watney, W.L.; Bhattacharya, S.
2005-01-01
The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. © 2005 Elsevier Ltd. All rights reserved.
GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening
NASA Astrophysics Data System (ADS)
Maureira-Fredes, Cristián; Amaro-Seoane, Pau
2018-01-01
A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct-summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations. But these methods tend to be computationally slow and cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY has high modularity, allowing users to readily introduce new physics, it exploits available computational resources and will be maintained by regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
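GRAVIDY's source is not reproduced in the abstract; as a sketch of the fourth-order Hermite predictor-corrector scheme it names, here is a minimal two-body, Newtonian version (G = 1 units, 2D, no relativistic corrections), with energy conservation as the accuracy check:

```python
import math

def acc_jerk(pos, vel, mass):
    """Direct-summation accelerations and jerks for all bodies (G = 1)."""
    n = len(mass)
    acc = [[0.0, 0.0] for _ in range(n)]
    jerk = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rx = pos[j][0] - pos[i][0]; ry = pos[j][1] - pos[i][1]
            vx = vel[j][0] - vel[i][0]; vy = vel[j][1] - vel[i][1]
            r2 = rx * rx + ry * ry
            r3 = r2 * math.sqrt(r2)
            rv = (rx * vx + ry * vy) / r2
            acc[i][0] += mass[j] * rx / r3
            acc[i][1] += mass[j] * ry / r3
            jerk[i][0] += mass[j] * (vx - 3.0 * rv * rx) / r3
            jerk[i][1] += mass[j] * (vy - 3.0 * rv * ry) / r3
    return acc, jerk

def hermite_step(pos, vel, mass, dt):
    """One 4th-order Hermite predictor-corrector step for all bodies."""
    n = len(mass)
    a0, j0 = acc_jerk(pos, vel, mass)
    # Predictor: Taylor expansion using acceleration and jerk.
    pp = [[pos[i][k] + vel[i][k] * dt + a0[i][k] * dt**2 / 2 + j0[i][k] * dt**3 / 6
           for k in (0, 1)] for i in range(n)]
    vp = [[vel[i][k] + a0[i][k] * dt + j0[i][k] * dt**2 / 2
           for k in (0, 1)] for i in range(n)]
    a1, j1 = acc_jerk(pp, vp, mass)
    # Corrector: Hermite interpolation of the force over the step.
    vn = [[vel[i][k] + (a0[i][k] + a1[i][k]) * dt / 2 + (j0[i][k] - j1[i][k]) * dt**2 / 12
           for k in (0, 1)] for i in range(n)]
    pn = [[pos[i][k] + (vel[i][k] + vn[i][k]) * dt / 2 + (a0[i][k] - a1[i][k]) * dt**2 / 12
           for k in (0, 1)] for i in range(n)]
    return pn, vn

def energy(pos, vel, mass):
    """Total energy (kinetic + potential), used to monitor integration accuracy."""
    kin = sum(0.5 * mass[i] * (vel[i][0]**2 + vel[i][1]**2) for i in range(len(mass)))
    pot = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            dx = pos[j][0] - pos[i][0]; dy = pos[j][1] - pos[i][1]
            pot -= mass[i] * mass[j] / math.sqrt(dx * dx + dy * dy)
    return kin + pot
```

The O(N²) double loop over bodies is exactly the direct-summation cost that GPU parallelization targets; each body's force sum is independent, which is why the problem maps well onto many-threaded hardware.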
A Sensitive Branched DNA HIV-1 Signal Amplification Viral Load Assay with Single Day Turnaround
Baumeister, Mark A.; Zhang, Nan; Beas, Hilda; Brooks, Jesse R.; Canchola, Jesse A.; Cosenza, Carlo; Kleshik, Felix; Rampersad, Vinod; Surtihadi, Johan; Battersby, Thomas R.
2012-01-01
Branched DNA (bDNA) is a signal amplification technology used in clinical and research laboratories to quantitatively detect nucleic acids. An overnight incubation is a significant drawback of highly sensitive bDNA assays. The VERSANT® HIV-1 RNA 3.0 Assay (bDNA) (“Versant Assay”) currently used in clinical laboratories was modified to allow shorter target incubation, enabling the viral load assay to be run in a single day. To dramatically reduce the target incubation from 16–18 h to 2.5 h, composition of only the “Lysis Diluent” solution was modified. Nucleic acid probes in the assay were unchanged. Performance of the modified assay (assay in development; not commercially available) was evaluated and compared to the Versant Assay. Dilution series replicates (>950 results) were used to demonstrate that analytical sensitivity, linearity, accuracy, and precision for the shorter modified assay are comparable to the Versant Assay. HIV RNA-positive clinical specimens (n = 135) showed no significant difference in quantification between the modified assay and the Versant Assay. Equivalent relative quantification of samples of eight genotypes was demonstrated for the two assays. Elevated levels of several potentially interfering endogenous substances had no effect on quantification or specificity of the modified assay. The modified assay with drastically improved turnaround time demonstrates the viability of signal-amplifying technology, such as bDNA, as an alternative to the PCR-based assays dominating viral load monitoring in clinical laboratories. Highly sensitive bDNA assays with a single day turnaround may be ideal for laboratories with especially stringent cost, contamination, or reliability requirements. PMID:22479381
A sensitive branched DNA HIV-1 signal amplification viral load assay with single day turnaround.
Baumeister, Mark A; Zhang, Nan; Beas, Hilda; Brooks, Jesse R; Canchola, Jesse A; Cosenza, Carlo; Kleshik, Felix; Rampersad, Vinod; Surtihadi, Johan; Battersby, Thomas R
2012-01-01
Branched DNA (bDNA) is a signal amplification technology used in clinical and research laboratories to quantitatively detect nucleic acids. An overnight incubation is a significant drawback of highly sensitive bDNA assays. The VERSANT® HIV-1 RNA 3.0 Assay (bDNA) ("Versant Assay") currently used in clinical laboratories was modified to allow shorter target incubation, enabling the viral load assay to be run in a single day. To dramatically reduce the target incubation from 16-18 h to 2.5 h, composition of only the "Lysis Diluent" solution was modified. Nucleic acid probes in the assay were unchanged. Performance of the modified assay (assay in development; not commercially available) was evaluated and compared to the Versant Assay. Dilution series replicates (>950 results) were used to demonstrate that analytical sensitivity, linearity, accuracy, and precision for the shorter modified assay are comparable to the Versant Assay. HIV RNA-positive clinical specimens (n = 135) showed no significant difference in quantification between the modified assay and the Versant Assay. Equivalent relative quantification of samples of eight genotypes was demonstrated for the two assays. Elevated levels of several potentially interfering endogenous substances had no effect on quantification or specificity of the modified assay. The modified assay with drastically improved turnaround time demonstrates the viability of signal-amplifying technology, such as bDNA, as an alternative to the PCR-based assays dominating viral load monitoring in clinical laboratories. Highly sensitive bDNA assays with a single day turnaround may be ideal for laboratories with especially stringent cost, contamination, or reliability requirements.
A numerical method for shock driven multiphase flow with evaporating particles
NASA Astrophysics Data System (ADS)
Dahal, Jeevan; McFarland, Jacob A.
2017-09-01
A numerical method for predicting the interaction of active, phase changing particles in a shock driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single particle parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particles sizes the vorticity deposition is decreased compared to single particle size case at the mean diameter.
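The single-particle validation mentioned above has a simple prototype: Stokes-type velocity relaxation of a particle toward the carrier-gas velocity, whose exact solution is exponential (temperature relaxation is analogous). The parameters below are hypothetical, not taken from the paper:

```python
import math

def relax_particle(u_gas, v0, tau, dt, steps):
    """Explicit-Euler integration of dv/dt = (u_gas - v)/tau, the
    drag-driven velocity relaxation of a single particle with
    response time tau toward the carrier-gas velocity u_gas."""
    v = v0
    for _ in range(steps):
        v += dt * (u_gas - v) / tau
    return v

def relax_exact(u_gas, v0, tau, t):
    """Analytical solution of the same ODE, for comparison."""
    return u_gas + (v0 - u_gas) * math.exp(-t / tau)
```

Comparing the numerical and analytical velocity histories in this way is the kind of single-particle check the abstract describes; larger particles correspond to larger tau and thus slower relaxation, which is why they deposit vorticity into the gas differently.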
Navarro, María; Kontoudakis, Nikolaos; Canals, Joan Miquel; García-Romero, Esteban; Gómez-Alonso, Sergio; Zamora, Fernando; Hermosín-Gutiérrez, Isidro
2017-07-01
A new method for the analysis of ellagitannins observed in oak-aged wine is proposed, exhibiting interesting advantages with regard to previously reported analytical methods. The necessary extraction of ellagitannins from wine was simplified to a single step of solid phase extraction (SPE) using size exclusion chromatography with Sephadex LH-20 without the need for any previous SPE of phenolic compounds using reversed-phase materials. The quantitative recovery of wine ellagitannins requires a combined elution with methanol and ethyl acetate, especially for increasing the recovery of the less polar acutissimins. The chromatographic method was performed using a fused-core C18 column, thereby avoiding the coelution of main ellagitannins, such as vescalagin and roburin E. However, the very polar ellagitannins, namely, the roburins A, B and C, still partially coeluted, and their quantification was assisted by the MS detector. This methodology also enabled the analysis of free gallic and ellagic acids in the same chromatographic run. Copyright © 2017 Elsevier Ltd. All rights reserved.
SchemaOnRead: A Package for Schema-on-Read in R
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R’s flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper’s contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.
Internal quality control: best practice.
Kinns, Helen; Pitkin, Sarah; Housley, David; Freedman, Danielle B
2013-12-01
There is a wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios from validation of incorrect patient results to over investigation of falsely rejected analytical runs. This article will provide a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance varies between analytes as does the definition of a clinically significant error. Unfortunately many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
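The rejection rules the article discusses are commonly expressed as Westgard-style checks on control results standardized against the assay's mean and SD. The sketch below shows two widely used rules (1-3s and 2-2s) as an illustration; the specific rule assignments and thresholds a laboratory should use are exactly what the article's performance-based categorization determines, so treat these choices as assumptions:

```python
def qc_flags(results, mean, sd):
    """Apply two common IQC rejection rules to a run of control results.

    Rule selection here is illustrative (1-3s and 2-2s), not the
    article's assay-specific assignments.
    """
    z = [(x - mean) / sd for x in results]
    flags = []
    # 1-3s: a single control observation beyond +/- 3 SD.
    if any(abs(v) > 3 for v in z):
        flags.append("1-3s")
    # 2-2s: two consecutive controls beyond 2 SD on the same side.
    for a, b in zip(z, z[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            flags.append("2-2s")
            break
    return flags
```

An empty return list means the run passes these two rules; a non-empty list is a rejection signal to be investigated.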
Single-Gender Schools Scrutinized
ERIC Educational Resources Information Center
Zubrzycki, Jaclyn
2012-01-01
This article reports on a study on publicly run schools in the Republic of Trinidad and Tobago which has found that, while single-sex schools may benefit female students who prefer a single-sex environment, they are not inherently beneficial for boys or most girls. While the findings are based on data from one Caribbean nation, experts say they…
Advanced ETC/LSS computerized analytical models, CO2 concentration. Volume 1: Summary document
NASA Technical Reports Server (NTRS)
Taylor, B. N.; Loscutoff, A. V.
1972-01-01
Computer simulations have been prepared for the concepts of CO2 concentration which have the potential for maintaining a CO2 partial pressure of 3.0 mmHg, or less, in a spacecraft environment. The simulations were performed using the G-189A Generalized Environmental Control computer program. In preparing the simulations, new subroutines to model the principal functional components for each concept were prepared and integrated into the existing program. Sample problems were run to demonstrate the methods of simulation and performance characteristics of the individual concepts. Comparison runs for each concept can be made for parametric values of cabin pressure, crew size, cabin air dry and wet bulb temperatures, and mission duration.
Energy thresholds of discrete breathers in thermal equilibrium and relaxation processes.
Ming, Yi; Ling, Dong-Bo; Li, Hui-Min; Ding, Ze-Jun
2017-06-01
So far, only the energy thresholds of single discrete breathers in nonlinear Hamiltonian systems have been analytically obtained. In this work, the energy thresholds of discrete breathers in thermal equilibrium and the energy thresholds of long-lived discrete breathers which can remain after a long time relaxation are analytically estimated for nonlinear chains. These energy thresholds are size dependent. The energy thresholds of discrete breathers in thermal equilibrium are the same as the previous analytical results for single discrete breathers. The energy thresholds of long-lived discrete breathers in relaxation processes are different from the previous results for single discrete breathers but agree well with the published numerical results known to us. Because real systems are either in thermal equilibrium or in relaxation processes, the obtained results could be important for experimental detection of discrete breathers.
Process-Hardened, Multi-Analyte Sensor for Characterizing Rocket Plume Constituents
NASA Technical Reports Server (NTRS)
Goswami, Kisholoy
2011-01-01
A multi-analyte sensor was developed that enables simultaneous detection of rocket engine combustion-product molecules in a launch-vehicle ground test stand. The sensor was developed using a pin-printing method by incorporating multiple sensor elements on a single chip. It demonstrated accurate and sensitive detection of analytes such as carbon dioxide, carbon monoxide, kerosene, isopropanol, and ethylene from a single measurement. The use of pin-printing technology enables high-volume fabrication of the sensor chip, which will ultimately eliminate the need for individual sensor calibration since many identical sensors are made in one batch. Tests were performed using a single-sensor chip attached to a fiber-optic bundle. The use of a fiber bundle allows placement of the opto-electronic readout device at a place remote from the test stand. The sensors are rugged for operation in harsh environments.
Dong, Shuqing; Gao, Ruibin; Yang, Yan; Guo, Mei; Ni, Jingman; Zhao, Liang
2014-03-15
Although the separation efficiency of capillary electrophoresis (CE) is much higher than that of other chromatographic methods, it is sometimes difficult to adequately separate the complex ingredients in biological samples. This article describes how one effective and simple way to improve the separation efficiency in CE is to add modifiers to the running buffer. The running buffer modifier β-cyclodextrin (β-CD) was explored to rapidly and completely separate four phenylethanoid glycosides and aglycones (homovanillyl alcohol, hydroxytyrosol, 3,4-dimethoxycinnamic acid, and caffeic acid) in Lamiophlomis rotata (Lr) and Cistanche by capillary zone electrophoresis with ultraviolet (UV) detection. It was found that when β-CD was used as the running buffer modifier, a baseline separation of the four analytes could be accomplished in less than 20 min and the detection limits were as low as 10⁻³ mg L⁻¹. Other factors affecting the CE separation, such as working potential, pH value and ionic strength of the running buffer, separation voltage, and sample injection time, were investigated extensively. Under the optimal conditions, a successful practical application on the determination of Lr and Cistanche samples confirmed the validity and practicability of this method. Copyright © 2014 Elsevier Inc. All rights reserved.
Chen, Fangfang; Gong, Zhiyuan; Kelly, Barry C
2015-02-27
A sensitive analytical method based on liquid-liquid extraction (LLE) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was developed for rapid analysis of 11 pharmaceuticals and personal care products (PPCPs) in fish plasma micro-aliquots (~20 μL). Target PPCPs included bisphenol A, carbamazepine, diclofenac, fluoxetine, gemfibrozil, ibuprofen, naproxen, risperidone, sertraline, simvastatin and triclosan. A relatively quicker and cheaper LLE procedure exhibited analyte recoveries comparable with solid-phase extraction. Rapid separation and analysis of target compounds in fish plasma extracts was achieved by employing a high efficiency C-18 HPLC column (Agilent Poroshell 120 SB-C18, 2.1 mm × 50 mm, 2.7 μm) and fast polarity switching, enabling effective monitoring of positive and negative ions in a single 9 min run. With the exception of bisphenol A, which exhibited relatively high background contamination, method detection limits of individual PPCPs ranged between 0.15 and 0.69 pg/μL, while method quantification limits were between 0.05 and 2.3 pg/μL. Mean matrix effect (ME) values ranged between 65 and 156% for the various target analytes. Isotope dilution quantification using isotopically labelled internal surrogates was utilized to correct for signal suppression or enhancement and analyte losses during sample preparation. The method was evaluated by analysis of 20 μL plasma micro-aliquots collected from zebrafish (Danio rerio) from a laboratory bioaccumulation study, which included control group fish (no exposure), as well as fish exposed to environmentally relevant concentrations of PPCPs. Using the developed LC-MS/MS based method, concentrations of the studied PPCPs were consistently detected in the low pg/μL (ppb) range. The method may be useful for investigations requiring fast, reliable concentration measurements of PPCPs in fish plasma.
In particular, the method may be applicable for in situ contaminant biomonitoring, as well as bioaccumulation and toxicology studies employing small fishes with low blood compartment volumes. Copyright © 2015 Elsevier B.V. All rights reserved.
Koren, Lee; Ng, Ella S M; Soma, Kiran K; Wynne-Edwards, Katherine E
2012-01-01
Blood samples from wild mammals and birds are often limited in volume, allowing researchers to quantify only one or two steroids from a single sample by immunoassays. In addition, wildlife serum or plasma samples are often lipemic, necessitating stringent sample preparation. Here, we validated sample preparation for simultaneous liquid chromatography--tandem mass spectrometry (LC-MS/MS) quantitation of cortisol, corticosterone, 11-deoxycortisol, dehydroepiandrosterone (DHEA), 17β-estradiol, progesterone, 17α-hydroxyprogesterone and testosterone from diverse mammalian (7 species) and avian (5 species) samples. Using 100 µL of serum or plasma, we quantified (signal-to-noise (S/N) ratio ≥ 10) 4-7 steroids depending on the species and sample, without derivatization. Steroids were extracted from serum or plasma using automated solid-phase extraction where samples were loaded onto C18 columns, washed with water and hexane, and then eluted with ethyl acetate. Quantitation by LC-MS/MS was done in positive ion, multiple reaction-monitoring (MRM) mode with an atmospheric pressure chemical ionization (APCI) source and heated nebulizer (500°C). Deuterated steroids served as internal standards and run time was 15 minutes. Extraction recoveries were 87-101% for the 8 analytes, and all intra- and inter-run CVs were ≤ 8.25%. This quantitation method yields good recoveries with variable lipid-content samples, avoids antibody cross-reactivity issues, and delivers results for multiple steroids. Thus, this method can enrich datasets by providing simultaneous quantitation of multiple steroids, and allow researchers to reimagine the hypotheses that could be tested with their volume-limited, lipemic, wildlife samples.
Phase Noise Reduction of Laser Diode
NASA Technical Reports Server (NTRS)
Zhang, T. C.; Poizat, J.-Ph.; Grelu, P.; Roch, J.-F.; Grangier, P.; Marin, F.; Bramati, A.; Jost, V.; Levenson, M. D.; Giacobino, E.
1996-01-01
Phase noise of single-mode laser diodes at room temperature, either free-running or narrowed with a line-narrowing technique, namely injection locking, has been investigated. It is shown that free-running diodes exhibit very large excess phase noise, typically more than 80 dB above shot noise at 10 MHz, which can be significantly reduced by the above-mentioned technique.
SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies
ERIC Educational Resources Information Center
Yurdugul, Halil
2009-01-01
This article describes SIMREL, a software program designed for the simulation of alpha coefficients and the estimation of its confidence intervals. SIMREL runs on two alternatives. In the first one, if SIMREL is run for a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
A single peptide loop in an alpha-Helix
USDA-ARS?s Scientific Manuscript database
Pitch is not a height but a ratio of rise/run. In an alpha-helix, run can be taken as the radius (r) from the center of the circle, as a diameter (d) measured across/bisecting a circumference, or as a distance (c) along a circumference; rise in each case corresponds to the same height (h) increase. For ...
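The rise/run ratio described above can be made concrete with one turn of an idealized alpha-helix, taking run along the circumference. The numerical values below (canonical rise per turn of 5.4 Å and an approximate backbone radius of 2.3 Å) are illustrative assumptions, not data from the manuscript:

```python
import math

# One turn of an idealized alpha-helix; values are illustrative assumptions.
rise_per_turn = 5.4   # angstroms: height gained over one full turn
radius = 2.3          # angstroms: approximate backbone helix radius

# Run measured as the distance (c) along the circumference for one turn.
run_circumference = 2 * math.pi * radius
pitch_ratio = rise_per_turn / run_circumference
```

Choosing run as the radius (r) or diameter (d) instead simply rescales the ratio by 2π or π, which is the manuscript's point that pitch depends on how run is measured.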
Exercise-induced muscle damage and running economy in humans.
Assumpção, Cláudio de Oliveira; Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Greco, Camila Coelho; Denadai, Benedito Sérgio
2013-01-01
Running economy (RE), defined as the energy demand for a given velocity of submaximal running, has been identified as a critical factor of overall distance running performance. Plyometric and resistance trainings, performed during a relatively short period of time (~15-30 days), have been successfully used to improve RE in trained athletes. However, these exercise types, particularly when they are unaccustomed activities for the individuals, may cause delayed onset muscle soreness, swelling, and reduced muscle strength. Some studies have demonstrated that exercise-induced muscle damage has a negative impact on endurance running performance. Specifically, the muscular damage induced by an acute bout of downhill running has been shown to reduce RE during subsequent moderate and high-intensity exercise (>65% VO₂max). However, strength exercise (i.e., jumps, isoinertial and isokinetic eccentric exercises) seems to impair RE only for subsequent high-intensity exercise (~90% VO₂max). Finally, a single session of resistance exercise or downhill running (i.e., repeated bout effect) attenuates changes in indirect markers of muscle damage and blunts changes in RE.
Barefoot running: an evaluation of current hypothesis, future research and clinical applications.
Tam, Nicholas; Astephen Wilson, Janie L; Noakes, Timothy D; Tucker, Ross
2014-03-01
Barefoot running has become a popular research topic, driven by the increasing prescription of barefoot running as a means of reducing injury risk. Proponents of barefoot running cite evolutionary theories that long-distance running ability was crucial for human survival, and proof of the benefits of natural running. Subsequently, runners have been advised to run barefoot as a treatment mode for injuries, strength and conditioning. The body of literature examining the mechanical, structural, clinical and performance implications of barefoot running is still in its infancy. Recent research has found significant differences associated with barefoot running relative to shod running, and these differences have been associated with factors that are thought to contribute to injury and performance. Crucially, long-term prospective studies have yet to be conducted and the link between barefoot running and injury or performance remains tenuous and speculative. The injury prevention potential of barefoot running is further complicated by the complexity of injury aetiology, with no single factor having been identified as causative for the most common running injuries. The aim of the present review was to critically evaluate the theory and evidence for barefoot running, drawing on both collected evidence as well as literature that have been used to argue in favour of barefoot running. We describe the factors driving the prescription of barefoot running, examine which of these factors may have merit, what the collected evidence suggests about the suitability of barefoot running for its purported uses and describe the necessary future research to confirm or refute the barefoot running hypotheses.
Temperature dependence of single-event burnout in n-channel power MOSFET's
NASA Astrophysics Data System (ADS)
Johnson, G. H.; Schrimpf, R. D.; Galloway, K. F.; Koga, R.
1994-03-01
The temperature dependence of single-event burnout (SEB) in n-channel power metal-oxide-semiconductor field effect transistors (MOSFET's) is investigated experimentally and analytically. Experimental data are presented which indicate that the SEB susceptibility of the power MOSFET decreases with increasing temperature. A previously reported analytical model that describes the SEB mechanism is updated to include temperature variations. This model is shown to agree with the experimental trends.
Yan, Yifei; Zhang, Lisong; Yan, Xiangzhen
2016-01-01
In this paper, a single-slope tunnel pipeline was analysed considering the effects of vertical earth pressure, horizontal soil pressure, inner pressure, thermal expansion force and pipeline-soil friction. The concept of a stagnation point for the pipeline was proposed. Considering the deformation compatibility condition of the pipeline elbow, the push force on the anchor blocks of a single-slope tunnel pipeline was derived based on an energy method, yielding a theoretical formula for this force. Using the analytical equation, the push force of the anchor block of an X80 large-diameter pipeline from the West-East Gas Transmission Project was determined. Meanwhile, to verify the analytical results, four finite element codes were used to calculate the push force: CAESARII, ANSYS, AutoPIPE and ALGOR. The results show that the analytical results agree well with the numerical results, with a maximum relative error of only 4.1%. Therefore, the results obtained with the analytical method can satisfy engineering requirements. PMID:26963097
Method for compression of data using single pass LZSS and run-length encoding
Berlin, G.J.
1997-12-23
A method used preferably with LZSS-based compression methods for compressing a stream of digital data is disclosed. The method uses a run-length encoding scheme especially suited for data strings of identical data bytes having large run-lengths, such as data representing scanned images. The method reads an input data stream to determine the length of the data strings. Longer data strings are then encoded in one of two ways depending on the length of the string. For data strings having run-lengths less than 18 bytes, a cleared offset and the actual run-length are written to an output buffer and then a run byte is written to the output buffer. For data strings of 18 bytes or longer, a set offset and an encoded run-length are written to the output buffer and then a run byte is written to the output buffer. The encoded run-length is written in two parts obtained by dividing the run length by a factor of 255. The first of two parts of the encoded run-length is the quotient; the second part is the remainder. Data bytes that are not part of data strings of sufficient length are written directly to the output buffer. 3 figs.
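The two-branch run-length scheme described above can be sketched directly from the abstract: short runs get a cleared offset flag and the literal run length; runs of 18 bytes or longer get a set offset flag and a run length split into quotient and remainder by 255; bytes not in a sufficiently long run pass through as literals. The one-byte flag layout and the minimum run length of 4 for switching out of literal mode are assumptions for illustration, not details given in the patent abstract:

```python
def rle_encode(data: bytes) -> bytes:
    """Sketch of the patent's run-length branch (encoder only).

    Assumed layout: 0x00 flag + length byte for short runs;
    0x01 flag + (length // 255, length % 255) for long runs;
    the run byte follows the length field in both cases.
    """
    out = bytearray()
    i, n = 0, len(data)
    while i < n:
        # Measure the run of identical bytes starting at i.
        j = i
        while j < n and data[j] == data[i]:
            j += 1
        run = j - i
        if run < 4:
            # Too short to be worth encoding: emit literals (threshold assumed).
            out.extend(data[i:j])
        elif run < 18:
            # Cleared offset flag, actual run length, then the run byte.
            out.append(0x00)
            out.append(run)
            out.append(data[i])
        else:
            # Set offset flag, run length as quotient/remainder of 255,
            # then the run byte.
            out.append(0x01)
            out.append(run // 255)
            out.append(run % 255)
            out.append(data[i])
        i = j
    return bytes(out)
```

The quotient/remainder split keeps both length fields within a single byte while still representing runs far longer than 255 bytes, which suits the scanned-image data the method targets.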
NASA Technical Reports Server (NTRS)
1981-01-01
The modified CG2000 crystal grower construction, installation, and machine checkout were completed. The process development checkout proceeded with several dry runs and one growth run. Several machine calibration and functional problems were discovered and corrected. Exhaust gas analysis system alternatives were evaluated, and an integrated system was approved and ordered. Several growth runs on a development CG2000 RC grower show that complete neck, crown, and body automated growth can be achieved with only one operator input.
Development of Infants' Segmentation of Words from Native Speech: A Meta-Analytic Approach
ERIC Educational Resources Information Center
Bergmann, Christina; Cristia, Alejandrina
2016-01-01
Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their…
Cross-Layer Modeling Framework for Energy-Efficient Resilience
2014-04-01
functional block diagram of the software architecture of PEARL, which stands for: Power Efficient and Resilient Embedded Processing with Real-Time ... DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting... real-time performance targets. The integrated performance, power and resilience models are nothing but the analytical modeling toolkit described in
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2013 CFR
2013-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
1997-02-13
AMINOPROPIOPHENONE IN DOG PLASMA: analytical method for p-aminopropiophenone (WR 000,302) with guaifenesin as internal standard. Injection volume: 40-80 µL; run time: 14 min (PAPP: 10.7 min; guaifenesin internal standard: 8.5 min); detector wavelength: 316 nm.
Problem-Based Labs and Group Projects in an Introductory University Physics Course
ERIC Educational Resources Information Center
Kohnle, Antje; Brown, C. Tom A.; Rae, Cameron F.; Sinclair, Bruce D.
2012-01-01
This article describes problem-based labs and analytical and computational project work we have been running at the University of St Andrews in an introductory physics course since 2008/2009. We have found the choice of topics, scaffolding of the process, timing in the year and facilitator guidance decisive for the success of these activities.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shvetsov, N. K., E-mail: elmash@em.ispu.ru
2016-11-15
The results of calculations of the increase in losses in an induction motor with frequency control and different forms of the supply voltage are presented. The calculations were performed by an analytic method based on harmonic analysis of the supply voltage as well as numerical calculation of the electromagnetic processes by the finite-element method.
Level 1 environmental assessment performance evaluation. Final report jun 77-oct 78
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, E.D.; Smith, F.; Wagoner, D.E.
1979-02-01
The report gives results of a two-phased evaluation of Level 1 environmental assessment procedures. Results from Phase I, a field evaluation of the Source Assessment Sampling System (SASS), showed that the SASS train performed well within the desired factor of 3 Level 1 accuracy limit. Three sample runs were made with two SASS trains sampling simultaneously and from approximately the same sampling point in a horizontal duct. A Method-5 train was used to estimate the 'true' particulate loading. The sampling systems were upstream of the control devices to ensure collection of sufficient material for comparison of total particulate, particle size distribution, organic classes, and trace elements. Phase II consisted of providing each of three organizations with three types of control samples to challenge the spectrum of Level 1 analytical procedures: an artificial sample in methylene chloride, an artificial sample on a flyash matrix, and a real sample composed of the combined XAD-2 resin extracts from all Phase I runs. Phase II results showed that when the Level 1 analytical procedures are carefully applied, data of acceptable accuracy is obtained. Estimates of intralaboratory and interlaboratory precision are made.
Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C
2015-03-01
Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.
A comparative study of single-leg ground reaction forces in running lizards.
McElroy, Eric J; Wilson, Robbie; Biknevicius, Audrone R; Reilly, Stephen M
2014-03-01
The role of different limbs in supporting and propelling the body has been studied in many species with animals appearing to have either similarity in limb function or differential limb function. Differential hindlimb versus forelimb function has been proposed as a general feature of running with a sprawling posture and as benefiting sprawled postured animals by enhancing maneuvering and minimizing joint moments. Yet only a few species have been studied and thus the generality of differential limb function in running animals with sprawled postures is unknown. We measured the limb lengths of seven species of lizard and their single-limb three-dimensional ground reaction forces during high-speed running. We found that all species relied on the hindlimb for producing accelerative forces. Braking forces were forelimb dominated in four species and equally distributed between limbs in the other three. Vertical forces were dominated by the hindlimb in three species and equally distributed between the forelimb and hindlimb in the other four. Medial forces were dominated by the hindlimb in four species and equally distributed in the other three, with all Iguanians exhibiting hindlimb-biased medial forces. Relative hindlimb to forelimb length of each species was related to variation in hindlimb versus forelimb medial forces; species with relatively longer hindlimbs compared with forelimbs exhibited medial forces that were more biased towards the hindlimbs. These results suggest that the function of individual limbs in lizards varies across species with only a single general pattern (hindlimb-dominated accelerative force) being present.
NASA Technical Reports Server (NTRS)
Giacobino, E.; Marin, F.; Bramati, A.; Jost, V.; Poizat, J. Ph.; Roch, J.-F.; Grangier, P.; Zhang, T.-C.
1996-01-01
We have investigated the intensity noise of single mode laser diodes, either free-running or using different types of line narrowing techniques at room temperature. We have measured an intensity squeezing of 1.2 dB with grating-extended cavity lasers and 1.4 dB with injection locked lasers (respectively 1.6 dB and 2.3 dB inferred at the laser output). We have observed that the intensity noise of a free-running nominally single mode laser diode results from a cancellation effect between large anti-correlated fluctuations of the main mode and of weak longitudinal side modes. Reducing the side modes by line narrowing techniques results in intensity squeezing.
Open-source meteor detection software for low-cost single-board computers
NASA Astrophysics Data System (ADS)
Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.
2016-01-01
This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.
Choe, Leila H; Lee, Kelvin H
2003-10-01
We investigate one approach to assess the quantitative variability in two-dimensional gel electrophoresis (2-DE) separations based on gel-to-gel variability, sample preparation variability, sample load differences, and the effect of automation on image analysis. We observe that 95% of spots present in three out of four replicate gels exhibit less than a 0.52 coefficient of variation (CV) in fluorescent stain intensity (% volume) for a single sample run on multiple gels. When four parallel sample preparations are performed, this value increases to 0.57. We do not observe any significant change in quantitative value for an increase or decrease in sample load of 30% when using appropriate image analysis variables. Increasing use of automation, while necessary in modern 2-DE experiments, does change the observed level of quantitative and qualitative variability among replicate gels. The number of spots that change qualitatively for a single sample run in parallel varies from a CV = 0.03 for fully manual analysis to CV = 0.20 for a fully automated analysis. We present a systematic method by which a single laboratory can measure gel-to-gel variability using only three gel runs.
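The spot-level coefficient of variation quoted above is a routine calculation; a minimal sketch, using hypothetical % volume readings for one spot across four replicate gels (the numbers are illustrative, not data from the study):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical fluorescent stain intensity (% volume) for one spot on four gels.
spot_intensity = [0.80, 1.00, 1.10, 1.10]
cv = coefficient_of_variation(spot_intensity)
print(round(cv, 3))  # 0.141
```

Repeating this per spot and taking the 95th percentile across spots yields summary figures comparable to the 0.52 and 0.57 values reported.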
NASA Astrophysics Data System (ADS)
Viswanathan, Balakrishnan; Gea-Banacloche, Julio
2017-04-01
We analyze a recent scheme proposed by Xia et al. to induce a conditional phase shift between two single-photon pulses by having them propagate at different speeds through a nonlinear medium with a nonlocal response. We have obtained an analytical solution for the case they considered, which supports their claim that a π phase shift with unit fidelity is possible in principle. We discuss the conditions that have to be met and the challenges and opportunities that this might present to the realization of a single-photon conditional phase gate.
Controlling the angular radiation of single emitters using dielectric patch nanoantennas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yuanqing; Li, Qiang; Qiu, Min, E-mail: minqiu@zju.edu.cn
2015-07-20
Dielectric nanoantennas have generated much interest in recent years owing to their low loss and optically induced electric and magnetic resonances. In this paper, we investigate the coupling between a single emitter and dielectric patch nanoantennas. For the coupled system involving non-spherical structures, analytical Mie theory is no longer applicable. A semi-analytical model is proposed instead to interpret the coupling mechanism and the radiation characteristics of the system. Based on the presented model, we demonstrate that the angular emission of the single emitter can be not only enhanced but also rotated using the dielectric patch nanoantennas.
Sierra/SolidMechanics 4.48 Verification Tests Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose
2018-03-01
Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results are checked against the correct analytical result. For each test presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
Utility perspective on USEPA analytical methods program redirection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, B.; Davis, M.K.; Krasner, S.W.
1996-11-01
The Metropolitan Water District of Southern California (Metropolitan) is a public, municipal corporation, created by the State of California, which wholesales supplemental water through 27 member agencies (cities and water districts). Metropolitan serves nearly 16 million people in an area along the coastal plain of Southern California that covers approximately 5200 square miles. Water deliveries have averaged up to 2.5 million acre-feet per year. Metropolitan's Water Quality Laboratory (WQL) conducts compliance monitoring of its source and finished drinking waters for chemical and microbial constituents. The laboratory maintains certification of a large number and variety of analytical procedures. The WQL operates in a 17,000-square-foot facility. The equipment is state-of-the-art analytical instrumentation. The staff consists of 40 professional chemists and microbiologists whose experience and expertise are extensive and often highly specialized. Staff turnover is very low, and the laboratory is consistently, efficiently, and expertly run.
Implications on 1+1 D runup modeling due to time features of the earthquake source
NASA Astrophysics Data System (ADS)
Fuentes, M.; Riquelme, S.; Campos, J. A.
2017-12-01
The time characteristics of the seismic source are usually neglected in tsunami modeling because of the difference in the time scales of the two processes. Nonetheless, only a few analytical studies have attempted to explain separately the roles of the rise time and the rupture velocity. In this work, we extend an analytical 1+1D solution for the shoreline motion time series from the static case to the dynamic case by including both rise time and rupture velocity. Results show that the static case corresponds to the limit of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but the maximum run-up may be affected by very slow ruptures and long rise times. The analytical solution was tested for the Nicaraguan tsunami earthquake, suggesting that the rupture was not slow enough to amplify the waves and explain the high run-up observations.
Jin, Hui; Gui, Rijun; Yu, Jianbo; Lv, Wei; Wang, Zonghua
2017-05-15
Previously developed electrochemical biosensors with a single electric signal output can be affected by intrinsic and extrinsic factors. In contrast, ratiometric electrochemical biosensors (RECBSs) with dual electric signal outputs have an intrinsic built-in correction for system or background signals, and therefore exhibit significant potential to improve accuracy and sensitivity in electrochemical sensing applications. In this review, we systematically summarize the fabrication strategies, sensing modes, and analytical applications of RECBSs. First, the different fabrication strategies of RECBSs are introduced, covering analyte-induced single- and dual-dependent electrochemical signal strategies. Second, the different sensing modes of RECBSs are illustrated, such as differential pulse voltammetry, square wave voltammetry, cyclic voltammetry, alternating current voltammetry, and electrochemiluminescence. Third, the analytical applications of RECBSs are discussed based on the types of target analytes. Finally, forthcoming developments and future prospects in the field of RECBSs are highlighted. Copyright © 2017 Elsevier B.V. All rights reserved.
Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.
2017-12-01
We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python language API for Spark and how to use it to quickly convert existing programs to use Spark to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time averaged map, and correlation map.
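The partition-then-aggregate design pattern described above can be sketched without Spark itself: map each data tile to a partial (sum, count) keyed by time, then reduce per key. The tile layout and key names below are illustrative assumptions, not the NEXUS API; in Spark the same shape becomes a map followed by reduceByKey:

```python
from collections import defaultdict

def area_averaged_time_series(tiles):
    """Each tile is (time_key, values). Partitioning by time_key mirrors the
    Spark pattern: map tiles to per-key partial sums, then reduce per key."""
    partials = defaultdict(lambda: [0.0, 0])
    for time_key, values in tiles:       # "map" side: per-tile partial results
        s = partials[time_key]
        s[0] += sum(values)
        s[1] += len(values)
    # "reduce" side: combine partials into one average per time step
    return {k: total / count for k, (total, count) in sorted(partials.items())}

tiles = [
    ("2017-01", [1.0, 2.0]),   # two spatial tiles for the same month
    ("2017-01", [3.0]),
    ("2017-02", [4.0, 6.0]),
]
print(area_averaged_time_series(tiles))  # {'2017-01': 2.0, '2017-02': 5.0}
```

Partitioning by time suits time-series products, while partitioning by space suits time-averaged maps; the choice determines which key the reduce step groups on.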
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules; their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
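A simple simulation illustrates why a rule that reacts to within-run spread complements a mean rule when random error increases. The two-controls-per-run setup, control limits, and rule combination below are illustrative assumptions for a sketch, not the published rules or nomograms:

```python
import math
import random

random.seed(0)

def qc_reject(x1, x2, mean_limit=1.5, spread_limit=2.0):
    """Multirule sketch for two controls per run, in SD units of the
    in-control process: flag the run if the control mean is extreme
    (systematic error) or the within-run spread is extreme (random error)."""
    mean_flag = abs((x1 + x2) / 2) > mean_limit
    spread_flag = abs(x1 - x2) / math.sqrt(2) > spread_limit
    return mean_flag or spread_flag

def detection_rate(bias=0.0, sd=1.0, trials=20000):
    hits = 0
    for _ in range(trials):
        x1 = random.gauss(bias, sd)
        x2 = random.gauss(bias, sd)
        hits += qc_reject(x1, x2)
    return hits / trials

print(detection_rate())          # in control: low false-rejection rate
print(detection_rate(bias=2.0))  # systematic error: mean rule fires often
print(detection_rate(sd=3.0))    # random error: the spread rule contributes
```

Under a pure systematic shift the mean rule dominates; under tripled random error the spread term catches runs the mean rule alone would miss, which is the trade-off the charts above quantify.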
Validation of Supersonic Film Cooling Modeling for Liquid Rocket Engine Applications
NASA Technical Reports Server (NTRS)
Morris, Christopher I.; Ruf, Joseph H.
2010-01-01
Topics include: upper stage engine key requirements and design drivers; Calspan "stage 1" results, He slot injection into hypersonic flow (air); test articles for shock generator diagram, slot injector details, and instrumentation positions; test conditions; modeling approach; 2-d grid used for film cooling simulations of test article; heat flux profiles from 2-d flat plate simulations (run #4); heat flux profiles from 2-d backward facing step simulations (run #43); isometric sketch of single coolant nozzle, and x-z grid of half-nozzle domain; comparison of 2-d and 3-d simulations of coolant nozzles (run #45); flowfield properties along coolant nozzle centerline (run #45); comparison of 3-d CFD nozzle flow calculations with experimental data; nozzle exit plane reduced to linear profile for use in 2-d film-cooling simulations (run #45); synthetic Schlieren image of coolant injection region (run #45); axial velocity profiles from 2-d film-cooling simulation (run #45); coolant mass fraction profiles from 2-d film-cooling simulation (run #45); heat flux profiles from 2-d film cooling simulations (run #45); heat flux profiles from 2-d film cooling simulations (runs #47, #45, and #47); 3-d grid used for film cooling simulations of test article; heat flux contours from 3-d film-cooling simulation (run #45); and heat flux profiles from 3-d and 2-d film cooling simulations (runs #44, #46, and #47).
Field performance of timber bridges. 6, Hoffman Run stress-laminated deck bridge
M. A. Ritter; P. D. Hilbrich Lee; G. J. Porter
The Hoffman Run bridge, located just outside Dahoga, Pennsylvania, was constructed in October 1990. The bridge is a simple-span, single-lane, stress-laminated deck superstructure that is approximately 26 ft long and 16 ft wide. It is the second stress-laminated timber bridge to be constructed of hardwood lumber in Pennsylvania. The performance of the bridge was...
Bionic Running for Unilateral Transtibial Military Amputees
2010-01-01
Bellman, R., 2010, “An Active Ankle-Foot Prosthesis With Biomechanical Energy Regeneration”, Transactions of the ASME Journal...Lefeber, D., 2008, “A Biomechanical Transtibial Prosthesis Powered by Pleated Pneumatic Artificial Muscles,” Model Identification and Control, 4, 394- 405. ...Inc., have designed, built, and demonstrated a first of its kind motor powered, single board computer controlled, running prosthesis for military
High-Throughput Incubation and Quantification of Agglutination Assays in a Microfluidic System.
Castro, David; Conchouso, David; Kodzius, Rimantas; Arevalo, Arpys; Foulds, Ian G
2018-06-04
In this paper, we present a two-phase microfluidic system capable of incubating and quantifying microbead-based agglutination assays. The microfluidic system is based on a simple fabrication solution, which requires only laboratory tubing filled with carrier oil, driven by negative pressure using a syringe pump. We provide a user-friendly interface, in which a pipette is used to insert single droplets of a 1.25-µL volume into a system that is continuously running and therefore works entirely on demand without the need for stopping, resetting or washing the system. These assays are incubated by highly efficient passive mixing with a sample-to-answer time of 2.5 min, a 5- to 10-fold improvement over traditional agglutination assays. We study system parameters such as channel length, incubation time and flow speed to select optimal assay conditions, using the streptavidin-biotin interaction as a model analyte quantified using optical image processing. We then investigate the effect of changing both analyte and microbead concentrations, with a minimum detection limit of 100 ng/mL. The system can be both low- and high-throughput, depending on the rate at which assays are inserted. In our experiments, we were able to easily produce throughputs of 360 assays per hour by simple manual pipetting, which could be increased even further by automation and parallelization. Agglutination assays are a versatile tool, capable of detecting an ever-growing catalog of infectious diseases, proteins and metabolites. A system such as this one is a step towards being able to produce high-throughput microfluidic diagnostic solutions with widespread adoption. The development of analytical techniques in the microfluidic format, such as the one presented in this work, is an important step in being able to continuously monitor the performance and microfluidic outputs of organ-on-chip devices.
Petropoulou, Syrago-Styliani E; Duong, Wendy; Petreas, Myrto; Park, June-Soo
2014-08-22
Hydroxylated polybrominated diphenyl ethers (OH-PBDEs) are formed from the oxidative metabolism of polybrominated diphenyl ethers (PBDEs) in humans, rats and mice, but their quantitation in human blood and other matrices with liquid chromatography-mass spectrometric techniques has been a challenge. In this study, a novel analytical method was developed and validated using only 250 μL of human serum for the quantitation of twelve OH-PBDEs, fully chromatographically separated in a 15 min analytical run. This method includes two novel approaches: an enzymatic hydrolysis procedure and a chromatographic separation using a mixed mode chromatography column. The enzymatic hydrolysis (EH) was found critical for 4'-OH-BDE17, which was not detectable without it. For the sample clean up, a solid phase extraction protocol was developed and validated for the extraction of the 12 congeners from human serum. In addition, for the first time baseline resolution of two components was achieved that correspond to a single peak previously identified as 6'-OH-BDE99. The method was validated for linearity, accuracy, precision, matrix effects, limit of quantification, limit of detection, sample stability and overall efficiency. Recoveries (absolute and relative) ranged from 66 to 130% with relative standard deviations <21% for all analytes. Limit of detection and quantitation ranged from 4 to 90 pg mL(-1) and 6-120 pg mL(-1), respectively, with no carry over effects. This method was applied in ten commercially available human serum samples from the general US population. The mean values of the congeners detected in all samples are 4'-OH-BDE17 (34.2 pg mL(-1)), 4-OH-BDE42 (33.9 pg mL(-1)), 5-OH-BDE47 (17.5 pg mL(-1)) and 4'-OH-BDE49 (12.4 pg mL(-1)). Copyright © 2014 Elsevier B.V. All rights reserved.
Anders, Nicole M.; Liu, Jianyong; Wanjiku, Teresia; Giovinazzo, Hugh; Zhou, Jianya; Vaghasia, Ajay; Nelson, William G.; Yegnasubramanian, Srinivasan; Rudek, Michelle A.
2016-01-01
The epigenetic and anti-cancer activities of the nucleoside analog DNA methyltransferase (DNMT) inhibitors decitabine (5-aza-2′-deoxycytidine, DAC), azacitidine, and guadecitabine are thought to require cellular uptake, metabolism to 5-aza-2′-deoxycytidine triphosphate, and incorporation into DNA. This genomic incorporation can then lead to trapping and degradation of DNMT enzymes and, ultimately, passive loss of DNA methylation. To facilitate measurement of critical exposure-response relationships of nucleoside analog DNMT inhibitors, a sensitive and reliable method was developed to simultaneously quantitate 5-aza-2′-deoxycytidine genomic incorporation and genomic 5-methylcytosine content using LC-MS/MS. Genomic DNA was extracted and digested into single nucleosides. Chromatographic separation was achieved with a Thermo Hypercarb porous graphitic carbon column (100 mm × 2.1 mm, 5 μm) and isocratic elution with a 10 mM ammonium acetate:acetonitrile mobile phase with 0.1% formic acid (70:30, v/v) over a 5-minute total analytical run time. An AB Sciex 5500 triple quadrupole mass spectrometer operated in positive electrospray ionization mode was used for the detection of 5-aza-2′-deoxycytidine, 2′-deoxycytidine, and 5-methyl-2′-deoxycytidine. The assay range was 2-400 ng/mL for 5-aza-2′-deoxycytidine, 50-10,000 ng/mL for 2′-deoxycytidine, and 5-1,000 ng/mL for 5-methyl-2′-deoxycytidine. The assay proved to be accurate (93.0-102.2%) and precise (CV ≤ 6.3%) across all analytes. All analytes exhibited long-term frozen digest matrix stability at -70°C for at least 117 days. The method was applied to the measurement of genomic 5-aza-2′-deoxycytidine and 5-methyl-2′-deoxycytidine content following exposure of in vitro cell culture and in vivo animal models to decitabine. PMID:27082761
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.
Faigenbaum, Avery D.; Myer, Gregory D.; Farrell, Anne; Radler, Tracy; Fabiano, Marc; Kang, Jie; Ratamess, Nicholas; Khoury, Jane; Hewett, Timothy E
2014-01-01
Context: Integrative neuromuscular training (INT) has successfully enhanced physical fitness and reduced abnormal biomechanics, which appear to decrease injury rates in adolescent female athletes. If not addressed at the proper time, low levels of physical fitness and abnormal mechanics may predispose female athletes to an increased risk of musculoskeletal injuries. Objectives To evaluate sex-specific effects of INT on selected measures of health- and skill-related fitness in children during physical education (PE). Design: Cohort study. Setting: Public primary school. Patients or Other Participants: Forty children (16 boys, 24 girls; age = 7.6 ± 0.3 years, height = 124.5 ± 6.4 cm, mass = 29.5 ± 7.6 kg) from 2 second-grade PE classes. Intervention(s): The classes were randomized into the PE-plus-INT group (10 boys, 11 girls) or the control group (6 boys, 13 girls) that participated in traditional PE. The INT was performed 2 times per week during the first approximately 15 minutes of each PE class and consisted of body weight exercises. Main Outcome Measure(s): Push-up, curl-up, standing long jump, single-legged hop, single-legged balance, sit-and-reach flexibility test, shuttle run, and 0.8-km run. Results: At baseline, the boys demonstrated higher levels of performance in most of the fitness measurements as evidenced by greater performance on the push-up, standing long jump, single-legged hop, shuttle run, and 0.8-km run (P < .05). In the evaluation of the training effects, we found intervention effects in the girls for enhanced INT-induced gains in performance relative to the control group on the curl-up, long jump, single-legged hop, and 0.8-km run (P < .05) after controlling for baseline. Boys did not demonstrate similar adaptations from the INT program (P ≥ .05). Conclusions: These data indicate that INT is an effective and time-efficient addition to PE for enhancing motor skills and promoting physical activity in children. 
Seven-year-old girls appeared to be more sensitive to the effects of INT than 7-year-old boys. Future research is warranted to confirm these effects in larger cohorts of children. PMID:24490841
An analytical and experimental study of injection-locked two-port oscillators
NASA Technical Reports Server (NTRS)
Freeman, Jon C.; Downey, Alan N.
1987-01-01
A Ku-band IMPATT oscillator with two distinct output power ports was injection-locked alternately at both ports. The transmission locking bandwidth was nearly the same for either port. The lower free running power port had a reflection locking bandwidth that was narrower than its transmission locking one. Just the opposite was found at the other port. A detailed analytical model for two-port injection-locked oscillators is presented, and its results agree quite well with the experiments. A critique of the literature on this topic is included to clear up misconceptions and errors. It is concluded that two-port injection-locked oscillators may prove useful in certain communication systems.
NASA Technical Reports Server (NTRS)
Chato, J. C.; Shitzer, A.
1971-01-01
An analytical method was developed to estimate the amount of heat extracted from an artery running close to the skin surface when the surface is cooled in a symmetrical fashion by a cooling strip. The results indicate that the optimum width of a cooling strip is approximately three times the depth to the centerline of the artery. The heat extracted from an artery with such a strip is about 0.9 W/(m·°C), which is too small to significantly affect the temperature of the blood flowing through a main blood vessel such as the carotid artery. The method is applicable to veins as well.
Development and testing of a new system for assessing wheel-running behaviour in rodents.
Chomiak, Taylor; Block, Edward W; Brown, Andrew R; Teskey, G Campbell; Hu, Bin
2016-05-05
Wheel running is one of the most widely studied behaviours in laboratory rodents. As a result, improved approaches for objective monitoring and for gathering more detailed information are increasingly important for evaluating rodent wheel-running behaviour. Our aim was to develop a new quantitative wheel-running system that can be used for most typical wheel-running experimental protocols. We devised a system that provides a continuous waveform amenable to real-time integration with high-speed video, ideal for wheel-running experimental protocols. While quantification of wheel-running behaviour has typically focused on the number of revolutions per unit time as an end-point measure, the approach described here allows more detailed information, such as wheel rotation fluidity, directionality, instantaneous velocity, and acceleration, in addition to the total number of rotations and the temporal pattern of wheel-running behaviour, to be derived from a single trace. We further tested this system with a running-wheel behavioural paradigm that can be used for investigating the neuronal mechanisms of procedural learning and postural stability, and we discuss other potentially useful applications. This system and its ability to evaluate multiple wheel-running parameters may become a useful tool for screening new potentially important therapeutic compounds related to many neurological conditions.
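Quantities such as instantaneous velocity and acceleration can be derived from a sampled rotation waveform by finite differences. A minimal sketch; the sampling interval and the constant-acceleration test trace are assumptions for illustration, not the system's actual signal:

```python
def derivatives(angle, dt):
    """Finite-difference instantaneous velocity and acceleration from a
    sampled wheel-angle trace (radians), as one might compute from a
    continuous rotation waveform sampled every dt seconds."""
    velocity = [(a2 - a1) / dt for a1, a2 in zip(angle, angle[1:])]
    acceleration = [(v2 - v1) / dt for v1, v2 in zip(velocity, velocity[1:])]
    return velocity, acceleration

# Test trace: constant angular acceleration of 2 rad/s^2, angle = 0.5*2*t^2.
dt = 0.1
angle = [0.5 * 2.0 * (i * dt) ** 2 for i in range(6)]
vel, acc = derivatives(angle, dt)
print([round(a, 6) for a in acc])  # recovered acceleration, near 2.0 throughout
```

The sign of the velocity trace also gives directionality, and its smoothness gives a simple proxy for rotation fluidity.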
Single-Cylinder Diesel Engine Tests with Unstabilized Water-in-Fuel Emulsions
DOT National Transportation Integrated Search
1978-08-01
A single-cylinder, four-stroke cycle diesel engine was operated on unstabilized water-in-fuel emulsions. Two prototype devices were used to produce the emulsions on-line with the engine. More than 350 test points were run with baseline diesel fuel an...
Blades Forced Vibration Under Aero-Elastic Excitation Modeled by Van der Pol
NASA Astrophysics Data System (ADS)
Pust, Ladislav; Pesek, Ludek
This paper employs a new analytical approach to model the influence of aerodynamic excitation on the dynamics of a bladed cascade at the flutter state. Flutter is an aero-elastic phenomenon linked to the interaction of the flow and the traveling deformation wave in the cascade when only the damping of the cascade changes. As a case study, the dynamic properties of a five-blade bunch excited by running harmonic external forces and aerodynamic self-excited forces are investigated. The blade bunch is linked in the shroud by viscous-elastic damping elements. The external running excitation depends on the ratio of stator to rotor blade numbers and corresponds to the real type of excitation in a steam turbine. The aerodynamic self-excited forces are modeled by two types of Van der Pol nonlinear models. The influence of the interaction of both types of self-excitation with the external running excitation on the response curves is investigated.
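The defining feature of Van der Pol self-excitation is that a small disturbance grows until it saturates on a limit cycle. A minimal sketch integrating the classic form x'' - mu*(1 - x^2)*x' + x = 0; the coefficient mu, step size, and initial disturbance are assumptions for illustration, not the paper's parameters:

```python
def van_der_pol_amplitude(mu=1.0, dt=0.001, steps=60000):
    """Integrate x'' - mu*(1 - x^2)*x' + x = 0 with semi-implicit Euler and
    return the peak |x| over the second half of the run (past the transient)."""
    x, v = 0.1, 0.0          # small initial disturbance grows to a limit cycle
    peak = 0.0
    for i in range(steps):
        a = mu * (1 - x * x) * v - x   # negative damping for |x| < 1
        v += a * dt
        x += v * dt
        if i > steps // 2:
            peak = max(peak, abs(x))
    return peak

print(van_der_pol_amplitude())  # settles near the well-known amplitude of about 2
```

In a blade-cascade model this saturation mechanism replaces a fixed damping coefficient, so the response curves reflect the interplay between the self-excited limit cycle and the external running excitation.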
Viidanoja, Jyrki
2015-09-15
A new, sensitive and selective liquid chromatography-electrospray ionization-tandem mass spectrometric (LC-ESI-MS/MS) method was developed for the analysis of phospholipids (PLs) in bio-oils and fats. The analysis employs hydrophilic interaction liquid chromatography with scheduled multiple reaction monitoring (HILIC-sMRM) on a ZIC-cHILIC column. Eight PL class-selective internal standards (homologs) were used for the semi-quantification of 14 PL classes for the first time. More than 400 scheduled MRMs were used to measure PLs with a run time of 34 min. The method's performance was evaluated for vegetable oil, animal fat, and algae oil. The averaged within-run and between-run precisions were ≤10% for all PL classes that had a direct homolog as an internal standard. The method accuracy was generally within 80-120% for the tested PL analytes in all three sample matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
Modeling of Aerodynamic Force Acting in Tunnel for Analysis of Riding Comfort in a Train
NASA Astrophysics Data System (ADS)
Kikko, Satoshi; Tanifuji, Katsuya; Sakanoue, Kei; Nanba, Kouichiro
In this paper, we aimed to model the aerodynamic force that acts on a train running at high speed in a tunnel. An analytical model of the aerodynamic force is developed from pressure data measured on the car-body sides of a test train running at the maximum revenue operation speed. Simulation of an 8-car train subjected to the modeled aerodynamic force gives the following results. The simulated car-body vibration corresponds to the actual vibration both qualitatively and quantitatively for the cars at the rear of the train. The separation of the airflow at the tail end of the train increases the yawing vibration of the tail-end car, while it has little effect on the car-body vibration of the adjoining car. The effect of the moving velocity of the aerodynamic force on the car-body vibration is also clarified: simulation under the assumption of a stationary aerodynamic force can markedly increase the computed car-body vibration.
NASA Astrophysics Data System (ADS)
Jin, Zhe-Yan; Dong, Qiao-Tian; Yang, Zhi-Gang
2015-02-01
The present study experimentally investigated the effect of a simulated single-horn glaze ice accreted on rotor blades on the vortex structures in the wake of a horizontal axis wind turbine by using the stereoscopic particle image velocimetry (Stereo-PIV) technique. During the experiments, four horizontal axis wind turbine models were tested, and both "free-run" and "phase-locked" Stereo-PIV measurements were carried out. Based on the "free-run" measurements, it was found that because of the simulated single-horn glaze ice, the shape, vorticity, and trajectory of tip vortices were changed significantly, and less kinetic energy of the airflow could be harvested by the wind turbine. In addition, the "phase-locked" results indicated that the presence of simulated single-horn glaze ice resulted in a dramatic reduction of the vorticity peak of the tip vortices. Moreover, as the length of the glaze ice increased, both root and tip vortex gaps were found to increase accordingly.
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
A RECONNECTION-DRIVEN MODEL OF THE HARD X-RAY LOOP-TOP SOURCE FROM FLARE 2004 FEBRUARY 26
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longcope, Dana; Qiu, Jiong; Brewer, Jasmine
A compact X-class flare on 2004 February 26 showed a concentrated source of hard X-rays at the tops of the flare's loops. This was analyzed in previous work and interpreted as plasma heated and compressed by slow magnetosonic shocks (SMSs) generated during post-reconnection retraction of the flux. That work used analytic expressions from a thin flux tube (TFT) model, which neglected many potentially important factors such as thermal conduction and chromospheric evaporation. Here we use a numerical solution of the TFT equations to produce a more comprehensive and accurate model of the same flare, including those effects previously omitted. These simulations corroborate the prior hypothesis that slow-mode shocks persist well after the retraction has ended, thus producing a compact, loop-top source instead of an elongated jet, as steady reconnection models predict. Thermal conduction leads to densities higher than analytic estimates had predicted, and evaporation enhances the density still higher, but at lower temperatures. X-ray light curves and spectra are synthesized by convolving the results from a single TFT simulation with the rate at which flux is reconnected, as measured through motion of flare ribbons, for example. These agree well with light curves observed by RHESSI and GOES and with spectra from RHESSI. An image created from a superposition of TFT model runs resembles one produced from RHESSI observations. This suggests that the HXR loop-top source, at least the one observed in this flare, could be the result of SMSs produced in fast reconnection models like Petschek's.
Advances in multiplexed MRM-based protein biomarker quantitation toward clinical utility.
Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Hardie, Darryl B; Borchers, Christoph H
2014-05-01
Accurate and rapid protein quantitation is essential for screening biomarkers for disease stratification and monitoring, and for validating the hundreds of putative markers in human biofluids, including blood plasma. An analytical method that utilizes stable isotope-labeled standard (SIS) peptides and selected/multiple reaction monitoring-mass spectrometry (SRM/MRM-MS) has emerged as a promising technique for determining protein concentrations. This targeted approach has analytical merit, but its true potential (in terms of sensitivity and multiplexing) has yet to be realized. Described herein is a method that extends the multiplexing ability of the MRM method to enable the quantitation of 142 high-to-moderate-abundance proteins (from 31 mg/mL to 44 ng/mL) in undepleted and non-enriched human plasma in a single run. The proteins have been reported to be associated with a wide variety of non-communicable diseases (NCDs), from cardiovascular disease (CVD) to diabetes. The concentrations of these proteins in human plasma are inferred from interference-free peptides functioning as molecular surrogates (2 peptides per protein, on average). A revised data-analysis strategy, involving the linear regression equation of normal control plasma, has been instituted to enable facile application to patient samples, as demonstrated in separate nutrigenomics and CVD studies. The exceptional robustness of the LC/MS platform and the quantitative method, as well as its high throughput, makes the assay suitable for application to patient samples for the verification of a condensed or complete protein panel. This article is part of a Special Issue entitled: Biomarkers: A Proteomic Challenge. © 2013.
On the use of a physically-based baseflow timescale in land surface models.
NASA Astrophysics Data System (ADS)
Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.
2017-12-01
Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for accurate large-scale water-balance modelling in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable, physically-based τ in the ORCHIDEE land surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the high-resolution GLHYMPS datasets, whereas flow length is derived from an estimate of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions of the analytical method are valid.
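The timescale described here is of the form τ ∝ L²/D, with D = T/φ the aquifer hydraulic diffusivity. As a rough illustration (not the paper's actual formula), the sketch below uses the constant 4/π², which corresponds to the first-mode decay of a linearized diffusion problem with a no-flow divide and a fixed-head stream; both this constant and the parameter values are assumptions:

```python
import math

def baseflow_timescale(L, T, phi, c=4.0 / math.pi ** 2):
    """Characteristic baseflow timescale tau = c * L**2 / D, where
    D = T / phi is the hydraulic diffusivity (transmissivity over
    drainable porosity). The prefactor c is an assumed first-mode
    constant, not the value used in the paper."""
    D = T / phi            # hydraulic diffusivity [m^2/s]
    return c * L ** 2 / D  # timescale [s]

# Illustrative (hypothetical) values: flow length 1 km, transmissivity
# 1e-3 m^2/s, drainable porosity 0.05.
tau_seconds = baseflow_timescale(L=1000.0, T=1e-3, phi=0.05)
tau_days = tau_seconds / 86400.0
```

The quadratic dependence on flow length L is why an estimate of drainage density (which sets L) matters as much as the hydraulic properties themselves.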
Botzanowski, Thomas; Erb, Stéphane; Hernandez-Alba, Oscar; Ehkirch, Anthony; Colas, Olivier; Wagner-Rousset, Elsa; Rabuka, David; Beck, Alain; Drake, Penelope M.; Cianférani, Sarah
2017-01-01
ABSTRACT Antibody-drug conjugates (ADCs) have emerged as a family of compounds with promise as efficient immunotherapies. First-generation ADCs were generated mostly via reactions on either lysine side-chain amines or cysteine thiol groups after reduction of the interchain disulfide bonds, resulting in heterogeneous populations with a variable number of drug loads per antibody. To control the position and the number of drug loads, new conjugation strategies aiming at the generation of more homogeneous site-specific conjugates have been developed. We report here the first multi-level characterization of a site-specific ADC by state-of-the-art mass spectrometry (MS) methods, including native MS and its hyphenation to ion mobility (IM-MS). We demonstrate the versatility of native MS methodologies for site-specific ADC analysis, with the unique ability to provide several critical quality attributes within one single run, along with a direct snapshot of ADC homogeneity/heterogeneity without extensive data interpretation. The capabilities of native IM-MS to directly access site-specific ADC conformational information are also highlighted. Finally, the potential of these techniques for assessing an ADC's heterogeneity/homogeneity is illustrated by comparing the analytical characterization of a site-specific DAR4 ADC to that of first-generation ADCs. Altogether, our results highlight the compatibility, versatility, and benefits of native MS approaches for the analytical characterization of all types of ADCs, including site-specific conjugates. Thus, we envision integrating native MS and IM-MS approaches, even in their latest state-of-the-art forms, into workflows that benchmark bioconjugation strategies. PMID:28406343
Peer, Cody J; Spencer, Shawn D; VanDenBerg, Dustin A H; Pacanowski, Michael A; Horenstein, Richard B; Figg, William D
2012-01-01
A sensitive, selective, and rapid ultra-high performance liquid chromatography-tandem mass spectrometry (uHPLC-MS/MS) method was developed for the simultaneous quantification of clopidogrel (Plavix®) and its derivatized active metabolite (CAMD) in human plasma. Derivatization of the active metabolite in blood with 2-bromo-3'-methoxy acetophenone (MPB) immediately after collection ensured metabolite stability during sample handling and storage. Following the addition of ticlopidine as an internal standard and simple protein precipitation, the analytes were separated on a Waters Acquity UPLC™ sub-2-μm C18 column via gradient elution before detection on a triple-quadrupole MS with multiple reaction monitoring via electrospray ionization. The method was validated across the clinically relevant concentration ranges of 0.01-50 ng/mL for parent clopidogrel and 0.1-150 ng/mL (r² = 0.99) for CAMD, with a fast run time of 1.5 min to support pharmacokinetic studies using 75, 150, or 300 mg oral doses of clopidogrel. The analytical method measured concentrations of clopidogrel and CAMD with accuracy (%DEV) < ±12% and precision (%CV) < ±6%. The method was successfully applied to measure the plasma concentrations of clopidogrel and CAMD in three subjects administered single oral doses of 75, 150, and 300 mg clopidogrel. It was further demonstrated that the derivatizing agent (MPB) does not affect clopidogrel levels; thus, from one aliquot of blood drawn clinically, this method can simultaneously quantify both clopidogrel and CAMD with sensitivity in the picogram-per-mL range. Published by Elsevier B.V.
SSME single crystal turbine blade dynamics
NASA Technical Reports Server (NTRS)
Moss, Larry A.; Smith, Todd E.
1987-01-01
A study was performed to determine the dynamic characteristics of the Space Shuttle main engine high pressure fuel turbopump (HPFTP) blades made of single crystal (SC) material. The first and second stage drive turbine blades of the HPFTP were examined. The nonrotating natural frequencies were determined experimentally and analytically. The experimental results for the SC second stage blade were used to verify the analytical procedures. The analytical study examined the SC first stage blade natural frequencies with respect to crystal orientation at typical operating conditions. The SC blade dynamic response was predicted to be less than that of the directionally solidified blade. Crystal-axis orientation optimization indicated that third-mode interference will exist in any SC orientation.
Liquid filtration properties in gravel foundation of railroad tracks
NASA Astrophysics Data System (ADS)
Strelkov, A.; Teplykh, S.; Bukhman, N.
2016-08-01
The gravel foundation of a railway bed has a constant, permanent impact on urban ecology and the ground surface; naturally, larger objects, such as railway stations, have a broader impact. Surface run-off water polluted by harmful substances present in the railroad-track body (ballast section) flows along the tracks and within the macadam, percolates into the subterranean ground flow, and then enters neighbouring rivers and water basins. This paper presents analytic calculations and characteristics of the filtration of surface run-off liquid flowing through multiple gravel layers (the railroad-track ballast section). The authors analyse liquids of various density and viscosity flowing in a multi-layer porous medium. The paper also describes stationary and non-stationary liquid seepage into the gravel foundation of railroad tracks.
Analytical solutions for efficient interpretation of single-well push-pull tracer tests
NASA Astrophysics Data System (ADS)
Huang, Junqi; Christ, John A.; Goltz, Mark N.
2010-08-01
Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations describing the governing processes acting on a dissolved compound during a modified push-pull test (advection, longitudinal and transverse dispersion, first-order decay, and rate-limited sorption/partitioning in steady, divergent, and convergent flow fields) is developed. The coupling of this solution with inverse modeling to estimate aquifer parameters provides an efficient methodology for subsurface characterization. Synthetic data for single-well push-pull tests are employed to demonstrate the utility of the solution for determining (1) estimates of aquifer longitudinal and transverse dispersivities, (2) sorption distribution coefficients and rate constants, and (3) non-aqueous phase liquid (NAPL) saturations. Employment of the solution to estimate NAPL saturations based on partitioning and non-partitioning tracers is designed to overcome limitations of previous efforts by including rate-limited mass transfer. This solution provides a new tool for use by practitioners when interpreting single-well push-pull test results.
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC), a noninvasively obtained sample. Total quality in the laboratory diagnostic process of EBC analysis was investigated across the pre-analytical (formation, collection, and storage of EBC), analytical (sensitivity of applied methods, standardization), and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations in the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous: concentrations of EBC constituents are low, single-analyte methods lack sensitivity, multi-analyte methods have not been fully explored, and reference values are not established. When all pre-analytical, analytical, and post-analytical requirements are met, EBC biomarkers and biomarker patterns can be selected, and EBC analysis can hopefully be used in clinical practice, both in diagnosis and in the longitudinal follow-up of patients, resulting in better disease outcomes. PMID:24266297
Chebrolu, Kranthi K; Yousef, Gad G; Park, Ryan; Tanimura, Yoshinori; Brown, Allan F
2015-09-15
A high-throughput, robust and reliable method for the simultaneous analysis of five carotenoids, four chlorophylls and one tocopherol was developed for rapidly screening large sample populations to facilitate molecular biology and plant breeding. Separation was achieved for 10 known analytes and four unknown carotenoids in a significantly reduced run time of 10 min. The identity of the 10 analytes was confirmed by their UV-Vis absorption spectra. Quantification of tocopherol, carotenoids and chlorophylls was performed at 290 nm, 460 nm and 650 nm, respectively. In this report, two sub-2-μm-particle core-shell columns, Kinetex from Phenomenex (1.7 μm particle size, 12% carbon load) and Cortecs from Waters (1.6 μm particle size, 6.6% carbon load), were investigated and their separation efficiencies evaluated. Peak resolutions were >1.5 for all analytes except chlorophyll-a' with the Cortecs column. The ruggedness of the method was evaluated on two identical but separate instruments, which produced CVs < 2% in peak retention for nine out of the 10 analytes separated. Copyright © 2015 Elsevier B.V. All rights reserved.
Gunn, Josh; Kriger, Scott; Terrell, Andrea R
2010-01-01
The simultaneous determination and quantification of cocaine and its major metabolite, benzoylecgonine, in meconium using UPLC-MS/MS is described. Ultra-performance liquid chromatography (UPLC) is an emerging analytical technique that draws upon the principles of chromatography to run separations at higher flow rates for increased speed, while simultaneously achieving superior resolution and sensitivity. Extraction of cocaine and benzoylecgonine from the homogenized meconium matrix was achieved with a preliminary protein precipitation ("crash") employing cold acetonitrile, followed by a mixed-mode solid-phase extraction (SPE). Following elution from the SPE cartridge, eluents were dried down under nitrogen, reconstituted in 200 μL of DI water:acetonitrile (75:25), and injected onto the UPLC-MS/MS for analysis. The increased speed and separation efficiency afforded by UPLC allowed for the separation and subsequent quantification of both analytes in less than 2 min. Analytes were quantified using multiple reaction monitoring (MRM) and six-point calibration curves constructed in negative blood. Limits of detection for both analytes were 3 ng/g, and the lower limit of quantitation (LLOQ) was 30 ng/g.
NASA Astrophysics Data System (ADS)
Cai, Haibing; Xu, Liuxun; Yang, Yugui; Li, Longqi
2018-05-01
Artificial liquid nitrogen freezing technology is widely used in urban underground engineering because of its technical advantages: a simple freezing system, high freezing speed, low freezing temperature, high strength of the frozen soil, and absence of pollution. However, technical difficulties, such as the undefined range of liquid nitrogen freezing and the thickness of the frozen wall, emerge during application. Thus, an analytical solution for the single-pipe freezing-temperature field is established, considering the freezing temperature of the soil and the constant temperature of the freezing-pipe wall, and is then applied to a liquid nitrogen freezing project. Calculation results show that the radius of the liquid nitrogen freezing front is proportional to the square root of the freezing time. The radius of the freezing front also decreases with decreasing freezing temperature, and the temperature gradient in the soil decreases with increasing distance from the freezing pipe. The radius of the cooling zone in the unfrozen area is approximately four times the radius of the freezing front. A numerical simulation of the single-pipe liquid nitrogen freezing-temperature field is also conducted using the Abaqus finite-element program. The results show that the numerically simulated soil-temperature distribution agrees well with the analytical solution, further verifying the reliability of the established analytical solution.
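Two of the quantitative claims above are easy to illustrate: square-root growth of the freezing front, and a cooling zone about four times its radius. A minimal sketch, where the growth coefficient k is a hypothetical value lumping soil thermal properties and pipe-wall temperature (the paper's actual coefficient is not given in the abstract):

```python
import math

def freezing_front_radius(t, k):
    """Freezing-front radius around a single pipe under the
    r(t) = k * sqrt(t) growth law reported in the abstract.
    k is an illustrative constant, not a fitted value."""
    return k * math.sqrt(t)

# Under the sqrt(t) law, doubling the radius takes four times as long.
r_10h = freezing_front_radius(t=10.0, k=0.05)  # radius after 10 h [m]
r_40h = freezing_front_radius(t=40.0, k=0.05)  # radius after 40 h [m]

# The abstract reports the cooling zone extends to roughly 4x the front.
cooling_zone_10h = 4.0 * r_10h
```

The square-root law is the signature of conduction-dominated (Stefan-type) phase-change growth, which is consistent with the diffusion character of the temperature field described.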
Athar Masood, M; Veenstra, Timothy D
2017-08-26
Urine drug testing (UDT) is an important analytical/bio-analytical technique that has become an integral and vital part of testing programs for diagnostic purposes. This manuscript presents the development and validation of a tailor-made LC-MS/MS quantitative assay for a custom group of 33 pain-panel drugs and their metabolites belonging to different classes (opiates, opioids, benzodiazepines, illicit drugs, amphetamines, etc.) that are prescribed in pain management and depressant therapies. The LC-MS/MS method incorporates two experiments to enhance the sensitivity of the assay and has a run time of about 7 min, with no prior purification of the samples required and a flow rate of 0.7 mL/min. The method also includes second-stage metabolites for some drugs that belong to different classes but share similar first-stage metabolic pathways, enabling correct identification of the drug taken or flagging of a drug that might be present because of specimen tampering. Real-case examples and peak-picking difficulties are presented for some of the analytes in subject samples. Finally, the method was evaluated with randomly selected, de-identified clinical subject samples, and the data from "direct dilute-and-shoot analysis" and after "glucuronide hydrolysis" were compared. The method is now used routinely to run more than 100 clinical subject samples daily. This article is protected by copyright. All rights reserved.
Friese, K C; Grobecker, K H; Wätjen, U
2001-07-01
A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 °C/s. A temperature ramp to 550 °C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 °C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of the time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored, up to 20, is possible with no more than 1% additional relative standard deviation of the results caused by the limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift in the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.
Structural analysis for a 40-story building
NASA Technical Reports Server (NTRS)
Hua, L.
1972-01-01
NASTRAN was chosen as the principal analytical tool for structural analysis of the Illinois Center Plaza Hotel Building in Chicago, Illinois. The building is a 40-story, reinforced concrete structure utilizing a monolithic slab-column system. The displacements, member stresses, and foundation loads due to wind load, live load, and dead load were obtained through a series of NASTRAN runs. These analyses and the input technique are described.
A Single Molecular Beacon Probe Is Sufficient for the Analysis of Multiple Nucleic Acid Sequences
Gerasimova, Yulia V.; Hayson, Aaron; Ballantyne, Jack; Kolpashchikov, Dmitry M.
2010-01-01
Molecular beacon (MB) probes are dual-labeled hairpin-shaped oligodeoxyribonucleotides that are extensively used for real-time detection of specific RNA/DNA analytes. In the MB probe, the loop fragment is complementary to the analyte; therefore, a unique probe is required for the analysis of each new analyte sequence. The conjugation of an oligonucleotide with two dyes and the subsequent purification procedures add to the cost of MB probes, thus limiting their application in multiplex formats. Here we demonstrate how one MB probe can be used for the analysis of an arbitrary nucleic acid. The approach takes advantage of two oligonucleotide adaptor strands, each of which contains a fragment complementary to the analyte and a fragment complementary to the MB probe. The presence of the analyte leads to association of the MB probe and the two DNA strands into a quadripartite complex. The MB probe fluorescently reports the formation of this complex. In this design, the MB does not bind the analyte directly; therefore, the MB sequence is independent of the analyte. In this study, one universal MB probe was used to genotype three human polymorphic sites. This approach promises to reduce the cost of multiplex real-time assays and improve the accuracy of single-nucleotide polymorphism genotyping. PMID:20665615
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Negative running can prevent eternal inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinney, William H.; Freese, Katherine, E-mail: whkinney@buffalo.edu, E-mail: ktfreese@umich.edu
Current data from the Planck satellite and the BICEP2 telescope favor, at around the 2σ level, negative running of the spectral index of curvature perturbations from inflation. We show that for negative running α < 0, the curvature perturbation amplitude has a maximum on scales larger than our current horizon size. A condition for the absence of eternal inflation is that the curvature perturbation amplitude always remain below unity on superhorizon scales. For current bounds on n_S from Planck, this corresponds to an upper bound on the running of α < −9 × 10^−5, so that even tiny running of the scalar spectral index is sufficient to prevent eternal inflation from occurring, as long as the running remains negative on scales outside the horizon. In single-field inflation models, negative running is associated with a finite duration of inflation: we show that eternal inflation may not occur even in cases where inflation lasts as long as 10^4 e-folds.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background: Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results: We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales.
Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion: The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method: We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
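The reliability score described above — how consistently a location is flagged across the fifty parameter-varied SaTScan runs — can be sketched as a simple inclusion frequency. This is an illustrative assumption about the scoring; the function and data names are hypothetical, not from the paper:

```python
from collections import Counter

def reliability_scores(runs):
    """Fraction of scan runs in which each location appears in a
    significant cluster. `runs` is a list of sets of location IDs,
    one set per SaTScan run (e.g., one per scaling-parameter choice)."""
    counts = Counter()
    for flagged in runs:
        counts.update(flagged)
    n = len(runs)
    return {loc: counts[loc] / n for loc in counts}

# Hypothetical example: county "A" is flagged in all 3 runs, "B" and "C" in 1.
scores = reliability_scores([{"A", "B"}, {"A"}, {"A", "C"}])
```

Counties with scores near 1.0 would correspond to the stable, homogeneous clusters the authors aim to highlight.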
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design that enlarges the maximum system size under the same device conditions. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs for simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles on a workstation. Good scaling over many cluster nodes is presented for two-patch particles.
Universal Plug-n-Play Sensor Integration for Advanced Navigation
2012-03-22
Table-of-contents and figure-caption excerpts: Plot of AHRS Orientation (top) and Angular Velocity (bottom); Execution of AHRS script with roscore running on separate machine; single host case, with two hosts in this scenario; Component-Based System using ROS; Autonomous Behavior Using Scripting; udev.
NASA Technical Reports Server (NTRS)
Ladbury, R.; Reed, R. A.; Marshall, P. W.; LaBel, K. A.; Anantaraman, R.; Fox, R.; Sanderson, D. P.; Stolz, A.; Yurkon, J.; Zeller, A. F.;
2004-01-01
The performance of Michigan State University's Single-Event Effects Test Facility (SEETF) during its inaugural runs is evaluated. Beam profiles and other diagnostics are presented, and prospects for future development and testing are discussed.
In silico Microarray Probe Design for Diagnosis of Multiple Pathogens
2008-10-21
enhancements to an existing single-genome pipeline that allows for efficient design of microarray probes common to groups of target genomes. The...for tens or even hundreds of related genomes in a single run. Hybridization results with an unsequenced B. pseudomallei strain indicate that the
POSTMan (POST-translational modification analysis), a software application for PTM discovery.
Arntzen, Magnus Ø; Osland, Christoffer Leif; Raa, Christopher Rasch-Olsen; Kopperud, Reidun; Døskeland, Stein-Ove; Lewis, Aurélia E; D'Santos, Clive S
2009-03-01
Post-translationally modified peptides present in low concentrations are often not selected for CID, resulting in no sequence information for these peptides. We have developed a software application, POSTMan (POST-translational Modification analysis), that allows post-translationally modified peptides to be targeted for fragmentation. The software aligns LC-MS runs (MS(1) data) between individual runs or within a single run and isolates pairs of peptides which differ by a user-defined mass difference (post-translationally modified peptides). The method was validated for acetylated peptides and allowed an assessment of even the basal protein phosphorylation of phenylalanine hydroxylase (PAH) in intact cells.
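The core pairing step — isolating peptide pairs that differ by a user-defined mass — can be sketched as a tolerance search over sorted MS(1) peptide masses. This is a minimal illustration, not the authors' implementation; the function name, the 42.0106 Da acetylation shift, and the tolerance value are assumptions:

```python
def find_modified_pairs(masses, delta, tol=0.01):
    """Return (light, heavy) mass pairs differing by `delta` +/- `tol` Da,
    e.g. delta = 42.0106 Da for acetylation."""
    masses = sorted(masses)
    pairs = []
    for i, light in enumerate(masses):
        for heavy in masses[i + 1:]:
            diff = heavy - light
            if diff > delta + tol:
                break  # sorted, so no later candidate can match
            if abs(diff - delta) <= tol:
                pairs.append((light, heavy))
    return pairs

# Hypothetical masses: one acetylated pair plus an unrelated peptide.
pairs = find_modified_pairs([500.000, 542.0106, 600.000], delta=42.0106)
```

A real tool would work on aligned retention-time features rather than bare masses, but the pairing logic is the same.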
Sugiyama, Takashi; Kameda, Mai; Kageyama, Masahiro; Kiba, Kazufusa; Kanehisa, Hiroaki; Maeda, Akira
2014-12-01
The present study aimed to clarify the asymmetry between the dominant (DL) and non-dominant takeoff legs (NDL) in terms of lower limb behavior during running single leg jumps (RSJ) in collegiate male basketball players, in relation to jump height. Twenty-seven players performed maximal RSJ with a 6 m approach. Three-dimensional kinematics data during RSJ were collected using a 12-camera Raptor infrared motion analysis system (MAC 3D system) at a sampling frequency of 500 Hz. The symmetry index of the jump heights and the kinematics variables was calculated as {2 × (DL - NDL) / (DL + NDL)} × 100. The run-up velocity was similar between the two legs, but the jump height was significantly higher in the DL than in the NDL. During the takeoff phase, the joint angles of the ankle and knee were significantly larger in the DL than the NDL. In addition, the contact time for the DL was significantly shorter than that for the NDL. The symmetry index of the kinematics for the ankle joint was positively correlated with that of jump height, but that for the knee joint was not. The current results indicate that, for collegiate basketball players, the asymmetry in the height of a RSJ can be attributed to that in the joint kinematics of the ankle during the takeoff phase, which may be associated with the ability to effectively transmit run-up velocity to jump height. Key points: Asymmetry of height during a running single leg jump between the two legs is due to the behavior of the ankle joint (i.e., a stiffer ankle joint and explosive bounding). The dominant leg can transmit run-up velocity into vertical velocity at the takeoff phase to jump higher compared with the non-dominant leg. Collegiate basketball players with a greater asymmetry of the RSJ could be assessed as non-regulars judging by the magnitude of asymmetry.
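The symmetry index defined above can be computed directly from the two legs' values; a quick sketch using the study's formula (the input heights are hypothetical):

```python
def symmetry_index(dominant, non_dominant):
    """Symmetry index as defined in the study:
    {2 * (DL - NDL) / (DL + NDL)} * 100.
    Positive values mean the dominant leg's value is larger."""
    return 2 * (dominant - non_dominant) / (dominant + non_dominant) * 100

# Hypothetical jump heights (m): dominant 0.42, non-dominant 0.38.
si = symmetry_index(0.42, 0.38)
```

The same function applies to any of the kinematic variables (joint angles, contact times) compared between legs.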
Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.
Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A
2018-05-01
Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries-the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges-to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
Theory and simulation of backbombardment in single-cell thermionic-cathode electron guns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Biedron, S. G.; Harris, J. R.
2015-04-01
This paper presents a comparison between simulation results and a first principles analytical model of electron back-bombardment developed at Colorado State University for single-cell, thermionic-cathode rf guns. While most previous work on back-bombardment has been specific to particular accelerator systems, this work is generalized to a wide variety of guns within the applicable parameter space. The merits and limits of the analytic model will be discussed. This paper identifies the three fundamental parameters that drive the back-bombardment process, and demonstrates relative accuracy in calculating the predicted back-bombardment power of a single-cell thermionic gun.
Design and Optimization of AlN based RF MEMS Switches
NASA Astrophysics Data System (ADS)
Hasan Ziko, Mehadi; Koel, Ants
2018-05-01
Radio frequency microelectromechanical system (RF MEMS) switch technology might have the potential to replace semiconductor technology in future communication systems, including communication satellites and wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with an aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving low actuation voltage and high contact force with optimal geometry, using the principle of the piezoelectric effect, is the main motivation for this research. Analytical and numerical modelling of a single-beam RF MEMS switch was used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties was used to obtain the optimal parameters. The optimized device length, width and thickness of 2000 µm, 500 µm and 0.6 µm, respectively, were obtained for the single-beam RF MEMS switch. With this optimal geometry, the analytical analysis gives an actuation voltage of less than 2 V and a contact force of 100 µN. Additionally, the single-beam RF MEMS switch design is validated by comparing the analytical and finite element modelling (FEM) analyses.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run performs substantially better than the run with one grid because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.
Running in the real world: adjusting leg stiffness for different surfaces
NASA Technical Reports Server (NTRS)
Ferris, D. P.; Louie, M.; Farley, C. T.
1998-01-01
A running animal coordinates the actions of many muscles, tendons, and ligaments in its leg so that the overall leg behaves like a single mechanical spring during ground contact. Experimental observations have revealed that an animal's leg stiffness is independent of both speed and gravity level, suggesting that it is dictated by inherent musculoskeletal properties. However, if leg stiffness was invariant, the biomechanics of running (e.g. peak ground reaction force and ground contact time) would change when an animal encountered different surfaces in the natural world. We found that human runners adjust their leg stiffness to accommodate changes in surface stiffness, allowing them to maintain similar running mechanics on different surfaces. These results provide important insight into mechanics and control of animal locomotion and suggest that incorporating an adjustable leg stiffness in the design of hopping and running robots is important if they are to match the agility and speed of animals on varied terrain.
"One-sample concept" micro-combinatory for high throughput TEM of binary films.
Sáfrán, György
2018-04-01
Phases of thin films may differ remarkably from those of the bulk. Unlike the comprehensive data files of Binary Phase Diagrams [1] available for bulk materials, complete phase maps for thin binary layers do not exist. This is due both to the diverse metastable, non-equilibrium or unstable phases feasible in thin films and to the volume of characterization work required with analytical techniques like TEM, SAED and EDS. The aim of the present work was to develop a method that remarkably facilitates the TEM study of the diverse binary phases of thin films and the creation of phase maps. A micro-combinatorial method was worked out that enables both preparation and study of a gradient two-component film within a single TEM specimen. For a demonstration of the technique, thin Mn x Al 1- x binary samples with concentration evolving from x = 0 to x = 1 were prepared so that the transition from pure Mn to pure Al covers a 1.5 mm long track within the 3 mm diameter TEM grid. The proposed method enables the preparation and study of thin combinatorial samples including all feasible phases as a function of composition or other deposition parameters. Contrary to known "combinatorial chemistry", in which a series of different samples are deposited in one run and investigated one at a time, the present micro-combinatorial method produces a single specimen condensing a complete library of a binary system that can be studied efficiently within a single TEM session. That provides extremely high throughput for TEM characterization of composition-dependent phases, exploration of new materials, or the construction of phase diagrams of binary films. Copyright © 2018 Elsevier B.V. All rights reserved.
Tan, Hui Peng; Wan, Tow Shi; Min, Christina Liew Shu; Osborne, Murray; Ng, Khim Hui
2014-03-14
A selectable one-dimensional ((1)D) or two-dimensional ((2)D) gas chromatography-mass spectrometry (GC-MS) system coupled with a flame ionization detector (FID) and an olfactory detection port (ODP) was employed in this study to analyze perfume oil and fragrance in shower gel. A split/splitless (SSL) injector and a programmable temperature vaporization (PTV) injector are connected via a 2-way splitter of capillary flow technology (CFT) in this selectable (1)D/(2)D GC-MS/FID/ODP system to facilitate liquid sample injections and thermal desorption (TD) for the stir bar sorptive extraction (SBSE) technique, respectively. The dual-linked injector set-up enables the use of two different injector ports (one at a time) in a single sequence run without having to relocate the (1)D capillary column from one inlet to another. Target analytes were separated in (1)D GC-MS/FID/ODP, followed by further separation of co-eluting mixtures from (1)D in (2)D GC-MS/FID/ODP, in a single injection without any instrumental reconfiguration. A (1)D/(2)D quantitative analysis method was developed and validated for its repeatability (tR), calculated linear retention indices (LRI), response ratio in both MS and FID signals, limit of detection (LOD), limit of quantitation (LOQ), and linearity over a concentration range. The method was successfully applied in quantitative analysis of perfume solutions at different concentration levels (RSD≤0.01%, n=5) and shower gel spiked with perfume at different dosages (RSD≤0.04%, n=5) with good recovery (96-103% for SSL injection; 94-107% for stir bar sorptive extraction-thermal desorption (SBSE-TD)). Copyright © 2014 Elsevier B.V. All rights reserved.
Fully integrated free-running InGaAs/InP single-photon detector for accurate lidar applications.
Yu, Chao; Shangguan, Mingjia; Xia, Haiyun; Zhang, Jun; Dou, Xiankang; Pan, Jian-Wei
2017-06-26
We present a fully integrated InGaAs/InP negative feedback avalanche diode (NFAD) based free-running single-photon detector (SPD) designed for accurate lidar applications. A free-piston Stirling cooler is used to cool the NFAD over a large temperature range, and an active hold-off circuit implemented in a field programmable gate array is applied to further suppress the afterpulsing contribution. The key parameters of the free-running SPD, including photon detection efficiency (PDE), dark count rate (DCR), afterpulse probability, and maximum count rate (MCR), are dedicatedly optimized for lidar application in practice. We then perform a field experiment using a Mie lidar system with a 20 kHz pulse repetition frequency to compare the performance of the free-running InGaAs/InP SPD with a commercial superconducting nanowire single-photon detector (SNSPD). Our detector exhibits good performance with 1.6 Mcps MCR (0.6 μs hold-off time), 10% PDE, 950 cps DCR, and 18% afterpulse probability over a 50 μs period. This performance is worse than that of the SNSPD, with 60% PDE and 300 cps DCR. However, after applying a specific algorithm that we have developed for afterpulse and count rate corrections, the lidar system performance in terms of range-corrected signal (Pr 2 ) distribution using our SPD agrees very well with the result using the SNSPD, with only a relative error of ∼2%. Due to the advantages of low cost and small size of InGaAs/InP NFADs, such a detector provides a practical solution for accurate lidar applications.
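The abstract does not spell out the authors' afterpulse and count-rate correction algorithm. As a hedged illustration of why count-rate correction matters at Mcps rates with a microsecond-scale hold-off, here is the textbook non-paralyzable dead-time formula — an assumption for illustration, not the paper's method:

```python
def correct_count_rate(measured_cps, dead_time_s):
    """Non-paralyzable dead-time correction: n_true = m / (1 - m * tau).
    Valid only while measured_cps * dead_time_s < 1."""
    loss_fraction = measured_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("measured rate exceeds the dead-time limit")
    return measured_cps / (1.0 - loss_fraction)

# At 1.0 Mcps measured with a 0.6 us hold-off, the detector is dead
# 60% of the time, so the true rate is far higher than the measured one.
true_rate = correct_count_rate(1.0e6, 0.6e-6)
```

A full lidar correction would additionally subtract dark counts and model afterpulsing over the 50 μs window, which this sketch omits.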
Factors associated with hit-and-run pedestrian fatalities and driver identification.
MacLeod, Kara E; Griswold, Julia B; Arnold, Lindsay S; Ragland, David R
2012-03-01
As hit-and-run crashes account for a significant proportion of pedestrian fatalities, a better understanding of these crash types will assist efforts to reduce these fatalities. Of the more than 48,000 pedestrian deaths recorded in the United States between 1998 and 2007, 18.1% were caused by hit-and-run drivers. Using national data on single pedestrian-motor vehicle fatal crashes (1998-2007), logistic regression analyses were conducted to identify factors related to hit-and-run and to identify factors related to the identification of the hit-and-run driver. Results indicate an increased risk of hit-and-run in the early morning, in poor light conditions, and on weekends. There may also be an association between the type of victim and the likelihood of the driver leaving and being identified. Results also indicate that certain driver characteristics, behavior, and driving history are associated with hit-and-run. Alcohol use and an invalid license were among the leading driver factors associated with an increased risk of hit-and-run. Prevention efforts that address such issues could substantially reduce pedestrian fatalities resulting from hit-and-run. However, more information about this driver population may be necessary. Copyright © 2011. Published by Elsevier Ltd.
Couvillon, Margaret J.; Phillipps, Hunter L. F.; Schürch, Roger; Ratnieks, Francis L. W.
2012-01-01
The presence of noise in a communication system may be adaptive or may reflect unavoidable constraints. One communication system where these alternatives are debated is the honeybee (Apis mellifera) waggle dance. Successful foragers communicate resource locations to nest-mates by a dance comprising repeated units (waggle runs), which repetitively transmit the same distance and direction vector from the nest. Intra-dance waggle run variation occurs and has been hypothesized as a colony-level adaptation to direct recruits over an area rather than a single location. Alternatively, variation may simply be due to constraints on bees' abilities to orient waggle runs. Here, we ask whether the angle at which the bee dances on vertical comb influences waggle run variation. In particular, we determine whether horizontal dances, where gravity is not aligned with the waggle run orientation, are more variable in their directional component. We analysed 198 dances from foragers visiting natural resources and found support for our prediction. More horizontal dances have greater angular variation than dances performed close to vertical. However, there is no effect of waggle run angle on variation in the duration of waggle runs, which communicates distance. Our results weaken the hypothesis that variation is adaptive and provide novel support for the constraint hypothesis. PMID:22513277
Response of an oscillatory differential delay equation to a single stimulus.
Mackey, Michael C; Tyran-Kamińska, Marta; Walther, Hans-Otto
2017-04-01
Here we analytically examine the response of a limit cycle solution to a simple differential delay equation to a single pulse perturbation of the piecewise linear nonlinearity. We construct the unperturbed limit cycle analytically, and are able to completely characterize the perturbed response to a pulse of positive amplitude and duration with onset at different points in the limit cycle. We determine the perturbed minima and maxima and period of the limit cycle and show how the pulse modifies these from the unperturbed case.
Programs for skyline planning.
Ward W. Carson
1975-01-01
This paper describes four computer programs for the logging engineer's use in planning log harvesting by skyline systems. One program prepares terrain profile plots from maps mounted on a digitizer; the other programs prepare load-carrying capability and other information for single and multispan standing skylines and single span running skylines. In general, the...
Kadić, Elma; Moniz, Raymond J; Huo, Ying; Chi, An; Kariv, Ilona
2017-02-02
Comprehensive understanding of the cellular immune subsets involved in regulation of tumor progression is central to the development of cancer immunotherapies. Single-cell immunophenotyping has historically been accomplished by flow cytometry (FC) analysis, enabling the analysis of up to 18 markers. Recent advancements in mass cytometry (MC) have facilitated detection of over 50 markers, utilizing the high resolving power of mass spectrometry (MS). This study examined the analytical and operational feasibility of MC for in-depth immunophenotyping analysis of the tumor microenvironment, using the commercial CyTOF™ instrument, and further interrogated challenges in managing the integrity of tumor specimens. Initial longitudinal studies with frozen peripheral blood mononuclear cells (PBMCs) showed minimal MC inter-assay variability over nine independent runs. In addition, detection of common leukocyte lineage markers using MC and FC confirmed that these methodologies are comparable in cell subset identification. An advanced multiparametric MC analysis of 39 total markers enabled a comprehensive evaluation of cell surface marker expression in fresh and cryopreserved tumor samples. This comparative analysis revealed significant reduction of expression levels of multiple markers upon cryopreservation. Most notably, myeloid-derived suppressor cells (MDSC), defined by co-expression of CD66b + and CD15 + , HLA-DR dim and CD14 - phenotype, were undetectable in frozen samples. These results suggest that optimization and evaluation of cryopreservation protocols are necessary for accurate biomarker discovery in frozen tumor specimens.
Nett, Michael; Avelar, Rui; Sheehan, Michael; Cushner, Fred
2011-03-01
Standard medial parapatellar arthrotomies of 10 cadaveric knees were closed with either conventional interrupted absorbable sutures (control group, mean of 19.4 sutures) or a single running knotless bidirectional barbed absorbable suture (experimental group). Water-tightness of the arthrotomy closure was compared by simulating a tense hemarthrosis and measuring arthrotomy leakage over 3 minutes. Mean total leakage was 356 mL and 89 mL in the control and experimental groups, respectively (p = 0.027). Using 8 of the 10 knees (4 closed with control sutures, 4 closed with an experimental suture), a tense hemarthrosis was again created, and iatrogenic suture rupture was performed: a proximal suture was cut at 1 minute; a distal suture was cut at 2 minutes. The impact of suture rupture was compared by measuring total arthrotomy leakage over 3 minutes. Mean total leakage was 601 mL and 174 mL in the control and experimental groups, respectively (p = 0.3). In summary, using a cadaveric model, arthrotomies closed with a single bidirectional barbed running suture were statistically significantly more water-tight than those closed using a standard interrupted technique. The sample size was insufficient to determine whether the two closure techniques differed in leakage volume after suture rupture.
Leduc, Renee Y M; Rauw, Gail; Baker, Glen B; McDermid, Heather E
2017-01-01
Environmental enrichment items such as running wheels can promote the wellbeing of laboratory mice. Growing evidence suggests that wheel running simulates exercise effects in many mouse models of human conditions, but this activity also might change other aspects of mouse behavior. In this case study, we show that the presence of running wheels leads to pronounced and permanent circling behavior with route-tracing in a proportion of the male mice of a genetically distinct cohort. The genetic background of this cohort includes a mutation in Arhgap19, but genetic crosses showed that an unknown second-site mutation likely caused the induced circling behavior. Behavioral tests for inner-ear function indicated a normal sense of gravity in the circling mice. However, the levels of dopamine, serotonin, and some dopamine metabolites were lower in the brains of circling male mice than in mice of the same genetic background that were weaned without wheels. Circling was seen in both singly and socially housed male mice. The additional stress of fighting may have exacerbated the predisposition to circling in the socially housed animals. Singly and socially housed male mice without wheels did not circle. Our current findings highlight the importance and possibly confounding nature of the environmental and genetic background in mouse behavioral studies, given that the circling behavior and alterations in dopamine and serotonin levels in this mouse cohort occurred only when the male mice were housed with running wheels. PMID:28315651
Leg stiffness and stride frequency in human running.
Farley, C T; González, O
1996-02-01
When humans and other mammals run, the body's complex system of muscle, tendon and ligament springs behaves like a single linear spring ('leg spring'). A simple spring-mass model, consisting of a single linear leg spring and a mass equivalent to the animal's mass, has been shown to describe the mechanics of running remarkably well. Force platform measurements from running animals, including humans, have shown that the stiffness of the leg spring remains nearly the same at all speeds and that the spring-mass system is adjusted for higher speeds by increasing the angle swept by the leg spring. The goal of the present study is to determine the relative importance of changes to the leg spring stiffness and the angle swept by the leg spring when humans alter their stride frequency at a given running speed. Human subjects ran on a treadmill-mounted force platform at 2.5 m s-1 while using a range of stride frequencies from 26% below to 36% above the preferred stride frequency. Force platform measurements revealed that the stiffness of the leg spring increased 2.3-fold, from 7.0 to 16.3 kN m-1, between the lowest and highest stride frequencies. The angle swept by the leg spring decreased at higher stride frequencies, partially offsetting the effect of the increased leg spring stiffness on the mechanical behavior of the spring-mass system. We conclude that the most important adjustment of the body's spring system to accommodate higher stride frequencies is that the leg spring becomes stiffer.
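The core quantity in this abstract, leg spring stiffness, is simply peak vertical ground reaction force divided by peak leg compression. A minimal sketch, with hypothetical force and compression values chosen only to land near the 7.0 and 16.3 kN m-1 endpoints reported above (not the authors' actual data):

```python
# Sketch of the spring-mass leg-stiffness estimate (hypothetical values).
# k_leg = F_peak / delta_L, where delta_L is the peak leg compression.

def leg_stiffness(peak_force_n: float, compression_m: float) -> float:
    """Leg spring stiffness in N/m from peak vertical force and leg compression."""
    return peak_force_n / compression_m

# Illustrative values consistent with the 7.0 -> 16.3 kN/m range reported above:
low = leg_stiffness(peak_force_n=1400.0, compression_m=0.20)   # ~7.0 kN/m
high = leg_stiffness(peak_force_n=1304.0, compression_m=0.08)  # ~16.3 kN/m
print(round(low), round(high))
```

For a fixed peak force, halving the compression doubles the stiffness, which is the sense in which a higher stride frequency (less leg compression per step) yields a stiffer leg spring.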
Dynamic performance of a suspended reinforced concrete footbridge under pedestrian movements
NASA Astrophysics Data System (ADS)
Drygala, I.; Dulinska, J.; Kondrat, K.
2018-02-01
In this paper, a dynamic analysis of a suspended reinforced concrete footbridge over a national road in southern Poland was carried out. Firstly, the mode shapes and natural frequencies of vibration of the structure were calculated. The results of the numerical modal investigation showed that the natural frequencies of the structure coincided with the step frequencies of humans in motion (fast walking or running). Hence, to assess compliance with comfort standards, the dynamic response of the footbridge to a running pedestrian had to be calculated. Secondly, the dynamic response of the footbridge was calculated taking into consideration two models of the dynamic forces produced by a single running pedestrian: a ‘sine’ and a ‘half-sine’ model. The accelerations and displacements obtained for the ‘half-sine’ model of dynamic forces were greater than those obtained for the ‘sine’ model by up to 20%. The ‘sine’ model is appropriate only for walking users of walkways, because their motion is continuous in nature. For running users this assumption does not hold, since the forces produced by a running pedestrian have a discontinuous nature. For this loading scenario, the ‘half-sine’ model is therefore more suitable. Finally, the comfort conditions for the footbridge were evaluated. The analysis proved that the vertical comfort criteria were not exceeded for a single user of the footbridge running or walking fast.
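The ‘sine’ versus ‘half-sine’ distinction above can be illustrated with a short sketch. All parameter values here (pacing frequency, load factors, contact ratio) are hypothetical, and the formulas are simplified stand-ins for the paper's load models; the point is only that the ‘half-sine’ force drops to zero during the flight phase, whereas the ‘sine’ force is continuous:

```python
import math

def sine_force(t, weight=700.0, dlf=1.3, f=3.0):
    """Continuous 'sine' model: static weight plus one sinusoidal harmonic."""
    return weight * (1.0 + dlf * math.sin(2.0 * math.pi * f * t))

def half_sine_force(t, weight=700.0, peak_factor=3.0, f=3.0, contact_ratio=0.5):
    """Discontinuous 'half-sine' model: a half-sine pulse during ground
    contact, zero force during the flight phase of a running step."""
    period = 1.0 / f
    phase = (t % period) / period
    if phase < contact_ratio:  # foot in contact with the deck
        return peak_factor * weight * math.sin(math.pi * phase / contact_ratio)
    return 0.0  # flight phase: no contact, no force transmitted

# At an instant within the flight phase the half-sine model transmits nothing:
print(half_sine_force(0.25))  # -> 0.0
```

The discontinuity (and the larger peak) is what drives the up-to-20% higher response reported for the ‘half-sine’ model.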
Automated Deployment of Advanced Controls and Analytics in Buildings
NASA Astrophysics Data System (ADS)
Pritoni, Marco
Buildings use 40% of primary energy in the US. Recent studies show that developing energy analytics and enhancing control strategies can significantly improve their energy performance. However, the deployment of advanced control software applications has been mostly limited to academic studies. Larger-scale implementations are prevented by the significant engineering time and customization required, owing to substantial differences among buildings. This study demonstrates how physics-inspired data-driven models can be used to develop portable analytics and control applications for buildings. Specifically, I demonstrate the application of these models in all phases of the deployment of advanced controls and analytics in buildings. In the first phase, "Site Preparation and Interface with Legacy Systems", I used models to discover or map relationships among building components, automatically gathering the metadata (information about data points) necessary to run the applications. In the second phase, "Application Deployment and Commissioning", models automatically learn the system parameters used for advanced controls and analytics. In the third phase, "Continuous Monitoring and Verification", I utilized models to automatically measure the energy performance of a building that has implemented advanced control strategies. In the conclusions, I discuss future challenges and suggest potential strategies for these innovative control systems to be widely deployed in the market. This dissertation provides useful new tools in terms of procedures, algorithms, and models to facilitate the automation of deployment of advanced controls and analytics and accelerate their wide adoption in buildings.
Couvillon, Margaret J; Riddell Pearce, Fiona C; Harris-Jones, Elisabeth L; Kuepfer, Amanda M; Mackenzie-Smith, Samantha J; Rozario, Laura A; Schürch, Roger; Ratnieks, Francis L W
2012-05-15
Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to >100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness-of-fit between the decoded vectors from our subsamples with the vectors from the entire dances.
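The averaging step described above (reducing several waggle runs to a single vector) amounts to a circular mean of the run angles together with an arithmetic mean of the run durations. A minimal sketch with hypothetical run data, assuming waggle duration serves as the distance proxy:

```python
import math

def mean_dance_vector(angles_rad, durations_s):
    """Average several waggle runs into one (angle, duration) vector.
    The angle is a circular mean (so angles near 0 and 2*pi average
    correctly); duration, a proxy for distance, is an arithmetic mean."""
    x = sum(math.cos(a) for a in angles_rad)
    y = sum(math.sin(a) for a in angles_rad)
    mean_angle = math.atan2(y, x)
    mean_duration = sum(durations_s) / len(durations_s)
    return mean_angle, mean_duration

# Four consecutive middle runs (hypothetical), per the recommendation above:
angles = [0.50, 0.52, 0.48, 0.51]   # radians, e.g. measured from vertical
durations = [1.9, 2.1, 2.0, 2.0]    # seconds of waggling per run
print(mean_dance_vector(angles, durations))
```

The circular mean matters when runs straddle the 0/2π boundary; a plain arithmetic mean of angles would be biased there.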
Couvillon, Margaret J.; Riddell Pearce, Fiona C.; Harris-Jones, Elisabeth L.; Kuepfer, Amanda M.; Mackenzie-Smith, Samantha J.; Rozario, Laura A.; Schürch, Roger; Ratnieks, Francis L. W.
2012-01-01
Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to >100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness-of-fit between the decoded vectors from our subsamples with the vectors from the entire dances. PMID:23213438
Wave run-up on a high-energy dissipative beach
Ruggiero, P.; Holman, R.A.; Beach, R.A.
2004-01-01
Because of highly dissipative conditions and strong alongshore gradients in foreshore beach morphology, wave run-up data collected along the central Oregon coast during February 1996 stand in contrast to run-up data currently available in the literature. During a single data run lasting approximately 90 min, the significant vertical run-up elevation varied by a factor of 2 along the 1.6 km study site, ranging from 26 to 61% of the offshore significant wave height, and was found to be linearly dependent on the local foreshore beach slope, which varied by a factor of 5. Run-up motions on this high-energy dissipative beach were dominated by infragravity (low-frequency) energy with peak periods of approximately 230 s. Incident-band energy levels were 2.5 to 3 orders of magnitude lower than the low-frequency spectral peaks, and typically 96% of the run-up variance was in the infragravity band. A broad region of the run-up spectra exhibited an f^-4 roll-off, typical of saturation, extending to frequencies lower than observed in previous studies. The run-up spectra were dependent on beach slope, with spectra for steeper foreshore slopes shifted toward higher frequencies than spectra for shallower foreshore slopes. At infragravity frequencies, run-up motions were coherent over alongshore length scales in excess of 1 km, significantly greater than decorrelation length scales on moderate to reflective beaches. Copyright 2004 by the American Geophysical Union.
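The variance partition reported above (typically 96% of run-up variance in the infragravity band) follows from integrating the run-up spectrum on either side of an infragravity cutoff. A minimal sketch with a synthetic spectrum, assuming the conventional 0.05 Hz cutoff (the paper's exact band definition may differ):

```python
def infragravity_fraction(freqs_hz, spectrum, cutoff_hz=0.05):
    """Fraction of run-up variance below the infragravity cutoff,
    via rectangular integration on a uniform frequency grid."""
    df = freqs_hz[1] - freqs_hz[0]
    total = sum(s * df for s in spectrum)
    ig = sum(s * df for f, s in zip(freqs_hz, spectrum) if f < cutoff_hz)
    return ig / total

# Synthetic spectrum: a strong infragravity peak, a weak incident band.
freqs = [0.005 * i for i in range(1, 81)]          # 0.005 - 0.400 Hz
spec = [100.0 if f < 0.05 else 1.0 for f in freqs]  # arbitrary units
print(round(infragravity_fraction(freqs, spec), 3))  # -> 0.927
```

On a real record the spectrum would come from an FFT of the run-up time series; the band split itself is just this integral ratio.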
Ameer, Mariam A; Muaidi, Qassim I
2017-09-01
The relationship between knee kinematics and knee-ankle kinetics during the landing phase of single-leg jumping has been widely studied to identify proper strategies for preventing non-contact ACL injury. However, there is a lack of research on knee-ankle kinetics at the peak knee flexion angle during jumping from running. Hence, the purpose of this study is to establish the relationship between peak knee flexion angle, knee extension moment, ankle plantar flexion moment, and ground reaction force in handball players, in order to protect the ACL from excessive stress during single-leg jumping. In addition, the study clarifies the role of the calf muscles in relieving part of the ACL stress at different knee flexion angles during landing. Fifteen active male elite handball players from Saudi Arabia participated in this study (age = 22.6 ± 3.5 years, height = 182 ± 3.7 cm, weight = 87.5 ± 10.2 kg). The players performed three successful landings of a single-leg jump after running a fixed distance of about 450 cm. The data were collected using a 3D motion capture and analysis system (VICON). Pearson product moment correlation coefficients showed that a greater peak knee flexion angle is significantly related to both a lesser knee extension moment (r = -.623, P = .013) and a lesser vertical component of ground reaction force (VGRF) (r = -.688, P = .005) in the landing phase. Moreover, increasing the peak knee flexion angle in the landing phase tends to increase the ankle plantar flexion moment significantly (r = .832, P < .001). With an increase of the peak knee flexion angle during single-leg jump landing from running, there would be a lower knee extension moment, a lower impact force, and a greater plantar flexion moment. As such, the clinical implication of this study is that increasing the knee flexion angle during the landing phase may provide a protective mechanism, tending to protect the ACL from vigorous strain and injuries.
Kim, Nam Hoon; Hwang, Wooseup; Baek, Kangkyun; Rohman, Md Rumum; Kim, Jeehong; Kim, Hyun Woo; Mun, Jungho; Lee, So Young; Yun, Gyeongwon; Murray, James; Ha, Ji Won; Rho, Junsuk; Moskovits, Martin; Kim, Kimoon
2018-04-04
Single-molecule surface-enhanced Raman spectroscopy (SERS) offers new opportunities for exploring the complex chemical and biological processes that cannot be easily probed using ensemble techniques. However, the ability to place the single molecule of interest reliably within a hot spot, to enable its analysis at the single-molecule level, remains challenging. Here we describe a novel strategy for locating and securing a single target analyte in a SERS hot spot at a plasmonic nanojunction. The "smart" hot spot was generated by employing a thiol-functionalized cucurbit[6]uril (CB[6]) as a molecular spacer linking a silver nanoparticle to a metal substrate. This approach also permits one to study molecules chemically reluctant to enter the hot spot, by conjugating them to a moiety, such as spermine, that has a high affinity for CB[6]. The hot spot can accommodate at most a few, and often only a single, analyte molecule. Bianalyte experiments revealed that one can reproducibly treat the SERS substrate such that 96% of the hot spots contain a single analyte molecule. Furthermore, by utilizing a series of molecules each consisting of spermine bound to perylene bisimide, a bright SERS molecule, with polymethylene linkers of varying lengths, the SERS intensity as a function of distance from the center of the hot spot could be measured. The SERS enhancement was found to decrease as 1 over the square of the distance from the center of the hot spot, and the single-molecule SERS cross sections were found to increase with AgNP diameter.
Gait biomechanics of skipping are substantially different than those of running.
McDonnell, Jessica; Willson, John D; Zwetsloot, Kevin A; Houmard, Joseph; DeVita, Paul
2017-11-07
The inherent injury risk associated with high-impact exercises calls for alternative ways to achieve the benefits of aerobic exercise while minimizing excessive stresses to body tissues. Skipping presents such an alternative, incorporating double support, flight, and single support phases. We used ground reaction forces (GRFs), lower extremity joint torques and powers to compare skipping and running in 20 healthy adults. The two consecutive skipping steps on each limb differed significantly from each other, and from running. Running had the longest step length, the highest peak vertical GRF, peak knee extensor torque, and peak knee negative and positive power and negative and positive work. Skipping had the greater cadence, peak horizontal GRF, peak hip and ankle extensor torques, peak ankle negative power and work, and peak ankle positive power. The second vs first skipping step had the shorter step length, higher cadence, peak horizontal GRF, peak ankle extensor torque, and peak ankle negative power, negative work, and positive power and positive work. The first skipping step utilized predominately net negative joint work (eccentric muscle action) while the second utilized predominately net positive joint work (concentric muscle action). The skipping data further highlight the persistence of net negative work performed at the knee and net positive work performed at the ankle across locomotion gaits. Evidence of step segregation was seen in the distribution of the braking and propelling impulses and net work produced across the hip, knee, and ankle joints. Skipping was substantially different than running and was temporally and spatially asymmetrical, with successive foot falls partitioned into a dominant function, either braking or propelling, whereas running had a single, repeated step in which both braking and propelling actions were performed equally. Copyright © 2017 Elsevier Ltd. All rights reserved.
Yamamoto, Shinobu; Matsumoto, Akiko; Yui, Yuko; Miyazaki, Shota; Kumagai, Shinji; Hori, Hajime; Ichiba, Masayoshi
2018-03-27
N,N-Dimethylacetamide (DMAC) is widely used in industry as a solvent. It can be absorbed through human skin. Therefore, it is necessary to determine exposure to DMAC via biological monitoring. However, the precision of traditional gas chromatography (GC) is low due to the thermal decomposition of metabolites in the high-temperature GC injection port. To overcome this problem, we have developed a new method for the simultaneous separation and quantification of urinary DMAC metabolites using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Urine samples were diluted 10-fold in formic acid, and 1-μl aliquots were injected into the LC-MS/MS equipment. A C18 reverse-phase Octa Decyl Silyl (ODS) column was used as the analytical column, and the mobile phase consisted of a mixture of methanol and aqueous formic acid solution. Urinary concentrations of DMAC and its known metabolites (N-hydroxymethyl-N-methylacetamide (DMAC-OH), N-methylacetamide (NMAC), and S-(acetamidomethyl)mercapturic acid (AMMA)) were determined in a single run. The dynamic ranges of the calibration curves were 0.05-5 mg/l (r≥0.999) for all four compounds. The limits of detection for DMAC, DMAC-OH, NMAC, and AMMA in urine were 0.04, 0.02, 0.05, and 0.02 mg/l, respectively. Within-run accuracies were 96.5%-109.6%, with relative standard deviations (precision) of 3.43%-10.31%. The results demonstrated that the proposed method could successfully quantify low concentrations of DMAC and its metabolites with high precision. Hence, this method is useful for evaluating DMAC exposure. In addition, this method can be used to examine metabolite behaviors in human bodies after exposure and to select appropriate biomarkers.
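The calibration-curve step described above (a linear dynamic range of 0.05-5 mg/l with back-calculated concentrations) can be sketched as an ordinary least-squares fit. The standard concentrations and detector responses below are hypothetical, not the authors' data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def concentration(response, slope, intercept):
    """Back-calculate analyte concentration (mg/l) from detector response."""
    return (response - intercept) / slope

# Hypothetical standards spanning the 0.05-5 mg/l dynamic range above:
std_conc = [0.05, 0.1, 0.5, 1.0, 2.5, 5.0]             # mg/l
std_resp = [12.0, 24.5, 124.0, 251.0, 622.0, 1249.0]   # arbitrary units
m, b = fit_line(std_conc, std_resp)
print(round(concentration(500.0, m, b), 2))  # an unknown near 2 mg/l
```

In practice each analyte (DMAC, DMAC-OH, NMAC, AMMA) gets its own curve, usually with internal-standard-normalized responses rather than raw peak areas.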
Paz, Sylvia H; Spritzer, Karen L; Morales, Leo S; Hays, Ron D
2013-09-01
To evaluate the equivalence of the PROMIS(®) physical functioning item bank by language of administration (English versus Spanish). The PROMIS(®) wave 1 English-language physical functioning bank consists of 124 items, and 114 of these were translated into Spanish. Item frequencies, means and standard deviations, item-scale correlations, and internal consistency reliability were calculated. The IRT assumption of unidimensionality was evaluated by fitting a single-factor confirmatory factor analytic model. IRT threshold and discrimination parameters were estimated using Samejima's Graded Response Model. Differential item functioning (DIF) by language of administration was evaluated. Item means ranged from 2.53 (SD = 1.36) to 4.62 (SD = 0.82). Coefficient alpha was 0.99, and item-rest correlations ranged from 0.41 to 0.89. A one-factor model fit the data well (CFI = 0.971, TLI = 0.970, and RMSEA = 0.052). The slope parameters ranged from 0.45 ("Are you able to run 10 miles?") to 4.50 ("Are you able to put on a shirt or blouse?"). The threshold parameters ranged from -1.92 ("How much do physical health problems now limit your usual physical activities (such as walking or climbing stairs)?") to 6.06 ("Are you able to run 10 miles?"). Fifty of the 114 items were flagged for DIF based on an R(2) ≥ 0.02 criterion. The expected total score was higher for Spanish- than English-language respondents. English- and Spanish-speaking subjects with the same level of underlying physical function responded differently to 50 of the 114 items. This study has important implications for the study of physical functioning among diverse populations.
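Samejima's Graded Response Model, used above to estimate the threshold and discrimination parameters, assigns each response category a probability given the respondent's latent trait level. A minimal sketch for a hypothetical five-category item (the discrimination and threshold values are illustrative, loosely within the ranges reported above):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Samejima Graded Response Model: probability of each response
    category, given latent trait theta, discrimination a, and ordered
    threshold (difficulty) parameters b_1 < ... < b_{K-1}."""
    def p_star(b):  # cumulative P(response in category above threshold b)
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Category probability = difference of adjacent cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical 5-category item, a = 2.0 (cf. slopes 0.45-4.50 above):
probs = grm_category_probs(theta=0.0, a=2.0, thresholds=[-1.5, -0.5, 0.5, 1.5])
print(round(sum(probs), 6))  # category probabilities sum to 1
```

DIF detection then asks whether English and Spanish respondents at the same theta need different a or threshold values to fit their responses.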
Yamamoto, Shinobu; Matsumoto, Akiko; Yui, Yuko; Miyazaki, Shota; Kumagai, Shinji; Hori, Hajime; Ichiba, Masayoshi
2017-01-01
Objectives: N,N-Dimethylacetamide (DMAC) is widely used in industry as a solvent. It can be absorbed through human skin. Therefore, it is necessary to determine exposure to DMAC via biological monitoring. However, the precision of traditional gas chromatography (GC) is low due to the thermal decomposition of metabolites in the high-temperature GC injection port. To overcome this problem, we have developed a new method for the simultaneous separation and quantification of urinary DMAC metabolites using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Methods: Urine samples were diluted 10-fold in formic acid, and 1-μl aliquots were injected into the LC-MS/MS equipment. A C18 reverse-phase Octa Decyl Silyl (ODS) column was used as the analytical column, and the mobile phase consisted of a mixture of methanol and aqueous formic acid solution. Results: Urinary concentrations of DMAC and its known metabolites (N-hydroxymethyl-N-methylacetamide (DMAC-OH), N-methylacetamide (NMAC), and S-(acetamidomethyl)mercapturic acid (AMMA)) were determined in a single run. The dynamic ranges of the calibration curves were 0.05-5 mg/l (r≥0.999) for all four compounds. The limits of detection for DMAC, DMAC-OH, NMAC, and AMMA in urine were 0.04, 0.02, 0.05, and 0.02 mg/l, respectively. Within-run accuracies were 96.5%-109.6%, with relative standard deviations (precision) of 3.43%-10.31%. Conclusions: The results demonstrated that the proposed method could successfully quantify low concentrations of DMAC and its metabolites with high precision. Hence, this method is useful for evaluating DMAC exposure. In addition, this method can be used to examine metabolite behaviors in human bodies after exposure and to select appropriate biomarkers. PMID:29213009
Jourdil, Jean-François; Némoz, Benjamin; Gautier-Veyret, Elodie; Romero, Charlotte; Stanke-Labesque, Françoise
2018-03-30
Adalimumab (ADA) and infliximab (IFX) are therapeutic monoclonal antibodies (TMabs) targeting tumor necrosis factor-alpha (TNFα). They are used to treat inflammatory diseases. Clinical trials have suggested that therapeutic drug monitoring for ADA or IFX could improve treatment response and cost-effectiveness. However, ADA and IFX were quantified by ELISA in all these studies, and the discrepancies between the results obtained raise questions about their reliability. We describe here the validation of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the simultaneous quantification of ADA and IFX in human samples. Full-length antibodies labeled with stable isotopes were added to plasma samples as an internal standard. Samples were then prepared using Mass Spectrometry Immuno Assay (MSIA) followed by trypsin digestion prior to ADA and IFX quantification by LC-MS/MS. ADA and IFX were quantified in serum from patients treated with ADA (n=21) or IFX (n=22), and the concentrations obtained were compared with those obtained with a commercial ELISA kit. The chromatography run lasted 8.6 minutes, and the quantification range was 1 to 26 mg/L. The method was reproducible, repeatable, and accurate. For both levels of internal quality control, the inter- and intra-day coefficients of variation and accuracies for ADA and IFX were all within 15%, in accordance with FDA recommendations. No significant cross-contamination effect was noted. Good agreement was found between LC-MS/MS and ELISA results for both ADA and IFX. This LC-MS/MS method can be used for the quantification of ADA and IFX in a single analytical run and for the optimization of LC-MS/MS resource use in clinical pharmacology laboratories.
NASA Technical Reports Server (NTRS)
Baaklini, George Y.; Smith, Kevin; Raulerson, David; Gyekenyesi, Andrew L.; Sawicki, Jerzy T.; Brasche, Lisa
2003-01-01
Tools for Engine Diagnostics is a major task in the Propulsion System Health Management area of the Single Aircraft Accident Prevention project under NASA's Aviation Safety Program. The major goal of the Aviation Safety Program is to reduce fatal aircraft accidents by 80 percent within 10 years and by 90 percent within 25 years. The goal of the Propulsion System Health Management area is to eliminate propulsion system malfunctions as a primary or contributing factor in the cause of aircraft accidents. The purpose of Tools for Engine Diagnostics, a 2-yr-old task, is to establish and improve tools for engine diagnostics and prognostics that measure the deformation and damage of rotating engine components at the ground level and that perform intermittent or continuous on-wing monitoring. In this work, nondestructive-evaluation- (NDE-) based technology is combined with model-dependent disk spin experimental simulation systems, like finite element modeling (FEM) and modal norms, to monitor and predict rotor damage in real time. Fracture-mechanics time-dependent fatigue crack growth and damage-mechanics-based life estimation are being developed, and their potential use investigated. In addition, wireless eddy current and advanced acoustics are being developed for on-wing and just-in-time NDE engine inspection to provide deeper access and higher sensitivity, to extend on-wing capabilities, and to improve inspection readiness. In the long run, these methods could establish a base for prognostic sensing while an engine is running, without any overt actions, like inspections. This damage-detection strategy includes experimentally acquired vibration-, eddy-current-, and capacitance-based displacement measurements and analytically computed FEM-, modal-norms-, and conventional-rotordynamics-based models of well-defined damages and critical mass imbalances in rotating disks and rotors.
A Single Bout of Moderate Aerobic Exercise Improves Motor Skill Acquisition.
Statton, Matthew A; Encarnacion, Marysol; Celnik, Pablo; Bastian, Amy J
2015-01-01
Long-term exercise is associated with improved performance on a variety of cognitive tasks including attention, executive function, and long-term memory. Remarkably, recent studies have shown that even a single bout of aerobic exercise can lead to immediate improvements in declarative learning and memory, but less is known about the effect of exercise on motor learning. Here we sought to determine the effect of a single bout of moderate intensity aerobic exercise on motor skill learning. In experiment 1, we investigated the effect of moderate aerobic exercise on motor acquisition. 24 young, healthy adults performed a motor learning task either immediately after 30 minutes of moderate intensity running, after running followed by a long rest period, or after slow walking. Motor skill was assessed via a speed-accuracy tradeoff function to determine how exercise might differentially affect two distinct components of motor learning performance: movement speed and accuracy. In experiment 2, we investigated both acquisition and retention of motor skill across multiple days of training. 20 additional participants performed either a bout of running or slow walking immediately before motor learning on three consecutive days, and only motor learning (no exercise) on a fourth day. We found that moderate intensity running led to an immediate improvement in motor acquisition for both a single session and on multiple sessions across subsequent days, but had no effect on between-day retention. This effect was driven by improved movement accuracy, as opposed to speed. However, the benefit of exercise was dependent upon motor learning occurring immediately after exercise: resting for a period of one hour after exercise diminished the effect.
These results demonstrate that moderate intensity exercise can prime the nervous system for the acquisition of new motor skills, and suggest that similar exercise protocols may be effective in improving the outcomes of movement rehabilitation programs.
A Single Bout of Moderate Aerobic Exercise Improves Motor Skill Acquisition
Statton, Matthew A.; Encarnacion, Marysol; Celnik, Pablo; Bastian, Amy J.
2015-01-01
Long-term exercise is associated with improved performance on a variety of cognitive tasks including attention, executive function, and long-term memory. Remarkably, recent studies have shown that even a single bout of aerobic exercise can lead to immediate improvements in declarative learning and memory, but less is known about the effect of exercise on motor learning. Here we sought to determine the effect of a single bout of moderate intensity aerobic exercise on motor skill learning. In experiment 1, we investigated the effect of moderate aerobic exercise on motor acquisition. 24 young, healthy adults performed a motor learning task either immediately after 30 minutes of moderate intensity running, after running followed by a long rest period, or after slow walking. Motor skill was assessed via a speed-accuracy tradeoff function to determine how exercise might differentially affect two distinct components of motor learning performance: movement speed and accuracy. In experiment 2, we investigated both acquisition and retention of motor skill across multiple days of training. 20 additional participants performed either a bout of running or slow walking immediately before motor learning on three consecutive days, and only motor learning (no exercise) on a fourth day. We found that moderate intensity running led to an immediate improvement in motor acquisition for both a single session and on multiple sessions across subsequent days, but had no effect on between-day retention. This effect was driven by improved movement accuracy, as opposed to speed. However, the benefit of exercise was dependent upon motor learning occurring immediately after exercise: resting for a period of one hour after exercise diminished the effect.
These results demonstrate that moderate intensity exercise can prime the nervous system for the acquisition of new motor skills, and suggest that similar exercise protocols may be effective in improving the outcomes of movement rehabilitation programs. PMID:26506413