Science.gov

Sample records for accurate timely information

  1. The utility of accurate mass and LC elution time information in the analysis of complex proteomes

    SciTech Connect

    Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.

    2005-08-01

    Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity due to the lack of specific modifications, and overestimates the expected complexity since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time represent a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
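    As a minimal illustration of the uniqueness test described above (not the authors' pipeline), the sketch below counts a peptide's combined mass and NET values as unique when no other peptide falls inside the stated tolerance windows; the peptide list and tolerances are hypothetical.

        # Hypothetical (mass, NET) uniqueness check; illustrative values only.
        peptides = [
            ("PEPTIDER", 1000.5021, 0.42),   # (sequence, monoisotopic mass in Da, NET on a 0-1 scale)
            ("SAMPLEK",   780.4123, 0.55),
            ("EXAMPLEK", 1000.5037, 0.61),   # close in mass, well separated in NET
        ]

        def is_unique(target, others, mass_ppm=1.0, net_tol=0.01):
            """True if no other peptide matches within the mass and NET windows."""
            _, m0, net0 = target
            for _, m, net in others:
                if abs(m - m0) <= m0 * mass_ppm * 1e-6 and abs(net - net0) <= net_tol:
                    return False
            return True

        for i, p in enumerate(peptides):
            print(p[0], "unique:", is_unique(p, peptides[:i] + peptides[i + 1:]))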

  2. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  3. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
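    The boundedly rational choice rule stated above reduces to a few lines; this is a hedged sketch of that rule (threshold BR applied to the reported difference between the two routes), with the travel-time numbers purely illustrative.

        import random

        def choose_route(reported_a, reported_b, br):
            """Pick a route from the feedback; differences within BR are treated as a tie."""
            if abs(reported_a - reported_b) < br:
                return random.choice(("A", "B"))   # indifferent inside the bounded-rationality band
            return "A" if reported_a < reported_b else "B"

        print(choose_route(12.0, 12.3, br=0.5))  # difference below BR -> random choice
        print(choose_route(12.0, 15.0, br=0.5))  # clear difference -> faster route "A"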

  4. 78 FR 34604 - Submitting Complete and Accurate Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  5. Student Information Systems Demystified: The Increasing Demand for Accurate, Timely Data Means Schools and Districts Are Relying Heavily on SIS Technologies

    ERIC Educational Resources Information Center

    McIntire, Todd

    2004-01-01

    Student information systems, one of the first applications of computer technology in education, are undergoing a significant transition yet again. The first major shift in SIS technologies occurred about 15 years ago when they evolved from mainframe programs to client-server solutions. Now, vendors across the board are offering centralized…

  6. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate length measurement of fibers of different lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
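    As a rough, single-pass illustration of the time-of-flight relation (not the exact configuration of the paper), the fiber length follows from the measured transit time and an assumed group refractive index; the numbers below are hypothetical.

        C = 299_792_458.0          # speed of light in vacuum, m/s

        def fiber_length(transit_time_s, group_index=1.4682):
            """Single-pass estimate: L = c * t / n_g (group index assumed, ~1.468 at 1550 nm)."""
            return C * transit_time_s / group_index

        # A ~10 km fiber corresponds to a transit time near 49 microseconds.
        print(f"{fiber_length(48.97e-6) / 1e3:.3f} km")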

  7. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  8. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to

  9. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
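    A minimal, generic discrete-event loop (hypothetical, not the authors' DES implementation) illustrates the idea: each cell schedules its own next update according to its local timescale, so only the currently due cell is advanced at each step.

        import heapq

        # Each cell carries its own local timestep; only the earliest-due cell is updated.
        cells = {"fast": 0.1, "medium": 0.5, "slow": 2.0}      # hypothetical local timescales
        events = [(dt, name) for name, dt in cells.items()]    # (next update time, cell id)
        heapq.heapify(events)

        t_end = 2.0
        while events and events[0][0] <= t_end:
            t, name = heapq.heappop(events)
            print(f"t = {t:.1f}: update cell '{name}'")        # a flux-conservative update would go here
            heapq.heappush(events, (t + cells[name], name))    # reschedule at the cell's own timescale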

  10. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score could reach 0.0767 and 0.0402, which is enhanced by 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms could obtain accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor to improve the personalized recommendation performance.
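    The directional-similarity argument above can be illustrated with a simple overlap measure from user i to user j normalized by i's degree (a hedged stand-in for illustration, not the exact HDCF definition).

        def directed_similarity(items_i, items_j):
            """Similarity from user i to user j: shared items divided by i's degree."""
            if not items_i:
                return 0.0
            return len(items_i & items_j) / len(items_i)

        small_user = {"a", "b"}                          # low-degree user
        large_user = {"a", "b", "c", "d", "e", "f"}      # high-degree user
        print(directed_similarity(small_user, large_user))  # 1.00: small -> large is high
        print(directed_similarity(large_user, small_user))  # 0.33: large -> small is lower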

  11. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  12. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  13. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  14. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  15. It's About Time: How Accurate Can Geochronology Become?

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Baldwin, S.; Caffee, M. W.; Gehrels, G. E.; Schoene, B.; Shuster, D. L.; Singer, B. S.

    2015-12-01

    As isotope ratio precisions have improved to as low as ±1 ppm, geochronologic precision has remained essentially unchanged. This largely reflects the nature of radioactivity whereby the parent decays into a different chemical species, thus putting as much emphasis on determining inter-element ratios as on isotopic ones. Even the best current accuracy grows into errors of >0.6 m.y. during the Paleozoic - a span of time equal to ¼ of the Pleistocene. If we are to understand the nature of Paleozoic species variation and climate change at anything like the resolution achieved for the Cenozoic, we need a 10x improvement in accuracy. The good news is that there is no physical impediment to realizing this. There are enough Pb* atoms in the outer few μm's of a Paleozoic zircon grown moments before eruption to permit ±0.01% accuracy in the U-Pb system. What we need are the resources to synthesize the spikes, enhance ionization yields, exploit microscale sampling, and improve knowledge of λ correspondingly. Despite advances in geochronology over the past 40 years (multicollection, multi-isotope spikes, in situ dating), our ability to translate a daughter atom into a detected ion has remained at the level of 1% or so. This means that a ~10^2 increase in signal can be achieved before we approach a physical limit. Perhaps the most promising approach is use of broad spectrum lasers that can ionize all neutrals. Radical new approaches to providing mass separation of such signals are emerging, including trapped ion cyclotron resonance and multi-turn, sputtered neutral TOF spectrometers capable of mass resolutions in excess of 10^5. These innovations hold great promise in geochronology but are largely being developed for cosmochemistry. This may make sense at first glance as cosmochemists are classically atom-limited (IDPs, stardust) but can be a misperception as the outer few μm's of a zircon may represent no more mass than a stardust mote. To reach the fundamental limits of geochronologic signals we need to

  16. Improvements in Accurate GPS Positioning Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Koyama, Yuichiro; Tanaka, Toshiyuki

    Although the Global Positioning System (GPS) is used widely in car navigation systems, cell phones, surveying, and other areas, several issues still exist. We focus on the continuous data received in public use of GPS, and propose a new positioning algorithm that uses time series analysis. By fitting an autoregressive model to the time series model of the pseudorange, we propose an appropriate state-space model. We apply the Kalman filter to the state-space model and use the pseudorange estimated by the filter in our positioning calculations. The results of the authors' positioning experiment show that the accuracy of the proposed method is much better than that of the standard method. In addition, as we can obtain valid values estimated by time series analysis using the state-space model, the proposed state-space model can be applied to several other fields.
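    A toy one-dimensional Kalman filter over a noisy pseudorange series (assuming a simple random-walk state rather than the paper's autoregressive state-space model) shows the filtering step that precedes the positioning calculation.

        import random

        def kalman_filter(measurements, q=0.5, r=9.0):
            """Scalar Kalman filter with a random-walk state; q and r are assumed noise variances."""
            x, p = measurements[0], 1.0
            estimates = []
            for z in measurements:
                p += q                      # predict
                k = p / (p + r)             # Kalman gain
                x += k * (z - x)            # update with the new pseudorange
                p *= (1.0 - k)
                estimates.append(x)
            return estimates

        truth = [20_000_000.0 + 2.0 * t for t in range(50)]     # slowly drifting range, metres
        noisy = [v + random.gauss(0.0, 3.0) for v in truth]     # receiver noise
        print(f"{kalman_filter(noisy)[-1]:.1f} m")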

  17. Time-accurate Navier-Stokes calculations with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Sanetrik, Mark D.; Atkins, Harold L.

    1993-01-01

    An efficient method for calculating unsteady flows is presented, with emphasis on a modified version of the thin-layer Navier-Stokes equations. Fourier stability analysis is used to illustrate the effect of treating the source term implicitly instead of explicitly, as well as to illustrate other algorithmic choices. The flow about a 2D circular cylinder (with a Reynolds number of 1200 and a Mach number of 0.3) is calculated. The present scheme requires only about 10 percent of the computer time required by global minimum time stepping.

  18. Automatic and Accurate Shadow Detection Using Near-Infrared Information.

    PubMed

    Rüfenacht, Dominic; Fredembach, Clément; Süsstrunk, Sabine

    2014-08-01

    We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
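    The candidate-map idea (pixels dark in both the visible and NIR images) reduces to a couple of array operations; the NumPy sketch below is schematic, with arbitrary thresholds, and is not the published algorithm.

        import numpy as np

        def shadow_candidates(vis_gray, nir, dark_thresh=0.3):
            """Keep pixels that are dark in both the visible and NIR images as shadow candidates."""
            candidates = (vis_gray < dark_thresh) & (nir < dark_thresh)
            ratio = vis_gray / np.clip(nir, 1e-6, None)   # visible/NIR ratio used for refinement
            return candidates, ratio

        vis = np.random.rand(4, 4)     # stand-in images normalized to [0, 1]
        nir = np.random.rand(4, 4)
        mask, ratio = shadow_candidates(vis, nir)
        print(mask.sum(), "candidate pixels")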

  19. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

    The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by imposing the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequency levels of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.

  20. Simple tunnel diode circuit for accurate zero crossing timing

    NASA Technical Reports Server (NTRS)

    Metz, A. J.

    1969-01-01

    Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.

  1. Dead time, pileup, and accurate gamma-ray spectrometry

    SciTech Connect

    Lindstrom, R.M.; Fleming, R.F.

    1995-12-31

    The accuracy of gamma-ray spectrometric measurements is ultimately limited by the precision of Poisson counting statistics. With careful attention to detail, all other sources of error in the ratio of activities of two sources of a radionuclide in small samples can be made insignificant, even when the statistical limit is well below one percent. An important source of error comes from the finite time required by the counting electronics to detect and process pulses. Dead-time losses (mostly in the analog-digital converter) are usually compensated very well by the pulse-height analyzer, but pileup losses (mostly in the amplifier) may not be. Errors of 10% or more may easily result. Several methods are available for detecting and correcting rate-related losses. These methods are sufficiently reliable and well understood that a decaying source can be measured with acceptably small errors even at count rates as high as tens of thousands per second.
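    For orientation only, the standard non-paralyzable dead-time model (a textbook correction, not necessarily the specific method evaluated by the authors) relates the observed count rate m to the true rate n through n = m / (1 - m * tau) for a per-event dead time tau.

        def true_rate(measured_rate, dead_time_s):
            """Non-paralyzable dead-time model: n = m / (1 - m * tau)."""
            return measured_rate / (1.0 - measured_rate * dead_time_s)

        # 30,000 counts/s with an assumed 3-microsecond dead time: ~9% of events are lost.
        print(f"{true_rate(30_000.0, 3e-6):.0f} counts/s")   # ~32,967 counts/s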

  2. A new accurate pill recognition system using imprint information

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  3. Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Numerical tests for linear advection in one dimension as well as for the Euler equations confirm the high accuracy, good shock resolution, and computational efficiency of these schemes.

  4. Stochastic PArallel Rarefied-gas Time-accurate Analyzer

    SciTech Connect

    Michael Gallis, Steve Plimpton

    2014-01-24

    The SPARTA package is software for simulating low-density fluids via the Direct Simulation Monte Carlo (DSMC) method, which is a particle-based method for tracking particle trajectories and collisions as a model of a multi-species gas. The main component of SPARTA is a simulation code which allows the user to specify a simulation domain, populate it with particles, embed triangulated surfaces as boundary conditions for the flow, overlay a grid for finding pairs of collision partners, and evolve the system in time via explicit timestepping. The package also includes various pre- and post-processing tools, useful for setting up simulations and analyzing the results. The simulation code runs either in serial on a single processor or desktop machine, or can be run in parallel using the MPI message-passing library, to enable faster performance on large problems.

  5. Stochastic PArallel Rarefied-gas Time-accurate Analyzer

    2014-01-24

    The SPARTA package is software for simulating low-density fluids via the Direct Simulation Monte Carlo (DSMC) method, which is a particle-based method for tracking particle trajectories and collisions as a model of a multi-species gas. The main component of SPARTA is a simulation code which allows the user to specify a simulation domain, populate it with particles, embed triangulated surfaces as boundary conditions for the flow, overlay a grid for finding pairs of collision partners, and evolve the system in time via explicit timestepping. The package also includes various pre- and post-processing tools, useful for setting up simulations and analyzing the results. The simulation code runs either in serial on a single processor or desktop machine, or can be run in parallel using the MPI message-passing library, to enable faster performance on large problems.

  6. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Reasonable steps to assure information is... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is... Administers § 1101.32 Reasonable steps to assure information is accurate. (a) The Commission considers...

  7. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Reasonable steps to assure information is... Reasonable Steps Commission Will Take To Assure Information It Discloses Is Accurate, and That Disclosure Is... Administers § 1101.32 Reasonable steps to assure information is accurate. (a) The Commission considers...

  8. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.

  9. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. PMID:27174312

  10. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  11. Capturing Accurate and Useful Information on Medication-Related Telenursing Triage Calls.

    PubMed

    Lake, R; Li, L; Baysari, M; Byrne, M; Robinson, M; Westbrook, J I

    2016-01-01

    Registered nurses providing telenursing triage and advice services record information on the medication-related calls they handle. However, the quality and consistency of these data have rarely been examined. Our aim was to examine medication-related calls made to the healthdirect advice service in November 2014, to assess their basic characteristics and how the data entry format influenced the information collected and data consistency. Registered nurses selected the patient question type from a range of categories, and entered the medications involved in a free text field. Medication names were manually extracted from the free text fields. We also compared the selected patient question type with the free text description of the call, in order to gauge data consistency. Results showed that nurses provided patients with advice on medication-related queries in a timely manner (a median call duration of 9 minutes). From 1835 calls, we were able to identify and classify 2156 medications into 384 generic names. However, in 204 cases (11.2% of calls) no medication name was entered. A further 308 (15.0%) of the medication names entered were not identifiable. When we compared the selected patient question with the free text description of calls, we found that these were consistent in 63.27% of cases. Telenursing triage and advice services provide a valuable resource to the public with quick and easily accessible advice. To support nurses in providing quality services and recording accurate information about the queries, an appropriate data entry format and design would be beneficial. PMID:27440292
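    A simplified sketch of matching free-text call notes against a list of generic medication names (hypothetical names and note text; the study's extraction was performed manually).

        GENERIC_NAMES = {"paracetamol", "ibuprofen", "warfarin", "metformin"}   # illustrative subset

        def extract_medications(free_text):
            """Return the generic medication names mentioned in a free-text call description."""
            words = {w.strip(".,;:").lower() for w in free_text.split()}
            return sorted(GENERIC_NAMES & words)

        print(extract_medications("Caller asking whether ibuprofen can be taken with warfarin."))
        # ['ibuprofen', 'warfarin']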

  12. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.

  13. Time-of-flight accurate mass spectrometry identification of quinoline alkaloids in honey.

    PubMed

    Rodríguez-Cabo, Tamara; Moniruzzaman, Mohammed; Rodríguez, Isaac; Ramil, María; Cela, Rafael; Gan, Siew Hua

    2015-08-01

    Time-of-flight accurate mass spectrometry (TOF-MS), following a previous chromatographic (gas or liquid chromatography) separation step, is applied to the identification and structural elucidation of quinoline-like alkaloids in honey. Both electron ionization (EI) MS and positive electrospray (ESI+) MS spectra afforded the molecular ions (M(.+) and M+H(+), respectively) of target compounds with mass errors below 5 mDa. Scan EI-MS and product ion scan ESI-MS/MS spectra permitted confirmation of the existence of a quinoline ring in the structures of the candidate compounds. Also, the observed fragmentation patterns were useful to discriminate between quinoline derivatives having the same empirical formula but different functionalities, such as aldoximes and amides. In the particular case of phenylquinolines, ESI-MS/MS spectra provided valuable clues regarding the position of the phenyl moiety attached to the quinoline ring. The aforementioned spectral information, combined with retention time matching, led to the identification of quinoline and five quinoline derivatives, substituted at carbon number 4, in honey samples. An isomer of phenylquinoline was also noticed; however, its exact structure could not be established. Liquid-liquid microextraction and gas chromatography (GC) TOF-MS were applied to the screening of the aforementioned compounds in a total of 62 honeys. Species displaying higher occurrence frequencies were 4-quinolinecarbonitrile, 4-quinolinecarboxaldehyde, 4-quinolinealdoxime, and the phenylquinoline isomer. The Pearson test revealed strong correlations among the first three compounds. PMID:26041455

  14. Accurate mass tag retention time database for urine proteome analysis by chromatography--mass spectrometry.

    PubMed

    Agron, I A; Avtonomov, D M; Kononikhin, A S; Popov, I A; Moshkovskii, S A; Nikolaev, E N

    2010-05-01

    Information about peptides and proteins in urine can be used to search for biomarkers of early stages of various diseases. The main technology currently used for identification of peptides and proteins is tandem mass spectrometry, in which peptides are identified from the mass spectra of their fragmentation products. However, the presence of the fragmentation stage decreases the sensitivity of the analysis and increases its duration. We have developed a method for identification of human urinary proteins and peptides. This method, based on the accurate mass and time (AMT) tag approach, does not use tandem mass spectrometry. A database containing more than 1381 peptide AMT tags has been constructed. Software has been developed for filling the database with AMT tags, normalizing the chromatograms, and applying the database to the identification of proteins and peptides and their quantitative estimation. New procedures for peptide identification by tandem mass spectra and the AMT tag database are proposed. The paper also lists novel proteins that have been identified in human urine for the first time. PMID:20632944

  15. Time-accurate unsteady aerodynamic and aeroelastic calculations for wings using Euler equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    1988-01-01

    A time-accurate approach to simultaneously solve the Euler flow equations and modal structural equations of motion is presented for computing aeroelastic responses of wings. The Euler flow equations are solved by a time-accurate finite difference scheme with dynamic grids. The coupled aeroelastic equations of motion are solved using the linear acceleration method. The aeroelastic configuration adaptive dynamic grids are time-accurately generated using the aeroelastically deformed shape of the wing. The unsteady flow calculations are validated with experiment, both for a semi-infinite wing and a wall-mounted cantilever rectangular wing. Aeroelastic responses are computed for a rectangular wing using the modal data generated by the finite-element method. The robustness of the present approach in computing unsteady flows and aeroelastic responses that are beyond the capability of earlier approaches using the potential equations is demonstrated.

  16. Accurate GPS Time-Linked data Acquisition System (ATLAS II) user's manual.

    SciTech Connect

    Jones, Perry L.; Zayas, Jose R.; Ortiz-Moyet, Juan

    2004-02-01

    The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.

  17. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M less than or equal to 10(exp -10) to supersonic speeds.

  18. Accurate time propagation method for the coupled Maxwell and Kohn-Sham equations

    NASA Astrophysics Data System (ADS)

    Li, Yonghui; He, Shenglai; Russakoff, Arthur; Varga, Kálmán

    2016-08-01

    An accurate method for time propagation of the coupled Maxwell and time-dependent Kohn-Sham (TDKS) equation is presented. The new approach uses a simultaneous fourth-order Runge-Kutta-based propagation of the vector potential and the Kohn-Sham orbitals. The approach is compared to the conventional fourth-order Taylor propagation and predictor-corrector methods. The calculations show several computational and numerical advantages, including higher computational performance, greater stability, better accuracy, and faster convergence.

  19. Accurate time propagation method for the coupled Maxwell and Kohn-Sham equations.

    PubMed

    Li, Yonghui; He, Shenglai; Russakoff, Arthur; Varga, Kálmán

    2016-08-01

    An accurate method for time propagation of the coupled Maxwell and time-dependent Kohn-Sham (TDKS) equation is presented. The new approach uses a simultaneous fourth-order Runge-Kutta-based propagation of the vector potential and the Kohn-Sham orbitals. The approach is compared to the conventional fourth-order Taylor propagation and predictor-corrector methods. The calculations show several computational and numerical advantages, including higher computational performance, greater stability, better accuracy, and faster convergence. PMID:27627419
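    The time propagation in both records above is built on the classical fourth-order Runge-Kutta update for a coupled system dy/dt = f(t, y); the generic step below illustrates that scheme only and is not the authors' Maxwell-TDKS implementation.

        def rk4_step(f, t, y, dt):
            """One classical RK4 step for dy/dt = f(t, y); y may be a complex amplitude."""
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Example: a single amplitude evolving as dy/dt = -i*y, whose exact modulus stays 1.
        y, t, dt = 1.0 + 0.0j, 0.0, 0.01
        for _ in range(1000):
            y = rk4_step(lambda t, y: -1j * y, t, y, dt)
            t += dt
        print(abs(y))   # remains ~1 to high accuracy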

  20. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    NASA Astrophysics Data System (ADS)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.

  1. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to the third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in the curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  2. A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2002-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and nonuniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.

  3. A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and non-uniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.

  4. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model were employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time and as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
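    Power spectral densities of unsteady pressure-tap signals like those compared above are commonly estimated with Welch's method; the snippet below is a generic example on a synthetic signal and is not the study's actual post-processing (sampling rate, tone frequency, and segment length are assumptions).

        import numpy as np
        from scipy.signal import welch

        fs = 10_000.0                                  # assumed sampling rate, Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        pressure = np.sin(2 * np.pi * 250.0 * t) + 0.5 * np.random.randn(t.size)   # synthetic tap signal

        freqs, psd = welch(pressure, fs=fs, nperseg=1000)
        print(f"peak near {freqs[np.argmax(psd)]:.0f} Hz")   # ~250 Hz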

  5. Accurate time-of-flight measurement of particle based on ECL-TTL Timer

    NASA Astrophysics Data System (ADS)

    Li, Deping; Liu, Jianguo; Huang, Shuhua; Gui, Huaqiao; Cheng, Yin; Wang, Jie; Lu, Yihuai

    2014-11-01

    Depending on their aerodynamic diameter, aerosol particles are deposited in different parts of the human respiratory system and thus affect human health, so continuous and effective monitoring of aerosol particles has attracted increasing attention. Time-of-flight aerosol particle beam spectroscopy is the typical method for measuring atmospheric aerosol particle size distributions and particle concentrations, and accurate measurement of the particle flight time is the key to accurate particle size spectra. To achieve accurate measurements of aerosol particle time-of-flight, this paper presents an ECL-TTL high-speed timer that combines an ECL counter with a TTL counter. The high-speed timer comprises a clock generation module, a high-speed counting module, and a control module. The clock generation module uses a crystal oscillator with a frequency multiplier, taking advantage of the stability of the crystal to provide a stable 500 MHz clock signal for the high-speed counter. The counting module mixes ECL and TTL counters, which maintains timing accuracy while expanding the timing range and simplifying the circuit design. The control module starts, stops, and resets the high-speed counter according to the aerosol particle time-of-flight and is a key part of the high-speed counting. The timer achieves a counting resolution of 4 ns with a full scale of 4096 ns, and it has been successfully applied in an Aerodynamic Particle Sizer to meet the requirements of precise measurement of aerosol particle time-of-flight.
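
    As a rough illustration of how such a mixed-counter timer converts raw counts into a flight time, the Python sketch below assumes a hypothetical split in which the fast (ECL) counter holds the low-order bits and the slower (TTL) counter holds the high-order bits. The 4 ns resolution and 4096 ns full scale are taken from the record; the bit split, function name, and example values are illustrative assumptions only.

      # Hypothetical sketch: combine a coarse (TTL) and a fine (ECL) counter reading
      # into a particle time-of-flight. Resolution and full scale follow the record
      # (4 ns, 4096 ns); the 8-bit fine/coarse split below is purely illustrative.

      RESOLUTION_NS = 4        # one least-significant count = 4 ns (from the record)
      FINE_BITS = 8            # assumed width of the fast (ECL) counter
      FULL_SCALE_NS = 4096     # 1024 counts x 4 ns (from the record)

      def flight_time_ns(coarse_count: int, fine_count: int) -> int:
          """Combine coarse and fine counts into a time-of-flight in nanoseconds."""
          total_counts = (coarse_count << FINE_BITS) | fine_count
          t = total_counts * RESOLUTION_NS
          if t >= FULL_SCALE_NS:
              raise ValueError("count exceeds the 4096 ns full scale of the timer")
          return t

      # Example: coarse = 2, fine = 37  ->  (2*256 + 37) * 4 ns = 2196 ns
      print(flight_time_ns(2, 37))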

  6. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  7. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the underlying molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs obtained by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixed physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly lie in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, and in employing a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can serve as a useful supplementary tool to traditional experimental methods in future proteomics research. PMID:27571061

  8. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell function and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the underlying molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs obtained by experimental approaches cover only a small fraction of the whole PPI network, and those approaches have inherent disadvantages, such as being time-consuming and expensive and having high false positive rates. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel mixed physicochemical and evolutionary-based feature extraction method for predicting PPIs using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method mainly lie in introducing an effective feature extraction method that can capture discriminative features from evolutionary information and physicochemical characteristics, and in employing a powerful and robust DVM classifier. To the best of our knowledge, this is the first time that the DVM model has been applied to the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can serve as a useful supplementary tool to traditional experimental methods in future proteomics research. PMID:27571061

  10. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  11. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun D.; Liu, Nan-Suey

    1993-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.

  12. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics.

    PubMed

    Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R

    2011-08-15

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.

  13. An accurate assay for HCV based on real-time fluorescence detection of isothermal RNA amplification.

    PubMed

    Wu, Xuping; Wang, Jianfang; Song, Jinyun; Li, Jiayan; Yang, Yongfeng

    2016-09-01

    Hepatitis C virus (HCV) is one of the common causes of liver fibrosis and hepatocellular carcinoma (HCC). Early, rapid and accurate HCV RNA detection is important to prevent and control liver disease. A simultaneous amplification and testing (SAT) assay, which is based on isothermal amplification of RNA and real-time fluorescence detection, was designed to optimize routine HCV RNA detection. In this study, HCV RNA and an internal control (IC) were amplified and analyzed simultaneously by the SAT assay with fluorescence detection using routine real-time PCR equipment. The assay detected as few as 10 copies of HCV RNA transcripts. We tested 705 serum samples with SAT, among which 96.4% (680/705) showed consistent results compared with routine real-time PCR. About 92% (23/25) of the discordant samples were confirmed to give the same results as SAT-HCV when retested with a second real-time PCR assay. The sensitivity and specificity of the SAT-HCV assay were 99.6% (461/463) and 100% (242/242), respectively. In conclusion, the SAT assay is an accurate test with high specificity and sensitivity which may increase the detection rate of HCV. It is therefore a promising tool to diagnose HCV infection. PMID:27283884

  15. Accurate quantification of two key time points used in the determination of hydroxyl polyaluminum species by ferron timed spectrophotometry.

    PubMed

    Zhang, Jing; Yong, Xiaojing; Zhao, Dongyan; Shi, Qiuyi

    2015-01-01

    The content of mononuclear Al (Ala%) changes with its determination time (ta) under different dosages of Ferron (7-iodo-8-hydroxyquinoline-5-sulfonic acid, [Ferron]), and the change of Ala% with [Ferron] at different ta was systematically investigated for the first time. In this way, the most appropriate ta was found together with the optimal [Ferron]. In addition, the judgment of the platform (the flat or level portion) of the complete reaction on the absorption-time curve determined for hydroxyl polyaluminum solutions by Ferron timed spectrophotometry (the Ferron assay) was digitized for the first time. The time point (tb) at which the reaction between the medium polyaluminum species (Alb) and the Ferron reagent is complete depends on the extent of reaction and cannot be judged from the reaction time alone. With this criterion, tb was accurately determined and reduced to half of its original value, which improved the experimental efficiency significantly. The Ferron assay was thereby completely optimized.

  16. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machines' CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
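
    For readers unfamiliar with the predictor-corrector structure underlying the MacCormack family of schemes, the short Python sketch below advances the 1D linear advection equation u_t + a*u_x = 0 on a periodic grid with the classical second-order MacCormack scheme. It is only meant to illustrate the forward/backward differencing pattern; it is not the fourth-order-in-space 2-4 variant, and it contains none of the Jameson-type artificial viscosity discussed in the record.

      import numpy as np

      def maccormack_advection(u, a, dx, dt, nsteps):
          """Second-order MacCormack scheme for u_t + a*u_x = 0 on a periodic grid.
          Illustrative only; the 2-4 variant replaces the one-sided first differences
          below with fourth-order one-sided stencils."""
          lam = a * dt / dx
          for _ in range(nsteps):
              u_pred = u - lam * (np.roll(u, -1) - u)                       # predictor (forward)
              u = 0.5 * (u + u_pred - lam * (u_pred - np.roll(u_pred, 1)))  # corrector (backward)
          return u

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.exp(-200.0 * (x - 0.5) ** 2)        # smooth initial pulse
      u = maccormack_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.25 * (x[1] - x[0]), nsteps=100)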

  17. Time resolved diffuse optical spectroscopy with geometrically accurate models for bulk parameter recovery

    PubMed Central

    Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid

    2016-01-01

    A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137

  18. Nonlinear Aeroelastic Analysis Using a Time-Accurate Navier-Stokes Equations Solver

    NASA Technical Reports Server (NTRS)

    Kuruvila, Geojoe; Bartels, Robert E.; Hong, Moeljo S.; Bhatia, G.

    2007-01-01

    A method to simulate limit cycle oscillation (LCO) due to control surface freeplay using a modified CFL3D, a time-accurate Navier-Stokes computational fluid dynamics (CFD) analysis code with structural modeling capability, is presented. This approach can be used to analyze the aeroelastic response of aircraft with structural behavior characterized by nonlinearity in the force versus displacement curve. A limited validation of the method, using very low Mach number experimental data for a three-degrees-of-freedom (pitch/plunge/flap deflection) airfoil model with flap freeplay, is also presented.

  20. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
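
    A minimal Python/NumPy sketch of the image-differencing idea is given below: subtracting a temporal reference frame removes stationary systematic offsets (such as those caused by internal scattering), after which a per-pixel peak-to-peak amplitude of the remaining periodic signal can be estimated. The array shapes, names, and synthetic example are assumptions for illustration; the actual SR4000/Kinect processing pipeline (lens-distortion correction, calibration, and so on) is more involved.

      import numpy as np

      def periodic_deflection_amplitude(range_frames):
          """Per-pixel peak-to-peak deflection from a stack of range images.

          range_frames : array of shape (n_frames, height, width), range in metres.
          Differencing against the temporal mean removes static systematic offsets;
          the peak-to-peak value of the residual approximates the cyclic deflection.
          """
          reference = range_frames.mean(axis=0)      # static background and offsets
          residual = range_frames - reference        # time-varying part only
          return residual.max(axis=0) - residual.min(axis=0)

      # Synthetic example: a 1 mm sinusoidal deflection plus a little measurement noise.
      t = np.linspace(0.0, 4.0 * np.pi, 120)
      frames = 2.0 + 0.0005 * np.sin(t)[:, None, None] + 2e-5 * np.random.randn(120, 32, 32)
      print(periodic_deflection_amplitude(frames).mean())   # roughly 0.001 m peak-to-peak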

  1. Generation of accurate integral surfaces in time-dependent vector fields.

    PubMed

    Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I

    2008-01-01

    We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990

  2. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243
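
    The per-detector bias estimation can be viewed as a large, sparse linear problem: each coincidence pair measures the difference of two detector biases once the geometric time-of-flight has been subtracted. The hedged Python sketch below solves a toy version with ordinary least squares and a single gauge constraint; the method in the record instead uses an iterative algorithm together with TDC differential/integral nonlinearity calibration, which is not reproduced here.

      import numpy as np

      def estimate_detector_biases(pairs, measured_dt, n_detectors):
          """Least-squares estimate of per-detector time biases b such that, for each
          coincidence pair (i, j), measured_dt ~ b[i] - b[j] (geometric TOF already
          removed). An extra row pins the mean bias to zero (gauge freedom)."""
          A = np.zeros((len(pairs) + 1, n_detectors))
          y = np.zeros(len(pairs) + 1)
          for row, ((i, j), dt) in enumerate(zip(pairs, measured_dt)):
              A[row, i], A[row, j] = 1.0, -1.0
              y[row] = dt
          A[-1, :] = 1.0                      # sum of biases constrained to zero
          b, *_ = np.linalg.lstsq(A, y, rcond=None)
          return b

      # Toy example: 4 detectors with known biases (ps) and noisy pairwise differences.
      true_b = np.array([30.0, -10.0, 5.0, -25.0])
      pairs = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 3), (1, 2)]
      dt = [true_b[i] - true_b[j] + np.random.normal(0.0, 2.0) for i, j in pairs]
      print(estimate_detector_biases(pairs, dt, 4))          # close to true_b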

  3. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    PubMed

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of the service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. Response accuracy was computed as the number of accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of the stimulus.

  4. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  5. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  6. A time-accurate implicit method for chemically reacting flows at all Mach numbers

    NASA Technical Reports Server (NTRS)

    Withington, J. P.; Yang, V.; Shuen, J. S.

    1991-01-01

    The objective of this work is to develop a unified solution algorithm capable of treating time-accurate chemically reacting flows at all Mach numbers, ranging from molecular diffusion velocities to supersonic speeds. A rescaled pressure term is used in the momentum equation to circumvent the singular behavior of pressure at low Mach numbers. A dual time-stepping integration procedure is established. The system eigenvalues become well behaved and have the same order of magnitude, even in the very low Mach number regime. The computational efficiency for moderate and high speed flow is competitive with the conventional density-based scheme. The capabilities of the algorithm are demonstrated by applying it to selected model problems including nozzle flows and flame dynamics.
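
    To make the dual time-stepping idea concrete, the following minimal Python sketch (a scalar illustration only, not the authors' algorithm) advances the model equation du/dt = -u with implicit backward Euler in physical time, where each physical step is converged by iterating in a pseudo-time until the unsteady residual falls below a tolerance.

      def dual_time_step(u_n, dt, dtau=0.1, tol=1e-10, max_iters=10000):
          """One implicit physical step of du/dt = -u via pseudo-time iteration.
          Backward-Euler residual: R(u) = (u - u_n)/dt + u; iterate u <- u - dtau*R(u)
          until |R| < tol, which converges to the implicit solution u_n/(1 + dt)."""
          u = u_n
          for _ in range(max_iters):
              residual = (u - u_n) / dt + u
              if abs(residual) < tol:
                  break
              u -= dtau * residual
          return u

      u, dt = 1.0, 0.1
      for _ in range(10):                       # march to t = 1.0
          u = dual_time_step(u, dt)
      print(u, 1.0 / 1.1 ** 10)                 # matches the backward-Euler closed form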

  7. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  8. In-band asymmetry compensation for accurate time/phase transport over optical transport network.

    PubMed

    Siu, Sammy; Tseng, Wen-Hung; Hu, Hsiu-fang; Lin, Shinn-Yan; Liao, Chia-Shu; Lai, Yi-Liang

    2014-01-01

    The demands for precise time/phase synchronization have been increasing recently due to the requirements of next-generation telecommunication networks. This paper studies the issues that are relevant to distributing accurate time/phase over an optical transport network (OTN). Each node and link can introduce asymmetry, which affects the achievable time/phase accuracy over the network. In order to achieve better accuracy, protocol-level full timing support is used (e.g., a Telecom-Boundary clock). Due to chromatic dispersion, the use of different wavelengths in the two directions causes fiber link delay asymmetry. The analytical result indicates that this introduces a significant time error (i.e., phase offset) of up to 0.3397 ns/km in the C-band or 0.3943 ns/km in the L-band, depending on the wavelength spacing. With the scheme proposed in this paper, the fiber link delay asymmetry can be compensated using the mean fiber link delay estimated by the Telecom-Boundary clock, while the OTN control plane is responsible for processing the fiber link delay asymmetry to determine the asymmetry compensation in the timing chain.
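
    The size of the effect can be illustrated with simple arithmetic based on the per-kilometre figures quoted above: the uncompensated time error grows linearly with link length. The Python snippet below is only an arithmetic illustration using the record's values; the actual compensation in the paper is derived from the mean link delay estimated by the Telecom-Boundary clock and processed by the OTN control plane.

      # Per-kilometre time error (phase offset) due to chromatic-dispersion-induced
      # fiber link delay asymmetry, as quoted in the record for its wavelength spacing.
      TIME_ERROR_PER_KM_NS = {"C-band": 0.3397, "L-band": 0.3943}

      def asymmetry_time_error_ns(length_km, band="C-band"):
          """Uncompensated time error accumulated over a link of the given length."""
          return TIME_ERROR_PER_KM_NS[band] * length_km

      for km in (10, 50, 100):
          print(km, "km:", asymmetry_time_error_ns(km), "ns")
      # roughly 3.4 ns, 17 ns and 34 ns, which is already significant against
      # sub-100 ns time/phase accuracy targets, hence the need for compensation.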

  9. Retention Projection Enables Accurate Calculation of Liquid Chromatographic Retention Times Across Labs and Methods

    PubMed Central

    Abate-Pella, Daniel; Freund, Dana M.; Ma, Yan; Simón-Manso, Yamil; Hollender, Juliane; Broeckling, Corey D.; Huhman, David V.; Krokhin, Oleg V.; Stoll, Dwight R.; Hegeman, Adrian D.; Kind, Tobias; Fiehn, Oliver; Schymanski, Emma L.; Prenni, Jessica E.; Sumner, Lloyd W.; Boswell, Paul G.

    2015-01-01

    Identification of small molecules by liquid chromatography-mass spectrometry (LC-MS) can be greatly improved if the chromatographic retention information is used along with mass spectral information to narrow down the lists of candidates. Linear retention indexing remains the standard for sharing retention data across labs, but it is unreliable because it cannot properly account for differences in the experimental conditions used by various labs, even when the differences are relatively small and unintentional. On the other hand, an approach called “retention projection” properly accounts for many intentional differences in experimental conditions, and when combined with a “back-calculation” methodology described recently, it also accounts for unintentional differences. In this study, the accuracy of this methodology is compared with linear retention indexing across eight different labs. When each lab ran a test mixture under a range of multi-segment gradients and flow rates they selected independently, retention projections averaged 22-fold more accurate for uncharged compounds because they properly accounted for these intentional differences, which were more pronounced in steep gradients. When each lab ran the test mixture under nominally the same conditions, which is the ideal situation to reproduce linear retention indices, retention projections still averaged 2-fold more accurate because they properly accounted for many unintentional differences between the LC systems. To the best of our knowledge, this is the most successful study to date aiming to calculate (or even just to reproduce) LC gradient retention across labs, and it is the only study in which retention was reliably calculated under various multi-segment gradients and flow rates chosen independently by labs. PMID:26292625
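
    For context, a linear retention index places an analyte on a scale defined by a homologous series of standards by interpolating between the retention times of the two bracketing standards. The short Python function below shows this conventional interpolation (the indexing approach the record argues is unreliable across labs, not the retention-projection method it advocates); the carbon numbers and retention times in the example are made up.

      def linear_retention_index(t_x, t_before, t_after, n_before, n_after):
          """Linear retention index of an analyte eluting at t_x between two standards
          (e.g., successive homologues with n_before and n_after carbons) that elute
          at t_before and t_after; interpolation is linear in retention time."""
          frac = (t_x - t_before) / (t_after - t_before)
          return 100.0 * (n_before + frac * (n_after - n_before))

      # Hypothetical example: analyte at 7.80 min between C10 (7.20 min) and C12 (8.40 min).
      print(linear_retention_index(7.80, 7.20, 8.40, 10, 12))   # 1100.0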

  10. Rec-DCM-Eigen: Reconstructing a Less Parsimonious but More Accurate Tree in Shorter Time

    PubMed Central

    Kang, Seunghwa; Tang, Jijun; Schaeffer, Stephen W.; Bader, David A.

    2011-01-01

    Maximum parsimony (MP) methods aim to reconstruct the phylogeny of extant species by finding the most parsimonious evolutionary scenario using the species' genome data. MP methods are considered to be accurate, but they are also computationally expensive especially for a large number of species. Several disk-covering methods (DCMs), which decompose the input species to multiple overlapping subgroups (or disks), have been proposed to solve the problem in a divide-and-conquer way. We design a new DCM based on the spectral method and also develop the COGNAC (Comparing Orders of Genes using Novel Algorithms and high-performance Computers) software package. COGNAC uses the new DCM to reduce the phylogenetic tree search space and selects an output tree from the reduced search space based on the MP principle. We test the new DCM using gene order data and inversion distance. The new DCM not only reduces the number of candidate tree topologies but also excludes erroneous tree topologies which can be selected by original MP methods. Initial labeling of internal genomes affects the accuracy of MP methods using gene order data, and the new DCM enables more accurate initial labeling as well. COGNAC demonstrates superior accuracy as a consequence. We compare COGNAC with FastME and the combination of the state of the art DCM (Rec-I-DCM3) and GRAPPA. COGNAC clearly outperforms FastME in accuracy. COGNAC, using the new DCM, also reconstructs a much more accurate tree in significantly shorter time than GRAPPA with Rec-I-DCM3. PMID:21887219

  11. Rec-DCM-Eigen: reconstructing a less parsimonious but more accurate tree in shorter time.

    PubMed

    Kang, Seunghwa; Tang, Jijun; Schaeffer, Stephen W; Bader, David A

    2011-01-01

    Maximum parsimony (MP) methods aim to reconstruct the phylogeny of extant species by finding the most parsimonious evolutionary scenario using the species' genome data. MP methods are considered to be accurate, but they are also computationally expensive especially for a large number of species. Several disk-covering methods (DCMs), which decompose the input species to multiple overlapping subgroups (or disks), have been proposed to solve the problem in a divide-and-conquer way. We design a new DCM based on the spectral method and also develop the COGNAC (Comparing Orders of Genes using Novel Algorithms and high-performance Computers) software package. COGNAC uses the new DCM to reduce the phylogenetic tree search space and selects an output tree from the reduced search space based on the MP principle. We test the new DCM using gene order data and inversion distance. The new DCM not only reduces the number of candidate tree topologies but also excludes erroneous tree topologies which can be selected by original MP methods. Initial labeling of internal genomes affects the accuracy of MP methods using gene order data, and the new DCM enables more accurate initial labeling as well. COGNAC demonstrates superior accuracy as a consequence. We compare COGNAC with FastME and the combination of the state of the art DCM (Rec-I-DCM3) and GRAPPA. COGNAC clearly outperforms FastME in accuracy. COGNAC--using the new DCM--also reconstructs a much more accurate tree in significantly shorter time than GRAPPA with Rec-I-DCM3.

  12. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    PubMed

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. The identification by HPLC/TOF-MS is accomplished with the accurate mass (and the subsequently generated empirical formula) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectral information provided by the fragment ions, a segmented program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in complex sample analyses since they allow a high degree of specificity to be achieved, which is often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg.kg(-1) concentration range, with correlation coefficients >0.996. The recoveries at the tested concentrations of 1.0 mg.kg(-1) to 100 mg.kg(-1) are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg.kg(-1), which is far below the required maximum residue level (MRL) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
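
    The mass accuracy quoted above ("better than 3 ppm") is the relative deviation between the measured and theoretical exact masses. A small Python helper makes the calculation explicit; the example masses are illustrative only.

      def mass_error_ppm(measured_mz, theoretical_mz):
          """Relative mass error in parts per million."""
          return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

      # Illustrative example: benzoic acid [M-H]- has a theoretical m/z of about 121.0295;
      # a measured m/z of 121.0298 corresponds to an error of roughly +2.5 ppm.
      print(round(mass_error_ppm(121.0298, 121.0295), 1))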

  13. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  14. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  15. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  16. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary determines that...

  17. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  18. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary determines that...

  19. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  20. [Fast and accurate extraction of ring-down time in cavity ring-down spectroscopy].

    PubMed

    Wang, Dan; Hu, Ren-Zhi; Xie, Pin-Hua; Qin, Min; Ling, Liu-Yi; Duan, Jun

    2014-10-01

    Research is conducted on accurate and efficient algorithms for extracting the ring-down time (τ) in cavity ring-down spectroscopy (CRDS), which is used to measure the NO3 radical in the atmosphere. Fast and accurate extraction of the ring-down time guarantees more precise and faster measurement. In this research, five commonly used algorithms are selected to extract the ring-down time: the fast Fourier transform (FFT) algorithm, the discrete Fourier transform (DFT) algorithm, the linear regression of the sum (LRS) algorithm, the Levenberg-Marquardt (LM) algorithm and the least squares (LS) algorithm. Simulated ring-down signals with various amplitude levels of white noise are fitted using the five algorithms, and the fitting results are compared and analyzed in four respects: vulnerability to noise, accuracy and precision of the fit, speed of the fit, and the preferred length of the fitted ring-down waveform. The results show that the Levenberg-Marquardt algorithm and the linear regression of the sum algorithm provide more precise results and higher noise immunity, although the fitting speed of the Levenberg-Marquardt algorithm is slower. In addition, from the analysis of simulated ring-down signals, a waveform length of five to ten ring-down times is selected as the best fitting length, because the standard deviation of the fitting results of all five algorithms is then at its minimum. An externally modulated diode laser and a cavity consisting of two high-reflectivity mirrors were used to construct a cavity ring-down spectroscopy detection system. Under our experimental conditions, in which the noise level is 0.2%, the linear regression of the sum algorithm and the Levenberg-Marquardt algorithm were selected to process the experimental data. The experimental results show that the accuracy and precision of linear regression of
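
    As a minimal illustration of the quantity these algorithms estimate, the Python sketch below fits a simulated ring-down trace y(t) = A*exp(-t/tau) + offset by nonlinear least squares using SciPy's curve_fit (which defaults to a Levenberg-Marquardt solver for unbounded problems). It is not an implementation of the LRS algorithm or of the exact procedure in the paper; the noise level and the waveform length of about seven ring-down times (consistent with the 5-10 tau recommendation above) are chosen for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def ringdown(t, amplitude, tau, offset):
          """Single-exponential ring-down model."""
          return amplitude * np.exp(-t / tau) + offset

      # Simulated trace: tau = 2.5 us, ~0.2% noise, waveform length of about 7*tau.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 17.5e-6, 2000)
      y = ringdown(t, 1.0, 2.5e-6, 0.01) + 0.002 * rng.standard_normal(t.size)

      popt, _ = curve_fit(ringdown, t, y, p0=(1.0, 2.0e-6, 0.0))
      print("fitted ring-down time:", popt[1])    # close to 2.5e-6 s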

  1. Time-accurate Navier-Stokes computations of classical two-dimensional edge tone flow fields

    NASA Technical Reports Server (NTRS)

    Liu, B. L.; O'Farrell, J. M.; Jones, Jess H.

    1990-01-01

    Time-accurate Navier-Stokes computations were performed to study a Class II (acoustic) whistle, the edge tone, and gain knowledge of the vortex-acoustic coupling mechanisms driving production of these tones. Results were obtained by solving the full Navier-Stokes equations for laminar compressible air flow of a two-dimensional jet issuing from a slit interacting with a wedge. Cases considered were determined by varying the distance from the slit to the edge. Flow speed was kept constant at 1750 cm/sec as was the slit thickness of 0.1 cm, corresponding to conditions in the experiments of Brown. Excellent agreement was obtained in all four edge tone stage cases between the present computational results and the experimentally obtained results of Brown. Specific edge tone generated phenomena and further confirmation of certain theories concerning these phenomena were brought to light in this analytical simulation of edge tones.

  2. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  3. Accurate mass - time tag library for LC/MS-based metabolite profiling of medicinal plants

    PubMed Central

    Cuthbertson, Daniel J.; Johnson, Sean R.; Piljac-Žegarac, Jasenka; Kappel, Julia; Schäfer, Sarah; Wüst, Matthias; Ketchum, Raymond E. B.; Croteau, Rodney B.; Marques, Joaquim V.; Davin, Laurence B.; Lewis, Norman G.; Rolf, Megan; Kutchan, Toni M.; Soejarto, D. Doel; Lange, B. Markus

    2013-01-01

    We report the development and testing of an accurate mass – time (AMT) tag approach for the LC/MS-based identification of plant natural products (PNPs) in complex extracts. An AMT tag library was developed for approximately 500 PNPs with diverse chemical structures, detected in electrospray and atmospheric pressure chemical ionization modes (both positive and negative polarities). In addition, to enable peak annotations with high confidence, MS/MS spectra were acquired with three different fragmentation energies. The LC/MS and MS/MS data sets were integrated into online spectral search tools and repositories (Spektraris and MassBank), thus allowing users to interrogate their own data sets for the potential presence of PNPs. The utility of the AMT tag library approach is demonstrated by the detection and annotation of active principles in 27 different medicinal plant species with diverse chemical constituents. PMID:23597491

  4. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  5. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
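
    The refinement sensor mentioned above can be illustrated in a few lines of Python: cells where the undivided second difference of a solution variable exceeds a threshold are flagged for the next finer Cartesian level. This is only a 1D sketch of the criterion, not OVERFLOW's implementation; the threshold, the flagged variable, and the example field are assumptions.

      import numpy as np

      def flag_for_refinement(q, threshold):
          """Flag interior cells whose undivided second difference
          |q[i-1] - 2*q[i] + q[i+1]| exceeds the threshold (1D sketch of an
          off-body adaption sensor)."""
          flags = np.zeros(q.shape, dtype=bool)
          second_diff = np.abs(q[:-2] - 2.0 * q[1:-1] + q[2:])
          flags[1:-1] = second_diff > threshold
          return flags

      x = np.linspace(-1.0, 1.0, 101)
      q = np.tanh(20.0 * x)                             # a sharp layer near x = 0
      print(np.where(flag_for_refinement(q, 0.05))[0])  # indices cluster around the layer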

  6. Using Focused Regression for Accurate Time-Constrained Scaling of Scientific Applications

    SciTech Connect

    Barnes, B; Garren, J; Lowenthal, D; Reeves, J; de Supinski, B; Schulz, M; Rountree, B

    2010-01-28

    Many large-scale clusters now have hundreds of thousands of processors, and processor counts will be over one million within a few years. Computational scientists must scale their applications to exploit these new clusters. Time-constrained scaling, which is often used, tries to hold total execution time constant while increasing the problem size along with the processor count. However, complex interactions between parameters, the processor count, and execution time complicate determining the input parameters that achieve this goal. In this paper we develop a novel gray-box, focused regression-based approach that assists the computational scientist with maintaining constant run time on increasing processor counts. Combining application-level information from a small set of training runs, our approach allows prediction of the input parameters that result in similar per-processor execution time at larger scales. Our experimental validation across seven applications showed that median prediction errors are less than 13%.

  7. Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.

    2015-12-01

    Accurately predicting the arrival of coronal mass ejections (CMEs) at the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit the transit time as a function of the initial speed. By adopting different drag regimes, i.e., the viscous, aerodynamic, and hybrid regimes, we obtain similar results, with the hybrid model giving the smallest mean estimation error of 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag. The drag force model cannot be reliably applied to such events. If we exclude these events from the sample, the prediction accuracy can be improved, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs at the Earth based on the initial parameters with fairly good accuracy. Thus, it provides a method of forecasting space weather 1-5 days following the occurrence of CMEs.
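
    The drag force model referred to above is commonly written as an equation of motion in which the CME deceleration is proportional to the square of its speed relative to the ambient solar wind. The Python sketch below integrates a simple drag-based model from 20 solar radii to 1 au to obtain a transit time; the drag parameter, solar-wind speed, initial speed, and starting distance are illustrative assumptions, not the fitted values of the study.

      # Simple drag-based model: dv/dt = -gamma * (v - w) * |v - w|, integrated with
      # forward Euler from 20 solar radii to 1 au. All parameter values are assumed.
      AU_KM = 1.496e8
      RSUN_KM = 6.957e5

      def transit_time_hours(v0_kms, w_kms=400.0, gamma_per_km=2e-8,
                             r0_km=20.0 * RSUN_KM, dt_s=60.0):
          """Transit time (hours) of a CME launched at speed v0_kms from r0_km."""
          r, v, t = r0_km, v0_kms, 0.0
          while r < AU_KM:
              dv = -gamma_per_km * (v - w_kms) * abs(v - w_kms)   # drag term, km/s^2
              v += dv * dt_s
              r += v * dt_s
              t += dt_s
          return t / 3600.0

      # A fast CME decelerates toward the solar-wind speed on its way to 1 au.
      print(round(transit_time_hours(1000.0), 1))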

  8. Multiclass semi-volatile compounds determination in wine by gas chromatography accurate time-of-flight mass spectrometry.

    PubMed

    Rodríguez-Cabo, T; Rodríguez, I; Ramil, M; Silva, A; Cela, R

    2016-04-15

    The performance of gas chromatography (GC) with accurate, high resolution mass spectrometry (HRMS) for the determination of a group of 39 semi-volatile compounds related to wine quality (pesticide residues, phenolic off-flavours, phenolic pollutants and bioactive stilbenes) is investigated. Solid-phase extraction (SPE) was used as the extraction technique, prior to acetylation (for the phenolic compounds) and dispersive liquid-liquid microextraction (DLLME) concentration. Compounds were determined by GC coupled to a quadrupole time-of-flight (QTOF) MS system through an electron ionization (EI) source. The final method attained limits of quantification (LOQs) at the very low ng mL(-1) level, covering the range of expected concentrations for the target compounds in red and white wines. For 38 out of 39 compounds, the performance of the sample preparation and determination steps was hardly affected by the wine matrix; thus, accurate recoveries were achieved by using pseudo-external calibration. Levels of the target compounds in a set of 25 wine samples are reported. The capabilities of the described approach for the post-run identification of species not considered during method development, without retention time information, are illustrated and discussed with selected examples of compounds from different classes.

  9. Life, Information, Entropy, and Time

    PubMed Central

    Crofts, Antony R.

    2008-01-01

    Attempts to understand how information content can be included in an accounting of the energy flux of the biosphere have led to the conclusion that, in information transmission, one component, the semantic content, or “the meaning of the message,” adds no thermodynamic burden over and above costs arising from coding, transmission and translation. In biology, semantic content has two major roles. For all life forms, the message of the genotype encoded in DNA specifies the phenotype, and hence the organism that is tested against the real world through the mechanisms of Darwinian evolution. For human beings, communication through language and similar abstractions provides an additional supra-phenotypic vehicle for semantic inheritance, which supports the cultural heritages around which civilizations revolve. The following three postulates provide the basis for discussion of a number of themes that demonstrate some important consequences. (i) Information transmission through either pathway has thermodynamic components associated with data storage and transmission. (ii) The semantic content adds no additional thermodynamic cost. (iii) For all semantic exchange, meaning is accessible only through translation and interpretation, and has a value only in context. (1) For both pathways of semantic inheritance, translational and copying machineries are imperfect. As a consequence both pathways are subject to mutation and to evolutionary pressure by selection. Recognition of semantic content as a common component allows an understanding of the relationship between genes and memes, and a reformulation of Universal Darwinism. (2) The emergent properties of life are dependent on a processing of semantic content. The translational steps allow amplification in complexity through combinatorial possibilities in space and time. Amplification depends on the increased potential for complexity opened by 3D interaction specificity of proteins, and on the selection of useful variants

  11. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
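
    As a point of reference for the estimators compared above, the sketch below computes a mean first-passage time directly from a state-label time series by averaging first-passage durations, without building a Markov state model. This is only the simplest direct estimator, not the paper's full non-Markovian analysis; the trajectory and lag parameters are synthetic.

```python
import numpy as np

def mfpt(labels, source, target, dt):
    """Estimate the MFPT source -> target from a state-label series sampled every dt."""
    passages, start = [], None
    for i, s in enumerate(labels):
        if start is None and s == source:
            start = i                     # clock starts on entering the source state
        elif start is not None and s == target:
            passages.append((i - start) * dt)
            start = None                  # wait for the next visit to the source
    return np.mean(passages) if passages else float("nan")

# Toy two-state trajectory: 0 = "unfolded", 1 = "folded", one frame every 0.2 ns.
rng = np.random.default_rng(0)
traj = (rng.random(200_000) < 0.02).astype(int)   # replace with real MD state labels
print(f"MFPT(unfolded -> folded) ~ {mfpt(traj, source=0, target=1, dt=0.2):.1f} ns")
```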

  12. Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.

    PubMed

    Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki

    2012-12-01

    Microbial colonies in food matrices could be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments was trans-illuminated to project their 2-dimensional (2D) shadow images on a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised single or multiple colonies or a meat fragment. The analytical precision was high enough to be able to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies contacting each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is its outstanding performance in comparison with other systems. The present system attained accuracy for counting fewer than 5 colonies and is therefore of practical importance.

  13. Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes

    PubMed Central

    Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
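
    For context, a minimal sketch of the classical cross-correlation time-of-flight estimate that the paper compares against: correlate the received waveform with the emitted burst and convert the lag of the correlation peak to a time of flight. The sample rate, burst shape, and noise level below are made-up values.

```python
import numpy as np

fs = 1.0e6                      # sample rate, Hz (assumed)
f0 = 40.0e3                     # transducer resonance, Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)

# Emitted burst: a few cycles at f0 with a smooth envelope.
burst = np.sin(2 * np.pi * f0 * t[:400]) * np.hanning(400)

# Received signal: an attenuated echo delayed by the true TOF, plus noise.
true_tof = 0.8e-3
rx = np.zeros_like(t)
i0 = int(true_tof * fs)
rx[i0:i0 + burst.size] += 0.3 * burst
rx += 0.02 * np.random.default_rng(1).standard_normal(rx.size)

# Cross-correlate and convert the peak lag to a time of flight.
corr = np.correlate(rx, burst, mode="full")
lag = corr.argmax() - (burst.size - 1)
print(f"estimated TOF = {lag / fs * 1e3:.3f} ms (true {true_tof * 1e3:.3f} ms)")
```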

  15. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks.

    PubMed

    Zhang, Xiujun; Zhao, Juan; Hao, Jin-Kao; Zhao, Xing-Ming; Chen, Luonan

    2015-03-11

    Mutual information (MI), a quantity describing the nonlinear dependence between two random variables, has been widely used to construct gene regulatory networks (GRNs). Despite its good performance, MI cannot separate the direct regulations from indirect ones among genes. Although the conditional mutual information (CMI) is able to identify the direct regulations, it generally underestimates the regulation strength, i.e. it may result in false negatives when inferring gene regulations. In this work, to overcome the problems, we propose a novel concept, namely conditional mutual inclusive information (CMI2), to describe the regulations between genes. Furthermore, with CMI2, we develop a new approach, namely CMI2NI (CMI2-based network inference), for reverse-engineering GRNs. In CMI2NI, CMI2 is used to quantify the mutual information between two genes given a third one through calculating the Kullback-Leibler divergence between the postulated distributions of including and excluding the edge between the two genes. The benchmark results on the GRNs from DREAM challenge as well as the SOS DNA repair network in Escherichia coli demonstrate the superior performance of CMI2NI. Specifically, even for gene expression data with small sample size, CMI2NI can not only infer the correct topology of the regulation networks but also accurately quantify the regulation strength between genes. As a case study, CMI2NI was also used to reconstruct cancer-specific GRNs using gene expression data from The Cancer Genome Atlas (TCGA). CMI2NI is freely accessible at http://www.comp-sysbio.org/cmi2ni.
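
    As a baseline for the quantity CMI2NI sets out to improve upon, the sketch below estimates ordinary conditional mutual information under a Gaussian assumption, where CMI(X;Y|Z) = 0.5 * ln(|C_XZ| * |C_YZ| / (|C_Z| * |C_XYZ|)). The toy regulatory example is invented; this is not the CMI2 statistic itself.

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """CMI(X;Y|Z) for jointly Gaussian variables; x, y are 1D, z is (n, k)."""
    z = np.atleast_2d(z.T).T
    def logdet_cov(*cols):
        c = np.atleast_2d(np.cov(np.column_stack(cols), rowvar=False))
        return np.linalg.slogdet(c)[1]
    return 0.5 * (logdet_cov(x, z) + logdet_cov(y, z)
                  - logdet_cov(z) - logdet_cov(x, y, z))

# Toy example: x directly regulates y, while z is a common upstream driver.
rng = np.random.default_rng(2)
z = rng.standard_normal((2000, 1))
x = 0.8 * z[:, 0] + 0.5 * rng.standard_normal(2000)
y = 0.7 * x + 0.5 * rng.standard_normal(2000)
print(f"CMI(x;y|z) = {gaussian_cmi(x, y, z):.3f}")   # clearly > 0: a direct edge
```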

  16. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  17. Invited article: Time accurate mass flow measurements of solid-fueled systems.

    PubMed

    Olliges, Jordan D; Lilly, Taylor C; Joslyn, Thomas B; Ketsdever, Andrew D

    2008-10-01

    A novel diagnostic method is described that utilizes a thrust stand mass balance (TSMB) to directly measure time-accurate mass flow from a solid-fuel thruster. The accuracy of the TSMB mass flow measurement technique was demonstrated in three ways: through an idealized numerical simulation, by verifying a fluid mass calibration with high-speed digital photography, and by measuring mass loss in more than 30 hybrid rocket motor firings. Dynamic response of the mass balance was assessed through weight calibration and used to derive spring, damping, and mass moment of inertia coefficients for the TSMB. These dynamic coefficients were used to determine the mass flow rate and total mass loss within an acrylic and gaseous oxygen hybrid rocket motor firing. Intentional variations in the oxygen flow rate resulted in corresponding variations in the total propellant mass flow as expected. The TSMB was optimized to determine mass losses of up to 2.5 g and measured total mass loss to within 2.5% of that calculated by a NIST-calibrated digital scale. Using this method, a mass flow resolution of 0.0011 g/s or 2% of the average mass flow in this study has been achieved.

  18. A Statistical Method for Assessing Peptide Identification Confidence in Accurate Mass and Time Tag Proteomics

    SciTech Connect

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-07-15

    High-throughput proteomics is rapidly evolving to require high mass measurement accuracy for a variety of different applications. Increased mass measurement accuracy in bottom-up proteomics specifically allows for an improved ability to distinguish and characterize detected MS features, which may in turn be identified by, e.g., matching to entries in a database for both precursor and fragmentation mass identification methods. Many tools exist with which to score the identification of peptides from LC-MS/MS measurements or to assess matches to an accurate mass and time (AMT) tag database, but these two calculations remain distinctly unrelated. Here we present a statistical method, Statistical Tools for AMT tag Confidence (STAC), which extends our previous work incorporating prior probabilities of correct sequence identification from LC-MS/MS, as well as the quality with which LC-MS features match AMT tags, to evaluate peptide identification confidence. Compared to existing tools, we are able to obtain significantly more high-confidence peptide identifications at a given false discovery rate and additionally assign confidence estimates to individual peptide identifications. Freely available software implementations of STAC are available in both command line and as a Windows graphical application.

  19. Accurate non-adiabatic quantum dynamics from pseudospectral sampling of time-dependent Gaussian basis sets

    NASA Astrophysics Data System (ADS)

    Heaps, Charles W.; Mazziotti, David A.

    2016-08-01

    Quantum molecular dynamics requires an accurate representation of the molecular potential energy surface from a minimal number of electronic structure calculations, particularly for nonadiabatic dynamics where excited states are required. In this paper, we employ pseudospectral sampling of time-dependent Gaussian basis functions for the simulation of non-adiabatic dynamics. Unlike other methods, the pseudospectral Gaussian molecular dynamics tests the Schrödinger equation with N Dirac delta functions located at the centers of the Gaussian functions reducing the scaling of potential energy evaluations from O ( N 2 ) to O ( N ) . By projecting the Gaussian basis onto discrete points in space, the method is capable of efficiently and quantitatively describing the nonadiabatic population transfer and intra-surface quantum coherence. We investigate three model systems: the photodissociation of three coupled Morse oscillators, the bound state dynamics of two coupled Morse oscillators, and a two-dimensional model for collinear triatomic vibrational dynamics. In all cases, the pseudospectral Gaussian method is in quantitative agreement with numerically exact calculations. The results are promising for nonadiabatic molecular dynamics in molecular systems where strongly correlated ground or excited states require expensive electronic structure calculations.

  20. Accuracy in Recalling Interest Inventory Information at Three Time Intervals

    ERIC Educational Resources Information Center

    Swanson, Jane L.; Gore, Paul A., Jr.; Leuwerke, Wade; D'Achiardi, Catalina; Edwards, Jorie Hitch; Edwards, Jared

    2006-01-01

    Rates of accurate recall of the Strong Interest Inventory (SII; L. W. Harmon, J. C. Hansen, F. H. Borgen, & A. L. Hammer, 1994) profile information varied with the amount of time elapsed since the interpretation, the type of SII scale, and whether immediate recall was elicited, but rates did not vary with the strategy used to provide the…

  1. PACE: Pattern Accurate Computationally Efficient Bootstrapping for Timely Discovery of Cyber-Security Concepts

    SciTech Connect

    McNeil, Nikki C; Bridges, Robert A; Iannacone, Michael D; Czejdo, Bogdan; Perez, Nicolas E; Goodall, John R

    2013-01-01

    Public disclosure of important security information, such as knowledge of vulnerabilities or exploits, often occurs in blogs, tweets, mailing lists, and other online sources significantly before proper classification into structured databases. In order to facilitate timely discovery of such knowledge, we propose a novel semi-supervised learning algorithm, PACE, for identifying and classifying relevant entities in text sources. The main contribution of this paper is an enhancement of the traditional bootstrapping method for entity extraction by employing a time-memory trade-off that simultaneously circumvents a costly corpus search while strengthening pattern nomination, which should increase accuracy. An implementation in the cyber-security domain is discussed as well as challenges to Natural Language Processing imposed by the security domain.
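
    A highly simplified sketch of the general pattern-based bootstrapping family that PACE builds on (not the PACE algorithm itself): seed entities nominate surrounding word-context patterns, and those patterns nominate new candidate entities. The corpus, seeds, and scoring below are toy examples.

```python
import re
from collections import Counter

corpus = [
    "A buffer overflow in libfoo allows remote code execution.",
    "A heap overflow in libbar was patched yesterday.",
    "A use-after-free in libfoo leads to denial of service.",
]
seeds = {"buffer overflow", "use-after-free"}

# 1) Patterns = (left word, right word) contexts observed around seed entities.
patterns = Counter()
for sent in corpus:
    low = sent.lower()
    for ent in seeds:
        for m in re.finditer(re.escape(ent), low):
            left = tuple(low[: m.start()].split()[-1:])
            right = tuple(low[m.end():].split()[:1])
            patterns[(left, right)] += 1

# 2) Candidates = short spans that occur in the same contexts as the seeds.
candidates = Counter()
for sent in corpus:
    words = sent.lower().rstrip(".").split()
    for i in range(len(words)):
        for j in range(i + 1, min(i + 4, len(words)) + 1):
            left, right = tuple(words[max(i - 1, 0):i]), tuple(words[j:j + 1])
            span = " ".join(words[i:j])
            if (left, right) in patterns and span not in seeds:
                candidates[span] += patterns[(left, right)]

print(candidates.most_common(3))   # e.g. "heap overflow" should rank highly
```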

  2. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck resulting in upward flow, which is a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  3. Providing accurate near real-time fire alerts for Protected Areas through NASA FIRMS: Opportunities and Challenges

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Wong, M.; Murphy, K. J.

    2013-12-01

    The NASA Fire Information for Resource Management System (FIRMS) is at the forefront of providing global near real-time (NRT) MODIS thermal anomalies/hotspot location data to end-users. FIRMS serves the data via an interactive Web GIS named Web Fire Mapper, downloads of NRT active fire data, archive downloads of MODIS hotspots dating back to 1999, and a hotspot email alert system. The FIRMS Email Alerts system has been successfully alerting users of fires in their areas of interest in near real-time and/or via daily and weekly email summaries, with an option to receive MODIS hotspot data as a text file (CSV) attachment. Currently, there are more than 7000 email alert subscriptions from more than 100 countries. Specifically, the email alerts system is designed to generate and send an email alert for any region or area on the globe, with a special focus on providing alerts for protected areas worldwide. For many protected areas, email alerts are particularly useful for early fire detection, monitoring ongoing fires, and allocating resources to protect wildlife and natural resources of particular value. For protected areas, FIRMS uses the World Database on Protected Areas (WDPA) supplied by the United Nations Environment Program - World Conservation Monitoring Centre (UNEP-WCMC). Maintaining the most up-to-date, accurate boundary geometry for the protected areas for the email alerts is a challenge, as the WDPA is continuously updated due to changing boundaries and the merging or delisting of certain protected areas. Because of this dynamic nature, the FIRMS protected areas database is frequently out of date with respect to the most current version of the WDPA. To maintain the most up-to-date boundary information for protected areas and to be in compliance with the WDPA terms and conditions, FIRMS needs to constantly update its database of protected areas. Currently, FIRMS strives to keep its database up to date by downloading the most recent

  4. TWO ACCURATE TIME-DELAY DISTANCES FROM STRONG LENSING: IMPLICATIONS FOR COSMOLOGY

    SciTech Connect

    Suyu, S. H.; Treu, T.; Auger, M. W.; Hilbert, S.; Blandford, R. D.; Marshall, P. J.; Tewes, M.; Courbin, F.; Meylan, G.; Fassnacht, C. D.; Koopmans, L. V. E.; Sluse, D.

    2013-04-01

    Strong gravitational lenses with measured time delays between the multiple images and models of the lens mass distribution allow a one-step determination of the time-delay distance, and thus a measure of cosmological parameters. We present a blind analysis of the gravitational lens RXJ1131-1231 incorporating (1) the newly measured time delays from COSMOGRAIL, the COSmological MOnitoring of GRAvItational Lenses, (2) archival Hubble Space Telescope imaging of the lens system, (3) a new velocity-dispersion measurement of the lens galaxy of 323 ± 20 km s⁻¹ based on Keck spectroscopy, and (4) a characterization of the line-of-sight structures via observations of the lens' environment and ray tracing through the Millennium Simulation. Our blind analysis is designed to prevent experimenter bias. The joint analysis of the data sets allows a time-delay distance measurement to 6% precision that takes into account all known systematic uncertainties. In combination with the Wilkinson Microwave Anisotropy Probe seven-year (WMAP7) data set in flat wCDM cosmology, our unblinded cosmological constraints for RXJ1131-1231 are H_0 = 80.0 (+5.8/-5.7) km s⁻¹ Mpc⁻¹, Ω_de = 0.79 ± 0.03, and w = -1.25 (+0.17/-0.21). We find the results to be statistically consistent with those from the analysis of the gravitational lens B1608+656, permitting us to combine the inferences from these two lenses. The joint constraints from the two lenses and WMAP7 are H_0 = 75.2 (+4.4/-4.2) km s⁻¹ Mpc⁻¹, Ω_de = 0.76 (+0.02/-0.03), and w = -1.14 (+0.17/-0.20) in flat wCDM, and H_0 = 73.1 (+2.4/-3.6) km s⁻¹ Mpc⁻¹, Ω_Λ = 0.75 (+0.01/-0.02), and Ω_k = 0.003 (+0.005/-0.006) in open ΛCDM. Time-delay lenses constrain especially tightly the Hubble constant H_0 (5.7% and 4.0%, respectively, in wCDM and open ΛCDM) and curvature of the

  5. Honey bees can perform accurately directed waggle dances based solely on information from a homeward trip.

    PubMed

    Edrich, Wolfgang

    2015-10-01

    Honey bees were displaced several hundred meters from their hive to an unfamiliar site and provisioned with honey. After feeding, almost two-thirds of the bees flew home to their hive within a 50 min observation time. About half of these returning bees signalled the direction of the release site in waggle dances, thus demonstrating that the dance can be guided entirely by information gathered on a single homeward trip. The likely reason for the bees' enthusiastic dancing on their initial return from this new site was the highly rewarding honeycomb that they were given there. The attractive nature of the site is confirmed by many of these bees revisiting the site and continuing to forage there.

  6. Information-time based futures pricing

    NASA Astrophysics Data System (ADS)

    Yen, Simon; Wang, Jai Jen

    2009-09-01

    This study follows Clark [P.K. Clark, A subordinated stochastic process model with finite variance for speculative prices, Econometrica 41 (1973) 135-155] and Chang, Chang and Lim [C.W. Chang, S.K. Chang, K.G. Lim, Information-time option pricing: Theory and empirical evidence, Journal of Financial Economics 48 (1998) 211-242] to subordinate an information-time based directing process into calendar-time based parent processes. A closed-form futures pricing formula is derived after taking into account the information-time setting and the stochasticity of the spot price, interest rate, and convenience yield. According to the empirical results on the TAIEX and TFETX data from 1998/7/21 to 2003/12/31, the information-time based model performs better than its calendar-time based counterpart and the cost of carry model, especially when the information arrival intensity estimates become larger.
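
    For reference, the calendar-time benchmark mentioned above, the cost-of-carry model, prices a futures contract as F = S * exp((r - y) * T), with spot price S, risk-free rate r, convenience yield y, and time to maturity T. The numbers below are purely illustrative; the paper's information-time model generalizes this by letting prices evolve in information time rather than calendar time.

```python
import math

def cost_of_carry_futures(spot, r, convenience_yield, t_years):
    """Classical cost-of-carry futures price F = S * exp((r - y) * T)."""
    return spot * math.exp((r - convenience_yield) * t_years)

print(cost_of_carry_futures(spot=7200.0, r=0.02, convenience_yield=0.005, t_years=0.25))
```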

  7. Quantitative real-time PCR for rapid and accurate titration of recombinant baculovirus particles.

    PubMed

    Hitchman, Richard B; Siaterli, Evangelia A; Nixon, Clare P; King, Linda A

    2007-03-01

    We describe the use of quantitative PCR (QPCR) to titer recombinant baculoviruses. Custom primers and probe were designed to gp64 and used to calculate a standard curve of QPCR derived titers from dilutions of a previously titrated baculovirus stock. Each dilution was titrated by both plaque assay and QPCR, producing a consistent and reproducible inverse relationship between C(T) and plaque forming units per milliliter. No significant difference was observed between titers produced by QPCR and plaque assay for 12 recombinant viruses, confirming the validity of this technique as a rapid and accurate method of baculovirus titration.
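
    A sketch of the standard-curve logic described above: fit the inverse linear relationship between threshold cycle C(T) and log10(titer) using a serially diluted, plaque-assay-titrated stock, then read unknown titers off the fit. The C(T) values below are invented for illustration.

```python
import numpy as np

known_titer = np.array([1e8, 1e7, 1e6, 1e5, 1e4])   # pfu/mL from plaque assay
ct = np.array([14.1, 17.6, 21.0, 24.5, 27.9])       # measured threshold cycles

slope, intercept = np.polyfit(np.log10(known_titer), ct, 1)
print(f"fit: CT = {slope:.2f} * log10(titer) + {intercept:.2f}")

def titer_from_ct(ct_value):
    """Invert the standard curve to estimate pfu/mL for an unknown sample."""
    return 10 ** ((ct_value - intercept) / slope)

print(f"sample with CT = 19.3  ->  ~{titer_from_ct(19.3):.2e} pfu/mL")
```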

  8. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST k-ω turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, this model was not only used to investigate the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model. Furthermore, turbine stage efficiency and effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model results is presented. It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give

  9. Robust and accurate anomaly detection in ECG artifacts using time series motif discovery.

    PubMed

    Sivaraks, Haemwaan; Ratanamahatana, Chotirat Ann

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods.
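
    A rough sketch of the generic discord-style anomaly idea underlying motif discovery on a 1D signal: the window whose nearest non-overlapping neighbour is farthest away is flagged as the anomaly. This is only the generic technique, not the paper's cardiologist-informed algorithm, and the signal below is synthetic.

```python
import numpy as np

def znorm(w):
    return (w - w.mean()) / (w.std() + 1e-8)

def top_discord(signal, m):
    """Start index of the window with the largest nearest-neighbour distance."""
    n = len(signal) - m + 1
    windows = np.array([znorm(signal[i:i + m]) for i in range(n)])
    best_idx, best_dist = -1, -1.0
    for i in range(n):
        d = np.linalg.norm(windows - windows[i], axis=1)
        d[max(0, i - m):i + m] = np.inf   # exclude trivially overlapping matches
        if d.min() > best_dist:
            best_idx, best_dist = i, d.min()
    return best_idx, best_dist

# Synthetic "ECG": a regular rhythm with one distorted beat around sample 640.
t = np.arange(1000)
sig = np.sin(2 * np.pi * t / 40.0)
sig[640:680] *= 0.2                       # the anomalous beat
idx, dist = top_discord(sig, m=40)
print(f"discord starts near sample {idx} (distance {dist:.2f})")
```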

  10. Accurate real-time depth control for CP-SSOCT distal sensor based handheld microsurgery tools

    PubMed Central

    Cheon, Gyeong Woo; Huang, Yong; Cha, Jaepyeng; Gehlbach, Peter L.; Kang, Jin U.

    2015-01-01

    This paper presents a novel intuitive targeting and tracking scheme that utilizes a common-path swept source optical coherence tomography (CP-SSOCT) distal sensor integrated handheld microsurgical tool. To achieve micron-order precision control, a reliable and accurate OCT distal sensing method is required; simultaneously, a prediction algorithm is necessary to compensate for the system delay associated with the computational, mechanical and electronic latencies. Due to the multi-layered structure of retina, it is necessary to develop effective surface detection methods rather than simple peak detection. To achieve this, a shifted cross-correlation method is applied for surface detection in order to increase robustness and accuracy in distal sensing. A predictor based on Kalman filter was implemented for more precise motion compensation. The performance was first evaluated using an established dry phantom consisting of stacked cellophane tape. This was followed by evaluation in an ex-vivo bovine retina model to assess system accuracy and precision. The results demonstrate highly accurate depth targeting with less than 5 μm RMSE depth locking. PMID:26137393
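
    A minimal sketch of the latency-compensation idea: a constant-velocity Kalman filter tracks the measured surface depth and is propagated forward by the known system delay to predict where the surface will be when the actuator responds. The noise levels, sampling rate, and delay are assumed values, not those of the paper's system.

```python
import numpy as np

dt, delay = 0.001, 0.004                  # 1 kHz sensing, 4 ms total latency (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [depth, depth velocity]
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-2])                 # process noise (assumed)
R = np.array([[4.0]])                     # measurement noise, um^2 (assumed)

x = np.zeros((2, 1))
P = np.eye(2) * 100.0

def kalman_step(z):
    """One predict/update cycle with a scalar depth measurement z (microns)."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)            # update
    P = (np.eye(2) - K @ H) @ P
    # extrapolate over the latency to get the depth used for motion compensation
    return float(x[0, 0] + x[1, 0] * delay)

rng = np.random.default_rng(3)
for k in range(200):
    true_depth = 500.0 + 20.0 * np.sin(2 * np.pi * 2.0 * k * dt)   # tremor-like motion
    predicted = kalman_step(true_depth + rng.normal(0.0, 2.0))
print(f"last predicted depth: {predicted:.1f} um")
```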

  11. Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.

    PubMed

    Barbosa, Marconi; James, Andrew C

    2014-08-01

    A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eye lids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and it is shown to be robust. PMID:25136477

  12. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment

    PubMed Central

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David

    2015-01-01

    Three-dimensional interactive virtual environments (VEs) are a powerful tool for brain-imaging based cognitive neuroscience that are presently under-utilized. This paper presents machine-learning based methods for identifying brain states induced by realistic VEs with improved accuracy as well as the capability for mapping their spatial topography on the neocortex. VEs provide the ability to study the brain under conditions closer to the environment in which humans evolved, and thus to probe deeper into the complexities of human cognition. As a test case, we designed a stimulus to reflect a military combat situation in the Middle East, motivated by the potential of using real-time functional magnetic resonance imaging (fMRI) in the treatment of post-traumatic stress disorder. Each subject experienced moving through the virtual town where they encountered 1–6 animated combatants at different locations, while fMRI data was collected. To analyze the data from what is, compared to most studies, more complex and less controlled stimuli, we employed statistical machine learning in the form of Multi-Voxel Pattern Analysis (MVPA) with special attention given to artificial Neural Networks (NN). Extensions to NN that exploit the block structure of the stimulus were developed to improve the accuracy of the classification, achieving performances from 58 to 93% (chance was 16.7%) with six subjects. This demonstrates that MVPA can decode a complex cognitive state, viewing a number of characters, in a dynamic virtual environment. To better understand the source of this information in the brain, a novel form of sensitivity analysis was developed to use NN to quantify the degree to which each voxel contributed to classification. Compared with maps produced by general linear models and the searchlight approach, these sensitivity maps revealed a more diverse pattern of information relevant to the classification of cognitive state. PMID:26106315

  13. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  14. Species Distribution 2.0: An Accurate Time- and Cost-Effective Method of Prospection Using Street View Imagery

    PubMed Central

    Schwoertzig, Eugénie; Millon, Alexandre

    2016-01-01

    Species occurrence data provide crucial information for biodiversity studies in the current context of global environmental changes. Such studies often rely on a limited number of occurrence data collected in the field and on pseudo-absences arbitrarily chosen within the study area, which reduces the value of these studies. To overcome this issue, we propose an alternative method of prospection using geo-located street view imagery (SVI). Following a standardised protocol of virtual prospection using both vertical (aerial photographs) and horizontal (SVI) perceptions, we have surveyed 1097 randomly selected cells across Spain (0.1 x 0.1 degree, i.e. 20% of Spain) for the presence of Arundo donax L. (Poaceae). In total we have detected A. donax in 345 cells, thus substantially expanding beyond the now two-centuries-old field-derived record, which described A. donax in only 216 cells. Among the field occurrence cells, 81.1% were confirmed by SVI prospection to be consistent with species presence. In addition, we recorded, by SVI prospection, 752 absences, i.e. cells where A. donax was considered absent. We have also compared the outcomes of climatic niche modeling based on SVI data against those based on field data. Using generalized linear models fitted with bioclimatic predictors, we have found SVI data to provide far more compelling results in terms of niche modeling than does field data as classically used in species distribution modeling (SDM). This original, cost- and time-effective method provides the means to accurately locate highly visible taxa, reinforce absence data, and predict species distribution without long and expensive in situ prospection. At this time, the majority of available SVI data is restricted to human-disturbed environments that have road networks. However, SVI is becoming increasingly available in natural areas, which means the technique has considerable potential to become an important factor in future biodiversity studies. PMID:26751565
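
    A sketch of the niche-modelling step described above: a binomial GLM (here, plain logistic regression) fitted to presence/absence cells with bioclimatic predictors. The predictor names, values, and labels are placeholders; a real analysis would use WorldClim-style variables for each surveyed cell.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_cells = 1097
# Hypothetical predictors: mean annual temperature (C) and annual precipitation (mm).
X = np.column_stack([rng.normal(15, 5, n_cells), rng.normal(600, 200, n_cells)])
# Hypothetical presence/absence labels (1 = A. donax detected in the cell).
logit = 0.3 * (X[:, 0] - 15) - 0.005 * (X[:, 1] - 600)
y = (rng.random(n_cells) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_[0], "intercept:", model.intercept_[0])
print("predicted suitability for a warm, dry cell:",
      model.predict_proba([[20.0, 400.0]])[0, 1].round(2))
```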

  15. TTVFast: An efficient and accurate code for transit timing inversion problems

    SciTech Connect

    Deck, Katherine M.; Agol, Eric; Holman, Matthew J.; Nesvorný, David

    2014-06-01

    Transit timing variations (TTVs) have proven to be a powerful technique for confirming Kepler planet candidates, for detecting non-transiting planets, and for constraining the masses and orbital elements of multi-planet systems. These TTV applications often require the numerical integration of orbits for computation of transit times (as well as impact parameters and durations); frequently tens of millions to billions of simulations are required when running statistical analyses of the planetary system properties. We have created a fast code for transit timing computation, TTVFast, which uses a symplectic integrator with a Keplerian interpolator for the calculation of transit times. The speed comes at the expense of accuracy in the calculated times, but the accuracy lost is largely unnecessary, as transit times do not need to be calculated to accuracies significantly smaller than the measurement uncertainties on the times. The time step can be tuned to give sufficient precision for any particular system. We find a speed-up of at least an order of magnitude relative to dynamical integrations with high precision using a Bulirsch-Stoer integrator.

  16. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  17. Accurate refinement of docked protein complexes using evolutionary information and deep learning.

    PubMed

    Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit

    2016-06-01

    One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex, with lower root-mean-square-deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35000 unbound docking complexes generated by RosettaDock. We tested our method on 25 different putative docked complexes produced also by RosettaDock for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose the refinement candidates with lower RMSD values compared to the coarsely docked input structures. PMID:26846813
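
    Schematically, the ranking idea is a small multi-layer network that regresses the RMSD of a docked pose from a vector of scoring-function terms. The sketch below uses a generic five-hidden-layer regressor on synthetic features as a stand-in; the feature set, network size, and data are not the trained AccuRefiner model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_poses, n_terms = 5000, 12                 # e.g. van der Waals, electrostatics, ...
X = rng.standard_normal((n_poses, n_terms))
true_w = rng.standard_normal(n_terms)
rmsd = np.abs(X @ true_w) + 0.3 * rng.standard_normal(n_poses)   # synthetic target (A)

X_tr, X_te, y_tr, y_te = train_test_split(X, rmsd, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64, 64),
                   max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
err = np.abs(net.predict(X_te) - y_te).mean()
print(f"mean absolute RMSD prediction error on held-out poses: {err:.2f} A")
```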

  19. Accurate Detection of Interaural Time Differences by a Population of Slowly Integrating Neurons

    NASA Astrophysics Data System (ADS)

    Vasilkov, Viacheslav A.; Tikidji-Hamburyan, Ruben A.

    2012-03-01

    For localization of a sound source, animals and humans process the microsecond interaural time differences of arriving sound waves. How nervous systems, consisting of elements with time constants of about and more than 1 ms, can reach such high precision is still an open question. In this Letter we present a hypothesis and show theoretical and computational evidence that a rather large population of slowly integrating neurons with inhibitory and excitatory inputs (EI neurons) can detect minute temporal disparities in input signals which are significantly less than any time constant in the system.

  20. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    ERIC Educational Resources Information Center

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  1. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  2. Continuous real-time water information: an important Kansas resource

    USGS Publications Warehouse

    Loving, Brian L.; Putnam, James E.; Turk, Donita M.

    2014-01-01

    Continuous real-time information on streams, lakes, and groundwater is an important Kansas resource that can safeguard lives and property, and ensure adequate water resources for a healthy State economy. The U.S. Geological Survey (USGS) operates approximately 230 water-monitoring stations at Kansas streams, lakes, and groundwater sites. Most of these stations are funded cooperatively in partnerships with local, tribal, State, or other Federal agencies. The USGS real-time water-monitoring network provides long-term, accurate, and objective information that meets the needs of many customers. Whether the customer is a water-management or water-quality agency, an emergency planner, a power or navigational official, a farmer, a canoeist, or a fisherman, all can benefit from the continuous real-time water information gathered by the USGS.

  3. Accurate measurement of the sticking time and sticking probability of Rb atoms on a polydimethylsiloxane coating

    SciTech Connect

    Atutov, S. N. Plekhanov, A. I.

    2015-01-15

    We present the results of a systematic study of Knudsen’s flow of Rb atoms in cylindrical capillary cells coated with a polydimethylsiloxane (PDMS) compound. The purpose of the investigation is to characterize the coating in terms of the sticking probability and sticking time of Rb on two types of coating of high and medium viscosity. We report a measured sticking probability of a Rb atom on the coating of 4.3 × 10⁻⁵, which corresponds to about 2.3 × 10⁴ bounces at room temperature. These parameters are the same for the two kinds of PDMS used. We find that at room temperature, the respective sticking times for high-viscosity and medium-viscosity PDMS are 22 ± 3 μs and 49 ± 6 μs. These sticking times are about a million times larger than the sticking time derived from the surface Rb atom adsorption energy and the temperature of the coating. A tentative explanation of this surprising result is proposed based on the bulk diffusion of the atoms that collide with the surface and penetrate inside the coating. The results can be important in many resonance cell experiments, such as the efficient magneto-optical trapping of rare elements or radioactive isotopes and in experiments on the light-induced drift effect.

  4. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  5. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, the performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.

  6. An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV

    NASA Astrophysics Data System (ADS)

    Lynch, K. P.; Scarano, F.

    2015-03-01

    The motion-tracking-enhanced MART (MTE-MART; Novara et al. in Meas Sci Technol 21:035401, 2010) has demonstrated the potential to increase the accuracy of tomographic PIV by the combined use of a short sequence of non-simultaneous recordings. A clear bottleneck of the MTE-MART technique has been its computational cost. For large datasets comprising time-resolved sequences, MTE-MART becomes unaffordable and has been barely applied even for the analysis of densely seeded tomographic PIV datasets. A novel implementation is proposed for tomographic PIV image sequences, which strongly reduces the computational burden of MTE-MART, possibly below that of regular MART. The method is a sequential algorithm that produces a time-marching estimation of the object intensity field based on an enhanced guess, which is built upon the object reconstructed at the previous time instant. As the method becomes effective after a number of snapshots (typically 5-10), the sequential MTE-MART (SMTE) is most suited for time-resolved sequences. The computational cost reduction due to SMTE simply stems from the fewer MART iterations required for each time instant. Moreover, the method yields superior reconstruction quality and higher velocity field measurement precision when compared with both MART and MTE-MART. The working principle is assessed in terms of computational effort, reconstruction quality and velocity field accuracy with both synthetic time-resolved tomographic images of a turbulent boundary layer and two experimental databases documented in the literature. The first is the time-resolved data of flow past an airfoil trailing edge used in the study of Novara and Scarano (Exp Fluids 52:1027-1041, 2012); the second is a swirling jet in a water flow. In both cases, the effective elimination of ghost particles is demonstrated in number and intensity within a short temporal transient of 5-10 frames, depending on the seeding density. The increased value of the velocity space-time

  7. Lognormal infection times of online information spread.

    PubMed

    Doerr, Christian; Blenn, Norbert; Van Mieghem, Piet

    2013-01-01

    The infection times of individuals in online information spread such as the inter-arrival time of Twitter messages or the propagation time of news stories on a social media site can be explained through a convolution of lognormally distributed observation and reaction times of the individual participants. Experimental measurements support the lognormal shape of the individual contributing processes, and have resemblance to previously reported lognormal distributions of human behavior and contagious processes. PMID:23700473
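
    A quick illustration of the stated mechanism: if each hop's delay is the sum of a lognormal "observation" time and a lognormal "reaction" time, the resulting infection times are heavy-tailed and approximately lognormal. The parameters below are illustrative, not fitted to Twitter or news data.

```python
import numpy as np

rng = np.random.default_rng(6)
observation = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)   # e.g. minutes
reaction = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)
infection_times = observation + reaction

logt = np.log(infection_times)
print(f"median ~ {np.median(infection_times):.1f}, "
      f"log-mean = {logt.mean():.2f}, log-std = {logt.std():.2f}")
```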

  8. 16 CFR 1101.32 - Reasonable steps to assure information is accurate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to release to the public: (1) The Commission staff or a qualified person or entity outside the... will review the information in light of the comments. The degree of review by the Commission and...

  9. An examination of information quality as a moderator of accurate personality judgment.

    PubMed

    Letzring, Tera D; Human, Lauren J

    2014-10-01

    Information quality is an important moderator of the accuracy of personality judgment, and this article describes research focusing on how specific kinds of information are related to accuracy. In this study, 228 participants (159 female, 69 male; mean age = 23.43; 86.4% Caucasian) in unacquainted dyads were assigned to discuss thoughts and feelings, discuss behaviors, or engage in behaviors. Interactions lasted 25-30 min, and participants provided ratings of their partners and themselves following the interaction on the Big Five traits, ego-control, and ego-resiliency. Next, the amount of different types of information made available by each participant was objectively coded. The accuracy criterion, composed of self- and acquaintance ratings, was used to assess distinctive and normative accuracy using the Social Accuracy Model. Participants in the discussion conditions achieved higher distinctive accuracy than participants who engaged in behaviors, but normative accuracy did not differ across conditions. Information about specific behaviors and general behaviors were among the most consistent predictors of higher distinctive accuracy. Normative accuracy was more likely to decrease than increase when higher-quality information was available. Verbal information about behaviors is the most useful for learning about how people are unique.

  10. Real-time information management environment (RIME)

    NASA Astrophysics Data System (ADS)

    DeCleene, Brian T.; Griffin, Sean; Matchett, Garry; Niejadlik, Richard

    2000-08-01

    Whereas data mining and exploitation improve the quality and quantity of information available to the user, there remains a mission requirement to assist the end-user in managing the access to this information and ensuring that the appropriate information is delivered to the right user in time to make decisions and take action. This paper discusses TASC's federated architecture for next-generation information management, contrasts the approach against emerging technologies, and quantifies the performance gains. This architecture and implementation, known as Real-time Information Management Environment (RIME), is based on two key concepts: information utility and content-based channelization. The introduction of utility allows users to express the importance and delivery requirements of their information needs in the context of their mission. Rather than competing for resources on a first-come/first-served basis, the infrastructure employs these utility functions to dynamically react to unanticipated loading by optimizing the delivered information utility. Furthermore, commander's resource policies shape these functions to ensure that resources are allocated according to military doctrine. Using information about the desired content, channelization identifies opportunities to aggregate users onto shared channels reducing redundant transmissions. Hence, channelization increases the information throughput of the system and balances sender/receiver processing load.

  11. WWVB: A Half Century of Delivering Accurate Frequency and Time by Radio

    PubMed Central

    Lombardi, Michael A; Nelson, Glenn K

    2014-01-01

    In commemoration of its 50th anniversary of broadcasting from Fort Collins, Colorado, this paper provides a history of the National Institute of Standards and Technology (NIST) radio station WWVB. The narrative describes the evolution of the station, from its origins as a source of standard frequency, to its current role as the source of time-of-day synchronization for many millions of radio controlled clocks. PMID:26601026

  12. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including single sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles. PMID:26517180

  13. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, and the demand for LRF maintenance is rising accordingly. Following the guiding principle of simulating spatial distance with time delay in the simulated testing of pulsed range finders, the precision of the distance simulation depends on the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the counting clock frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved for the simulated testing of high-precision pulsed range finders.

  14. A simple accurate method to predict time of ponding under variable intensity rainfall

    NASA Astrophysics Data System (ADS)

    Assouline, S.; Selker, J. S.; Parlange, J.-Y.

    2007-03-01

    The prediction of the time to ponding following commencement of rainfall is fundamental to hydrologic prediction of flood, erosion, and infiltration. Most of the studies to date have focused on prediction of ponding resulting from simple rainfall patterns. This approach is suitable for rainfall reported as average values over intervals of up to a day but does not take advantage of knowledge of the complex patterns of actual rainfall now commonly recorded electronically. A straightforward approach to include the instantaneous rainfall record in the prediction of ponding time and excess rainfall using only the infiltration capacity curve is presented. This method is tested against a numerical solution of the Richards equation on the basis of an actual rainfall record. The predicted time to ponding showed mean error ≤7% for a broad range of soils, with and without surface sealing. In contrast, the standard predictions had average errors of 87%, and worst-case errors exceeding a factor of 10. In addition to errors intrinsic in the modeling framework itself, errors that arise from averaging actual rainfall records over reporting intervals were evaluated. Averaging actual rainfall records observed in Israel over periods of as little as 5 min significantly reduced predicted runoff (75% for the sealed sandy loam and 46% for the silty clay loam), while hourly averaging gave complete lack of prediction of ponding in some of the cases.
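
    A hedged sketch of the general approach described: step through an instantaneous rainfall record and flag ponding the first time the rainfall rate exceeds the infiltration capacity, treated as a function of cumulative infiltration. The Green-Ampt-like capacity curve, its parameters (Ks, A), and the synthetic rainfall record are assumptions for illustration, not the soils or Israeli records analysed in the paper.

```python
import numpy as np

# Hypothetical infiltration capacity curve: capacity falls as cumulative infiltration F grows.
Ks, A = 5.0, 20.0                      # mm/h, mm (placeholder soil parameters)
def infiltration_capacity(F):
    return Ks * (1.0 + A / max(F, 1e-6))

dt = 1.0 / 60.0                        # 1-minute rainfall record, in hours
rain = np.concatenate([np.full(30, 2.0), np.full(60, 25.0), np.full(30, 8.0)])  # mm/h

F, t_pond = 0.0, None
for i, r in enumerate(rain):
    if r > infiltration_capacity(F):   # supply exceeds capacity -> ponding starts
        t_pond = i * dt
        break
    F += r * dt                        # before ponding, all rain infiltrates

print("time to ponding (h):", t_pond)
```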

  15. Study of time-accurate integration of the variable-density Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoyi; Pantano, Carlos

    2015-11-01

    We present several theoretical elements that affect time-consistent integration of the low-Mach number approximation of variable-density Navier-Stokes equations. The goal is for velocity, pressure, density, and scalars to achieve uniform order of accuracy, consistent with the time integrator being used. We show examples of second-order (using Crank-Nicolson and Adams-Bashforth) and third-order (using additive semi-implicit Runge-Kutta) uniform convergence with the proposed conceptual framework. Furthermore, the consistent approach can be extended to other time integrators. In addition, the method is formulated using approximate/incomplete factorization methods for easy incorporation in existing solvers. One of the observed benefits of the proposed approach is improved stability, even for large density difference, in comparison with other existing formulations. A linearized stability analysis is also carried out for some test problems to better understand the behavior of the approach. This work was supported in part by the Department of Energy, National Nuclear Security Administration, under award no. DE-NA0002382 and the California Institute of Technology.

  16. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-01

    Predictive modeling for gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming in gas chromatography for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but these methods demand greater computational time. Hence, a new method for the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it recasts the calculation as root-finding problems within defined intervals. The proposed approach allows the retention time (tr) to be calculated with an accuracy determined by the user and with significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
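
    A hedged sketch of recasting retention-time prediction as root-finding: the analyte elutes when the travelled column fraction reaches 1, so tR is the root of that cumulative integral minus 1 on a bracketing interval. The linear temperature program, constant hold-up time tM, and the two-parameter retention model k(T) = exp(a + b/T) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

tM = 1.0                                   # hold-up time, min (assumed constant)
a, b = -12.0, 5200.0                       # hypothetical thermodynamic parameters

def temperature(t):                        # 60 C start, 10 C/min linear ramp (kelvin)
    return 333.15 + 10.0 * t

def k(T):                                  # assumed retention factor model
    return np.exp(a + b / T)

def travelled_fraction(t):
    # Fraction of the column traversed after time t under the temperature program.
    val, _ = quad(lambda s: 1.0 / (tM * (1.0 + k(temperature(s)))), 0.0, t)
    return val

# Retention time is the root of travelled_fraction(t) - 1 on a bracketing interval.
tR = brentq(lambda t: travelled_fraction(t) - 1.0, 1e-6, 60.0)
print(f"predicted retention time: {tR:.2f} min")
```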

  17. An adaptive grid method for computing time accurate solutions on structured grids

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Smith, Robert E.; Eiseman, Peter R.

    1991-01-01

    The solution method consists of three parts: a grid movement scheme; an unsteady Euler equation solver; and a temporal coupling routine that links the dynamic grid to the Euler solver. The grid movement scheme is an algebraic method containing grid controls that generate a smooth grid that resolves the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling is performed with a grid prediction-correction procedure that is simple to implement and provides a grid that does not lag the solution in time. The adaptive solution method is tested by computing the unsteady inviscid solutions for a one-dimensional shock tube and a two-dimensional shock-vortex interaction.

  18. Three-dimensional Euler time accurate simulations of fan rotor-stator interactions

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    A numerical method useful to describe unsteady 3-D flow fields within turbomachinery stages is presented. The method solves the compressible, time dependent, Euler conservation equations with a finite volume, flux splitting, total variation diminishing, approximately factored, implicit scheme. Multiblock composite gridding is used to partition the flow field into a specified arrangement of blocks with static and dynamic interfaces. The code is optimized to take full advantage of the processing power and speed of the Cray Y/MP supercomputer. The method is applied to the computation of the flow field within a single stage, axial flow fan, thus reproducing the unsteady 3-D rotor-stator interaction.

  19. Hardware and Software Developments for the Accurate Time-Linked Data Acquisition System

    SciTech Connect

    BERG,DALE E.; RUMSEY,MARK A.; ZAYAS,JOSE R.

    1999-11-09

    Wind-energy researchers at Sandia National Laboratories have developed a new, light-weight, modular data acquisition system capable of acquiring long-term, continuous, multi-channel time-series data from operating wind-turbines. New hardware features have been added to this system to make it more flexible and permit programming via telemetry. User-friendly Windows-based software has been developed for programming the hardware and acquiring, storing, analyzing, and archiving the data. This paper briefly reviews the major components of the system, summarizes the recent hardware enhancements and operating experiences, and discusses the features and capabilities of the software programs that have been developed.

  20. An accurate air temperature measurement system based on an envelope pulsed ultrasonic time-of-flight technique.

    PubMed

    Huang, Y S; Huang, Y P; Huang, K N; Young, M S

    2007-11-01

    A new microcomputer-based air temperature measurement system is presented. An accurate temperature measurement is derived from the measurement of sound velocity using an ultrasonic time-of-flight (TOF) technique. The study proposes a novel algorithm that combines both amplitude modulation (AM) and phase modulation (PM) to obtain the TOF measurement. The proposed system uses the AM and PM envelope square waveform (APESW) to reduce the error caused by inertia delay. The APESW ultrasonic driving waveform causes an envelope zero and phase inversion phenomenon in the corresponding waveform at the receiver. To achieve an accurate TOF measurement, the phase inversion phenomenon was used to reliably identify the measurement pulse in the received waveform. Additionally, a counter-clock technique was incorporated to compute the phase shift of the last incomplete cycle of the TOF. The presented system can obtain 0.1% TOF resolution for the period corresponding to the 40 kHz ultrasonic wave. Consequently, with the integration of a humidity compensation algorithm, a highly accurate, high-resolution temperature measurement can be achieved from the accurate TOF measurement. Experimental results indicate that the combined standard uncertainty of the temperature measurement is approximately 0.39 degrees C. The main advantages of this system are high-resolution measurements, narrow bandwidth requirements, and ease of implementation.
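
    A hedged sketch of the core conversion: a measured time of flight over a known path gives the speed of sound, which is inverted to an air temperature through the dry-air relation c = 331.3*sqrt(T/273.15) m/s. The humidity compensation used in the paper is omitted, and the path length and TOF values are illustrative.

```python
# Convert an ultrasonic time of flight into an air-temperature estimate (dry air only).
L = 0.200          # path length between transducers, m (assumed known)
tof = 577.0e-6     # measured time of flight, s (illustrative value)

c = L / tof                                   # speed of sound, m/s
T_kelvin = 273.15 * (c / 331.3) ** 2          # invert c(T) for dry air
print(f"speed of sound: {c:.1f} m/s")
print(f"estimated air temperature: {T_kelvin - 273.15:.1f} deg C")
```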

  1. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.

  2. Exploratory Movement Generates Higher-Order Information That Is Sufficient for Accurate Perception of Scaled Egocentric Distance

    PubMed Central

    Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.

    2015-01-01

    Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410

  3. Accurate characterization of carcinogenic DNA adducts using MALDI tandem time-of-flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    Barnes, Charles A.; Chiu, Norman H. L.

    2009-01-01

    Many chemical carcinogens and their in vivo activated metabolites react readily with genomic DNA and form covalently bound carcinogen-DNA adducts. Clinically, carcinogen-DNA adducts have been linked to various cancers. Among the current methods for DNA adduct analysis, mass spectrometric methods allow the direct measurement of unlabeled DNA adducts. The goal of this study is to explore the use of matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometry (MALDI-TOF/TOF MS) to determine the identity of carcinogen-DNA adducts. Two known carcinogenic DNA adducts, namely N-(2'-deoxyguanosin-8-yl)-2-amino-1-methyl-6-phenyl-imidazo [4,5-b] pyridine (dG-C8-PhIP) and N-(2'-deoxyguanosin-8-yl)-4-aminobiphenyl (dG-C8-ABP), were selected as our models. In MALDI-TOF MS measurements, the small matrix ion and its cluster ions did not interfere with the measurements of either selected dG adduct. To achieve a higher accuracy in the characterization of the selected dG adducts, a 1 keV collision energy in MALDI-TOF/TOF MS/MS was used to measure the adducts. In comparison to other MS/MS techniques with lower collision energies, more extensive precursor ion dissociations were observed. The detection of the corresponding fragment ions allowed the identities of guanine, PhIP or ABP, and the position of adduction to be confirmed. Some of the fragment ions of dG-C8-PhIP have not been reported by other MS/MS techniques.

  4. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    PubMed

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F

    2015-12-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in form of intra-protein residue-residue contacts. Following this seminal result, much effort is put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  5. Extracting information in spike time patterns with wavelets and information theory.

    PubMed

    Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo

    2015-02-01

    We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information.
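
    A hedged sketch of the Wavelet-Information idea on synthetic data: each trial's binned spike train is wavelet-decomposed, the coefficients most informative about the stimulus are selected, and timing information is reported as decoding accuracy. The Haar wavelet, mutual-information ranking (mutual_info_classif), LDA decoder, and the simulated raster are simplifying assumptions, not the exact choices in the paper.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_bins, n_stimuli = 200, 64, 2
labels = rng.integers(0, n_stimuli, size=n_trials)

# Synthetic raster: stimulus 1 shifts a burst of spikes later in the trial.
rates = np.full((n_trials, n_bins), 0.05)
for i, s in enumerate(labels):
    rates[i, 16 + 16 * s: 24 + 16 * s] = 0.6
spikes = (rng.random((n_trials, n_bins)) < rates).astype(float)

# Wavelet decomposition of each trial, coefficients concatenated into one vector.
coeffs = np.array([np.concatenate(pywt.wavedec(tr, "haar")) for tr in spikes])

# Keep the coefficients most informative about the stimulus identity.
mi = mutual_info_classif(coeffs, labels, random_state=0)
top = np.argsort(mi)[-8:]

# Timing information expressed as cross-validated decoding performance.
score = cross_val_score(LinearDiscriminantAnalysis(), coeffs[:, top], labels, cv=5)
print(f"decoding accuracy from selected wavelet coefficients: {score.mean():.2f}")
```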

  6. Timing crisis information release via television.

    PubMed

    Wei, Jiuchang; Zhao, Dingtao; Yang, Feng; Du, Shaofu; Marinova, Dora

    2010-10-01

    When and how often to release information on television are important issues in crisis and emergency risk communication. There is a lot of crisis information, including warnings and news, to which people should have access, but most of it is not urgent enough to warrant interrupting the broadcasting of television programmes. Hence, the right timing for the release of crisis information should be selected based on the importance of the crisis and any associated communication requirements. Using recursive methods, this paper builds an audience coverage model of crisis information release. Based on 2007 Household Using TV (HUT) data for Hefei City, China, the optimal broadcasting sequence (with frequencies between one and eight times) is obtained using the implicit enumeration method. The developed model is applicable to the effective transmission of crisis information, with the aim of reducing interference with the normal television transmission process and decreasing the psychological effect on audiences. The same model can be employed for other purposes, such as news coverage and weather and road information. PMID:20572851

  7. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins that contain domains in more than one protein architectural context. Using the predicted residues to constrain the domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with the correct orientation of domains. We believe that these results can be of considerable interest for rational protein and interaction design, apart from providing valuable information on the nature of the interactions.
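
    A hedged sketch of the classification step: a Gaussian naive Bayes model labels residues as interfacial or not from per-residue features. The three features used here (conservation, solvent accessibility, hydrophobicity) and the synthetic labels are hypothetical placeholders; the paper's features come from homologues observed in both single- and multi-domain architectural contexts.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_residues = 600
is_interface = rng.random(n_residues) < 0.15          # ~15% interface residues

# Placeholder per-residue features with a weak interface signal baked in.
conservation = rng.normal(0.5, 0.15, n_residues) + 0.25 * is_interface
accessibility = rng.normal(0.5, 0.20, n_residues) - 0.20 * is_interface
hydrophobicity = rng.normal(0.0, 1.00, n_residues) + 0.30 * is_interface
X = np.column_stack([conservation, accessibility, hydrophobicity])
y = is_interface.astype(int)

scores = cross_val_score(GaussianNB(), X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy (5-fold CV): {scores.mean():.2f}")
```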

  8. Relativistic quantum information and time machines

    NASA Astrophysics Data System (ADS)

    Ralph, Timothy C.; Downes, Tony G.

    2012-01-01

    Relativistic quantum information combines the informational approach to understanding and using quantum mechanical systems - quantum information - with the relativistic view of the Universe. In this introductory review we examine key results to emerge from this new field of research in physics and discuss future directions. A particularly active area recently has been the question of what happens when quantum systems interact with general relativistic closed timelike curves - effectively time machines. We discuss two different approaches that have been suggested for modelling such situations. It is argued that the approach based on matching the density operator of the quantum state between the future and past most consistently avoids the paradoxes usually associated with time travel.

  9. Is the Posner Reaction Time Test More Accurate Than Clinical Tests in Detecting Left Neglect in Acute and Chronic Stroke?

    PubMed Central

    Rengachary, Jennifer; d'Avossa, Giovanni; Sapir, Ayelet; Shulman, Gordon L.; Corbetta, Maurizio

    2013-01-01

    Objective: To compare the accuracy of common clinical tests for left neglect with that of a computerized reaction time Posner test in a stroke population. Design: Neglect measures were collected longitudinally in stroke patients at the acute (≈2wk) and chronic (≈9mo) stage. Identical measures were collected in a healthy control group. Setting: Inpatient and outpatient rehabilitation. Participants: Acute stroke patients (n=59) with left neglect, 30 of whom were tested longitudinally; healthy age-matched controls (n=30). Interventions: Not applicable. Main Outcome Measures: A receiver operating characteristic analysis, ranking the measures' sensitivity and specificity using a single summary statistic. Results: Most clinical tests were adequately accurate at the acute stage, but many were near chance at the chronic stage. The Posner test was the most sensitive test at both stages, the most sensitive variable being the reaction time difference for detecting targets appearing on the left compared to the right side. Conclusions: Computerized reaction time tests can be used to screen for subtle but potentially clinically relevant left neglect, which may not be detectable by conventional clinical tests, especially at the chronic stage. Such tests may be useful to assess the severity of the patients' deficits and provide more accurate measures of the degree of recovery in clinical trials than established clinical measures. PMID:19969172
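
    A hedged sketch of the receiver operating characteristic analysis: one summary statistic per subject (the left-minus-right reaction-time difference) is scored for how well it separates neglect patients from controls, and a cutoff is suggested via Youden's J. The simulated reaction-time distributions are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
rt_diff_patients = rng.normal(120, 60, 59)     # ms, left minus right (simulated)
rt_diff_controls = rng.normal(5, 25, 30)

scores = np.concatenate([rt_diff_patients, rt_diff_controls])
labels = np.concatenate([np.ones(59), np.zeros(30)])   # 1 = neglect patient

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                            # Youden's J statistic
print(f"AUC = {auc:.2f}; suggested cutoff = {thresholds[best]:.0f} ms "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```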

  10. Easy and accurate calculation of programmed temperature gas chromatographic retention times by back-calculation of temperature and hold-up time profiles.

    PubMed

    Boswell, Paul G; Carr, Peter W; Cohen, Jerry D; Hegeman, Adrian D

    2012-11-01

    Linear retention indices are commonly used to identify compounds in programmed-temperature gas chromatography (GC), but they are unreliable unless the original experimental conditions used to measure them are stringently reproduced. However, differences in many experimental conditions may be properly taken into account by calculating programmed-temperature retention times of compounds from their measured isothermal retention vs. temperature relationships. We call this approach "retention projection". Until now, retention projection has been impractical because, to be accurate, it required very precise, meticulous measurement of the temperature vs. time and hold-up time vs. temperature profiles actually produced by a specific GC instrument. Here we present a new, easy-to-use methodology to precisely measure those profiles: we spike a sample with 25 n-alkanes and use their measured, programmed-temperature retention times to precisely back-calculate what the instrument profiles must have been. Then, when we used those back-calculated profiles to project retention times of 63 chemically diverse compounds, we found that the projections were extremely accurate (e.g. to ±0.9 s in a 40 min ramp). They remained accurate with different temperature programs, GC instruments, inlet pressures, flow rates, and with columns taken from different batches of stationary phase, while the accuracy of retention indices became worse the more the experimental conditions were changed from the original ones used to measure them. We also developed new, open-source software (http://www.retentionprediction.org/gc) to demonstrate the system.

  11. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  12. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan had been developed by the Japan Meteorological Agency (JMA) as a governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public with locating an earthquake and estimating its magnitude as quickly as possible. Years after, a system for a prompt provision of seismic intensity information as indices of degrees of disasters caused by strong ground motion was also developed so that concerned governmental organizations can decide whether it was necessary for them to launch emergency response or not. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of occurrence of a large earthquake and major seismic intensities caused by the earthquake in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on a hypocenter and a magnitude of the earthquake, the seismic intensity at each observation station, the times of high tides in addition to the expected tsunami arrival times in 5-7 minutes. To issue information above, JMA has established; - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation including about 2,800 seismic intensity stations maintained by local governments, - Data telemetry networks via landlines and partly via a satellite communication link, - Real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude, the database driven method for quantitative tsunami estimation, and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  13. Some applications of GPS timing information

    NASA Technical Reports Server (NTRS)

    Zhang, Yinbai

    1994-01-01

    GPS satellites transmit two independent signals. One is the C/A code (coarse acquisition signal), which can be used by all civilian users; this service is called the Standard Positioning Service (SPS). The other is the P code (precise signal), which can only be accessed by authorized users; this service is called the Precise Positioning Service (PPS). SPS C/A code users can navigate with an accuracy of approximately 100 m and a timing accuracy of about 250-300 ns (1 sigma). Such timing accuracy is not sufficient for precise time and frequency transfer. For this reason, research is needed to improve the accuracy that SPS C/A code users can obtain. This paper presents our recent progress on timing information analysis, its data processing, and its applications to time and frequency measurement.

  14. Accurate Enumeration of Aspergillus brasiliensis in Hair Color and Mascara by Time-Lapse Shadow Image Analysis.

    PubMed

    Ogawa, Hiroyuki; Matsuoka, Hideaki; Saito, Mikako

    2015-01-01

    The growth of black mold (Aspergillus brasiliensis) in black-colored samples such as hair color and mascara was measured with an automatic count system based on time-lapse shadow image analysis (TSIA). A. brasiliensis suspended in a lecithin and polysorbate (LP) solution of each sample (hair color or mascara) was spread on a potato dextrose agar medium plate containing LP. The background image darkness of the agar plate could be adjusted to attain accurate colony counts. 95 colonies in hair color and 22 colonies in mascara could be automatically determined at 48 h. The accuracy of the colony counts could be confirmed from the timelapse image data. In contrast, conventional visual counting at a specified time could not determine the number of colonies or led to false colony counts.

  15. MRI-aided tissues interface characterization: An accurate signal propagation time calculation method for UWB breast tumor imaging

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Xiao, Xia; Kikkawa, Takamaro

    2016-12-01

    Radar-based ultrawideband (UWB) microwave imaging is expected to be a safe, low-cost tool for breast cancer detection. However, since the radar wave travels at different speeds in different tissues, the propagation time is hard to estimate in a heterogeneous breast. An incorrectly estimated propagation time leads to errors in the tumor location in the resulting image, known as imaging error. In this paper, we develop a magnetic resonance imaging-aided (MRI-aided) propagation time calculation technique which is independent of the radar imaging system but can help decrease the imaging error. The technique can eliminate the influence of the rough interface between the fat and gland layers in the breast and obtain relatively accurate thicknesses of the two layers. The propagation time in each layer is calculated and summed. The summed propagation time is used in the confocal imaging algorithm to increase the accuracy of the resulting image. 25 patients' breast models with glands of varying size are classified into four categories for imaging simulation tests. Imaging accuracy in terms of tumor location along the x-direction improved for 21 of the 25 cases, an overall improvement of around 50% compared with conventional UWB imaging.
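
    A hedged sketch of the propagation-time calculation: the one-way time is the sum of per-layer times, with each layer's wave speed taken as c/sqrt(er) from its relative permittivity. The layer thicknesses and permittivities are illustrative values, not those extracted from the MRI-derived models in the paper.

```python
# Sum per-layer propagation times through fat and gland layers (illustrative values).
C0 = 299_792_458.0                 # speed of light in vacuum, m/s

layers = [
    ("fat",   0.018, 9.0),         # (name, thickness in m, relative permittivity)
    ("gland", 0.025, 35.0),
]

total_time = 0.0
for name, thickness, eps_r in layers:
    v = C0 / eps_r ** 0.5          # wave speed in the layer
    t = thickness / v
    total_time += t
    print(f"{name:6s}: {t * 1e12:7.1f} ps")

print(f"one-way propagation time: {total_time * 1e12:7.1f} ps")
```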

  16. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2013-05-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession, and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can be in some cases estimated to a far more accurate degree than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005 with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was however poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
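
    A hedged, simplified stand-in for the recession analysis: instead of the Boussinesq solution used in the paper, a linear-reservoir recession Q(t) = Q0*exp(-k*t) is fitted to a synthetic dry-season record, and the saturated-zone mean transit time is taken as 1/k (storage over outflow at steady state). The data and the linear-reservoir assumption are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic dry-season spring discharge record (not Luxembourg Sandstone data).
t_days = np.arange(0, 120, 5.0)                 # days since recession start
q_true = 12.0 * np.exp(-t_days / 45.0)          # L/s
q_obs = q_true * (1 + 0.05 * np.random.default_rng(1).standard_normal(t_days.size))

def linear_reservoir(t, q0, k):
    return q0 * np.exp(-k * t)

(q0_fit, k_fit), _ = curve_fit(linear_reservoir, t_days, q_obs, p0=(10.0, 0.02))
print(f"recession constant k = {k_fit:.4f} 1/day")
print(f"mean transit time (linear-reservoir assumption) = {1.0 / k_fit:.1f} days")
```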

  17. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2012-12-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and were used to solve entirely different problems. We show that by combining two classical models, namely Boussinesq's Equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can be in some cases estimated to a far more accurate degree than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean residence time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater residence time that can refine those obtained from tritium measurements. This approach is demonstrated in a case study predicting atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005 with different rates of decrease. For some of the springs, the best agreement between observed and predicted time of trend reversal was reached for the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was however poorer for the springs displaying the most gentle recessions, possibly indicating the stronger influence of continuous groundwater recharge during the dry period.

  18. Accurate retention time determination of co-eluting proteins in analytical chromatography by means of spectral data.

    PubMed

    Dismer, Florian; Hansen, Sigrid; Oelmeier, Stefan Alexander; Hubbuch, Jürgen

    2013-03-01

    Chromatography is the method of choice for the separation of proteins, at both analytical and preparative scale. Orthogonal purification strategies for industrial use can easily be implemented by combining different modes of adsorption. Nevertheless, with flexibility comes the freedom of choice and optimal conditions for consecutive steps need to be identified in a robust and reproducible fashion. One way to address this issue is the use of mathematical models that allow for an in silico process optimization. Although this has been shown to work, model parameter estimation for complex feedstocks becomes the bottleneck in process development. An integral part of parameter assessment is the accurate measurement of retention times in a series of isocratic or gradient elution experiments. As high-resolution analytics that can differentiate between proteins are often not readily available, pure protein is mandatory for parameter determination. In this work, we present an approach that has the potential to solve this problem. Based on the uniqueness of UV absorption spectra of proteins, we were able to accurately measure retention times in systems of up to four co-eluting compounds. The presented approach is calibration-free, meaning that prior knowledge of pure component absorption spectra is not required. Actually, pure protein spectra can be determined from co-eluting proteins as part of the methodology. The approach was tested for size-exclusion chromatograms of 38 mixtures of co-eluting proteins. Retention times were determined with an average error of 0.6 s (1.6% of average peak width), approximated and measured pure component spectra showed an average coefficient of correlation of 0.992.
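
    A hedged sketch of one generic way to resolve co-eluting components from spectral data: an alternating non-negative least-squares factorization D ≈ C S^T (multivariate curve resolution) recovers elution profiles and spectra, and retention times are read off as the profile maxima. The synthetic data, initialization, and iteration count are assumptions; the paper's calibration-free method differs in detail.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_times, n_wl, n_comp = 120, 60, 3

# Synthetic data: Gaussian elution profiles times random non-negative spectra.
t = np.arange(n_times)
C_true = np.stack([np.exp(-0.5 * ((t - c) / 8.0) ** 2) for c in (40, 55, 70)], axis=1)
S_true = np.abs(rng.normal(size=(n_wl, n_comp)))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(n_times, n_wl))

# Alternating NNLS, initialized with crude spectra picked at three time points.
S = D[[30, 55, 80], :].T.clip(min=0.0)
for _ in range(50):
    C = np.array([nnls(S, d)[0] for d in D])                   # D[i,:] ~ S @ C[i,:]
    S = np.array([nnls(C, D[:, j])[0] for j in range(n_wl)])   # D[:,j] ~ C @ S[j,:]

retention_idx = C.argmax(axis=0)
print("recovered retention time indices:", sorted(retention_idx))
```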

  19. Improving resistance uniformity and endurance of resistive switching memory by accurately controlling the stress time of pulse program operation

    NASA Astrophysics Data System (ADS)

    Wang, Guoming; Long, Shibing; Yu, Zhaoan; Zhang, Meiyun; Ye, Tianchun; Li, Yang; Xu, Dinglin; Lv, Hangbing; Liu, Qi; Wang, Ming; Xu, Xiaoxin; Liu, Hongtao; Yang, Baohe; Suñé, Jordi; Liu, Ming

    2015-03-01

    In this letter, the impact of the stress time of the pulse program operation on the resistance uniformity and endurance of resistive random access memory (RRAM) is investigated. A width-adjusting pulse operation (WAPO) method, which can accurately set up and measure the switching time, is proposed for improving the uniformity and endurance of RRAM. Different from the traditional single pulse operation (TSPO) method, in which only one wide pulse is applied in each switching cycle, the WAPO method utilizes a series of pulses with gradually increasing width until a set or reset switching process is completely finished, so that no excessive stress is produced. This program/erase (P/E) method can exactly control the switching time and the final resistance and can significantly improve the uniformity, stability, and endurance of the RRAM device. The improvement in resistance uniformity by WAPO compared with the TSPO method is explained through the interdependence between the resistance state and the switching time. The endurance improvement by the WAPO operation stems from effectively avoiding the overstress-induced progressive breakdown, and even hard breakdown, of the conductive soft-breakdown path.
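
    A hedged sketch of the width-adjusting pulse idea: programming pulses of gradually increasing width are applied until the cell resistance crosses its target, so no stress is added after switching completes. The instrument interface (apply_pulse, read_resistance) and all numeric values are hypothetical placeholders, not an actual RRAM tester API.

```python
def wapo_set(apply_pulse, read_resistance, v_set=2.0,
             r_target=10e3, w_start=50e-9, w_max=10e-6, grow=2.0):
    """Return the cumulative stress time needed to reach the low-resistance target."""
    width, stress = w_start, 0.0
    while width <= w_max:
        apply_pulse(v_set, width)          # one pulse at the current width
        stress += width
        if read_resistance() <= r_target:  # switching finished -> stop immediately
            return stress
        width *= grow                      # otherwise try a wider pulse
    raise RuntimeError("cell did not switch within the maximum pulse width")

# Minimal fake instrument for a dry run: the cell "switches" after 400 ns of stress.
stress_seen = []
def dry_apply(v, w):
    stress_seen.append(w)
def dry_read():
    return 5e3 if sum(stress_seen) >= 400e-9 else 1e6

print(f"measured switching (stress) time: {wapo_set(dry_apply, dry_read) * 1e9:.0f} ns")
```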

  20. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
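
    A hedged illustration of one member of the family analysed here: the Shu-Osher strong-stability-preserving third-order Runge-Kutta scheme applied to the linear test problem y' = -lambda*y. The scheme and test problem are standard textbook choices, not necessarily the specific methods derived in the report.

```python
import numpy as np

def ssp_rk3_step(f, y, t, dt):
    """One strong-stability-preserving third-order Runge-Kutta (Shu-Osher) step."""
    y1 = y + dt * f(t, y)
    y2 = 0.75 * y + 0.25 * (y1 + dt * f(t + dt, y1))
    return y / 3.0 + 2.0 / 3.0 * (y2 + dt * f(t + 0.5 * dt, y2))

lam = 10.0
f = lambda t, y: -lam * y              # linear test problem y' = -lam * y
dt, t_end = 0.02, 1.0
y, t = 1.0, 0.0
while t < t_end - 1e-12:
    y = ssp_rk3_step(f, y, t, dt)
    t += dt

exact = np.exp(-lam * t)
print(f"numerical {y:.6e}, exact {exact:.6e}, relative error {abs(y - exact) / exact:.2e}")
```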

  1. Preferential access to genetic information from endogenous hominin ancient DNA and accurate quantitative SNP-typing via SPEX

    PubMed Central

    Brotherton, Paul; Sanchez, Juan J.; Cooper, Alan; Endicott, Phillip

    2010-01-01

    The analysis of targeted genetic loci from ancient, forensic and clinical samples is usually built upon polymerase chain reaction (PCR)-generated sequence data. However, many studies have shown that PCR amplification from poor-quality DNA templates can create sequence artefacts at significant levels. With hominin (human and other hominid) samples, the pervasive presence of highly PCR-amplifiable human DNA contaminants in the vast majority of samples can lead to the creation of recombinant hybrids and other non-authentic artefacts. The resulting PCR-generated sequences can then be difficult, if not impossible, to authenticate. In contrast, single primer extension (SPEX)-based approaches can genotype single nucleotide polymorphisms from ancient fragments of DNA as accurately as modern DNA. A single SPEX-type assay can amplify just one of the duplex DNA strands at target loci and generate a multi-fold depth-of-coverage, with non-authentic recombinant hybrids reduced to undetectable levels. Crucially, SPEX-type approaches can preferentially access genetic information from damaged and degraded endogenous ancient DNA templates over modern human DNA contaminants. The development of SPEX-type assays offers the potential for highly accurate, quantitative genotyping from ancient hominin samples. PMID:19864251

  2. An Accurate, Flexible and Small Optical Fiber Sensor: A Novel Technological Breakthrough for Real-Time Analysis of Dynamic Blood Flow Data In Vivo

    PubMed Central

    Yuan, Qiao-ying; Zhang, Ling; Xiao, Dan; Zhao, Kun; Lin, Chun; Si, Liang-yi

    2014-01-01

    Because of the limitations of existing methods and techniques for directly obtaining real-time blood data, no accurate microflow in vivo real-time analysis method exists. To establish a novel technical platform for real-time in vivo detection and to analyze average blood pressure and other blood flow parameters, a small, accurate, flexible, and nontoxic Fabry-Perot fiber sensor was designed. The carotid sheath was implanted through intubation of the rabbit carotid artery (n = 8), and the blood pressure and other detection data were determined directly through the veins. The fiber detection results were compared with test results obtained using color Doppler ultrasound and a physiological pressure sensor recorder. Pairwise comparisons among the blood pressure results obtained using the three methods indicated that real-time blood pressure information obtained through the fiber sensor technique exhibited better correlation than the data obtained with the other techniques. The highest correlation (correlation coefficient of 0.86) was obtained between the fiber sensor and pressure sensor. The blood pressure values were positively related to the total cholesterol level, low-density lipoprotein level, number of red blood cells, and hemoglobin level, with correlation coefficients of 0.033, 0.129, 0.358, and 0.373, respectively. The blood pressure values had no obvious relationship with the number of white blood cells and high-density lipoprotein and had a negative relationship with triglyceride levels, with a correlation coefficient of –0.031. The average ambulatory blood pressure measured by the fiber sensor exhibited a negative correlation with the quantity of blood platelets (correlation coefficient of −0.839, P<0.05). The novel fiber sensor can thus obtain in vivo blood pressure data accurately, stably, and in real time; the sensor can also determine the content and status of the blood flow to some extent. Therefore, the fiber sensor can obtain partially real-time

  3. An accurate, flexible and small optical fiber sensor: a novel technological breakthrough for real-time analysis of dynamic blood flow data in vivo.

    PubMed

    Yuan, Qiao-ying; Zhang, Ling; Xiao, Dan; Zhao, Kun; Lin, Chun; Si, Liang-yi

    2014-01-01

    Because of the limitations of existing methods and techniques for directly obtaining real-time blood data, no accurate microflow in vivo real-time analysis method exists. To establish a novel technical platform for real-time in vivo detection and to analyze average blood pressure and other blood flow parameters, a small, accurate, flexible, and nontoxic Fabry-Perot fiber sensor was designed. The carotid sheath was implanted through intubation of the rabbit carotid artery (n = 8), and the blood pressure and other detection data were determined directly through the veins. The fiber detection results were compared with test results obtained using color Doppler ultrasound and a physiological pressure sensor recorder. Pairwise comparisons among the blood pressure results obtained using the three methods indicated that real-time blood pressure information obtained through the fiber sensor technique exhibited better correlation than the data obtained with the other techniques. The highest correlation (correlation coefficient of 0.86) was obtained between the fiber sensor and pressure sensor. The blood pressure values were positively related to the total cholesterol level, low-density lipoprotein level, number of red blood cells, and hemoglobin level, with correlation coefficients of 0.033, 0.129, 0.358, and 0.373, respectively. The blood pressure values had no obvious relationship with the number of white blood cells and high-density lipoprotein and had a negative relationship with triglyceride levels, with a correlation coefficient of -0.031. The average ambulatory blood pressure measured by the fiber sensor exhibited a negative correlation with the quantity of blood platelets (correlation coefficient of -0.839, P<0.05). The novel fiber sensor can thus obtain in vivo blood pressure data accurately, stably, and in real time; the sensor can also determine the content and status of the blood flow to some extent. Therefore, the fiber sensor can obtain partially real-time

  4. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  5. Accurate and rapid identification of the Burkholderia pseudomallei near-neighbour, Burkholderia ubonensis, using real-time PCR.

    PubMed

    Price, Erin P; Sarovich, Derek S; Webb, Jessica R; Ginther, Jennifer L; Mayo, Mark; Cook, James M; Seymour, Meagan L; Kaestli, Mirjam; Theobald, Vanessa; Hall, Carina M; Busch, Joseph D; Foster, Jeffrey T; Keim, Paul; Wagner, David M; Tuanyok, Apichai; Pearson, Talima; Currie, Bart J

    2013-01-01

    Burkholderia ubonensis is an environmental bacterium belonging to the Burkholderia cepacia complex (Bcc), a group of genetically related organisms that are associated with opportunistic but generally nonfatal infections in healthy individuals. In contrast, the near-neighbour species Burkholderia pseudomallei causes melioidosis, a disease that can be fatal in up to 95% of cases if left untreated. B. ubonensis is frequently misidentified as B. pseudomallei from soil samples using selective culturing on Ashdown's medium, reflecting both the shared environmental niche and morphological similarities of these species. Additionally, B. ubonensis shows potential as an important biocontrol agent in B. pseudomallei-endemic regions as certain strains possess antagonistic properties towards B. pseudomallei. Current methods for characterising B. ubonensis are laborious, time-consuming and costly, and as such this bacterium remains poorly studied. The aim of our study was to develop a rapid and inexpensive real-time PCR-based assay specific for B. ubonensis. We demonstrate that a novel B. ubonensis-specific assay, Bu550, accurately differentiates B. ubonensis from B. pseudomallei and other species that grow on selective Ashdown's agar. We anticipate that Bu550 will catalyse research on B. ubonensis by enabling rapid identification of this organism from Ashdown's-positive colonies that are not B. pseudomallei.

  6. Near real time, accurate, and sensitive microbiological safety monitoring using an all-fibre spectroscopic fluorescence system

    NASA Astrophysics Data System (ADS)

    Vanholsbeeck, F.; Swift, S.; Cheng, M.; Bogomolny, E.

    2013-11-01

    Enumeration of microorganisms is an essential microbiological task for many industrial sectors and research fields. Various tests for the detection and counting of microorganisms are used today. However, most current methods to enumerate bacteria either require long incubation times for limited accuracy, or use complicated protocols along with bulky equipment. We have developed an accurate, all-fibre spectroscopic system to measure the fluorescence signal in situ. In this paper, we examine the potential of this setup for near real time bacteria enumeration in aquatic environments. The concept is based on the well-known phenomenon that the fluorescence quantum yields of some nucleic acid stains increase significantly upon binding to the nucleic acids of microorganisms; the fluorescence signal increase can be correlated with the amount of nucleic acid present in the sample. In addition, we have used GFP-labelled organisms. Our results show that we are able to detect a wide range of bacteria concentrations without dilution or filtration (1-10^8 CFU/ml) using the different optical probes we designed. This high sensitivity is due to efficient light delivery with an appropriate collection volume and in situ fluorescence detection, as well as the use of a sensitive CCD spectrometer. By monitoring the laser power, we can account for laser fluctuations while measuring the fluorescence signal, which further improves the system accuracy. A synchronized laser shutter allows us to achieve a high SNR with minimal integration time, thereby reducing the photobleaching effect. In summary, we conclude that our optical setup may offer a robust method for near real time bacterial detection in aquatic environments.
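
    A hedged sketch of how the fluorescence signal can be correlated with bacterial concentration: a log-log linear calibration is fitted and inverted to estimate CFU/ml from a measured signal. The concentration-signal pairs are synthetic placeholders, not measurements from the all-fibre system.

```python
import numpy as np

# Synthetic calibration data: concentration vs. fluorescence signal (arbitrary units).
cfu_per_ml = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8])
signal = np.array([1.1e3, 2.0e3, 4.2e3, 8.9e3, 1.9e4, 4.1e4, 8.8e4])

# Fit a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(cfu_per_ml), np.log10(signal), 1)

def estimate_cfu(measured_signal):
    """Invert the fitted log-log calibration to estimate concentration."""
    return 10 ** ((np.log10(measured_signal) - intercept) / slope)

print(f"log-log slope: {slope:.2f}")
print(f"signal 1.0e4 a.u. -> ~{estimate_cfu(1.0e4):.1e} CFU/ml")
```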

  7. Accurate identification of Candida parapsilosis (sensu lato) by use of mitochondrial DNA and real-time PCR.

    PubMed

    Souza, Ana Carolina R; Ferreira, Renata C; Gonçalves, Sarah S; Quindós, Guillermo; Eraso, Elena; Bizerra, Fernando C; Briones, Marcelo R S; Colombo, Arnaldo L

    2012-07-01

    Candida parapsilosis is the second most frequently isolated Candida species from blood cultures in South America and some European countries, such as Spain. Since 2005, this species has been considered a complex of 3 closely related species: C. parapsilosis, Candida metapsilosis, and Candida orthopsilosis. Here, we describe a real-time TaqMan-MGB PCR assay, using mitochondrial DNA (mtDNA) as the target, which readily distinguishes these 3 species. We first used comparative genomics to locate syntenic regions between these 3 mitochondrial genomes and then selected NADH5 as the target for the real-time PCR assay. Probes were designed to include a combination of different single-nucleotide polymorphisms that are able to differentiate each species within the C. parapsilosis complex. This new methodology was first tested using mtDNA and then genomic DNA from 4 reference and 5 clinical strains. For assay validation, a total of 96 clinical isolates and 4 American Type Culture Collection (ATCC) isolates previously identified by internal transcribed spacer (ITS) ribosomal DNA (rDNA) sequencing were tested. Real-time PCR using genomic DNA was able to differentiate the 3 species with 100% accuracy. No amplification was observed when DNA from other species was used as the template. We observed 100% congruence with ITS rDNA sequencing identification, including for 30 strains used in blind testing. This novel method allows a quick and accurate intracomplex identification of C. parapsilosis and saves time compared with sequencing, which so far has been considered the "gold standard" for Candida yeast identification. In addition, this assay provides a useful tool for epidemiological and clinical studies of these emergent species.

  8. Three-dimensional accurate detection of lung emphysema in rats using ultra-short and zero echo time MRI.

    PubMed

    Bianchi, Andrea; Tibiletti, Marta; Kjørstad, Åsmund; Birk, Gerald; Schad, Lothar R; Stierstorfer, Birgit; Rasche, Volker; Stiller, Detlef

    2015-11-01

    Emphysema is a life-threatening pathology that causes irreversible destruction of alveolar walls. In vivo imaging techniques play a fundamental role in the early non-invasive pre-clinical and clinical detection and longitudinal follow-up of this pathology. In the present study, we aimed to evaluate the feasibility of using high resolution radial three-dimensional (3D) zero echo time (ZTE) and 3D ultra-short echo time (UTE) MRI to accurately detect lung pathomorphological changes in a rodent model of emphysema. Porcine pancreas elastase (PPE) was intratracheally administered to the rats to produce the emphysematous changes. 3D ZTE MRI, low and high definition 3D UTE MRI and micro-computed tomography images were acquired 4 weeks after the PPE challenge. Signal-to-noise ratios (SNRs) were measured in PPE-treated and control rats. T2* values were computed from low definition 3D UTE MRI. Histomorphometric measurements were made after euthanizing the animals. Both ZTE and UTE MR images showed a significant decrease in the SNR measured in PPE-treated lungs compared with controls, due to the pathomorphological changes taking place in the challenged lungs. A significant decrease in T2* values in PPE-challenged animals compared with controls was measured using UTE MRI. Histomorphometric measurements showed a significant increase in the mean linear intercept in PPE-treated lungs. UTE yielded significantly higher SNR compared with ZTE (14% and 30% higher in PPE-treated and non-PPE-treated lungs, respectively). This study showed that optimized 3D radial UTE and ZTE MRI can provide lung images of excellent quality, with high isotropic spatial resolution (400 µm) and SNR in parenchymal tissue (>25) and negligible motion artifacts in freely breathing animals. These techniques were shown to be useful non-invasive instruments to accurately and reliably detect the pathomorphological alterations taking place in emphysematous lungs, without incurring the risks of cumulative radiation.

  9. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
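
    To make the classification step concrete, the following is a minimal sketch of an SVM classifier of the type described above, built with scikit-learn. The featurize() function, the four toy descriptors, the peptide sequences and their labels are all hypothetical illustrations; they are not the 35-descriptor (or reduced 12-variable) feature set of the STEPP tool.

      # Minimal sketch of an SVM classifier for proteotypic-peptide prediction.
      # The feature set and training data below are illustrative placeholders.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      HYDROPHOBIC = set("AILMFWVY")
      CHARGED = set("DEKR")

      def featurize(peptide):
          """Toy descriptors: length, fraction hydrophobic, fraction charged, net charge."""
          n = len(peptide)
          return [
              n,
              sum(aa in HYDROPHOBIC for aa in peptide) / n,
              sum(aa in CHARGED for aa in peptide) / n,
              sum(peptide.count(aa) for aa in "KR") - sum(peptide.count(aa) for aa in "DE"),
          ]

      # Hypothetical training data: peptides labelled 1 if observed by MS, else 0.
      peptides = ["LVNELTEFAK", "DDSPDLPK", "AEFVEVTK", "GITWGEETLMEYLENPK",
                  "YLYEIAR", "QTALVELLK", "HPYFYAPELLYYANK", "LVTDLTK"]
      labels = [1, 0, 1, 0, 1, 1, 0, 1]

      X = np.array([featurize(p) for p in peptides])
      y = np.array(labels)

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      model.fit(X, y)

      query = np.array([featurize("AGLQFPVGR")])
      print(model.predict(query), model.decision_function(query))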

  10. Accurate, fast and cost-effective diagnostic test for monosomy 1p36 using real-time quantitative PCR.

    PubMed

    Cunha, Pricila da Silva; Pena, Heloisa B; D'Angelo, Carla Sustek; Koiffmann, Celia P; Rosenfeld, Jill A; Shaffer, Lisa G; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5-0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
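
    As an illustration of the dosage logic behind such an assay, the sketch below computes a relative copy-number ratio for each target gene with the standard 2^-ddCt calculation and flags a heterozygous deletion when both targets look single-copy. It assumes one two-copy reference gene, roughly 100% amplification efficiency, and illustrative Ct values and thresholds; it is not the published assay's analysis pipeline.

      # Sketch of a relative copy-number call from qPCR Ct values (2^-ddCt method).
      # Assumes a two-copy reference gene and ~100% amplification efficiency;
      # the 0.3-0.7 "deleted" window and all Ct values are illustrative.

      def copy_ratio(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
          """Target dosage in the patient relative to a normal (two-copy) control."""
          d_ct_patient = ct_target - ct_reference
          d_ct_control = ct_target_ctrl - ct_reference_ctrl
          return 2.0 ** -(d_ct_patient - d_ct_control)

      def call_1p36_deletion(ratios, lo=0.3, hi=0.7):
          """Flag a heterozygous deletion only if every target gene looks single-copy."""
          return all(lo <= r <= hi for r in ratios)

      # Hypothetical Ct values for the two targets (PRKCZ, SKI) and a reference gene.
      r_prkcz = copy_ratio(ct_target=27.8, ct_reference=25.1,
                           ct_target_ctrl=26.7, ct_reference_ctrl=25.0)
      r_ski = copy_ratio(ct_target=28.1, ct_reference=25.1,
                         ct_target_ctrl=27.0, ct_reference_ctrl=25.0)
      print(r_prkcz, r_ski, call_1p36_deletion([r_prkcz, r_ski]))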

  11. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    PubMed Central

    Cunha, Pricila da Silva; Pena, Heloisa B.; D'Angelo, Carla Sustek; Koiffmann, Celia P.; Rosenfeld, Jill A.; Shaffer, Lisa G.; Stofanko, Martin; Gonçalves-Dornelas, Higgor; Pena, Sérgio Danilo Junho

    2014-01-01

    Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs. PMID:24839341

  12. Time-Accurate Unsteady Flow Simulations Supporting the SRM T+68-Second Pressure Spike Anomaly Investigation (STS-54B)

    NASA Technical Reports Server (NTRS)

    Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose

    1993-01-01

    Time-accurate unsteady flow simulations are being performed supporting the SRM T+68-sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec: Pc = 630 psi, gamma = 1.1381, Tc = 6200 R, perfect gas without aluminum particulate. In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode that flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle. The

  13. One at a time: counting single-nanoparticle/electrode collisions for accurate particle sizing by overcoming the instability of gold nanoparticles under electrolytic conditions.

    PubMed

    Qiu, Danfeng; Wang, Song; Zheng, Yuanqin; Deng, Zhaoxiang

    2013-12-20

    In response to an increasing demand for understanding electrochemical processes on the nanometer scale, it now becomes possible to monitor electron transfer reactions at the single-nanoparticle level, namely particle collision electrochemistry. This technique has great potential in the development of research tools towards single-particle electrocatalysis and selective and multiplexed particle sizing. However, one existing problem that may discourage these applications is the relatively weak colloidal stability of nanoparticles in an electrolytic solution. Here we report a simple yet effective way to achieve good stability of gold nanoparticles in acidic media so that 'zero-aggregation' collisions can be achieved at a carbon ultramicroelectrode. This allows us to obtain anodic dissolution currents from individual nanoparticles in a 'one particle at a time' manner, based on which accurate particle sizing with a resolution of 1-2 nm can be achieved. Our work strongly suggests that maintaining a well-dispersed nanoparticle solution during a particle-impact electrochemical experiment is critically important for accurate particle sizing, as well as for other applications that require information to be extracted from individual nanoparticles (not their aggregates).
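
    The sizing step in such experiments rests on a textbook Faraday's-law conversion from the integrated charge of one dissolution spike to a sphere-equivalent radius; the sketch below shows that conversion for complete oxidation of a gold particle (Au -> Au(3+) + 3e-). The formula is generic and the charge value is hypothetical; neither is taken from the paper.

      # Sketch: gold nanoparticle radius from the charge of one anodic dissolution spike.
      # Assumes a spherical particle fully oxidised as Au -> Au(3+) + 3e-; the
      # integrated charge below is hypothetical.
      import math

      F = 96485.0       # C/mol, Faraday constant
      M = 196.97e-3     # kg/mol, molar mass of Au
      RHO = 19.3e3      # kg/m^3, density of Au
      N_E = 3           # electrons per Au atom oxidised

      def radius_from_charge(q_coulombs):
          moles_au = q_coulombs / (N_E * F)      # mol of Au dissolved
          volume = moles_au * M / RHO            # m^3 of metal
          return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

      q_spike = 2.0e-12   # C, integrated current of one collision event (hypothetical)
      print(f"estimated radius: {radius_from_charge(q_spike) * 1e9:.1f} nm")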

  14. Assessment of gas chromatography time-of-flight accurate mass spectrometry for identification of volatile and semi-volatile compounds in honey.

    PubMed

    Moniruzzaman, M; Rodríguez, I; Ramil, M; Cela, R; Sulaiman, S A; Gan, S H

    2014-11-01

    The performance of gas chromatography (GC) combined with a hybrid quadrupole time-of-flight (QTOF) mass spectrometry (MS) system for the determination of volatile and semi-volatile compounds in honey samples is evaluated. After headspace (HS) solid-phase microextraction (SPME) of samples, the accurate mass capabilities of the above system were evaluated for compound identification. Accurate scan electron impact (EI) MS spectra allowed discrimination between compounds displaying the same nominal mass but having different empirical formulae. Moreover, the use of a mass window with a width of 0.005 Da provided highly specific chromatograms for selected ions, avoiding the contribution of interferences to their peak areas. Additional information derived from positive chemical ionization (PCI) MS spectra and ion product scan MS/MS spectra permitted confirmation of the identity of novel compounds. The above possibilities are illustrated with examples of honey aroma compounds, belonging to different chemical classes and containing different elements in their molecules. Examples of compounds whose structures could not be described are also provided. Overall, 84 compounds, from a total of 89 species, could be identified in 19 honey samples from 3 different geographic areas in the world. The suitability of responses measured for selected ions, corresponding to the above species, for authentication purposes is assessed through principal components analysis. PMID:25127626
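
    The narrow-mass-window idea can be pictured as building an extracted-ion chromatogram that keeps only centroids within half the window width of a target exact mass. The sketch below uses a +/-0.0025 Da window (half of the 0.005 Da width mentioned above) on a hypothetical list of (retention time, m/z, intensity) centroids; the data structure and values are illustrative, not the instrument vendor's software.

      # Sketch: extracted-ion chromatogram (EIC) for one exact m/z using a narrow
      # mass window. The centroid list is hypothetical.
      from collections import defaultdict

      def extracted_ion_chromatogram(centroids, target_mz, window=0.005):
          """centroids: iterable of (retention_time, mz, intensity). Returns sorted
          (retention_time, summed_intensity) pairs for ions within +/- window/2."""
          half = window / 2.0
          eic = defaultdict(float)
          for rt, mz, intensity in centroids:
              if abs(mz - target_mz) <= half:
                  eic[rt] += intensity
          return sorted(eic.items())

      centroids = [
          (5.01, 136.0757, 1.2e4),   # ion of interest (hypothetical)
          (5.01, 136.1121, 8.0e3),   # isobaric interference, excluded by the window
          (5.05, 136.0760, 2.3e4),
          (5.09, 136.0755, 9.0e3),
      ]
      print(extracted_ion_chromatogram(centroids, target_mz=136.0757))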

  15. Accurate Time-Dependent Wave Packet Calculations for the O(+) + H2 → OH(+) + H Ion-Molecule Reaction.

    PubMed

    Bulut, N; Castillo, J F; Jambrina, P G; Kłos, J; Roncero, O; Aoiz, F J; Bañares, L

    2015-12-17

    Accurate quantum reactive scattering time-dependent wave packet close-coupling calculations have been carried out to determine total reaction probabilities and integral cross sections for the O(+) + H2 → OH(+) + H reaction in a range of collision energies from 10(-3) eV up to 1.0 eV for the H2 rovibrational states (v = 0; j = 0, 1, 2) and (v = 1; j = 0) using the potential energy surface (PES) by Martínez et al. As expected for a barrierless reaction, the reaction cross section decays rapidly with collision energy, Ec, following a behavior that nearly corresponds to that predicted by the Langevin model. Rotational excitation of H2 into j = 1, 2 has a very moderate effect on reactivity, similarly to what happens with vibrational excitation below Ec ≈ 0.3 eV. However, at higher collision energies the cross section increases notably when H2 is promoted to v = 1. This effect is explained by resorting to the effective potentials in the entrance channel. The integral cross sections have been used to calculate rate constants in the temperature range 200-1000 K. A good overall agreement has been found with the available experimental data on integral cross sections and rate constants. In addition, time-independent quantum mechanical and quasi-classical trajectory (QCT) calculations have been performed on the same PES, aimed at comparing the various methodologies and discerning the detailed mechanism of the title reaction. In particular, the analysis of individual trajectories has made it possible to explain, in terms of the coupling between reagent relative velocity and the topography of the PES, the presence of a series of alternating maxima and minima in the collision energy dependence of the QCT reaction probabilities for the reactions with H2(v=0,1,j=0), which are absent in the quantum mechanical calculations.
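
    The near-Langevin behaviour cited above can be made concrete with the standard textbook capture expressions (quoted here for orientation only; they are not derived from the paper's PES). In Gaussian units, for an ion of charge q colliding at energy E_c with a neutral of polarizability \alpha (here H2) and reduced mass \mu,

      \sigma_L(E_c) = \pi q \sqrt{\frac{2\alpha}{E_c}}, \qquad
      k_L = v\,\sigma_L(E_c) = 2\pi q \sqrt{\frac{\alpha}{\mu}},

    so the capture cross section falls off as E_c^{-1/2} while the capture rate constant is independent of collision energy, consistent with the low-energy trend reported in the abstract.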

  16. AN ACCURATE ORBITAL INTEGRATOR FOR THE RESTRICTED THREE-BODY PROBLEM AS A SPECIAL CASE OF THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    SciTech Connect

    Minesaki, Yukitaka

    2013-08-01

    For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators.

  17. Accurate measurements of cross-plane thermal conductivity of thin films by dual-frequency time-domain thermoreflectance (TDTR).

    PubMed

    Jiang, Puqing; Huang, Bin; Koh, Yee Kan

    2016-07-01

    Accurate measurements of the cross-plane thermal conductivity Λcross of a high-thermal-conductivity thin film on a low-thermal-conductivity (Λs) substrate (e.g., Λcross/Λs > 20) are challenging, due to the low thermal resistance of the thin film compared with that of the substrate. In principle, Λcross could be measured by time-domain thermoreflectance (TDTR), using a high modulation frequency fh and a large laser spot size. However, with one TDTR measurement at fh, the uncertainty of the TDTR measurement is usually high due to low sensitivity of TDTR signals to Λcross and high sensitivity to the thickness hAl of Al transducer deposited on the sample for TDTR measurements. We observe that in most TDTR measurements, the sensitivity to hAl only depends weakly on the modulation frequency f. Thus, we performed an additional TDTR measurement at a low modulation frequency f0, such that the sensitivity to hAl is comparable but the sensitivity to Λcross is near zero. We then analyze the ratio of the TDTR signals at fh to that at f0, and thus significantly improve the accuracy of our Λcross measurements. As a demonstration of the dual-frequency approach, we measured the cross-plane thermal conductivity of a 400-nm-thick nickel-iron alloy film and a 3-μm-thick Cu film, both with an accuracy of ∼10%. The dual-frequency TDTR approach is useful for future studies of thin films. PMID:27475589

  18. Accurate measurement of circulating mitochondrial DNA content from human blood samples using real-time quantitative PCR.

    PubMed

    Ajaz, Saima; Czajka, Anna; Malik, Afshan

    2015-01-01

    We describe a protocol to accurately measure the amount of human mitochondrial DNA (MtDNA) in peripheral blood samples which can be modified to quantify MtDNA from other body fluids, human cells, and tissues. This protocol is based on the use of real-time quantitative PCR (qPCR) to quantify the amount of MtDNA relative to nuclear DNA (designated the Mt/N ratio). In the last decade, there have been increasing numbers of studies describing altered MtDNA or Mt/N in circulation in common nongenetic diseases where mitochondrial dysfunction may play a role (for review see Malik and Czajka, Mitochondrion 13:481-492, 2013). These studies are distinct from those looking at genetic mitochondrial disease and are attempting to identify acquired changes in circulating MtDNA content as an indicator of mitochondrial function. However, the methodology being used is not always specific and reproducible. As more than 95% of the human mitochondrial genome is duplicated in the human nuclear genome, it is important to avoid co-amplification of nuclear pseudogenes. Furthermore, template preparation protocols can also affect the results because of the size and structural differences between the mitochondrial and nuclear genomes. Here we describe how to (1) prepare DNA from blood samples; (2) pretreat the DNA to prevent dilution bias; (3) prepare dilution standards for absolute quantification using the unique primers human mitochondrial genome forward primer (hMitoF3) and human mitochondrial genome reverse primer (hMitoR3) for the mitochondrial genome, and human nuclear genome forward primer (hB2MF1) and human nuclear genome reverse primer (hB2MR1) primers for the human nuclear genome; (4) carry out qPCR for either relative or absolute quantification from test samples; (5) analyze qPCR data; and (6) calculate the sample size to adequately power studies. The protocol presented here is suitable for high-throughput use.
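
    For the relative-quantification branch of such a protocol, the Mt/N ratio reduces, under the usual assumption of equal (near 100%) amplification efficiencies for the mitochondrial and nuclear amplicons, to 2 raised to the Ct difference. The sketch below shows that arithmetic; the Ct values are hypothetical, and only the primer names are taken from the abstract.

      # Sketch: mitochondrial-to-nuclear genome ratio (Mt/N) from qPCR Ct values,
      # assuming equal amplification efficiencies for the mitochondrial amplicon
      # (hMitoF3/hMitoR3) and the nuclear amplicon (hB2MF1/hB2MR1).
      def mt_n_ratio(ct_mito, ct_nuclear, efficiency=2.0):
          """Relative mitochondrial DNA content per nuclear genome (2^dCt form)."""
          return efficiency ** (ct_nuclear - ct_mito)

      # Hypothetical triplicate Ct values for one blood-derived DNA sample.
      ct_mito_mean = sum([17.8, 17.9, 17.7]) / 3
      ct_nuclear_mean = sum([26.4, 26.5, 26.3]) / 3
      print(f"Mt/N = {mt_n_ratio(ct_mito_mean, ct_nuclear_mean):.0f}")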

  19. Accurate measurements of cross-plane thermal conductivity of thin films by dual-frequency time-domain thermoreflectance (TDTR).

    PubMed

    Jiang, Puqing; Huang, Bin; Koh, Yee Kan

    2016-07-01

    Accurate measurements of the cross-plane thermal conductivity Λcross of a high-thermal-conductivity thin film on a low-thermal-conductivity (Λs) substrate (e.g., Λcross/Λs > 20) are challenging, due to the low thermal resistance of the thin film compared with that of the substrate. In principle, Λcross could be measured by time-domain thermoreflectance (TDTR), using a high modulation frequency fh and a large laser spot size. However, with one TDTR measurement at fh, the uncertainty of the TDTR measurement is usually high due to low sensitivity of TDTR signals to Λcross and high sensitivity to the thickness hAl of Al transducer deposited on the sample for TDTR measurements. We observe that in most TDTR measurements, the sensitivity to hAl only depends weakly on the modulation frequency f. Thus, we performed an additional TDTR measurement at a low modulation frequency f0, such that the sensitivity to hAl is comparable but the sensitivity to Λcross is near zero. We then analyze the ratio of the TDTR signals at fh to that at f0, and thus significantly improve the accuracy of our Λcross measurements. As a demonstration of the dual-frequency approach, we measured the cross-plane thermal conductivity of a 400-nm-thick nickel-iron alloy film and a 3-μm-thick Cu film, both with an accuracy of ∼10%. The dual-frequency TDTR approach is useful for future studies of thin films.

  20. Accurate measurements of cross-plane thermal conductivity of thin films by dual-frequency time-domain thermoreflectance (TDTR)

    NASA Astrophysics Data System (ADS)

    Jiang, Puqing; Huang, Bin; Koh, Yee Kan

    2016-07-01

    Accurate measurements of the cross-plane thermal conductivity Λcross of a high-thermal-conductivity thin film on a low-thermal-conductivity (Λs) substrate (e.g., Λcross/Λs > 20) are challenging, due to the low thermal resistance of the thin film compared with that of the substrate. In principle, Λcross could be measured by time-domain thermoreflectance (TDTR), using a high modulation frequency fh and a large laser spot size. However, with one TDTR measurement at fh, the uncertainty of the TDTR measurement is usually high due to low sensitivity of TDTR signals to Λcross and high sensitivity to the thickness hAl of Al transducer deposited on the sample for TDTR measurements. We observe that in most TDTR measurements, the sensitivity to hAl only depends weakly on the modulation frequency f. Thus, we performed an additional TDTR measurement at a low modulation frequency f0, such that the sensitivity to hAl is comparable but the sensitivity to Λcross is near zero. We then analyze the ratio of the TDTR signals at fh to that at f0, and thus significantly improve the accuracy of our Λcross measurements. As a demonstration of the dual-frequency approach, we measured the cross-plane thermal conductivity of a 400-nm-thick nickel-iron alloy film and a 3-μm-thick Cu film, both with an accuracy of ˜10%. The dual-frequency TDTR approach is useful for future studies of thin films.

  1. BOOK REVIEW: Time, Quantum and Information

    NASA Astrophysics Data System (ADS)

    Turner, Leaf

    2004-04-01

    Time, Quantum and Information, a paean to Professor Carl Friedrich von Weizsäcker, commemorates his 90th birthday. The range of Professor Weizsäcker’s endeavours is an exhilarating example of what can be accomplished by one freely-soaring human spirit, who is at the same time a physicist, a philosopher, and a humanitarian. The editors, Lutz Castell and Otfried Ischebeck, have assembled an admirable collection of essays and articles written by Weizsäcker’s past students, collaborators, colleagues and acquaintances. Time, Quantum and Information offers the reader a panoply of unique insights into twentieth century science and history. Entangled with the stories about Weizsäcker’s influence on the lives of some of the contributors are discussions of the activities of German scientists during and following World War II, emphasizing their reluctance to work on atomic weapons following the war. By outlining Weizsäcker’s role in the early development of numerous tributaries of physical science, the book gives us a new glimpse into the origins of some of its disparate domains, such as nuclear physics, the physics of stellar nucleosynthesis, cosmic ray physics, fluid turbulence, and the formation of the solar system. We physicists have all studied Weizsäcker’s semi-empirical mass formula describing the binding energy of nuclei. We are aware too that both he and Hans Bethe independently discovered the nuclear cycles that provide stars with their enduring energy output. We have studied the Weizsäcker--Williams technique of calculating the bremsstrahlung of relativistic electrons. But how many of us know of Weizsäcker’s work in fluid turbulence that he, like Werner Heisenberg under whom he had earned his doctorate, pursued while holed up in Farm Hall? And how many of us are aware of his introduction of turbulent viscosity to account for the origin of planetary orbits, involving the migration of mass inwards and angular momentum outwards? Moreover, before

  2. BOOK REVIEW: Time, Quantum and Information

    NASA Astrophysics Data System (ADS)

    Turner, Leaf

    2004-04-01

    Time, Quantum and Information, a paean to Professor Carl Friedrich von Weizsäcker, commemorates his 90th birthday. The range of Professor Weizsäcker’s endeavours is an exhilarating example of what can be accomplished by one freely-soaring human spirit, who is at the same time a physicist, a philosopher, and a humanitarian. The editors, Lutz Castell and Otfried Ischebeck, have assembled an admirable collection of essays and articles written by Weizsäcker’s past students, collaborators, colleagues and acquaintances. Time, Quantum and Information offers the reader a panoply of unique insights into twentieth century science and history. Entangled with the stories about Weizsäcker’s influence on the lives of some of the contributors are discussions of the activities of German scientists during and following World War II, emphasizing their reluctance to work on atomic weapons following the war. By outlining Weizsäcker’s role in the early development of numerous tributaries of physical science, the book gives us a new glimpse into the origins of some of its disparate domains, such as nuclear physics, the physics of stellar nucleosynthesis, cosmic ray physics, fluid turbulence, and the formation of the solar system. We physicists have all studied Weizsäcker’s semi-empirical mass formula describing the binding energy of nuclei. We are aware too that both he and Hans Bethe independently discovered the nuclear cycles that provide stars with their enduring energy output. We have studied the Weizsäcker--Williams technique of calculating the bremsstrahlung of relativistic electrons. But how many of us know of Weizsäcker’s work in fluid turbulence that he, like Werner Heisenberg under whom he had earned his doctorate, pursued while holed up in Farm Hall? And how many of us are aware of his introduction of turbulent viscosity to account for the origin of planetary orbits, involving the migration of mass inwards and angular momentum outwards? Moreover, before

  3. The application of intraoperative transit time flow measurement to accurately assess anastomotic quality in sequential vein grafting

    PubMed Central

    Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong

    2013-01-01

    OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp at the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named the flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in a significantly lower mean graft blood flow (P < 0.05) and, in contrast, a significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses were improved substantially after revision. CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to
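
    The quantities compared above (mean graft flow, pulsatility index, percentage diastolic filling) can all be computed from a sampled flow waveform, as in the sketch below. The synthetic waveform and the fixed diastolic gating window are purely illustrative and are not the flowmeter's internal algorithm.

      # Sketch: mean flow, pulsatility index (PI) and % diastolic filling (DF) from
      # a sampled transit-time flow waveform. Waveform and gating are illustrative.
      import numpy as np

      fs = 200.0                                    # Hz, sampling rate
      t = np.arange(0, 1.0, 1.0 / fs)               # one cardiac cycle of 1 s
      flow = 35 + 25 * np.sin(2 * np.pi * t) ** 2   # mL/min, synthetic graft flow

      mean_flow = flow.mean()
      pi = (flow.max() - flow.min()) / mean_flow    # pulsatility index

      diastole = t >= 0.4                           # assumed diastolic window
      df_percent = 100.0 * flow[diastole].sum() / flow.sum()

      print(f"mean flow {mean_flow:.1f} mL/min, PI {pi:.2f}, DF {df_percent:.0f}%")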

  4. Identification of Novel Perfluoroalkyl Ether Carboxylic Acids (PFECAs) and Sulfonic Acids (PFESAs) in Natural Waters Using Accurate Mass Time-of-Flight Mass Spectrometry (TOFMS).

    PubMed

    Strynar, Mark; Dagnino, Sonia; McMahen, Rebecca; Liang, Shuang; Lindstrom, Andrew; Andersen, Erik; McMillan, Larry; Thurman, Michael; Ferrer, Imma; Ball, Carol

    2015-10-01

    Recent scientific scrutiny and concerns over exposure, toxicity, and risk have led to international regulatory efforts resulting in the reduction or elimination of certain perfluorinated compounds from various products and waste streams. Some manufacturers have started producing shorter chain per- and polyfluorinated compounds to try to reduce the potential for bioaccumulation in humans and wildlife. Some of these new compounds contain central ether oxygens or other minor modifications of traditional perfluorinated structures. At present, there has been very limited information published on these "replacement chemistries" in the peer-reviewed literature. In this study we used a time-of-flight mass spectrometry detector (LC-ESI-TOFMS) to identify fluorinated compounds in natural waters collected from locations with historical perfluorinated compound contamination. Our workflow for discovery of chemicals included sequential sampling of surface water for identification of potential sources, nontargeted TOFMS analysis, molecular feature extraction (MFE) of samples, and evaluation of features unique to the sample with source inputs. Specifically, compounds were tentatively identified by (1) accurate mass determination of parent and/or related adducts and fragments from in-source collision-induced dissociation (CID), (2) in-depth evaluation of in-source adducts formed during analysis, and (3) confirmation with authentic standards when available. We observed groups of compounds in homologous series that differed by multiples of CF2 (m/z 49.9968) or CF2O (m/z 65.9917). Compounds in each series were chromatographically separated and had comparable fragments and adducts produced during analysis. We detected 12 novel perfluoroalkyl ether carboxylic and sulfonic acids in surface water in North Carolina, USA using this approach. A key piece of evidence was the discovery of accurate mass in-source n-mer formation (H(+) and Na(+)) differing by m/z 21.9819, corresponding to the
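
    The homologous-series reasoning described above (masses differing by multiples of CF2 or CF2O) lends itself to a simple pairwise screen, sketched below on a hypothetical list of accurate masses; the tolerance and mass values are illustrative, not the authors' molecular feature extraction workflow.

      # Sketch: flag pairs of accurate masses that differ by an integer number of
      # CF2 (49.9968 Da) or CF2O (65.9917 Da) units. The mass list is hypothetical.
      from itertools import combinations

      CF2, CF2O = 49.9968, 65.9917

      def homologue_links(masses, ppm_tol=5.0, max_n=8):
          links = []
          for m1, m2 in combinations(sorted(masses), 2):
              diff = m2 - m1
              for unit, name in ((CF2, "CF2"), (CF2O, "CF2O")):
                  n = round(diff / unit)
                  if 1 <= n <= max_n and abs(diff - n * unit) <= ppm_tol * 1e-6 * m2:
                      links.append((m1, m2, f"{n} x {name}"))
          return links

      masses = [328.9674, 378.9642, 428.9610, 394.9591]   # hypothetical [M-H]- values
      for link in homologue_links(masses):
          print(link)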

  5. Isotopic Ratio Outlier Analysis of the S. cerevisiae Metabolome Using Accurate Mass Gas Chromatography/Time-of-Flight Mass Spectrometry: A New Method for Discovery.

    PubMed

    Qiu, Yunping; Moir, Robyn; Willis, Ian; Beecher, Chris; Tsai, Yu-Hsuan; Garrett, Timothy J; Yost, Richard A; Kurland, Irwin J

    2016-03-01

    Isotopic ratio outlier analysis (IROA) is a (13)C metabolomics profiling method that eliminates sample to sample variance, discriminates against noise and artifacts, and improves the identification of compounds; it was previously implemented with accurate mass liquid chromatography/mass spectrometry (LC/MS). This is the first report using IROA technology in combination with accurate mass gas chromatography/time-of-flight mass spectrometry (GC/TOF-MS), here used to examine the S. cerevisiae metabolome. S. cerevisiae was grown in YNB medium containing randomized 95% (13)C or 5% (13)C glucose as the single carbon source, so that the isotopomer pattern of all metabolites would mirror the labeled glucose. When these IROA experiments are combined, the abundance of the heavy isotopologues in the 5% (13)C extracts, or light isotopologues in the 95% (13)C extracts, follows the binomial distribution, showing mirrored peak pairs for the molecular ion. The mass difference between the (12)C and (13)C monoisotopic peaks, expressed in units of the (13)C-(12)C mass difference, equals the number of carbons in the molecule. The IROA-GC/MS protocol developed here, using both chemical and electron ionization, extends the information acquired from the isotopic peak patterns for formula generation. The process can be formulated as an algorithm in which the number of carbons, as well as the number of methoximations and silylations, is used as a search constraint. In electron impact (EI/IROA) spectra, the artifactual peaks are identified and easily removed, which has the potential to generate "clean" EI libraries. The combination of chemical ionization (CI) IROA and EI/IROA affords a metabolite identification procedure that enables the identification of coeluting metabolites, and allowed us to characterize 126 metabolites in the current study. PMID:26820234
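
    The carbon-counting step amounts to dividing the spacing of the mirrored peak pair by the 13C-12C mass difference (about 1.003355 Da), as sketched below; the example masses are hypothetical.

      # Sketch: number of carbons from the spacing of a mirrored IROA peak pair
      # (12C monoisotopic vs 13C monoisotopic). Example masses are hypothetical.
      DELTA_13C = 1.003355   # Da, 13C - 12C mass difference

      def carbon_count(mz_12c_mono, mz_13c_mono):
          return round((mz_13c_mono - mz_12c_mono) / DELTA_13C)

      # e.g. a molecular ion pair observed at m/z 323.1300 (12C) and 329.1501 (13C)
      print(carbon_count(323.1300, 329.1501))   # -> 6 carbons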

  6. 76 FR 42536 - Real-Time System Management Information Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... System Management Information Program on November 8, 2010, at 75 FR 68418. The final rule document also... Federal Highway Administration 23 CFR Part 511 RIN 2125-AF19 Real-Time System Management Information... available and share traffic and travel conditions information via real-time information programs as...

  7. Accurate and in situ monitoring of bacterial concentration using a real time all-fibre spectroscopic device

    NASA Astrophysics Data System (ADS)

    Tao, W.; McGoverin, C.; Lydiard, S.; Song, Y.; Cheng, M.; Swift, S.; Singhal, N.; Vanholsbeeck, F.

    2015-07-01

    Accurate in situ monitoring of bacterial transport is important for increased understanding and improvement of bioremediation processes where microorganisms convert toxic compounds to more benign compounds. Bioremediation methods have become the preferred mechanism for the rehabilitation of hard to reach contaminated environments. In this study, we have used fluorescence spectroscopy to monitor the movement of fluorescently labelled bacteria (Rhodococcus erythropolis and Pseudomonas putida) within a bench-top column filled with a porous medium. In situ fluorescence measurements made using a fibre optic based instrument ('optrode') were compared to ex situ measurements made using a plate reader. In situ monitoring using this fibre optic based instrument is a promising alternative to ex situ measurements as the initial flow of bacteria is reliably observed. However, a greater understanding of the effect of the porous medium on fluorescence measurements is required to develop an accurate calibration for bacterial concentration based on in situ measurements.

  8. IrisPlex: a sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information.

    PubMed

    Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred

    2011-06-01

    A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as those found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help concentrate police investigations towards finding unknown individuals, at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs) that previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high levels of genotyping sensitivity with complete profiles generated from as little as 31 pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to correctly classify an individual's eye colour, via probability estimation solely based on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals from various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses from HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps forward in the creation of a fully individualised EVC prediction system for future use in forensic DNA intelligence.
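
    The prediction model referred to above is of the multinomial-logistic type: genotypes at the six SNPs are turned into class probabilities for blue, intermediate and brown. The sketch below shows that model form with made-up coefficients; the beta values are placeholders and are NOT the published IrisPlex parameters.

      # Sketch of a multinomial-logistic eye-colour predictor of the IrisPlex type.
      # Genotypes are coded as minor-allele counts (0/1/2) for the six SNPs; the
      # coefficients are placeholders, not the published model parameters.
      import math

      SNPS = ["rs12913832", "rs1800407", "rs12896399",
              "rs16891982", "rs1393350", "rs12203592"]

      # Placeholder (intercept, per-SNP betas) for blue and intermediate,
      # each contrasted against brown as the reference category.
      BETA = {
          "blue":         (-3.0, [2.1, 0.4, 0.3, 0.8, 0.3, 0.2]),
          "intermediate": (-1.5, [0.9, 0.6, 0.2, 0.4, 0.2, 0.1]),
      }

      def eye_colour_probabilities(genotypes):
          """genotypes: dict SNP -> minor-allele count. Returns class probabilities."""
          scores = {"brown": 0.0}
          for colour, (intercept, betas) in BETA.items():
              scores[colour] = intercept + sum(
                  b * genotypes[snp] for b, snp in zip(betas, SNPS))
          denom = sum(math.exp(s) for s in scores.values())
          return {colour: math.exp(s) / denom for colour, s in scores.items()}

      sample = dict(zip(SNPS, [2, 0, 1, 2, 1, 0]))   # hypothetical genotype profile
      print(eye_colour_probabilities(sample))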

  9. The Lunar Laser Ranging Experiment: Accurate ranges have given a large improvement in the lunar orbit and new selenophysical information.

    PubMed

    Bender, P L; Currie, D G; Poultney, S K; Alley, C O; Dicke, R H; Wilkinson, D T; Eckhardt, D H; Faller, J E; Kaula, W M; Mulholland, J D; Plotkin, H H; Silverberg, E C; Williams, J G

    1973-10-19

    The lunar ranging measurements now being made at the McDonald Observatory have an accuracy of 1 nsec in round-trip travel time. This corresponds to 15 cm in the one-way distance. The use of lasers with pulse-lengths of less than 1 nsec is expected to give an accuracy of 2 to 3 cm in the next few years. A new station is under construction in Hawaii, and additional stations in other countries are either in operation or under development. It is hoped that these stations will form the basis for a worldwide network to determine polar motion and earth rotation on a regular basis, and will assist in providing information about movement of the tectonic plates making up the earth's surface. Several mobile lunar ranging stations with telescopes having diameters of 1.0 m or less could, in the future, greatly extend the information obtainable about motions within and between the tectonic plates. The data obtained so far by the McDonald Observatory have been used to generate a new lunar ephemeris based on direct numerical integration of the equations of motion for the moon and planets. With this ephemeris, the range to the three Apollo retro-reflectors can be fit to an accuracy of 5 m by adjusting the differences in moments of inertia of the moon about its principal axes, the selenocentric coordinates of the reflectors, and the McDonald longitude. The accuracy of fitting the results is limited currently by errors of the order of an arc second in the angular orientation of the moon, as derived from the best available theory of how the moon rotates in response to the torques acting on it. Both a new calculation of the moon's orientation as a function of time based on direct numerical integration of the torque equations and a new analytic theory of the moon's orientation are expected to be available soon, and to improve considerably the accuracy of fitting the data. The accuracy already achieved routinely in lunar laser ranging represents a hundredfold improvement over any

  10. The Lunar Laser Ranging Experiment: Accurate ranges have given a large improvement in the lunar orbit and new selenophysical information.

    PubMed

    Bender, P L; Currie, D G; Poultney, S K; Alley, C O; Dicke, R H; Wilkinson, D T; Eckhardt, D H; Faller, J E; Kaula, W M; Mulholland, J D; Plotkin, H H; Silverberg, E C; Williams, J G

    1973-10-19

    The lunar ranging measurements now being made at the McDonald Observatory have an accuracy of 1 nsec in round-trip travel time. This corresponds to 15 cm in the one-way distance. The use of lasers with pulse-lengths of less than 1 nsec is expected to give an accuracy of 2 to 3 cm in the next few years. A new station is under construction in Hawaii, and additional stations in other countries are either in operation or under development. It is hoped that these stations will form the basis for a worldwide network to determine polar motion and earth rotation on a regular basis, and will assist in providing information about movement of the tectonic plates making up the earth's surface. Several mobile lunar ranging stations with telescopes having diameters of 1.0 m or less could, in the future, greatly extend the information obtainable about motions within and between the tectonic plates. The data obtained so far by the McDonald Observatory have been used to generate a new lunar ephemeris based on direct numerical integration of the equations of motion for the moon and planets. With this ephemeris, the range to the three Apollo retro-reflectors can be fit to an accuracy of 5 m by adjusting the differences in moments of inertia of the moon about its principal axes, the selenocentric coordinates of the reflectors, and the McDonald longitude. The accuracy of fitting the results is limited currently by errors of the order of an arc second in the angular orientation of the moon, as derived from the best available theory of how the moon rotates in response to the torques acting on it. Both a new calculation of the moon's orientation as a function of time based on direct numerical integration of the torque equations and a new analytic theory of the moon's orientation are expected to be available soon, and to improve considerably the accuracy of fitting the data. The accuracy already achieved routinely in lunar laser ranging represents a hundredfold improvement over any

  11. Children's Processing and Comprehension of Complex Sentences Containing Temporal Connectives: The Influence of Memory on the Time Course of Accurate Responses

    ERIC Educational Resources Information Center

    Blything, Liam P.; Cain, Kate

    2016-01-01

    In a touch-screen paradigm, we recorded 3- to 7-year-olds' (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing "before" and "after". Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order…

  12. Information display and interaction in real-time environments

    NASA Technical Reports Server (NTRS)

    Bocast, A. K.

    1983-01-01

    The available information bandwidth as a function of a system's complexity and time constraints in a real-time control environment was examined. Modern interactive graphics techniques provide very high bandwidth data displays. In real-time control environments, effective information interaction rates are a function not only of machine data technologies but also of human information processing capabilities and the four-dimensional resolution of available interaction techniques.

  13. An accurate treatment of diffuse reflection boundary conditions for a stochastic particle Fokker-Planck algorithm with large time steps

    NASA Astrophysics Data System (ADS)

    Önskog, Thomas; Zhang, Jun

    2015-12-01

    In this paper, we present a stochastic particle algorithm for the simulation of flows of wall-confined gases with diffuse reflection boundary conditions. Based on the theoretical observation that the change in location of the particles consists of a deterministic part and a Wiener process if the time scale is much larger than the relaxation time, a new estimate for the first hitting time at the boundary is obtained. This estimate facilitates the construction of an algorithm with large time steps for wall-confined flows. Numerical simulations verify that the proposed algorithm reproduces the correct boundary behaviour.
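
    To picture the update rule described above (deterministic drift plus a Wiener increment over a large time step), the sketch below advances a wall-confined particle and, when the wall at x = 0 is crossed, re-emits it diffusely with a Maxwellian-flux normal velocity. The residual-time handling and all parameter values are simplified placeholders; the paper's refined first-hitting-time estimate is not reproduced here.

      # Sketch: one large time step of a wall-confined particle whose displacement is
      # drift + Wiener increment, with diffuse re-emission at the wall x = 0.
      # This is a simplified stand-in for the paper's algorithm, not the algorithm itself.
      import numpy as np

      rng = np.random.default_rng(0)

      def step(x, v, dt, diffusion, k_t_over_m):
          """Advance position x by v*dt plus sqrt(2*D*dt)*N(0,1); handle wall crossing."""
          x_new = x + v * dt + np.sqrt(2.0 * diffusion * dt) * rng.standard_normal()
          if x_new < 0.0:                  # wall crossed during the step
              # Diffuse reflection: outgoing normal velocity sampled from the
              # Maxwellian flux distribution at the wall temperature (kT/m given).
              v_out = np.sqrt(2.0 * k_t_over_m * (-np.log(rng.uniform())))
              frac = rng.uniform()         # crude guess at the residual time fraction
              x_new = v_out * frac * dt
              v = v_out
          return x_new, v

      x, v = 0.005, -20.0                  # m, m/s (illustrative initial state)
      for _ in range(5):
          x, v = step(x, v, dt=1e-4, diffusion=1e-3, k_t_over_m=8.3e4)
          print(f"x = {x:.5f} m, v = {v:.1f} m/s")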

  14. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR

    PubMed Central

    Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  15. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    PubMed

    Li, XueYan; Cheng, JinYun; Zhang, Jing; Teixeira da Silva, Jaime A; Wang, ChunXia; Sun, HongMei

    2015-01-01

    Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446

  16. A generalized tool for accurate time-domain separation of excited modes in spin-torque oscillators

    SciTech Connect

    Siracusano, Giulio Puliafito, Vito; Finocchio, Giovanni

    2014-05-07

    We propose and develop an advanced signal processing technique that, combined with micromagnetic simulations, is able to describe in depth the non-stationary behavior of spin-torque oscillators, in terms of both the time domain and the spatial distribution of the magnetization dynamics. The Hilbert-Huang Transform is used to identify the time traces of each oscillation in a multimode excitation, and is enhanced with masking signals and the Ensemble Empirical Mode Decomposition. We emphasize that the technique developed here is general and can be used for any physical non-linear system in the presence of multimode dynamical excitation or intermittence.
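
    Once the modes have been separated (by EEMD with masking signals in the approach above), each mode's time trace is characterised through the Hilbert transform's instantaneous amplitude and frequency. The sketch below does only that last, Hilbert-spectral step with SciPy on a synthetic, already separated mode; the mode separation itself is assumed done upstream and the signal is illustrative.

      # Sketch: instantaneous amplitude and frequency of one (already separated)
      # mode via the analytic signal; the synthetic 5 GHz mode is illustrative.
      import numpy as np
      from scipy.signal import hilbert

      fs = 50e9                                   # Hz, sampling rate
      t = np.arange(0, 2e-7, 1.0 / fs)
      mode = (1.0 + 0.3 * np.sin(2 * np.pi * 20e6 * t)) * np.sin(2 * np.pi * 5e9 * t)

      analytic = hilbert(mode)
      amplitude = np.abs(analytic)                         # instantaneous envelope
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.diff(phase) / (2 * np.pi) * fs        # instantaneous frequency, Hz

      print(f"mean frequency {inst_freq.mean()/1e9:.2f} GHz, "
            f"envelope range {amplitude.min():.2f}-{amplitude.max():.2f}")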

  17. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2008-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.

  18. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2010-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
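
    The frequency-domain equation-error idea can be pictured as accumulating discrete Fourier transforms of the measured states and controls sample by sample (as would be done in real time) and then solving a linear least-squares problem for the derivatives. The sketch below does this for a hypothetical one-degree-of-freedom pitch model with made-up parameters and excitation; it is an illustration of the technique, not the F-15B or subscale-transport analysis.

      # Sketch: frequency-domain equation-error identification for a 1-DOF model
      #   q_dot = M_alpha*alpha + M_q*q + M_de*de
      # using sample-by-sample DFT accumulation and linear least squares.
      # Signals, "true" parameters and frequencies are illustrative.
      import numpy as np

      dt, n_steps = 0.02, 2500
      freqs_hz = np.arange(0.2, 1.25, 0.1)              # analysis band
      omega = 2 * np.pi * freqs_hz
      t = np.arange(n_steps) * dt

      # Simulated "measurements" from a hypothetical true model.
      M_alpha, M_q, M_de = -4.0, -1.2, -6.5
      de = 0.05 * np.sin(2 * np.pi * 0.4 * t) + 0.03 * np.sin(2 * np.pi * 1.1 * t)
      alpha, q = np.zeros(n_steps), np.zeros(n_steps)
      for k in range(n_steps - 1):
          q_dot = M_alpha * alpha[k] + M_q * q[k] + M_de * de[k]
          q[k + 1] = q[k] + dt * q_dot
          alpha[k + 1] = alpha[k] + dt * q[k]           # crude kinematics

      # DFT accumulation, one sample at a time: X(w) += x[k]*exp(-j*w*t[k])*dt
      X = {name: np.zeros(len(omega), dtype=complex) for name in ("alpha", "q", "de")}
      for k in range(n_steps):
          phase = np.exp(-1j * omega * t[k]) * dt
          X["alpha"] += alpha[k] * phase
          X["q"] += q[k] * phase
          X["de"] += de[k] * phase

      # Equation error: j*w*Q(w) = M_alpha*A(w) + M_q*Q(w) + M_de*DE(w)
      lhs = 1j * omega * X["q"]
      A = np.column_stack([X["alpha"], X["q"], X["de"]])
      theta, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                                  np.concatenate([lhs.real, lhs.imag]), rcond=None)
      print("estimated [M_alpha, M_q, M_de]:", np.round(theta, 2))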

  19. A rapid and accurate quantification method for real-time dynamic analysis of cellular lipids during microalgal fermentation processes in Chlorella protothecoides with low field nuclear magnetic resonance.

    PubMed

    Wang, Tao; Liu, Tingting; Wang, Zejian; Tian, Xiwei; Yang, Yi; Guo, Meijin; Chu, Ju; Zhuang, Yingping

    2016-05-01

    Rapid, real-time lipid determination can provide valuable information for process regulation and optimization in algal lipid mass production. In this study, a rapid, accurate and precise quantification method for in vivo cellular lipids of Chlorella protothecoides using low field nuclear magnetic resonance (LF-NMR) was developed. LF-NMR was extremely sensitive to the algal lipids, with limits of detection (LOD) of 0.0026 g and 0.32 g/L in dry lipid samples and algal broth, respectively, and limits of quantification (LOQ) of 0.0093 g and 1.18 g/L. Moreover, the LF-NMR signal was specifically proportional to the cellular lipids of C. protothecoides; thus excellent regression curves were obtained over a wide detection range, from 0.02 to 0.42 g for dry lipids and from 1.12 to 8.97 g L(-1) of lipid concentration for in vivo quantification, with all R(2) higher than 0.99, irrespective of variations in lipid content and fatty acid profile. The accuracy of this method was further verified by comparing lipid quantification results with those obtained by GC-MS, and the relative standard deviation (RSD) of the LF-NMR results was smaller than 2%, demonstrating the precision of the method. Finally, the method was successfully used for on-line lipid monitoring during algal lipid fermentation processes, enabling a better understanding of the lipid accumulation mechanism and dynamic bioprocess control. PMID:26948045
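
    The calibration underlying this kind of measurement is a linear regression of the NMR signal against lipid mass or concentration, with LOD and LOQ conventionally taken as 3.3 and 10 times the residual noise over the slope. The sketch below shows that calculation on hypothetical calibration points; it is not the paper's data or fitting procedure.

      # Sketch: linear calibration of an LF-NMR signal against lipid concentration,
      # with LOD/LOQ from the usual 3.3*sigma/slope and 10*sigma/slope conventions.
      # The calibration points are hypothetical.
      import numpy as np

      conc = np.array([1.0, 2.0, 3.0, 4.5, 6.0, 7.5, 9.0])      # g/L lipid
      signal = np.array([118, 231, 355, 529, 716, 890, 1063])   # a.u., NMR amplitude

      slope, intercept = np.polyfit(conc, signal, 1)
      residuals = signal - (slope * conc + intercept)
      sigma = residuals.std(ddof=2)                              # residual std. dev.

      r2 = 1 - residuals.var() / signal.var()
      lod = 3.3 * sigma / slope
      loq = 10.0 * sigma / slope
      print(f"slope {slope:.1f} a.u. per g/L, R^2 {r2:.4f}, "
            f"LOD {lod:.2f} g/L, LOQ {loq:.2f} g/L")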

  20. Latency-information theory and applications: Part I. On the discovery of the time dual for information theory

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2008-04-01

    As part of research conducted on the design of an efficient clutter covariance processor for DARPA's knowledge aided sensor signal processing expert reasoning (KASSPER) program, a time dual for information theory was discovered and named latency theory; this theory is discussed in this first paper of a multi-paper series. While information theory addresses the design of communication systems, latency theory does the same for recognition systems. Recognition system is the name given to the time dual of a communication system. A recognition system uses prior-knowledge about a signal-processor's input to enable the sensing of its output by a processing-time limited sensor when the fastest possible signal-processor replacement cannot achieve this task. A processor-coder is the time dual of a source coder. While a source coder replaces a signal-source to yield a smaller sourced-space in binary digit (bits) units, a processor coder replaces a signal-processor to yield a smaller processing-time in binary operator (bors) units. A sensor coder is the time dual of a channel coder. While a channel coder identifies the necessary overhead-knowledge for accurate communications, a sensor coder identifies the necessary prior-knowledge for accurate recognitions. In the second paper of this series, latency theory is illustrated with real-world knowledge-aided radar.

  1. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The performance of the processing algorithm was validated by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured using a classic, direct-contact measurement system.
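
    The core of the MUSIC step is an eigendecomposition of an estimated autocorrelation matrix and a search for the frequency whose steering vector is most nearly orthogonal to the noise subspace. The sketch below applies that pseudospectrum to a synthetic 72 bpm tone in noise; the signal, window length and subspace dimensions are illustrative, not the paper's radar processing chain.

      # Sketch: heart-rate estimation from a short record with the MUSIC pseudospectrum.
      # A synthetic 1.2 Hz (72 bpm) component in noise stands in for the measured signal.
      import numpy as np
      from scipy.linalg import eigh, toeplitz

      fs = 20.0                                     # Hz, sampling rate
      t = np.arange(0, 15.0, 1.0 / fs)              # 15 s observation
      rng = np.random.default_rng(1)
      x = np.sin(2 * np.pi * 1.2 * t) + 0.8 * rng.standard_normal(t.size)

      m = 40                                        # correlation matrix order
      p = 2                                         # signal subspace size (one real tone)
      r = np.array([np.dot(x[:x.size - k], x[k:]) / (x.size - k) for k in range(m)])
      R = toeplitz(r)                               # estimated autocorrelation matrix

      eigvals, eigvecs = eigh(R)                    # eigenvalues in ascending order
      noise_subspace = eigvecs[:, : m - p]          # eigenvectors of the smallest ones

      freqs = np.linspace(0.5, 3.0, 501)            # 30-180 bpm search band
      pseudo = np.empty_like(freqs)
      for i, f in enumerate(freqs):
          a = np.exp(-2j * np.pi * f / fs * np.arange(m))     # steering vector
          proj = noise_subspace.conj().T @ a
          pseudo[i] = 1.0 / np.real(np.vdot(proj, proj))

      print(f"estimated heart rate: {freqs[np.argmax(pseudo)] * 60.0:.0f} bpm")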

  2. The role of cognitive switching in head-up displays. [to determine pilot ability to accurately extract information from either of two sources

    NASA Technical Reports Server (NTRS)

    Fischer, E.

    1979-01-01

    The pilot's ability to accurately extract information from either one or both of two superimposed sources of information was determined. Static, aerial, color 35 mm slides of external runway environments and slides of corresponding static head-up display (HUD) symbology were used as the sources. A three channel tachistoscope was utilized to show either the HUD alone, the scene alone, or the two slides superimposed. Cognitive performance of the pilots was assessed by determining the percentage of correct answers given to two HUD related questions, two scene related questions, or one HUD and one scene related question.

  3. A Non-Dissipative Staggered Fourth-Order Accurate Explicit Finite Difference Scheme for the Time-Domain Maxwell's Equations

    NASA Technical Reports Server (NTRS)

    Yefet, Amir; Petropoulos, Peter G.

    1999-01-01

    We consider a divergence-free non-dissipative fourth-order explicit staggered finite difference scheme for the hyperbolic Maxwell's equations. Special one-sided difference operators are derived in order to implement the scheme near metal boundaries and dielectric interfaces. Numerical results show the scheme is long-time stable, and is fourth-order convergent over complex domains that include dielectric interfaces and perfectly conducting surfaces. We also examine the scheme's behavior near metal surfaces that are not aligned with the grid axes, and compare its accuracy to that obtained by the Yee scheme.
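
    A one-dimensional toy version of such a scheme is sketched below: fourth-order staggered differences in space combined with a second-order leapfrog step in time on a periodic domain. The special one-sided boundary and interface operators derived in the paper are not reproduced, and the paper's exact time discretization may differ; this only illustrates the staggered fourth-order stencil.

```python
# 1-D staggered grid, fourth-order spatial stencil, leapfrog time stepping for the
# normalized Maxwell curl equations (E, H) with periodic boundaries. Illustrative only.
import numpy as np

def d_dx_staggered4(u, dx):
    """Fourth-order derivative of u evaluated halfway between its nodes."""
    up1, up2, um1 = np.roll(u, -1), np.roll(u, -2), np.roll(u, 1)
    return (27.0 * (up1 - u) - (up2 - um1)) / (24.0 * dx)

nx, L = 200, 1.0
dx = L / nx
dt = 0.5 * dx                       # CFL-limited time step (c = 1)
x_e = np.arange(nx) * dx            # E nodes
E = np.exp(-((x_e - 0.5) / 0.05) ** 2)   # Gaussian pulse in E
H = np.zeros(nx)                    # H nodes staggered by half a cell

for _ in range(400):                # 400 * dt = one traversal of the ring
    H -= dt * d_dx_staggered4(E, dx)                     # dH/dt = -dE/dx at H nodes
    E -= dt * np.roll(d_dx_staggered4(H, dx), 1)         # dE/dt = -dH/dx at E nodes

print("max |E| after one round trip:", np.abs(E).max())  # pulse reforms, ~1 if non-dissipative
```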

  4. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    NASA Technical Reports Server (NTRS)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  5. An efficient and accurate approximation to time-dependent density functional theory for systems of weakly coupled monomers

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Herbert, John M.

    2015-07-01

    A novel formulation of time-dependent density functional theory (TDDFT) is derived, based on non-orthogonal, absolutely-localized molecular orbitals (ALMOs). We call this approach TDDFT(MI), in reference to ALMO-based methods for describing molecular interactions (MI) that have been developed for ground-state applications. TDDFT(MI) is intended for efficient excited-state calculations in systems composed of multiple, weakly interacting chromophores. The efficiency is based upon (1) a local excitation approximation; (2) monomer-based, singly-excited basis states; (3) an efficient localization procedure; and (4) a one-step Davidson method to solve the TDDFT(MI) working equation. We apply this methodology to study molecular dimers, water clusters, solvated chromophores, and aggregates of naphthalene diimide that form the building blocks of self-assembling organic nanotubes. Absolute errors of 0.1-0.3 eV with respect to supersystem methods are achievable for these systems, especially for cases involving an excited chromophore that is weakly coupled to several explicit solvent molecules. Excited-state calculations in an aggregate of nine naphthalene diimide monomers are ˜40 times faster than traditional TDDFT calculations.

  6. Simple, accurate, and efficient implementation of 1-electron atomic time-dependent Schrödinger equation in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Patchkovskii, Serguei; Muller, H. G.

    2016-02-01

    Modelling atomic processes in intense laser fields often relies on solving the time-dependent Schrödinger equation (TDSE). For processes involving ionisation, such as above-threshold ionisation (ATI) and high-harmonic generation (HHG), this is a formidable task even if only one electron is active. Several powerful ideas for efficient implementation of atomic TDSE were introduced by H.G. Muller some time ago (Muller, 1999), including: separation of Hamiltonian terms into tri-diagonal parts; implicit representation of the spatial derivatives; and use of a rotating reference frame. Here, we extend these techniques to allow for non-uniform radial grids, arbitrary laser field polarisation, and non-Hermitian terms in the Hamiltonian due to the implicit form of the derivatives (previously neglected). We implement the resulting propagator in a parallel Fortran program, adapted for multi-core execution. Cost of TDSE propagation scales linearly with the problem size, enabling full-dimensional calculations of strong-field ATI and HHG spectra for arbitrary field polarisations on a standard desktop PC.
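
    The implicit, tri-diagonal structure referred to above is the same one that appears in a textbook Crank-Nicolson propagator. The sketch below is a plain 1-D Cartesian toy (a Gaussian packet in a weak harmonic potential), not the spherical-coordinate, non-uniform-grid, rotating-frame propagator described in the paper.

```python
# Toy 1-D Crank-Nicolson TDSE propagator with a tridiagonal (banded) solve.
import numpy as np
from scipy.linalg import solve_banded

nx, dx, dt = 600, 0.1, 0.01
x = (np.arange(nx) - nx // 2) * dx
V = 0.5 * 0.01 * x**2                          # weak harmonic potential (a.u.)

# Tridiagonal Hamiltonian: H = -(1/2) d^2/dx^2 + V
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(nx - 1)

# Crank-Nicolson: (1 + i dt/2 H) psi_{n+1} = (1 - i dt/2 H) psi_n
ab = np.zeros((3, nx), dtype=complex)          # banded storage for solve_banded
ab[0, 1:] = 1j * dt / 2 * off                  # superdiagonal
ab[1, :] = 1.0 + 1j * dt / 2 * main            # main diagonal
ab[2, :-1] = 1j * dt / 2 * off                 # subdiagonal

def rhs(psi):
    out = (1.0 - 1j * dt / 2 * main) * psi
    out[:-1] += -1j * dt / 2 * off * psi[1:]
    out[1:] += -1j * dt / 2 * off * psi[:-1]
    return out

# Gaussian wave packet with momentum k0 = 1, started away from the box edges.
psi = np.exp(-((x + 10) ** 2) / 4) * np.exp(1j * 1.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(500):
    psi = solve_banded((1, 1), ab, rhs(psi))

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)   # ~1: step is unitary
```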

  7. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work of the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed, and the cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of the maximum and minimum lateral stresses, in excellent agreement with the theoretical value, G = 0.25. All the conventional methods of overcoring, hydrofracturing, and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as that in all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained in the major international stress measurement work of the Hot Dry Rock Projects conducted in the USA, England, and Germany are found to be in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  8. Towards Efficient and Accurate Description of Many-Electron Problems: Developments of Static and Time-Dependent Electronic Structure Methods

    NASA Astrophysics Data System (ADS)

    Ding, Feizhi

    Understanding electronic behavior in molecular and nano-scale systems is fundamental to the development and design of novel technologies and materials for application in a variety of scientific contexts, from fundamental research to energy conversion. This dissertation aims to provide insights into this goal by developing novel methods and applications of first-principles electronic structure theory. Specifically, we will present new methods and applications of excited-state multi-electron dynamics based on the real-time (RT) time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) formalisms, and new developments of multi-configuration self-consistent field (MCSCF) theory for modeling ground-state electronic structure. The RT-TDHF/TDDFT based developments and applications can be categorized into three broad and coherently integrated research areas: (1) modeling of the interaction between molecules and external electromagnetic perturbations. In this part we will first prove both analytically and numerically the gauge invariance of the TDHF/TDDFT formalisms, then we will present a novel, efficient method for calculating molecular nonlinear optical properties, and last we will study quantum coherent plasmons in metal nanowires using RT-TDDFT; (2) modeling of excited-state charge transfer in molecules. In this part, we will investigate the mechanisms of bridge-mediated electron transfer, and then we will introduce a newly developed non-equilibrium quantum/continuum embedding method for studying charge transfer dynamics in solution; (3) developments of first-principles spin-dependent many-electron dynamics. In this part, we will present an ab initio non-relativistic spin dynamics method based on the two-component generalized Hartree-Fock approach, and then we will generalize it to the two-component TDDFT framework and combine it with the Ehrenfest molecular dynamics approach for modeling the interaction between electron spins and nuclear

  9. Accurate relative-phase and time-delay maps all over the emission cone of hyperentangled photon source

    NASA Astrophysics Data System (ADS)

    Hegazy, Salem F.; El-Azab, Jala; Badr, Yehia A.; Obayya, Salah S. A.

    2016-04-01

    A high flux of hyperentangled photons entails collecting the two-photon emission over a relatively wide extent in frequency and transverse space, within which the photon pairs are simultaneously entangled in multiple degrees of freedom. In this paper, we present a numerical approach to determining the spatial-spectral relative-phase and time-delay maps of hyperentangled photons all over the spontaneous parametric down conversion (SPDC) emission cone. We consider the hyperentangled photons produced by superimposing noncollinear SPDC emissions of two crossed and coherently pumped nonlinear crystals. We adopt a vectorial representation for all parameters of concern. This enables us to study special settings such as self-compensation via oblique pump incidence. While a rigorous quantum treatment of SPDC emission requires a Gaussian state representation, in the low-gain regime (as in this study) it is well approximated to first order by a superposition of vacuum and two-photon states. The relative-phase and time-delay maps are then calculated between the two-photon wavepackets created at symmetrical locations of the crystals. Assuming a monochromatic plane-wave pump field, the mutual signal-idler relations, such as energy conservation and transverse-momentum conservation, well define one of the two photons with reference to its conjugate. The weaker conservation of longitudinal momentum (due to relatively thin crystals) allows two-photon emission directions coplanar with the pump beam while spreading around the perfect phase-matching direction. While prior works often adopt a first-order approximation, it is shown that the relative-phase map is very well approximated by a quadratic function of the polar angle of the two-photon emission, while varying negligibly with the azimuthal angle.

  10. Novel real-time simultaneous amplification and testing method to accurately and rapidly detect Mycobacterium tuberculosis complex.

    PubMed

    Cui, Zhenling; Wang, Yongzhong; Fang, Liang; Zheng, Ruijuan; Huang, Xiaochen; Liu, Xiaoqin; Zhang, Gang; Rui, Dongmei; Ju, Jinliang; Hu, Zhongyi

    2012-03-01

    The aim of this study was to establish and evaluate a simultaneous amplification and testing method for detection of the Mycobacterium tuberculosis complex (SAT-TB assay) in clinical specimens by using isothermal RNA amplification and real-time fluorescence detection. In the SAT-TB assay, a 170-bp M. tuberculosis 16S rRNA fragment is reverse transcribed to DNA by use of Moloney murine leukemia virus (M-MLV) reverse transcriptase, using specific primers incorporating the T7 promoter sequence, and undergoes successive cycles of amplification using T7 RNA polymerase. Using a real-time PCR instrument, hybridization of an internal 6-carboxyfluorescein-4-[4-(dimethylamino)phenylazo] benzoic acid N-succinimidyl ester (FAM-DABCYL)-labeled fluorescent probe can be used to detect RNA amplification. The SAT-TB assay takes less than 1.5 h to perform, and the sensitivity of the assay for detection of M. tuberculosis H37Rv is 100 CFU/ml. The TB probe has no cross-reactivity with nontuberculous mycobacteria or other common respiratory tract pathogens. For 253 pulmonary tuberculosis (PTB) specimens and 134 non-TB specimens, the SAT-TB results correlated with 95.6% (370/387 specimens) of the Bactec MGIT 960 culture assay results. The sensitivity, specificity, and positive and negative predictive values of the SAT-TB test for the diagnosis of PTB were 67.6%, 100%, 100%, and 62.0%, respectively, compared to 61.7%, 100%, 100%, and 58.0% for Bactec MGIT 960 culture. For PTB diagnosis, the sensitivities of the SAT-TB and Bactec MGIT 960 culture methods were 97.6% and 95.9%, respectively, for smear-positive specimens and 39.2% and 30.2%, respectively, for smear-negative specimens. In conclusion, the SAT-TB assay is a novel, simple test with a high specificity which may enhance the detection rate of TB. It is therefore a promising tool for rapid diagnosis of M. tuberculosis infection in clinical microbiology laboratories.
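
    The reported diagnostic figures follow from a standard 2x2 contingency table. The snippet below reconstructs plausible counts from the stated 253 PTB and 134 non-TB specimens (an assumption; the study's raw table is not given) and recomputes sensitivity, specificity, PPV, and NPV.

```python
# Sketch with reconstructed, illustrative counts (not the study's raw data).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# 171/253 ~ 67.6% sensitivity and 0 false positives among 134 non-TB specimens.
tp, fn = 171, 82
tn, fp = 134, 0
sens, spec, ppv, npv = diagnostic_metrics(tp, fp, fn, tn)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```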

  11. ACCURATE TIME-DEPENDENT WAVE PACKET STUDY OF THE H⁺+LiH REACTION AT EARLY UNIVERSE CONDITIONS

    SciTech Connect

    Aslan, E.; Bulut, N.; Castillo, J. F.; Banares, L.; Aoiz, F. J.; Roncero, O.

    2012-11-01

    The dynamics and kinetics of the H⁺ + LiH reaction have been studied using a quantum reactive time-dependent wave packet (TDWP) coupled-channel quantum mechanical method on an ab initio potential energy surface at conditions of the early universe. The total reaction probabilities for the H⁺ + LiH(v = 0, j = 0) → H₂⁺ + Li process have been calculated from 5 × 10⁻³ eV up to 1 eV for total angular momenta J from 0 to 110. Using a Langevin model, integral cross sections have been calculated in that range of collision energies and extrapolated for energies below 5 × 10⁻³ eV. The calculated rate constants are found to be nearly independent of temperature in the 10-1000 K interval, with a value of ≈10⁻⁹ cm³ s⁻¹, which is in good agreement with estimates used in evolutionary models of the early universe lithium chemistry.

  12. A two-parameter kinetic model based on a time-dependent activity coefficient accurately describes enzymatic cellulose digestion

    PubMed Central

    Kostylev, Maxim; Wilson, David

    2014-01-01

    Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
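
    A generic fractal-like rate law in the spirit of Kopelman (1988) illustrates how a time-dependent activity coefficient reshapes a digestion curve. The sketch below integrates dP/dt = k0*t^(-h)*(S0 - P) and checks it against the closed-form solution; it is not the authors' exact two-parameter formulation, and the values of k0, h, and S0 are invented.

```python
# Fractal-like pseudo-first-order kinetics: rate coefficient k(t) = k0 * t**(-h).
import numpy as np
from scipy.integrate import solve_ivp

k0, h, S0 = 0.15, 0.4, 10.0     # rate coefficient, fractal exponent, substrate (g/L); made up
t0, t_end = 0.01, 48.0          # start slightly after zero to avoid the t**(-h) singularity

def dPdt(t, P):
    return k0 * t ** (-h) * (S0 - P[0])     # time-dependent "activity coefficient" k0*t**(-h)

t_eval = np.linspace(t0, t_end, 200)
sol = solve_ivp(dPdt, (t0, t_end), [0.0], t_eval=t_eval, rtol=1e-8, atol=1e-10)

# Closed-form solution of the same ODE with P(t0) = 0, for comparison.
P_exact = S0 * (1 - np.exp(-k0 * (t_eval ** (1 - h) - t0 ** (1 - h)) / (1 - h)))
print("max |numeric - exact| =", np.max(np.abs(sol.y[0] - P_exact)))
print("digested after 48 h (g/L):", sol.y[0, -1])
```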

  13. Children’s Processing and Comprehension of Complex Sentences Containing Temporal Connectives: The Influence of Memory on the Time Course of Accurate Responses

    PubMed Central

    2016-01-01

    In a touch-screen paradigm, we recorded 3- to 7-year-olds’ (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing before and after. Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order of events: “She drank the juice, before she walked in the park” (chronological order) versus “Before she walked in the park, she drank the juice” (reverse order). Differences in RTs for correct responses varied by sentence type: accurate responses were made more speedily for sentences that afforded an incremental processing of meaning. An independent measure of memory predicted this pattern of performance. We discuss these findings in relation to children’s knowledge of connective meaning and the processing requirements of sentences containing temporal connectives. PMID:27690492

  14. Mapping Rise Time Information with Down-Shift Analysis

    SciTech Connect

    Tunnell, T. W., Machorro, E. A., Diaz, A. B.

    2011-11-01

    These viewgraphs summarize the application of recent developments in digital down-shift (DDS) analysis of up-converted PDV data to map out how well the PDV diagnostic would capture rise-time information (midpoint and rise time) in short-rise-time (<1 ns) shock events. The mapping supports a PDV vs. VISAR challenge. The analysis concepts are new (~September FY 2011), simple, and run quickly, which makes them good tools to map out (with ~1 million Monte Carlo simulations) how well PDV captures rise-time information as a function of baseline velocity, rise time, velocity jump, and signal-to-noise ratio.

  15. Downdating a time-varying square root information filter

    NASA Technical Reports Server (NTRS)

    Muellerschoen, Ronald J.

    1990-01-01

    A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
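
    The final sentence notes that updating an SRIF with additional measurements proceeds analogously to downdating. The sketch below shows only that standard update step, performed by orthogonal (QR) triangularization of the stacked information array; Gill's QR-factor downdating and Bierman's decentralized SRIF are not reproduced, and the data are illustrative.

```python
# SRIF measurement update via QR triangularization; measurements assumed pre-whitened.
import numpy as np

def srif_update(R, z, H, y):
    """Combine the prior information array [R | z] with measurements y = H x + v."""
    n = R.shape[0]
    stacked = np.vstack([np.hstack([R, z.reshape(-1, 1)]),
                         np.hstack([H, y.reshape(-1, 1)])])
    _, T = np.linalg.qr(stacked)           # only the triangular factor is needed
    return T[:n, :n], T[:n, n]             # updated R and z

# Two-state toy problem: weak prior around (1, -2) and two scalar measurements.
R = np.eye(2) * 0.5
z = R @ np.array([1.0, -2.0])
H = np.array([[1.0, 0.0], [1.0, 1.0]])
y = np.array([1.2, -0.9])

R_new, z_new = srif_update(R, z, H, y)
x_hat = np.linalg.solve(R_new, z_new)      # state estimate from the updated array
print("updated estimate:", x_hat)
```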

  16. Subjective sense of memory strength and the objective amount of information accurately remembered are related to distinct neural correlates at encoding.

    PubMed

    Qin, Shaozheng; van Marle, Hein J F; Hermans, Erno J; Fernández, Guillén

    2011-06-15

    Although commonly used, the term memory strength is not well defined in humans. Besides durability, it has been conceptualized by retrieval characteristics, such as subjective confidence associated with retrieval, or objectively, by the amount of information accurately retrieved. Behaviorally, these measures are not necessarily correlated, indicating that distinct neural processes may underlie them. Thus, we aimed at disentangling neural activity at encoding associated with either a subsequent subjective sense of memory strength or with a subsequent objective amount of information remembered. Using functional magnetic resonance imaging (fMRI), participants were scanned while incidentally encoding a series of photographs of complex scenes. The next day, they underwent two memory tests, quantifying memory strength either subjectively (confidence on remembering the gist of a scene) or objectively (the number of details accurately remembered within a scene). Correlations between these measurements were mutually partialed out in subsequent memory analyses of fMRI data. Results revealed that activation in left ventral lateral prefrontal cortex and temporoparietal junction predicted subsequent confidence ratings. In contrast, parahippocampal and hippocampal activity predicted the number of details remembered. Our findings suggest that memory strength may reflect a functionally heterogeneous set of (at least two) phenomena. One phenomenon appears related to prefrontal and temporoparietal top-down modulations, resulting in the subjective sense of memory strength that is potentially based on gist memory. The other phenomenon is likely related to medial-temporal binding processes, determining the amount of information accurately encoded into memory. Thus, our study dissociated two distinct phenomena that are usually described as memory strength.

  17. SU-D-18C-05: Variable Bolus Arterial Spin Labeling MRI for Accurate Cerebral Blood Flow and Arterial Transit Time Mapping

    SciTech Connect

    Johnston, M; Jung, Y

    2014-06-01

    Purpose: Arterial spin labeling (ASL) is an MRI perfusion imaging method from which quantitative cerebral blood flow (CBF) maps can be calculated. Acquisition with variable post-labeling delays (PLD) and variable TRs allows for arterial transit time (ATT) mapping and leads to more accurate CBF quantification with a scan time saving of 48%. In addition, T1 and M0 maps can be obtained without a separate scan. In order to accurately estimate ATT and T1 of brain tissue from the ASL data, variable labeling durations were introduced, termed variable-bolus ASL. Methods: All images were collected on a healthy subject with a 3T Siemens Skyra scanner. Variable-bolus pseudo-continuous ASL (PCASL) images were collected with 7 TI times ranging from 100 to 4300 ms in increments of 700 ms, with TR ranging from 1000 to 5200 ms. All boluses were 1600 ms when the TI allowed; otherwise the bolus duration was 100 ms shorter than the TI. All TI times were interleaved to reduce sensitivity to motion. Voxel-wise T1 and M0 maps were estimated using a linear least squares fitting routine from the average signal at each TI time. Then pairwise subtraction of each label/control pair and averaging for each TI time were performed. CBF and ATT maps were created using the standard model by Buxton et al. with a nonlinear fitting routine using the T1 tissue map. Results: CBF maps insensitive to ATT were produced along with ATT maps. Both maps show patterns and averages consistent with the literature. The T1 map also shows typical T1 contrast. Conclusion: It has been demonstrated that variable-bolus ASL produces CBF maps free from errors due to ATT and tissue T1 variations and provides M0, T1, and ATT maps of potential utility. This is accomplished in a single scan with a feasible scan time (under 6 minutes) and low sensitivity to motion.

  18. Quantum mutual information and the one-time pad

    SciTech Connect

    Schumacher, Benjamin; Westmoreland, Michael D.

    2006-10-15

    Alice and Bob share a correlated composite quantum system AB. If AB is used as the key for a one-time pad cryptographic system, we show that the maximum amount of information that Alice can send securely to Bob is the quantum mutual information of AB.
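
    The quantity in question, I(A:B) = S(A) + S(B) - S(AB), is easy to evaluate numerically for a small shared state. The sketch below does so for a two-qubit Bell state mixed with white noise; the example state and mixing parameter are illustrative only.

```python
# Quantum mutual information of a two-qubit state from its density matrix.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims=(2, 2)):
    """Trace out one qubit of a two-qubit density matrix (keep = 0 or 1)."""
    rho4 = rho.reshape(dims[0], dims[1], dims[0], dims[1])
    if keep == 0:
        return np.einsum('ijkj->ik', rho4)
    return np.einsum('ijil->jl', rho4)

# Bell state |phi+> mixed with the maximally mixed state (Werner-like example).
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.8
rho_ab = p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4

rho_a = partial_trace(rho_ab, keep=0)
rho_b = partial_trace(rho_ab, keep=1)
mutual_info = (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
               - von_neumann_entropy(rho_ab))
print(f"I(A:B) = {mutual_info:.3f} bits")   # -> 2 bits in the pure Bell-state limit p = 1
```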

  19. Judging the Probability of Hypotheses Versus the Impact of Evidence: Which Form of Inductive Inference Is More Accurate and Time-Consistent?

    PubMed

    Tentori, Katya; Chater, Nick; Crupi, Vincenzo

    2016-04-01

    Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability.

  20. Accurate molecular dynamics and nuclear quantum effects at low cost by multiple steps in real and imaginary time: Using density functional theory to accelerate wavefunction methods.

    PubMed

    Kapil, V; VandeVondele, J; Ceriotti, M

    2016-02-01

    The development and implementation of increasingly accurate methods for electronic structure calculations mean that, for many atomistic simulation problems, treating light nuclei as classical particles is now one of the most serious approximations. Even though recent developments have significantly reduced the overhead for modeling the quantum nature of the nuclei, the cost is still prohibitive when combined with advanced electronic structure methods. Here we present how multiple time step integrators can be combined with ring-polymer contraction techniques (effectively, multiple time stepping in imaginary time) to reduce virtually to zero the overhead of modelling nuclear quantum effects, while describing inter-atomic forces at high levels of electronic structure theory. This is demonstrated for a combination of MP2 and semi-local DFT applied to the Zundel cation. The approach can be seamlessly combined with other methods to reduce the computational cost of path integral calculations, such as high-order factorizations of the Boltzmann operator or generalized Langevin equation thermostats.
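
    The classical ingredient of the approach, multiple time stepping, can be illustrated with a two-level r-RESPA velocity-Verlet integrator in which a cheap stiff force is integrated with a small inner step and an expensive "slow" force (standing in here for a high-level electronic-structure force) is applied only at the outer step. The potential split and parameters below are invented, and ring-polymer contraction is not shown.

```python
# Two-level r-RESPA (multiple-time-step velocity Verlet) on a toy split potential.
import numpy as np

k_fast, eps_slow, m = 100.0, 0.5, 1.0     # stiff harmonic + weak quartic perturbation

def f_fast(x):
    return -k_fast * x

def f_slow(x):
    return -4.0 * eps_slow * x**3          # "expensive" force, evaluated rarely

def respa_step(x, v, dt_outer, M):
    dt_inner = dt_outer / M
    v += 0.5 * dt_outer * f_slow(x) / m    # slow half-kick
    for _ in range(M):                     # inner velocity-Verlet loop with the fast force
        v += 0.5 * dt_inner * f_fast(x) / m
        x += dt_inner * v
        v += 0.5 * dt_inner * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m    # slow half-kick
    return x, v

x, v = 1.0, 0.0
dt_outer, M = 0.05, 8
energies = []
for _ in range(2000):
    x, v = respa_step(x, v, dt_outer, M)
    energies.append(0.5 * m * v**2 + 0.5 * k_fast * x**2 + eps_slow * x**4)

print("relative energy drift:", (max(energies) - min(energies)) / energies[0])
```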

  1. Accurate molecular dynamics and nuclear quantum effects at low cost by multiple steps in real and imaginary time: Using density functional theory to accelerate wavefunction methods

    NASA Astrophysics Data System (ADS)

    Kapil, V.; VandeVondele, J.; Ceriotti, M.

    2016-02-01

    The development and implementation of increasingly accurate methods for electronic structure calculations mean that, for many atomistic simulation problems, treating light nuclei as classical particles is now one of the most serious approximations. Even though recent developments have significantly reduced the overhead for modeling the quantum nature of the nuclei, the cost is still prohibitive when combined with advanced electronic structure methods. Here we present how multiple time step integrators can be combined with ring-polymer contraction techniques (effectively, multiple time stepping in imaginary time) to reduce virtually to zero the overhead of modelling nuclear quantum effects, while describing inter-atomic forces at high levels of electronic structure theory. This is demonstrated for a combination of MP2 and semi-local DFT applied to the Zundel cation. The approach can be seamlessly combined with other methods to reduce the computational cost of path integral calculations, such as high-order factorizations of the Boltzmann operator or generalized Langevin equation thermostats.

  2. Rapid and accurate identification of Mycobacterium tuberculosis complex and common non-tuberculous mycobacteria by multiplex real-time PCR targeting different housekeeping genes.

    PubMed

    Nasr Esfahani, Bahram; Rezaei Yazdi, Hadi; Moghim, Sharareh; Ghasemian Safaei, Hajieh; Zarkesh Esfahani, Hamid

    2012-11-01

    Rapid and accurate identification of mycobacterial isolates from primary culture is important for timely and appropriate antibiotic therapy. Conventional methods for identification of Mycobacterium species based on biochemical tests need several weeks and may remain inconclusive. In this study, a novel multiplex real-time PCR was developed for rapid identification of the Mycobacterium genus, the Mycobacterium tuberculosis complex (MTC), and the most common non-tuberculous mycobacteria, including M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae, in three reaction tubes under the same PCR conditions. Genetic targets for primer design included the 16S rDNA gene, the dnaJ gene, the gyrB gene, and the internal transcribed spacer (ITS). The multiplex real-time PCR was set up with reference Mycobacterium strains and was subsequently tested with 66 clinical isolates. Results of the multiplex real-time PCR were analyzed with melting curves, and the melting temperatures (Tm) of the Mycobacterium genus, MTC, and each of the non-tuberculous Mycobacterium species were determined. Multiplex real-time PCR results were compared with amplification and sequencing of the 16S-23S rDNA ITS for identification of Mycobacterium species. Sensitivity and specificity of the designed primers were each 100% for MTC, M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae. Sensitivity and specificity of the designed primers for the genus Mycobacterium were 96% and 100%, respectively. According to the obtained results, we conclude that this multiplex real-time PCR with melting curve analysis and these novel primers can be used for rapid and accurate identification of the genus Mycobacterium, MTC, and the most common non-tuberculous Mycobacterium species.

  3. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining aliquots were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
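
    The statistical comparison described here is a paired t-test between DNA loads at baseline and after storage. The sketch below runs such a test on synthetic log-load values for 70 samples; the numbers are not the study's data.

```python
# Paired t-test on synthetic day-0 vs. day-28 log10 DNA loads (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
day0 = rng.normal(loc=5.0, scale=0.4, size=70)            # log10 DNA load at day 0
day28 = day0 + rng.normal(loc=0.0, scale=0.1, size=70)    # day 28: measurement noise only

t_stat, p_value = stats.ttest_rel(day0, day28)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("no significant difference between time points at the 5% level")
```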

  4. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    PubMed

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-08-25

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining aliquots were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period.

  5. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction

    PubMed Central

    Lu, Y.; Rong, C.Z.; Zhao, J.Y.; Lao, X.J.; Xie, L.; Li, S.; Qin, X.

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and DNA from all samples was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining aliquots were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005

  6. Wavelet analysis of molecular dynamics: Efficient extraction of time-frequency information in ultrafast optical processes

    SciTech Connect

    Prior, Javier; Castro, Enrique; Chin, Alex W.; Almeida, Javier; Huelga, Susana F.; Plenio, Martin B.

    2013-12-14

    New experimental techniques based on nonlinear ultrafast spectroscopies have been developed over the last few years, and have been demonstrated to provide powerful probes of quantum dynamics in different types of molecular aggregates, including both natural and artificial light harvesting complexes. Fourier transform-based spectroscopies have been particularly successful, yet “complete” spectral information normally necessitates the loss of all information on the temporal sequence of events in a signal. This information, though, is particularly important in transient or multi-stage processes, in which the spectral decomposition of the data evolves in time. By going through several examples of ultrafast quantum dynamics, we demonstrate that the use of wavelets provides an efficient and accurate way to simultaneously acquire both temporal and frequency information about a signal, and argue that this greatly aids the elucidation and interpretation of the physical processes responsible for non-stationary spectroscopic features, such as those encountered in coherent excitonic energy transport.
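
    A minimal Morlet continuous wavelet transform shows how the time and frequency content of a non-stationary signal can be read off simultaneously. The signal below is a synthetic two-tone example rather than spectroscopic data, and the wavelet parameter w0 = 6 is a conventional choice assumed here.

```python
# Morlet CWT of a synthetic non-stationary signal (10 Hz then 25 Hz).
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Complex Morlet CWT of a 1-D signal at the requested frequencies (Hz)."""
    x = np.asarray(x, dtype=float)
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 / (2 * np.pi * f)                 # scale giving centre frequency f
        tau = np.arange(-4 * scale, 4 * scale, 1 / fs)
        wavelet = (np.exp(1j * w0 * tau / scale) *
                   np.exp(-0.5 * (tau / scale) ** 2)) / np.sqrt(scale)
        out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode='same')
    return out

fs = 200.0
t = np.arange(0, 4, 1 / fs)
x = np.where(t < 2, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 25 * t))

freqs = np.linspace(5, 40, 60)
power = np.abs(morlet_cwt(x, fs, freqs)) ** 2
# The ridge of maximum power tracks 10 Hz early and 25 Hz late:
print("dominant frequency at t=1 s:", freqs[np.argmax(power[:, int(1 * fs)])])
print("dominant frequency at t=3 s:", freqs[np.argmax(power[:, int(3 * fs)])])
```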

  7. Maintaining information online in discrete time; rethinking working memory processes.

    PubMed

    Stephane, Massoud

    2012-06-21

    Linguistic operations occur with verbal information maintained online for a discrete time. It is posited that online maintenance of information is accomplished by verbal working memory (WM), a system that is: (a) independent from the linguistic operations carried out with the information (specialized), and (b) consists of a holding place where information is held in a phonological code (phonological loop) and a rehearsal mechanism that refreshes the phonological loop. This model does not account for the serial position effects associated with information maintenance and additional models are needed to explain the latter effects, which leaves us with a disjointed understanding of online maintenance of information. In this study, 36 middle-aged, healthy subjects (33 males and 3 females) were required to maintain linguistic information (letters) online. The letters called upon different cognitive operations (orthographic; orthographic and phonetic; or orthographic, phonetic and semantic). It was found that online maintenance capacity depends on the cognitive operations associated with the letters and on their serial position. Additionally, the cognitive operation effect on online maintenance was modulated by the serial position. These data favor a model for WM consisting of a simple holding place where verbal information maintenance depends on what the information is used for. We will discuss an integrated model for online information maintenance that accounts for the serial position effects.

  8. Designing Information Measures for Real-time Lightcurve Classification

    NASA Astrophysics Data System (ADS)

    Jones, David Edward; Chen, Yang; Meng, Xiao-Li; Siemiginowska, Aneta; Kashyap, Vinay

    2016-01-01

    Since telescope time is limited, real-time lightcurve classification involves carefully selecting future time points at which sources must be observed in order to maximize the information that will be gained for classification. We propose a framework for constructing measures of information for testing/classification/model-selection and demonstrate their use in experimental design. Degroot (1962) developed a general framework for constructing Bayesian measures of the expected information that an experiment will provide for estimation, and our framework analogously constructs measures of information for hypothesis testing. Such test information measures are most useful for model selection and classification problems. Indeed, our framework suggests a probability-based measure of test information, which in decision problems has more appealing properties than variance-based measures. In the case of lightcurve classification, we adapt our designs to penalize long waits until the next observation time. Lastly, we consider ways to address other aspects of the problem, such as uncertainty arising from contamination by nearby sources or diffuse background emission. We acknowledge support from Smithsonian Competitive Grants Fund 40488100HH0043 and NSF grant DMS 1208791.

  9. Rényi’s information transfer between financial time series

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad

    2012-05-01

    In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell’s coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi’s parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi’s transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series, where the knowledge of marginal events such as spikes or sudden jumps is of a crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is particularly clearly seen from a careful analysis of Rényi information flow between the DAX and S&P500 indices.
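
    The sketch below estimates plain (Shannon) transfer entropy, the q → 1 limit of the Rényi transfer entropy used in the paper, between two synthetic return series in which Y drives X with a one-step lag. The binning, history length, and data are illustrative assumptions, and the escort-distribution weighting that defines the q ≠ 1 case is not implemented.

```python
# Plug-in (Shannon) transfer entropy estimate on synthetic coupled series.
import numpy as np

def transfer_entropy(x, y, bins=4):
    """T(Y -> X) with one-step histories, in bits, via plug-in estimation."""
    def discretize(s):
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.searchsorted(edges, s)
    xd, yd = discretize(x), discretize(y)
    x_next, x_past, y_past = xd[1:], xd[:-1], yd[:-1]

    def entropy(*cols):
        joint = np.ravel_multi_index(cols, [bins] * len(cols))
        p = np.bincount(joint, minlength=bins ** len(cols)) / len(joint)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # T(Y->X) = H(Xt,Xt-1) + H(Xt-1,Yt-1) - H(Xt-1) - H(Xt,Xt-1,Yt-1)
    return (entropy(x_next, x_past) + entropy(x_past, y_past)
            - entropy(x_past) - entropy(x_next, x_past, y_past))

rng = np.random.default_rng(1)
n = 20000
y = rng.normal(size=n)
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.6 * y[t - 1] + 0.4 * rng.normal()   # Y drives X with lag 1

print(f"T(Y->X) = {transfer_entropy(x, y):.3f} bits")
print(f"T(X->Y) = {transfer_entropy(y, x):.3f} bits (should be near zero)")
```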

  10. Information fusion control with time delay for smooth pursuit eye movement.

    PubMed

    Zhang, Menghua; Ma, Xin; Qin, Bin; Wang, Guangmao; Guo, Yanan; Xu, Zhigang; Wang, Yafang; Li, Yibin

    2016-05-01

    Smooth pursuit eye movement depends on prediction and learning, and is subject to time delays in the visual pathways. In this paper, an information fusion control method with time delay is presented, implementing smooth pursuit eye movement with prediction and learning as well as solving the problem of time delays in the visual pathways. By fusing the soft constraint information of the target trajectory of the eyes and the ideal control strategy with the hard constraint information of the eye system state equation and the output equation, optimal estimates of the co-state sequence and the control variable are obtained. The proposed control method can track not only constant-velocity and sinusoidal target motion, but also arbitrary moving targets. Moreover, the absolute value of the retinal slip reaches steady state after 0.1 s. The information fusion control method elegantly describes, in a functional manner, how the brain may deal with arbitrary target velocities and how it implements smooth pursuit eye movement with prediction, learning, and time delays. These two principles allowed us to accurately describe visually guided, predictive, and learning smooth pursuit dynamics observed in a wide variety of tasks within a single theoretical framework. The tracking control performance of the proposed information fusion control with time delays is verified by numerical simulation results.

  11. Information fusion control with time delay for smooth pursuit eye movement.

    PubMed

    Zhang, Menghua; Ma, Xin; Qin, Bin; Wang, Guangmao; Guo, Yanan; Xu, Zhigang; Wang, Yafang; Li, Yibin

    2016-05-01

    Smooth pursuit eye movement depends on prediction and learning, and is subject to time delays in the visual pathways. In this paper, an information fusion control method with time delay is presented, implementing smooth pursuit eye movement with prediction and learning as well as solving the problem of time delays in the visual pathways. By fusing the soft constraint information of the target trajectory of the eyes and the ideal control strategy with the hard constraint information of the eye system state equation and the output equation, optimal estimates of the co-state sequence and the control variable are obtained. The proposed control method can track not only constant-velocity and sinusoidal target motion, but also arbitrary moving targets. Moreover, the absolute value of the retinal slip reaches steady state after 0.1 s. The information fusion control method elegantly describes, in a functional manner, how the brain may deal with arbitrary target velocities and how it implements smooth pursuit eye movement with prediction, learning, and time delays. These two principles allowed us to accurately describe visually guided, predictive, and learning smooth pursuit dynamics observed in a wide variety of tasks within a single theoretical framework. The tracking control performance of the proposed information fusion control with time delays is verified by numerical simulation results. PMID:27230904

  12. Diurnal patterns of salivary cortisol and DHEA using a novel collection device: electronic monitoring confirms accurate recording of collection time using this device.

    PubMed

    Laudenslager, Mark L; Calderone, Jacqueline; Philips, Sam; Natvig, Crystal; Carlson, Nichole E

    2013-09-01

    The accurate indication of saliva collection time is important for defining the diurnal decline in salivary cortisol as well as characterizing the cortisol awakening response. We tested a convenient and novel collection device for collecting saliva on strips of filter paper in a specially constructed booklet for determination of both cortisol and DHEA. In the present study, 31 healthy adults (mean age 43.5 years) collected saliva samples four times a day on three consecutive days using filter paper collection devices (Saliva Procurement and Integrated Testing (SPIT) booklet) which were maintained during the collection period in a large plastic bottle with an electronic monitoring cap. Subjects were asked to collect saliva samples at awakening, 30 min after awakening, before lunch and 600 min after awakening. The time of awakening and the time of collection before lunch were allowed to vary according to each subject's schedule. A reliable relationship was observed between the time recorded by the subject directly on the booklet and the time recorded by the electronic collection device (n=286 observations; r(2)=0.98). However, subjects did not consistently collect the saliva samples at the two specific times requested, 30 and 600 min after awakening. Both cortisol and DHEA revealed diurnal declines. In spite of variance in collection times at 30 min and 600 min after awakening, the slope of the diurnal decline in both salivary cortisol and DHEA was similar when we compared collection tolerances of ±7.5 and ±15 min for each steroid. These unique collection booklets proved to be a reliable method for recording collection times by subjects as well as for estimating diurnal salivary cortisol and DHEA patterns.

  13. Diurnal patterns of salivary cortisol and DHEA using a novel collection device: electronic monitoring confirms accurate recording of collection time using this device.

    PubMed

    Laudenslager, Mark L; Calderone, Jacqueline; Philips, Sam; Natvig, Crystal; Carlson, Nichole E

    2013-09-01

    The accurate indication of saliva collection time is important for defining the diurnal decline in salivary cortisol as well as characterizing the cortisol awakening response. We tested a convenient and novel collection device for collecting saliva on strips of filter paper in a specially constructed booklet for determination of both cortisol and DHEA. In the present study, 31 healthy adults (mean age 43.5 years) collected saliva samples four times a day on three consecutive days using filter paper collection devices (Saliva Procurement and Integrated Testing (SPIT) booklet) which were maintained during the collection period in a large plastic bottle with an electronic monitoring cap. Subjects were asked to collect saliva samples at awakening, 30 min after awakening, before lunch and 600 min after awakening. The time of awakening and the time of collection before lunch were allowed to vary according to each subject's schedule. A reliable relationship was observed between the time recorded by the subject directly on the booklet and the time recorded by the electronic collection device (n=286 observations; r(2)=0.98). However, subjects did not consistently collect the saliva samples at the two specific times requested, 30 and 600 min after awakening. Both cortisol and DHEA revealed diurnal declines. In spite of variance in collection times at 30 min and 600 min after awakening, the slope of the diurnal decline in both salivary cortisol and DHEA was similar when we compared collection tolerances of ±7.5 and ±15 min for each steroid. These unique collection booklets proved to be a reliable method for recording collection times by subjects as well as for estimating diurnal salivary cortisol and DHEA patterns. PMID:23490073

  14. Rapid hologram updates for real-time volumetric information displays.

    PubMed

    Munjuluri, Bala; Huebschman, Michael L; Garner, Harold R

    2005-08-20

    We have demonstrated that holograms incorporating changes in three-dimensional (3D) scenes can be recalculated in real time to present dynamic updates on information displays. This approach displays 3D information in a compatible format for fast and reliable interpretation of changes in the 3D scenes. The rapid-update algorithm has been demonstrated by real-time computation and transcription of the holograms to our digital micromirror device hologram projection system for visual validation of the reconstruction. The reported algorithm enables full-parallax 1024 x 768 pixel holograms of 3D scenes to be updated every 0.8 s on a 1.8 GHz personal computer. Volumetric information displays that can enhance reliable data assimilation and decrease reaction times for applications such as air-traffic control, cockpit heads-up displays, mission crew stations, and undersea navigation can benefit from this research.

  15. Dynamics of traffic flow with real-time traffic information

    NASA Astrophysics Data System (ADS)

    Yokoya, Yasushi

    2004-01-01

    We studied the dynamics of traffic flow when real-time traffic information is provided. Provision of real-time traffic information, made possible by advances in telecommunication technology, is expected to facilitate the efficient utilization of available road capacity. Such a system is of interest not only for road-usage engineering but also for the science of complex systems. In this system, the information plays the role of a feedback connecting microscopic and macroscopic phenomena beyond the hierarchical structure of statistical physics. In this paper, we try to clarify how the information acts in a network of traffic flow from the perspective of statistical physics. The dynamical features of the traffic flow are extracted by contrasting nonequilibrium statistical physics with a computer simulation based on a cellular automaton. We found that the information disrupts the local equilibrium of traffic flow through a characteristic dissipation process arising from the interaction between the information and individual vehicles. A dissipative structure was observed in the time evolution of traffic flow driven far from equilibrium, as a consequence of the breakdown of the local-equilibrium hypothesis.
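
    The kind of cellular-automaton simulation referred to above can be illustrated with the standard Nagel-Schreckenberg model on a ring road; the paper's specific update rules and its real-time-information feedback are not reproduced, only baseline single-lane dynamics.

```python
# Nagel-Schreckenberg single-lane traffic cellular automaton on a ring road.
import numpy as np

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=None):
    rng = rng or np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len     # empty cells ahead of each car
    vel = np.minimum(vel + 1, v_max)                   # acceleration
    vel = np.minimum(vel, gaps)                        # braking to avoid collision
    vel = np.where((rng.random(len(vel)) < p_slow) & (vel > 0), vel - 1, vel)  # random slowdown
    pos = (pos + vel) % road_len                       # movement on the ring
    return pos, vel

road_len, n_cars, steps = 200, 50, 500
rng = np.random.default_rng(42)
pos = rng.choice(road_len, size=n_cars, replace=False)
vel = np.zeros(n_cars, dtype=int)

flows = []
for _ in range(steps):
    pos, vel = nasch_step(pos, vel, road_len, rng=rng)
    flows.append(vel.mean())                           # mean speed as a flow proxy

print(f"density = {n_cars / road_len:.2f}, mean speed = {np.mean(flows[100:]):.2f} cells/step")
```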

  16. Relations between information, time, and value of water

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.; Galindo, L. C.

    2015-12-01

    This research uses stochastic dynamic programming (SDP) as a tool to reveal economic information about managed water resources. An application to the operation of an example hydropower reservoir is presented. SDP explicitly balances the marginal value of water for immediate use against the expected opportunity cost of not having more water available for future use. The result of an SDP analysis is a steady-state policy, which gives the optimal decision as a function of the state. A commonly applied form gives the optimal release as a function of the month, the current reservoir level, and the current inflow to the reservoir. The steady-state policy can be complemented with a real-time management strategy that can depend on more real-time information. An information-theoretical perspective is given on how this information influences the value of water, and how to deal with that influence in hydropower reservoir optimization. This results in some conjectures about how the information gain from real-time operation could affect the optimal long-term policy. Another issue is the sharing of the increased benefits that result from this information gain in a multi-objective setting. It is argued that this should be accounted for in negotiations about an operation policy.
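
    The balance SDP strikes between the immediate value of released water and the expected value of stored water can be seen in a toy backward induction over a discretized reservoir. All prices, inflow probabilities, and grids below are invented for illustration; the marginal value of stored water can be read off from differences of the resulting value function.

```python
# Toy stochastic dynamic programming (backward induction) for a hypothetical reservoir.
import numpy as np

levels = np.arange(0, 101, 10)             # storage states (units of water)
releases = np.arange(0, 51, 10)            # admissible releases per stage
inflows = np.array([0, 10, 20, 30])        # stochastic inflow scenarios
p_inflow = np.array([0.2, 0.4, 0.3, 0.1])  # their probabilities
price, n_stages, capacity = 1.0, 24, 100

V = np.zeros(len(levels))                  # terminal value of stored water = 0
policy = np.zeros((n_stages, len(levels)), dtype=int)

for stage in reversed(range(n_stages)):
    V_new = np.full(len(levels), -np.inf)
    for i, s in enumerate(levels):
        for j, r in enumerate(releases):
            if r > s:                      # cannot release more than is stored
                continue
            # Expected immediate benefit plus expected value-to-go over inflows.
            value = 0.0
            for q, pq in zip(inflows, p_inflow):
                s_next = min(s - r + q, capacity)        # spill above capacity
                k = np.argmin(np.abs(levels - s_next))   # nearest stored state
                value += pq * (price * r + V[k])
            if value > V_new[i]:
                V_new[i], policy[stage, i] = value, j
    V = V_new

print("optimal release when full at stage 0:", releases[policy[0, -1]])
print("value of a full reservoir at stage 0:", V[-1])
```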

  17. Metabolic profiling of yeast culture using gas chromatography coupled with orthogonal acceleration accurate mass time-of-flight mass spectrometry: application to biomarker discovery.

    PubMed

    Kondo, Elsuida; Marriott, Philip J; Parker, Rhiannon M; Kouremenos, Konstantinos A; Morrison, Paul; Adams, Mike

    2014-01-01

    Yeast and yeast cultures are frequently used as additives in diets of dairy cows. Beneficial effects from the inclusion of yeast culture in diets for dairy mammals have been reported, and the aim of this study was to develop a comprehensive analytical method for the accurate mass identification of the 'global' metabolites in order to differentiate a variety of yeasts (Diamond V XP, Yea-Sacc and Levucell) at varying growth stages. Microwave-assisted derivatization for metabolic profiling is demonstrated through the analysis of differing yeast samples developed for cattle feed, which include a wide range of metabolites of interest covering a large range of compound classes. Accurate identification of the components was undertaken using GC-oa-ToFMS (gas chromatography-orthogonal acceleration-time-of-flight mass spectrometry), followed by principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) for data reduction and biomarker discovery. Semi-quantification (fold changes in relative peak areas) was reported for metabolites identified as possible discriminative biomarkers (p-value <0.05, fold change >2), including D-ribose (fourfold decrease), myo-inositol (fivefold increase), L-phenylalanine (threefold increase), glucopyranoside (twofold increase), fructose (threefold increase), and threitol (threefold increase). PMID:24356230

  18. Real Time Mash-Up of Earth Data and Information Through RSS Feed Correlation

    NASA Astrophysics Data System (ADS)

    Bingham, A.; Chung, N. T.; Roberts, J.; Stough, T.; Deen, R.

    2012-12-01

    The availability of timely and accurate information helps natural resource managers make informed decisions following the occurrence of an event. Data collected by earth observing instrumentation, coupled with information and assessments provided by first responders, forecasters, eyewitnesses, reporters and experts, help provide a complete picture of the extent and magnitude of the impact of the event. Much of the data and information are published immediately on the Internet, but it is impossible for an individual to sift through the disparate sources and extract those data and information relevant to a specific event. We solve this problem by correlating metadata contained in RSS, GeoRSS and DatacastingRSS feeds to form a single feed that references all data (satellite imagery, in situ measurements etc.) and information (news articles, photos, videos, blogs, reports, assessments etc.) related to an individual event of interest. In this paper, we will discuss our concept of cross-feed correlation for the purpose of classifying data and information by events. We will also describe the implementation of cross-feed correlation and show how software applications and decision support systems can leverage the technology to pull in data and information tailored to the needs of a specific user community. (Figure: Event Viewer, an application showing a mash-up of satellite data, forecast information, news articles and blogs categorized by natural event.)

  19. Time and Category Information in Pattern-Based Codes

    PubMed Central

    Eyherabide, Hugo Gabriel; Samengo, Inés

    2010-01-01

    Sensory stimuli are usually composed of different features (the what) appearing at irregular times (the when). Neural responses often use spike patterns to represent sensory information. The what is hypothesized to be encoded in the identity of the elicited patterns (the pattern categories), and the when, in the time positions of patterns (the pattern timing). However, this standard view is oversimplified. In the real world, the what and the when might not be separable concepts, for instance, if they are correlated in the stimulus. In addition, neuronal dynamics can condition the pattern timing to be correlated with the pattern categories. Hence, timing and categories of patterns may not constitute independent channels of information. In this paper, we assess the role of spike patterns in the neural code, irrespective of the nature of the patterns. We first define information-theoretical quantities that allow us to quantify the information encoded by different aspects of the neural response. We also introduce the notion of synergy/redundancy between time positions and categories of patterns. We subsequently establish the relation between the what and the when in the stimulus with the timing and the categories of patterns. To that aim, we quantify the mutual information between different aspects of the stimulus and different aspects of the response. This formal framework allows us to determine the precise conditions under which the standard view holds, as well as the departures from this simple case. Finally, we study the capability of different response aspects to represent the what and the when in the neural response. PMID:21151371

  20. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750±0.0049 amu and 270.0786±0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098±0.0061 amu and 314.1153±0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
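
    As a small worked example of what tolerances like ±5 ppm mean in absolute terms (the theoretical mass below is a hypothetical value, used only to illustrate the arithmetic):

```python
def ppm_error(measured_mass, theoretical_mass):
    """Mass measurement error in parts per million."""
    return (measured_mass - theoretical_mass) / theoretical_mass * 1e6

def ppm_window(theoretical_mass, tol_ppm):
    """Absolute mass window (amu) corresponding to a +/- ppm tolerance."""
    half_width = theoretical_mass * tol_ppm * 1e-6
    return theoretical_mass - half_width, theoretical_mass + half_width

# Measured 256.0750 amu against a hypothetical theoretical mass of 256.0755 amu:
print(round(ppm_error(256.0750, 256.0755), 2))   # about -1.95 ppm
# A +/- 5 ppm window around 314.1098 amu spans roughly 314.1082 to 314.1114 amu:
print(tuple(round(m, 4) for m in ppm_window(314.1098, 5)))
```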

  1. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information.

    PubMed

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan W; Jin, Ke; Du, Yingge; Neeway, James J; Ryan, Joseph V; Hu, Dehong; Zhang, Kelvin H L; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampilai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    The use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass (SON68) and layered hole-perovskite oxide thin films were selected as model systems because of their fundamental and practical significance. Our results show that high sputter rates and accurate interfacial information can be achieved simultaneously for argon cluster sputtering, whereas this is not the case for cesium and oxygen sputtering. Therefore, the implementation of an argon cluster sputtering source can significantly improve the analysis efficiency of insulating materials and, thus, can expand its applications to the study of glass corrosion, perovskite oxide thin film characterization, and many other systems of interest.

  2. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™- DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time efficient and cost saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Accurate real-time ionospheric corrections as the key to extend the centimeter-error-level GNSS navigation at continental scale (WARTK)

    NASA Astrophysics Data System (ADS)

    Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.

    2007-05-01

    The main focus of this presentation is to show the recent improvements in real-time GNSS ionospheric determination extending the service area of the so-called "Wide Area Real Time Kinematic" technique (WARTK), which allow centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeters of error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" technique (RTK), by exactly solving the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. This technique has been improved by different authors with the consideration of a network of reference sites. However the differential ionospheric refraction has remained the main limiting factor in the extension of the applicability distance with respect to the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the mentioned limitations. In this way the RTK applicability with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers, is feasible. And such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS respectively) among other regions. In particular WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers, and providing them in real-time to the users. The key points addressed by the technique are an accurate real-time ionospheric modeling, combined with the corresponding geodetic model, by means of: a) A tomographic voxel model of the ionosphere

  4. Implementation of multiple information hiding & real-time extraction system

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Hyug; Kim, Jung-Jin; Cho, Byung-Chul; Lee, Maeng-Ho; Kim, Eun-Soo

    2002-12-01

    In this paper, a new optodigital multiple information hiding and real-time extraction system is suggested. In the process of multiple information hiding, stego keys are generated by combined use of PRS (pseudo-random sequence) and HM (Hadamard matrix) and then, they are used to hide multiple data in an arbitrary cover image without crosstalk. To extract multiple information hidden in the stego image in real-time, a new optical NJTC(nonlinear joint transform correlator)-based extraction system is introduced. In this optical extraction system, both the stego image and each of the stego keys are placed at the input plane of the correlator and jointly Fourier transformed. And, the power spectrum of the jointly Fourier transformed signal is detected at the spatial frequency domain and inversely Fourier transformed again. Then, the final correlation peaks between them can be found in the correlation plane as an authentic signal. From good experimental results on multiple information hiding and optical extraction using the Arabic numerals "1", "2" and "3", the feasibility of implementing a new optodigital multiple information hiding and real-time extraction system is suggested.
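
    The optical NJTC step amounts to correlating the stego image with a stego key and looking for a pronounced peak. A purely digital analogue of that correlation test (synthetic arrays as stand-ins for the stego image and the PRS/Hadamard keys; not a model of the optical hardware):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a stego key and a stego image carrying data modulated by that key.
key = rng.standard_normal((64, 64))
stego = 0.5 * rng.standard_normal((64, 64)) + key

# The joint transform correlator effectively yields the cross-correlation of its two
# inputs; digitally, circular correlation via the FFT of the conjugate product.
corr = np.fft.ifft2(np.fft.fft2(stego) * np.conj(np.fft.fft2(key))).real

wrong_key = rng.standard_normal((64, 64))
corr_wrong = np.fft.ifft2(np.fft.fft2(stego) * np.conj(np.fft.fft2(wrong_key))).real

# An authentic key produces a much stronger correlation peak than a wrong key.
print(corr.max(), ">", corr_wrong.max(), "->", corr.max() > corr_wrong.max())
```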

  5. Improved hybrid information filtering based on limited time window

    NASA Astrophysics Data System (ADS)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting the entire collected information of users, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective in capturing users' potential interests, we present an improved hybrid information filtering method that adopts only partial, recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links verified by the testing set. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 31% of the recent rating records, the accuracy could be improved by an average of 4.22% and the diversity could be improved by 13.74%. In addition, the performance on the dataset MovieLens could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective in solving the cold-start problem. This work could improve information filtering performance and shorten the computational time.
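
    For reference, a minimal sketch of the underlying heat-conduction/mass-diffusion hybrid scoring (the HHM kernel of Zhou et al., 2010) on a binary user-item matrix; the time-window idea of this paper would simply restrict that matrix to recent records before scoring. The variable names and toy data are illustrative assumptions:

```python
import numpy as np

def hhm_scores(A, user, lam=0.5):
    """Hybrid heat-conduction / mass-diffusion item scores for one user.

    A   : binary user-item matrix (no empty rows or columns assumed)
    lam : 1.0 -> pure mass diffusion (accuracy), 0.0 -> pure heat conduction (diversity)
    """
    k_user = A.sum(axis=1)                      # user degrees
    k_item = A.sum(axis=0)                      # item degrees
    # W[a, b] = (k_a^(lam-1) * k_b^(-lam)) * sum_u A[u, a] * A[u, b] / k_user[u]
    overlap = A.T @ (A / k_user[:, None])
    W = overlap / np.outer(k_item ** (1.0 - lam), k_item ** lam)
    scores = W @ A[user]
    scores[A[user] > 0] = -np.inf               # do not re-recommend collected items
    return scores

# Toy example: 4 users x 5 items.
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)
print(np.argsort(hhm_scores(A, user=0, lam=0.5))[::-1][:2])   # indices of the top-2 items
```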

  6. 75 FR 68418 - Real-Time System Management Information Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Transportation Officials (AASHTO), and the Northwest Passage Pooled Fund Study; the Intelligent Transportation... successful real-time information program. A Request for Comments was published on May 4, 2006, at 71 FR 26399... 14, 2009, at 74 FR 1993. The purpose was to propose the establishment of minimum parameters...

  7. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery.
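
    The core of the registration step, aligning two point sets represented as Gaussian mixture models by minimizing an L2 measure between them, can be sketched in 2-D as follows (toy data, isotropic kernels and a generic optimizer; this is not the paper's hierarchical pipeline or its runtime-optimized implementation):

```python
import numpy as np
from scipy.optimize import minimize

def neg_gmm_overlap(params, src, dst, sigma=2.0):
    """Negative Gaussian-mixture cross term; minimizing it minimizes the L2
    distance between the transformed source GMM and the target GMM."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = src @ R.T + np.array([tx, ty])
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return -np.exp(-d2 / (4.0 * sigma ** 2)).sum()

rng = np.random.default_rng(1)
surface = rng.uniform(0.0, 20.0, size=(150, 2))        # stand-in "bone surface" points
theta_true, t_true = 0.10, np.array([1.0, -0.5])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
target = surface @ R_true.T + t_true + rng.normal(0.0, 0.05, surface.shape)

result = minimize(neg_gmm_overlap, x0=[0.0, 0.0, 0.0], args=(surface, target))
print(np.round(result.x, 2))   # expected to land near [0.10, 1.00, -0.50]
```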

  8. Fast and Accurate Data Extraction for Near Real-Time Registration of 3-D Ultrasound and Computed Tomography in Orthopedic Surgery.

    PubMed

    Brounstein, Anna; Hacihaliloglu, Ilker; Guy, Pierre; Hodgson, Antony; Abugharbieh, Rafeef

    2015-12-01

    Automatic, accurate and real-time registration is an important step in providing effective guidance and successful anatomic restoration in ultrasound (US)-based computer assisted orthopedic surgery. We propose a method in which local phase-based bone surfaces, extracted from intra-operative US data, are registered to pre-operatively segmented computed tomography data. Extracted bone surfaces are downsampled and reinforced with high curvature features. A novel hierarchical simplification algorithm is used to further optimize the point clouds. The final point clouds are represented as Gaussian mixture models and iteratively matched by minimizing the dissimilarity between them using an L2 metric. For 44 clinical data sets from 25 pelvic fracture patients and 49 phantom data sets, we report mean surface registration accuracies of 0.31 and 0.77 mm, respectively, with an average registration time of 1.41 s. Our results suggest the viability and potential of the chosen method for real-time intra-operative registration in orthopedic surgery. PMID:26365924

  9. An Advanced Real-Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Takahashi, I.; Nakamura, H.; Suzuki, W.; Kunugi, T.; Aoi, S.; Fujiwara, H.

    2015-12-01

    J-RISQ (Japan Real-time Information System for earthquake) has been under development at NIED to support appropriate first actions after large earthquakes. When an earthquake occurs, seismic intensities (SI) are calculated first at each observation station and sent to the Data Management Center at different times. The system begins the first estimation when the number of stations observing an SI of 2.5 or larger exceeds a threshold. It estimates the SI distribution, exposed population and earthquake damage to buildings by using basic data for estimation, such as subsurface amplification factors, population, and building information. These data have been accumulated in J-SHIS (Japan Seismic Hazard Information Station), developed by NIED as a public portal for seismic hazard information across Japan. The series of estimations is performed for each 250 m square mesh and the estimated data are finally converted into information for each municipality. Since October 2013, we have opened the estimated SI, exposed population, etc. to the public through the website, making full use of maps and tables. In the previous system, we sometimes could not inspect information for the surrounding areas outside the range affected by strong motions, or the details of the areas of interest, and could not confirm whether the present information was the latest without accessing the website. J-RISQ has been advanced by introducing the following functions to settle those problems and promote utilization at the local or personal level. In addition, the website in English has been released. ・It has become possible to focus on specific areas and inspect enlarged information. ・The estimated information can be downloaded in the form of KML. ・The estimated information can be updated automatically and provided as the latest one. ・The newest information can be inspected by using RSS readers or browsers supporting RSS. ・Exclusive pages for smartphones have been prepared. The information estimated

  10. Waiting time information services: how well do different statistics forecast a patient's wait?

    PubMed

    Cromwell, David A; Griffiths, David A

    2002-01-01

    This study investigates how accurately the waiting times of patients about to join a waiting list are predicted by the types of statistics disseminated via web-based waiting time information services. Data were collected at a public hospital in Sydney, Australia, on elective surgery activity and waiting list behaviour from July 1995 to June 1998. The data covered 46 surgeons in 10 surgical specialties. The accuracy of the tested statistics varied greatly, being affected more by the characteristics and behaviour of a surgeon's waiting list than by how the statistics were derived. For those surgeons whose waiting times were often over six months, commonly used statistics can be very poor at forecasting patient waiting times.

  11. DELIVERING TIMELY ENVIRONMENTAL INFORMATION TO YOUR COMMUNITY: THE BOULDER AREA SUSTAINABILITY INFORMATION NETWORK: OTHER

    EPA Science Inventory

    NRMRL-CIN-1577 Petersen*, D., Barber, L., Dilworth, G, Fiebelkorn, T., McCaffrey, M., Murphy, S., Rudkin, C., Scott, D., and Waterman, J. Delivering Timely Environmental Information to your Community: The Boulder Area Sustainability Information Network. EPA/625/C-01/010. The Te...

  12. Information transfer via implicit encoding with delay time modulation in a time-delay system

    NASA Astrophysics Data System (ADS)

    Kye, Won-Ho

    2012-08-01

    A new encoding scheme for information transfer with modulated delay time in a time-delay system is proposed. In the scheme, the message is implicitly encoded into the modulated delay time. The information transfer rate as a function of encoding redundancy at various noise scales is presented, and the analysis shows that the implicit encoding scheme (IES) has stronger resistance against channel noise than the explicit encoding scheme (EES). In addition, its advantages in terms of secure communication and feasible applications are discussed.

  13. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    SciTech Connect

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.

  14. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and report similarity results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between different stock markets varies across time periods, and that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, showing that the method can distinguish the markets of different areas in the resulting phylogenetic trees. The results show that satisfactory information can be extracted from financial markets with this method. The information categorization method can be used not only for physiologic time series, but also for financial time series.

  15. Percutaneous Radiofrequency Ablation of Osteoid Osteomas with Use of Real-Time Needle Guidance for Accurate Needle Placement: A Pilot Study

    SciTech Connect

    Busser, Wendy M. H.; Hoogeveen, Yvonne L.; Veth, Rene P. H.; Schreuder, H. W. Bart; Balguid, Angelique; Renema, W. KlaasJan; Schultze Kool, Leo J.

    2011-02-15

    Purpose: To evaluate the accuracy and technical success of positioning a radiofrequency ablation (RFA) electrode in osteoid osteomas by use of a new real-time needle guidance technology combining cone-beam computed tomography (CT) and fluoroscopy. Materials and Methods: Percutaneous RFA of osteoid osteomas was performed in five patients (median age 18 years), under general anesthesia, with the use of cone-beam CT and fluoroscopic guidance for electrode positioning. The outcome parameters were technical success, meaning correct needle placement in the nidus; accuracy defined as the deviation (in mm) from the center of the nidus; and clinical outcome at follow-up. Results: In all five cases, positioning was possible within 3 mm of the determined target location (median nidus size 6.8 mm; range 5-10.2 mm). All procedures were technically successful. All patients were free of pain at clinical follow-up. No complications were observed. Conclusion: Real-time fluoroscopy needle guidance based on cone-beam CT is a useful tool to accurately position radiofrequency needles for minimally invasive treatment of osteoid osteomas.

  16. Wave-Based Turing Machine: Time Reversal and Information Erasing.

    PubMed

    Perrard, S; Fort, E; Couder, Y

    2016-08-26

    The investigation of dynamical systems has revealed a deep-rooted difference between waves and objects regarding temporal reversibility and particlelike objects. In nondissipative chaos, the dynamic of waves always remains time reversible, unlike that of particles. Here, we explore the dynamics of a wave-particle entity. It consists in a drop bouncing on a vibrated liquid bath, self-propelled and piloted by the surface waves it generates. This walker, in which there is an information exchange between the particle and the wave, can be analyzed in terms of a Turing machine with waves as the information repository. The experiments reveal that in this system, the drop can read information backwards while erasing it. The drop can thus backtrack on its previous trajectory. A transient temporal reversibility, restricted to the drop motion, is obtained in spite of the system being both dissipative and chaotic.

  17. Wave-Based Turing Machine: Time Reversal and Information Erasing

    NASA Astrophysics Data System (ADS)

    Perrard, S.; Fort, E.; Couder, Y.

    2016-08-01

    The investigation of dynamical systems has revealed a deep-rooted difference between waves and objects regarding temporal reversibility and particlelike objects. In nondissipative chaos, the dynamic of waves always remains time reversible, unlike that of particles. Here, we explore the dynamics of a wave-particle entity. It consists in a drop bouncing on a vibrated liquid bath, self-propelled and piloted by the surface waves it generates. This walker, in which there is an information exchange between the particle and the wave, can be analyzed in terms of a Turing machine with waves as the information repository. The experiments reveal that in this system, the drop can read information backwards while erasing it. The drop can thus backtrack on its previous trajectory. A transient temporal reversibility, restricted to the drop motion, is obtained in spite of the system being both dissipative and chaotic.

  18. Wave-Based Turing Machine: Time Reversal and Information Erasing.

    PubMed

    Perrard, S; Fort, E; Couder, Y

    2016-08-26

    The investigation of dynamical systems has revealed a deep-rooted difference between waves and objects regarding temporal reversibility and particlelike objects. In nondissipative chaos, the dynamic of waves always remains time reversible, unlike that of particles. Here, we explore the dynamics of a wave-particle entity. It consists in a drop bouncing on a vibrated liquid bath, self-propelled and piloted by the surface waves it generates. This walker, in which there is an information exchange between the particle and the wave, can be analyzed in terms of a Turing machine with waves as the information repository. The experiments reveal that in this system, the drop can read information backwards while erasing it. The drop can thus backtrack on its previous trajectory. A transient temporal reversibility, restricted to the drop motion, is obtained in spite of the system being both dissipative and chaotic. PMID:27610859

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
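
    As a small, generic illustration of the order-of-accuracy language used above (this is a textbook fourth-order stencil, not one of the report's algorithms): halving the grid spacing should shrink the derivative error by roughly a factor of 2^4.

```python
import numpy as np

def d1_central_4th(f, x, h):
    """Fourth-order central finite difference approximation of f'(x)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x0 = 0.7
exact = np.cos(x0)                       # d/dx sin(x) = cos(x)
for h in (0.1, 0.05, 0.025):
    err = abs(d1_central_4th(np.sin, x0, h) - exact)
    print(f"h = {h:<6} error = {err:.3e}")
# Each halving of h reduces the error by about 16x, until round-off dominates.
```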

  20. Phylogenetic informativeness reconciles ray-finned fish molecular divergence times

    PubMed Central

    2014-01-01

    Background Discordance among individual molecular age estimates, or between molecular age estimates and the fossil record, is observed in many clades across the Tree of Life. This discordance is attributed to a variety of variables including calibration age uncertainty, calibration placement, nucleotide substitution rate heterogeneity, or the specified molecular clock model. However, the impact of changes in phylogenetic informativeness of individual genes over time on phylogenetic inferences is rarely analyzed. Using nuclear and mitochondrial sequence data for ray-finned fishes (Actinopterygii) as an example, we extend the utility of phylogenetic informativeness profiles to predict the time intervals when nucleotide substitution saturation results in discordance among molecular ages estimated. Results We demonstrate that even with identical calibration regimes and molecular clock methods, mitochondrial based molecular age estimates are systematically older than those estimated from nuclear sequences. This discordance is most severe for highly nested nodes corresponding to more recent (i.e., Jurassic-Recent) divergences. By removing data deemed saturated, we reconcile the competing age estimates and highlight that the older mtDNA based ages were driven by nucleotide saturation. Conclusions Homoplasious site patterns in a DNA sequence alignment can systematically bias molecular divergence time estimates. Our study demonstrates that PI profiles can provide a non-arbitrary criterion for data exclusion to mitigate the influence of homoplasy on time calibrated branch length estimates. Analyses of actinopterygian molecular clocks demonstrate that scrutiny of the time scale on which sequence data is informative is a fundamental, but generally overlooked, step in molecular divergence time estimation. PMID:25103329

  1. Modelling the world in real time: how robots engineer information.

    PubMed

    Davison, Andrew J

    2003-12-15

    Programming robots and other autonomous systems to interact with the world in real time is bringing into sharp focus general questions about representation, inference and understanding. These artificial agents use digital computation to interpret the data gleaned from sensors and produce decisions and actions to guide their future behaviour. In a physical system, however, finite computational resources unavoidably impose the need to approximate and make selective use of the information available to reach prompt deductions. Recent research has led to widespread adoption of the methodology of Bayesian inference, which provides the absolute framework to understand this process fully via modelling as informed, fully acknowledged approximation. The performance of modern systems has improved greatly on the heuristic methods of the early days of artificial intelligence. We discuss the general problem of real-time inference and computation, and draw on examples from recent research in computer vision and robotics: specifically visual tracking and simultaneous localization and mapping. PMID:14667303

  2. Modelling the world in real time: how robots engineer information.

    PubMed

    Davison, Andrew J

    2003-12-15

    Programming robots and other autonomous systems to interact with the world in real time is bringing into sharp focus general questions about representation, inference and understanding. These artificial agents use digital computation to interpret the data gleaned from sensors and produce decisions and actions to guide their future behaviour. In a physical system, however, finite computational resources unavoidably impose the need to approximate and make selective use of the information available to reach prompt deductions. Recent research has led to widespread adoption of the methodology of Bayesian inference, which provides the absolute framework to understand this process fully via modelling as informed, fully acknowledged approximation. The performance of modern systems has improved greatly on the heuristic methods of the early days of artificial intelligence. We discuss the general problem of real-time inference and computation, and draw on examples from recent research in computer vision and robotics: specifically visual tracking and simultaneous localization and mapping.

  3. Performances and recent evolutions of EMSC Real Time Information services

    NASA Astrophysics Data System (ADS)

    Mazet-Roux, G.; Godey, S.; Bossu, R.

    2009-04-01

    The EMSC (http://www.emsc-csem.org) operates Real Time Earthquake Information services for the public and the scientific community which aim at providing rapid and reliable information on the seismicity of the Euro-Mediterranean region and on significant earthquakes worldwide. These services are based on parametric data rapidly provided by 66 seismological networks which are automatically merged and processed at EMSC. A web page which is updated every minute displays a list and a map of the latest earthquakes as well as additional information like location maps, moment tensor solutions or past regional seismicity. Since 2004, the performance and popularity of these services have dramatically increased. The number of messages received from the contributors and the number of published events have been multiplied by 2 since 2004 and by 1.6 since 2005 respectively. The web traffic and the numbers of users of the Earthquake Notification Service (ENS) have been multiplied by 15 and 7 respectively. In terms of ENS performance, the median dissemination time for Euro-Med events is minutes in 2008. In order to further improve its performance and especially the speed and robustness of the reception of real time data, EMSC has recently implemented software named QWIDS (Quake Watch Information Distribution System) which provides a quick and robust data exchange system through permanent TCP connections. Unlike emails, which can sometimes be delayed or lost, QWIDS is an actual real-time communication system that ensures data delivery. In terms of hardware, EMSC implemented a high-availability, dynamic load-balancing, redundant and scalable web server infrastructure, composed of two SUN T2000 and one F5 BIG-IP switch. This will allow coping with constantly increasing web traffic and the occurrence of huge peaks of traffic after widely felt earthquakes.

  4. Autonomous Information Fading and Provision to Achieve High Response Time in Distributed Information Systems

    NASA Astrophysics Data System (ADS)

    Lu, Xiaodong; Arfaoui, Helene; Mori, Kinji

    In the highly dynamic electronic commerce environment, the need for adaptability and rapid response time in information service systems has become increasingly important. In order to cope with the continuously changing conditions of service provision and utilization, the Faded Information Field (FIF) has been proposed. FIF is a distributed information service system architecture, sustained by push/pull mobile agents, that brings high assurance of services through a recursive, demand-oriented provision of the most popular information closer to the users, trading off the cost of information service allocation against the cost of access. In this paper, based on an analysis of the relationship among the users' distribution, information provision and access time, we propose a technology for FIF design that resolves the competing requirements of users and providers and improves users' access time. In addition, to achieve dynamic load balancing under changing user preferences, an autonomous information reallocation technology is proposed. We prove the effectiveness of the proposed technology through simulation and comparison with a conventional system.

  5. Readout of silicon strip detectors with position and timing information

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Irmler, C.; Pernicka, M.

    2009-01-01

    Low-noise front-end amplifiers for silicon strip detectors are already available for decades, providing excellent signal-to-noise ratio and thus very precise spatial resolution, but at the cost of a long shaping time in the microsecond range. Due to occupancy and pile-up issues, modern experiments need much faster electronics. With submicron ASICs, adequate readout and data processing, it is possible to obtain not only spatial hit data, but also accurate timing information—a feature which has rarely been exploited so far. We present the concept of a silicon vertex detector readout system intended for an upgrade of the Belle experiment at KEK (Tsukuba, Japan). The APV25 front-end chip, originally developed for CMS at CERN, is used in a way where it delivers multiple samples along the shaped waveform, such that not only the analog pulse height, but also the timing of each particle hit can be determined. We developed a complete readout system including an FADC+Processor VME module which performs zero-suppression in FPGAs. The hit time measurement is also planned on the same module. As fast amplifiers are inherently more susceptible to noise, which largely depends on the load capacitance, the front-end chips should be located as close to the detector as possible. On the other hand, the material budget, especially in a low-energy electron-positron machine such as Belle, should be minimized. We tried to merge those demands with a fully functional "Flex_Module", where thinned APV25 readout chips are mounted on the silicon sensor.
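
    The hit-time extraction described above boils down to fitting the sampled, shaped pulse for its amplitude and start time. A toy version with an idealised CR-RC shape and hypothetical 25 ns sampling (the actual APV25 waveform, clock and fitting procedure differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def crrc(t, amplitude, t0, tau=50.0):
    """Idealised CR-RC shaper response (times in ns); zero before the hit time t0."""
    x = (t - t0) / tau
    return np.where(x > 0, amplitude * x * np.exp(1.0 - x), 0.0)

rng = np.random.default_rng(3)
sample_times = np.arange(6) * 25.0                     # six samples along the waveform
true_amplitude, true_t0 = 100.0, 12.0
adc = crrc(sample_times, true_amplitude, true_t0) + rng.normal(0.0, 2.0, sample_times.size)

# Fit amplitude and hit time (tau kept fixed at its default by giving a 2-element p0).
popt, _ = curve_fit(crrc, sample_times, adc, p0=[adc.max(), 0.0])
print(f"fitted amplitude ~ {popt[0]:.1f} ADC, hit time ~ {popt[1]:.1f} ns")
```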

  6. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    NASA Astrophysics Data System (ADS)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning has become more and more popular; in particular, intelligent navigation systems that rely on GPS and dead reckoning technology are developing quickly for a large future market in China. In this paper a practical combined GPS/DR/MM positioning model is put forward, which integrates GPS, gyro, vehicle speed sensor (VSS) and digital navigation maps to provide accurate and real-time positions for an intelligent navigation system. This model is designed for automotive navigation systems, making use of a Kalman filter to improve position and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology is then used to provide map coordinates for map display. To illustrate the validity of the model, several experiments on integrated GPS/DR positioning in an intelligent navigation system are presented, and their results support the conclusion that a Kalman-filter-based GPS/DR integrated positioning approach is necessary, feasible and efficient for intelligent navigation applications. Certainly, this combined positioning model, like other models, cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.
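
    The fusion idea at the heart of such a system can be illustrated with a bare-bones constant-velocity Kalman filter that predicts with a dead-reckoning-style motion model and corrects with lower-accuracy GPS fixes. All matrices, noise levels and the simulated trajectory below are assumptions for illustration, not the paper's tuned filter (and the map-matching step is omitted):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # constant-velocity model, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # GPS observes position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.05                  # process noise (dead-reckoning drift), assumed
R = np.eye(2) * 25.0                  # GPS noise, ~5 m standard deviation, assumed

x = np.zeros(4)                       # filter state
P = np.eye(4) * 100.0                 # initial uncertainty
truth = np.array([0.0, 0.0, 3.0, 1.0])

rng = np.random.default_rng(0)
for _ in range(30):
    truth = F @ truth                               # vehicle moves
    z = H @ truth + rng.normal(0.0, 5.0, 2)         # noisy GPS fix
    x, P = F @ x, F @ P @ F.T + Q                   # predict (dead-reckoning step)
    y = z - H @ x                                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ y                                   # update with the GPS fix
    P = (np.eye(4) - K @ H) @ P

print("filtered position:", np.round(x[:2], 1), " true position:", np.round(truth[:2], 1))
```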

  7. Quantification of reverse transcriptase activity by real-time PCR as a fast and accurate method for titration of HIV, lenti- and retroviral vectors.

    PubMed

    Vermeire, Jolien; Naessens, Evelien; Vanderstraeten, Hanne; Landi, Alessia; Iannucci, Veronica; Van Nuffel, Anouk; Taghon, Tom; Pizzato, Massimo; Verhasselt, Bruno

    2012-01-01

    Quantification of retroviruses in cell culture supernatants and other biological preparations is required in a diverse spectrum of laboratories and applications. Methods based on antigen detection, such as p24 for HIV, or on genome detection are virus specific and sometimes suffer from a limited dynamic range of detection. In contrast, measurement of reverse transcriptase (RT) activity is a generic method which can be adapted for higher sensitivity using real-time PCR quantification (qPCR-based product-enhanced RT (PERT) assay). We present an evaluation of a modified SYBR Green I-based PERT assay (SG-PERT), using commercially available reagents such as MS2 RNA and ready-to-use qPCR mixes. This assay has a dynamic range of 7 logs, a sensitivity of 10 nU HIV-1 RT and outperforms p24 ELISA for HIV titer determination by lower inter-run variation, lower cost and higher linear range. The SG-PERT values correlate with transducing and infectious units in HIV-based viral vector and replication-competent HIV-1 preparations respectively. This assay can furthermore quantify Moloney Murine Leukemia Virus-derived vectors and can be performed on different instruments, such as Roche Lightcycler® 480 and Applied Biosystems ABI 7300. We consider this test to be an accurate, fast and relatively cheap method for retroviral quantification that is easily implemented for use in routine and research laboratories.
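
    Quantification in such qPCR-based assays typically runs an unknown's Ct against a standard curve of serial dilutions. A generic standard-curve sketch with hypothetical Ct values (not data from this study):

```python
import numpy as np

# Hypothetical SG-PERT standard curve: Ct values measured for serial dilutions
# of a recombinant RT standard (activity in nU per reaction).
std_activity = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7])   # 7-log dynamic range
std_ct       = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3, 12.9])

slope, intercept = np.polyfit(np.log10(std_activity), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency from the slope

def rt_activity(ct):
    """Interpolate an unknown sample's RT activity (nU) from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.0%}")
print(f"sample at Ct 21.5 ~ {rt_activity(21.5):.0f} nU RT")
```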

  8. Mitochondrial DNA as a non-invasive biomarker: accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias.

    PubMed

    Malik, Afshan N; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a "dilution bias" when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
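
    Once pseudogene-free mitochondrial and single-copy nuclear amplicons are available, relative MtDNA content is often reported from the Ct difference. A generic delta-Ct sketch, assuming ideal amplification efficiencies and two nuclear target copies per cell (the authors' exact normalisation may differ):

```python
def mtdna_copies_per_cell(ct_nuclear, ct_mito, eff_mito=2.0, eff_nuclear=2.0):
    """Mitochondrial genome copies per diploid cell from qPCR Ct values.

    Assumes the given per-cycle amplification factors (2.0 = perfect doubling)
    and two copies of the single-copy nuclear target per cell.
    """
    mt_over_n = (eff_nuclear ** ct_nuclear) / (eff_mito ** ct_mito)
    return 2.0 * mt_over_n

# Example: the mitochondrial target crosses threshold 8 cycles earlier than the nuclear one.
print(mtdna_copies_per_cell(ct_nuclear=28.0, ct_mito=20.0))   # 2 * 2**8 = 512.0
```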

  9. Quantification of Reverse Transcriptase Activity by Real-Time PCR as a Fast and Accurate Method for Titration of HIV, Lenti- and Retroviral Vectors

    PubMed Central

    Vermeire, Jolien; Naessens, Evelien; Vanderstraeten, Hanne; Landi, Alessia; Iannucci, Veronica; Van Nuffel, Anouk; Taghon, Tom; Pizzato, Massimo; Verhasselt, Bruno

    2012-01-01

    Quantification of retroviruses in cell culture supernatants and other biological preparations is required in a diverse spectrum of laboratories and applications. Methods based on antigen detection, such as p24 for HIV, or on genome detection are virus specific and sometimes suffer from a limited dynamic range of detection. In contrast, measurement of reverse transcriptase (RT) activity is a generic method which can be adapted for higher sensitivity using real-time PCR quantification (qPCR-based product-enhanced RT (PERT) assay). We present an evaluation of a modified SYBR Green I-based PERT assay (SG-PERT), using commercially available reagents such as MS2 RNA and ready-to-use qPCR mixes. This assay has a dynamic range of 7 logs, a sensitivity of 10 nU HIV-1 RT and outperforms p24 ELISA for HIV titer determination by lower inter-run variation, lower cost and higher linear range. The SG-PERT values correlate with transducing and infectious units in HIV-based viral vector and replication-competent HIV-1 preparations respectively. This assay can furthermore quantify Moloney Murine Leukemia Virus-derived vectors and can be performed on different instruments, such as Roche Lightcycler® 480 and Applied Biosystems ABI 7300. We consider this test to be an accurate, fast and relatively cheap method for retroviral quantification that is easily implemented for use in routine and research laboratories. PMID:23227216

  10. Transit Light Curves with Finite Integration Time: Fisher Information Analysis

    NASA Astrophysics Data System (ADS)

    Price, Ellen M.; Rogers, Leslie A.

    2014-10-01

    Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal to noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal to noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/~eprice.
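
    The Fisher-matrix machinery itself is compact enough to sketch numerically: build the derivative of a model light curve with respect to each parameter, form F_ij = sum_k (dm_k/dθ_i)(dm_k/dθ_j)/σ², and invert. The toy trapezoidal model, dense sampling and parameter values below are illustrative only; the paper derives analytic expressions and treats finite integration times explicitly.

```python
import numpy as np

def transit_model(t, depth, t0, duration, ingress):
    """Toy trapezoidal transit (no limb darkening, no binning)."""
    x = np.abs(t - t0)
    ramp = np.clip((duration / 2.0 - x) / ingress, 0.0, 1.0)
    return 1.0 - depth * ramp

def fisher_matrix(t, sigma, params, step=1e-6):
    """F_ij = sum_k dm_k/dtheta_i * dm_k/dtheta_j / sigma^2, by central differences."""
    grads = []
    for i in range(len(params)):
        hi, lo = list(params), list(params)
        hi[i] += step
        lo[i] -= step
        grads.append((transit_model(t, *hi) - transit_model(t, *lo)) / (2.0 * step))
    grads = np.asarray(grads)
    return grads @ grads.T / sigma ** 2

t = np.arange(-5.0, 5.0, 0.05)                   # hours, densely sampled
params = (0.01, 0.0, 3.0, 0.3)                   # depth, mid-time, duration, ingress duration
F = fisher_matrix(t, sigma=1e-4, params=params)  # 100 ppm per-point noise
cov = np.linalg.inv(F)
print("1-sigma uncertainties (depth, t0, duration, ingress):", np.sqrt(np.diag(cov)))
```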

  11. Transit light curves with finite integration time: Fisher information analysis

    SciTech Connect

    Price, Ellen M.; Rogers, Leslie A.

    2014-10-10

    Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal to noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal to noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/∼eprice.

  12. Acting to gain information: Real-time reasoning meets real-time perception

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stan

    1994-01-01

    Recent advances in intelligent reactive systems suggest new approaches to the problem of deriving task-relevant information from perceptual systems in real time. The author will describe work in progress aimed at coupling intelligent control mechanisms to real-time perception systems, with special emphasis on frame rate visual measurement systems. A model for integrated reasoning and perception will be discussed, and recent progress in applying these ideas to problems of sensor utilization for efficient recognition and tracking will be described.

  13. Time scale construction from multiple sources of information (Invited)

    NASA Astrophysics Data System (ADS)

    Malinverno, A.

    2013-12-01

    Geological age estimates are provided by diverse chronometers, such as radiometric measurements, astrochronology, and the spacing of magnetic anomalies recorded on mid-ocean ridges by seafloor spreading. These age estimates are affected by errors that can be systematic (e.g., biased radiometric dates due to imperfect assumptions) or random (e.g., imprecise recording of astronomical cycles in sedimentary records). Whereas systematic errors can be reduced by improvements in technique and calibration, uncertainties due to random errors will always be present and need to be dealt with. A Bayesian framework can be used to construct an integrated time scale that is based on several uncertain sources of information. In this framework, each piece of data and the final time scale have an associated probability distribution that describes their uncertainty. The key calculation is to determine the uncertainty in the time scale from the uncertain data that constrain it. In practice, this calculation can be performed by Monte Carlo sampling. In Markov chain Monte Carlo algorithms, the time scale is iteratively perturbed and the perturbed time scale is accepted or rejected depending on how closely it fits the data. The final result is a large ensemble of possible time scales that are consistent with all the uncertain data; while the average of this ensemble defines a 'best' time scale, the ensemble variability quantifies the time scale uncertainty. An example of this approach is the M-sequence (Late Jurassic-Early Cretaceous, ~160-120 Ma) MHTC12 geomagnetic polarity time scale (GPTS) of Malinverno et al. (2012, J. Geophys. Res., B06104, doi:10.1029/2012JB009260). Previous GPTSs were constructed by interpolating between dated marine magnetic anomalies while assuming constant or smoothly varying spreading rates. These GPTSs were typically based on magnetic lineations from one or a few selected spreading centers, and an undesirable result is that they imply larger spreading rate
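
    The accept/reject idea behind such Monte Carlo time-scale construction can be shown with a deliberately tiny example: one boundary age constrained by two independent, uncertain estimates (hypothetical numbers; the MHTC12 GPTS involves many ages plus spreading-rate constraints):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent age estimates for the same chron boundary (Ma): mean, 1-sigma.
estimates = [(154.1, 1.2), (152.8, 0.8)]        # hypothetical values

def log_posterior(age):
    return sum(-0.5 * ((age - mu) / sd) ** 2 for mu, sd in estimates)

# Metropolis sampling: perturb the age, accept or reject according to the data fit.
age, samples = 153.0, []
for _ in range(20000):
    proposal = age + rng.normal(0.0, 0.5)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(age):
        age = proposal
    samples.append(age)

samples = np.array(samples[2000:])              # discard burn-in
print(f"age = {samples.mean():.2f} +/- {samples.std():.2f} Ma")   # ensemble mean and spread
```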

  14. The LINEAR Photometric Database: Time Domain Information for SDSS Objects

    NASA Astrophysics Data System (ADS)

    Veyette, Mark; Becker, A. C.; Bozic, H.; Carroll, P.; Champey, P.; Draper, Z.; Evans, N.; Filbrandt, A.; Fowler, J.; Gailey, J.; Galin, M.; Ivezic, Z.; Jennings, Z.; Kelley, J.; Kroflin, A.; Laws, C.; Lewarch, E.; Loebman, S.; Mayorga, L.; Mesaric, M.; Morgan, D. P.; Munk, P.; Oluseyi, H.; Palaversa, L.; Patel, M.; Ruzdjak, D.; Schmidt, S.; Sesar, B.; Srdoc, G.; Steakley, K.; Stuart, J. S.; Sudar, D.; Vrbanec, D.; Westman, D. B.; Wheaton, S.; Wozniak, P.

    2012-01-01

    We announce a public database of over 5 billion photometric measurements for about 25 million objects, mostly stars with V<18, obtained by the asteroid survey LINEAR (available through the SkyDot website, skydot.lanl.gov). With 200 observations per object on average, LINEAR data provide time domain information for the brightest 4 magnitudes of SDSS survey objects. By combining information from these databases we have selected and visually classified some 200,000 candidate variable stars. Guided by these classifications, we selected the largest available sample of candidate field SX Phe stars (blue straggler halo stars) and demonstrated its low contamination through follow up observations at a number of telescopes in Croatia and the U.S. We have also constructed samples of several thousand distant RR Lyrae stars, as well as several thousand eclipsing binary stars, and are currently investigating the statistical properties of these data.

  15. Timely and/or Controversial Information for Family Physicians.

    PubMed

    Bowman, Marjorie A; Neale, Anne Victoria; Seehusen, Dean A

    2015-01-01

    Plan to spend some time reading this information-dense issue with a large amount of new material and ideas. From the humanoid behavioral health coach to tackling the controversial topic of environmental causes of autism spectrum disorders, this issue encompasses a broad range of topics. New anticoagulants for an extremely common entity, atrial fibrillation, are discussed. Learn about the shocking increase in oropharyngeal cancers with a changing epidemiology: younger patients with a different clinical presentation. Researchers evaluate changes after new or revised guidelines. "Near miss" reporting can facilitate quality improvement. Pets can make humans ill, yet they are beloved and can improve the health of their human owners.

  16. Optimal Perceived Timing: Integrating Sensory Information with Dynamically Updated Expectations

    PubMed Central

    Di Luca, Massimiliano; Rhodes, Darren

    2016-01-01

    The environment has a temporal structure, and knowing when a stimulus will appear translates into increased perceptual performance. Here we investigated how the human brain exploits temporal regularity in stimulus sequences for perception. We find that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted. Stimuli presented earlier than expected are perceptually delayed, whereas stimuli presented on time and later than expected are perceptually accelerated. This result suggests that the brain regularizes slightly deviant stimuli with an asymmetry that leads to the perceptual acceleration of expected stimuli. We present a Bayesian model for the combination of dynamically-updated expectations, in the form of a priori probability of encountering future stimuli, with incoming sensory information. The asymmetries in the results are accounted for by the asymmetries in the distributions involved in the computational process. PMID:27385184
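
    The reliability-weighted combination at the core of such Bayesian timing models fits in a few lines (Gaussian expectation and Gaussian sensory estimate; the asymmetric distributions used in the paper are not reproduced here, and the numbers are illustrative):

```python
def fuse_timing(expected_ms, prior_sd, sensed_ms, sensory_sd):
    """Posterior mean and sd of a perceived onset time: precision-weighted average of
    the expectation (from the regular sequence) and the sensory estimate."""
    w_prior = 1.0 / prior_sd ** 2
    w_sense = 1.0 / sensory_sd ** 2
    mean = (w_prior * expected_ms + w_sense * sensed_ms) / (w_prior + w_sense)
    sd = (w_prior + w_sense) ** -0.5
    return mean, sd

# A stimulus arriving 40 ms earlier than the expected beat is perceived closer to the
# expected time, i.e. it appears delayed, as described above.
print(fuse_timing(expected_ms=500.0, prior_sd=30.0, sensed_ms=460.0, sensory_sd=30.0))
# -> (480.0, ~21.2)
```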

  17. Time course of visual information utilization during fixations in reading.

    PubMed

    Blanchard, H E; McConkie, G W; Zola, D; Wolverton, G S

    1984-02-01

    College students read short texts from a cathode-ray tube as their eye movements were being monitored. During selected fixations, the text was briefly masked and then it reappeared with one word changed. Subjects often were unaware that the word had changed. Sometimes they reported seeing the first presented word, sometimes the second presented word, and sometimes both. When only one word was reported, two factors were found to determine which one it was: the length of time a word was present during the fixation and the predictability of a word in its context. The results suggested that visual information is utilized for reading at a crucial period during the fixation and that this crucial period can occur at different times on different fixations. The pattern of responses suggested that the first letter of a word is not utilized before other letters and that letters are not scanned from left to right during a fixation.

  18. 78 FR 21246 - Definition of Factual Information and Time Limits for Submission of Factual Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ... Modification of Regulation Regarding the Extension of Time Limits, 78 FR 3367 (January 16, 2013). 18. Restrict... Factual Information, 77 FR 40534 (July 10, 2012) (Proposed Rule). The Proposed Rule explained the... Antidumping Duty Order, 77 FR 14493 (March 12, 2012) and accompanying Issues and Decision Memorandum at...

  19. Accurate Mass GC/LC-Quadrupole Time of Flight Mass Spectrometry Analysis of Fatty Acids and Triacylglycerols of Spicy Fruits from the Apiaceae Family

    PubMed Central

    Nguyen, Thao; Aparicio, Mario; Saleh, Mahmoud A.

    2016-01-01

    The triacylglycerol (TAG) structure and the regio-stereospecific distribution of fatty acids (FA) of seed oils from most of the Apiaceae family are not well documented. The TAG structure ultimately determines the final physical properties of the oils and the position of FAs in the TAG molecule affects the digestion; absorption and metabolism; and physical and technological properties of TAGs. Fixed oils from the fruits of dill (Anethum graveolens), caraway (Carum carvi), cumin (Cuminum cyminum), coriander (Coriandrum sativum), anise (Pimpinella anisum), carrot (Daucus carota), celery (Apium graveolens), fennel (Foeniculum vulgare), and Khella (Ammi visnaga), all from the Apiaceae family, were extracted at room temperature in chloroform/methanol (2:1 v/v) using percolators. Crude lipids were fractionated by solid phase extraction to separate neutral triacylglycerols (TAGs) from other lipid components. Neutral TAGs were subjected to a transesterification process to convert them to their corresponding fatty acid methyl esters (FAMES) using 1% boron trifluoride (BF3) in methanol. FAMES were analyzed by gas chromatography-quadrupole time of flight (GC-QTOF) mass spectrometry. Triglycerides were analyzed using high performance liquid chromatography-quadrupole time of flight (LC-QTOF) mass spectrometry. Petroselinic acid was the major fatty acid in all samples ranging from 57% of the total fatty acids in caraway up to 82% in fennel. All samples contained palmitic (16:0), palmitoleic (C16:1n-9), stearic (C18:0), petroselinic (C18:1n-12), linoleic (C18:2n-6), linolenic (18:3n-3), and arachidic (C20:0) acids. TAG were analyzed using LC-QTOF for accurate mass identification and mass spectrometry/mass spectrometry (MS/MS) techniques for regiospecific elucidation of the identified TAGs. Five major TAGs were detected in all samples but with different relative concentrations in all of the tested samples. Several other TAGs were detected as minor components and were present in

  20. Accurate Mass GC/LC-Quadrupole Time of Flight Mass Spectrometry Analysis of Fatty Acids and Triacylglycerols of Spicy Fruits from the Apiaceae Family.

    PubMed

    Nguyen, Thao; Aparicio, Mario; Saleh, Mahmoud A

    2015-01-01

    The triacylglycerol (TAG) structure and the regio-stereospecific distribution of fatty acids (FA) of seed oils from most of the Apiaceae family are not well documented. The TAG structure ultimately determines the final physical properties of the oils and the position of FAs in the TAG molecule affects the digestion; absorption and metabolism; and physical and technological properties of TAGs. Fixed oils from the fruits of dill (Anethum graveolens), caraway (Carum carvi), cumin (Cuminum cyminum), coriander (Coriandrum sativum), anise (Pimpinella anisum), carrot (Daucus carota), celery (Apium graveolens), fennel (Foeniculum vulgare), and Khella (Ammi visnaga), all from the Apiaceae family, were extracted at room temperature in chloroform/methanol (2:1 v/v) using percolators. Crude lipids were fractionated by solid phase extraction to separate neutral triacylglycerols (TAGs) from other lipid components. Neutral TAGs were subjected to a transesterification process to convert them to their corresponding fatty acid methyl esters (FAMES) using 1% boron trifluoride (BF₃) in methanol. FAMES were analyzed by gas chromatography-quadrupole time of flight (GC-QTOF) mass spectrometry. Triglycerides were analyzed using high performance liquid chromatography-quadrupole time of flight (LC-QTOF) mass spectrometry. Petroselinic acid was the major fatty acid in all samples ranging from 57% of the total fatty acids in caraway up to 82% in fennel. All samples contained palmitic (16:0), palmitoleic (C16:1n-9), stearic (C18:0), petroselinic (C18:1n-12), linoleic (C18:2n-6), linolenic (18:3n-3), and arachidic (C20:0) acids. TAG were analyzed using LC-QTOF for accurate mass identification and mass spectrometry/mass spectrometry (MS/MS) techniques for regiospecific elucidation of the identified TAGs. Five major TAGs were detected in all samples but with different relative concentrations in all of the tested samples. Several other TAGs were detected as minor components and were present in

  1. A Provenance Model for Real-Time Water Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Bai, Q.; Zednik, S.; Taylor, P.; Fox, P. A.; Taylor, K.; Kloppers, C.; Peters, C.; Terhorst, A.; West, P.; Compton, M.; Shu, Y.; Provenance Management Team

    2010-12-01

    Generating hydrological data products, such as flow forecasts, involves complex interactions among instruments, data simulation models, computational facilities and data providers. Correct interpretation of the data produced at various stages requires a good understanding of how the data were generated or processed. Provenance describes the lineage of a data product. Making provenance information accessible to hydrologists and decision makers not only helps to determine the data's value, accuracy and authorship, but also enables users to determine the trustworthiness of the data product. In the water domain, WaterML2 [1] is an emerging standard which describes an information model and format for the publication of water observations data in XML. The W3C Semantic Sensor Network Incubator Group (SSN-XG) [3] is producing ontologies for the description of sensor configurations. By integrating domain knowledge of this kind into the provenance information model, the integrated information model will enable water domain researchers and water resource managers to better analyse how observations and derived data products were generated. We first introduce the Proof Markup Language (PML2) [2], WaterML2 and the SSN-XG sensor ontology as the proposed provenance representation formalism. Then we describe initial implementations of how these standards could be integrated to represent the lineage of water information products. Finally, we highlight how the provenance model for a distributed real-time water information system assists in interpreting data products and establishing trust. References: [1] Taylor, P., Walker, G., Valentine, D., Cox, Simon: WaterML2.0: Harmonising standards for water observation data. Geophysical Research Abstracts, Vol. 12. [2] da Silva, P.P., McGuinness, D.L., Fikes, R.: A proof markup language for semantic web services. Inf. Syst. 31(4) (2006), 381-395. [3] W3C Semantic Sensor Network Incubator Group, http://www.w3.org/2005/Incubator
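
    As a rough illustration of how such an integrated provenance model might be expressed, the sketch below records, as RDF triples, that a flow-forecast product used a gauge observation and was generated by a model run. The namespaces, class names and properties are placeholders standing in for WaterML2, PML2 and SSN-XG vocabulary terms (not their actual URIs), and Python with rdflib is assumed.

      from rdflib import Graph, Namespace
      from rdflib.namespace import RDF

      # Illustrative namespaces only; the real WaterML2/PML2/SSN URIs differ.
      WML = Namespace("http://example.org/waterml2#")
      PML = Namespace("http://example.org/pml2#")
      EX = Namespace("http://example.org/data#")

      g = Graph()
      # A flow-forecast product derived from a gauge observation by a model run.
      g.add((EX.flowForecast42, RDF.type, WML.TimeSeriesObservation))
      g.add((EX.flowForecast42, PML.hasSourceUsage, EX.gaugeObservation7))
      g.add((EX.gaugeObservation7, RDF.type, WML.MeasurementTimeseries))
      g.add((EX.flowForecast42, PML.wasGeneratedBy, EX.forecastModelRun3))

      print(g.serialize(format="turtle"))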

  2. Discovering Information Behavior in Sense Making: I. Time and Timing. [and] II. The Social. [and] III. The Person.

    ERIC Educational Resources Information Center

    Solomon, Paul

    1997-01-01

    Using the methods of ethnography of communication, this three-part study investigates time and timing, social elements, and individual information behavior in sense making. Argues that the sense-making concept captures how information behavior creates meaning and that a research focus on information and information-seeking divides people from…

  3. Science, VxOs and Just In Time Information

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Merka, J.; Bentley, R. D.; Roberts, A.; Rankin, R.; Candey, R. M.; Narock, T. W.

    2009-12-01

    The scientific method is a system for acquiring knowledge based on the collection of data through observation, experimentation and the integration of previous knowledge. This is followed by the formulation and testing of hypotheses, resulting in new knowledge and possibly the correction of previous knowledge. From a system design perspective, the scientific method is a well-defined system: use cases are abundant, requirements are readily accessible and guiding principles are fully articulated. With the advancement of technology, new implementations emerge to support and enable science. We are now in the age of Virtual Observatories, where distributed data are coupled through services and well-defined metadata. The paradigm is one in which information is sought and retrieved just in time for its use. We discuss a system model for a Just In Time Information (JITI) system that addresses the clearly identified needs of scientists. It includes tasks such as coordinate system conversion, file format transformation, subsetting, aggregation, and rendering. We also discuss the discovery needs of the scientist, which range from the initial discovery of available resources to complex scientific queries. Overall the system is composed of a collection of small services which are tied together on a task-by-task basis, similar to a workflow, but with distributed and loosely coupled components. In a JITI system each service is invoked as needed, with unique resource identifiers passed as the common reference thread that enables the service integration. The services that are part of a JITI system can be utilized in a number of ways to implement portals, search engines, aggregators, and mash-ups. JITI-like systems are emerging in the Virtual Observatory communities. We look at NASA's Virtual Magnetospheric Observatory, the Heliophysics Event List Manager (HELM), Europe's HELIO project and Canada's CSSDP project as examples.
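
    A minimal sketch of the service-chaining idea, making no assumptions about the actual VxO interfaces: each "service" is a small function that consumes and returns a resource identifier, and a pipeline is assembled task by task rather than as a fixed workflow. All names and the identifier scheme are illustrative.

      # Hypothetical services keyed to the tasks named above; each one takes a
      # resource identifier and returns a new identifier for the derived resource.
      def coordinate_convert(resource_id, frame="GSM"):
          return f"{resource_id}?frame={frame}"

      def subset(resource_id, start, stop):
          return f"{resource_id}&t0={start}&t1={stop}"

      def render(resource_id, fmt="cdf"):
          return f"{resource_id}&format={fmt}"

      # A pipeline assembled on demand for one task, not a fixed workflow.
      pipeline = [
          lambda r: coordinate_convert(r, "GSE"),
          lambda r: subset(r, "2009-01-01", "2009-01-02"),
          lambda r: render(r, "png"),
      ]

      resource = "vxo://dataset/example_fgm_l2"   # hypothetical resource identifier
      for service in pipeline:
          resource = service(resource)
      print(resource)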

  4. Use of Real-time Satellite Rainfall Information in a Global Flood Estimation System

    NASA Astrophysics Data System (ADS)

    Adler, R. F.; Wu, H.; Tian, Y.

    2012-12-01

    (FAR) is ~ 0.6. Limitations in the flood calculations that are related to the satellite rainfall estimates include space and time resolution limitations and underestimation of shallow orographic and monsoon system rainfall. Recent cases from 2012, mainly over Asia, are discussed as examples of the utility of the output information and the importance of accurate rainfall input. These calculations in their current form provide information useful to national and international agencies in understanding the location, intensity, timeline and impact on populations of these significant hazard events. Improvements in precipitation information from the coming Global Precipitation Measurement (GPM) mission (2014) will improve the flood model calculations, and improvements in hydrological model resolution and water routing will improve the utility of the results.

  5. A conceptual framework for intelligent real-time information processing

    NASA Technical Reports Server (NTRS)

    Schudy, Robert

    1987-01-01

    By combining artificial intelligence concepts with the human information processing model of Rasmussen, a conceptual framework was developed for real time artificial intelligence systems which provides a foundation for system organization, control and validation. The approach is based on the description of system processing terms of an abstraction hierarchy of states of knowledge. The states of knowledge are organized along one dimension which corresponds to the extent to which the concepts are expressed in terms of the system inouts or in terms of the system response. Thus organized, the useful states form a generally triangular shape with the sensors and effectors forming the lower two vertices and the full evaluated set of courses of action the apex. Within the triangle boundaries are numerous processing paths which shortcut the detailed processing, by connecting incomplete levels of analysis to partially defined responses. Shortcuts at different levels of abstraction include reflexes, sensory motor control, rule based behavior, and satisficing. This approach was used in the design of a real time tactical decision aiding system, and in defining an intelligent aiding system for transport pilots.

  6. Advanced Fire Information System - A real time fire information system for Africa

    NASA Astrophysics Data System (ADS)

    Frost, P. E.; Roy, D. P.

    2012-12-01

    The Council for Scientific and Industrial Research (CSIR), led by the Meraka Institute and supported by the South African National Space Agency (SANSA), developed the Advanced Fire Information System (AFIS) to provide near real-time fire information to a variety of operational and science fire users, including disaster managers, fire fighters, farmers and forest managers located across Southern and Eastern Africa. The AFIS combines satellite data with ground-based observations and statistics and distributes the information via mobile phone technology. The system was launched in 2004, and Eskom (South Africa's and Africa's largest power utility) quickly became the biggest user; today more than 300 Eskom line managers and support staff receive cell phone and email fire alert messages whenever a wildfire is within 2 km of any of the 28,000 km of Eskom electricity transmission lines. The AFIS uses Earth observation satellites from NASA and Europe to detect possible actively burning fires and their fire radiative power (FRP). The polar-orbiting MODIS Terra and Aqua satellites provide data at around 10 am, 3 pm, 10 pm and 3 am daily, while the European geostationary MSG satellite provides 15-minute updates at lower spatial resolution. The AFIS processing system ingests the raw satellite data and, within minutes of the satellite overpass, generates fire location and FRP-based fire intensity information. The AFIS and new functionality are presented, including an incident reporting and permitting system that can be used to differentiate between prescribed burns and uncontrolled wildfires, and the provision of other information including 5-day fire danger forecasts, vegetation curing information and historical burned area maps. A new AFIS mobile application for iOS and Android devices, as well as a fire reporting tool, are showcased; these enable the dissemination and alerting of fire information, user upload of geo-tagged photographs and on-the-fly creation of fire reports
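
    The 2 km proximity alert can be pictured with a simple great-circle test such as the sketch below (Python, invented coordinates); the operational AFIS presumably tests detections against the full line geometry rather than individual vertices.

      import math

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two points, in kilometres.
          r = 6371.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp = math.radians(lat2 - lat1)
          dl = math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      def fires_near_line(fire_detections, line_vertices, threshold_km=2.0):
          # Flag active-fire pixels closer than the threshold to any line vertex.
          return [(flat, flon) for flat, flon in fire_detections
                  if any(haversine_km(flat, flon, vlat, vlon) <= threshold_km
                         for vlat, vlon in line_vertices)]

      print(fires_near_line([(-25.74, 28.19)], [(-25.75, 28.20), (-25.80, 28.30)]))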

  7. Performance of Real-time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, H.; Horiuchi, S.; Wu, C.; Yamamoto, S.

    2008-12-01

    Horiuchi et al. (2005) developed a real-time earthquake information system (REIS) using Hi-net, a densely deployed nationwide seismic network consisting of about 800 stations operated by NIED, Japan. REIS determines hypocenter locations and earthquake magnitudes automatically within a few seconds after P waves arrive at the closest station, and calculates focal mechanisms within about 15 seconds. The obtained hypocenter parameters are transferred immediately, in XML format, to a computer at the Japan Meteorological Agency (JMA), which started an earthquake early warning (EEW) service to special users in June 2005. JMA also developed EEW using 200 stations, and the results of the two systems are merged. Among all the first-issued EEW reports from both systems, REIS information accounts for about 80 percent. This study examines the rapidity and credibility of REIS by analyzing the 4,050 earthquakes with magnitude larger than 3.0 that have occurred around the Japan Islands since 2005. REIS re-determines hypocenter parameters every second as waveform data are updated; here, we discuss only the results of the first reports. Regarding rapidity, our results show that about 44 percent of the first reports are issued within 5 seconds after the P wave arrives at the closest station. Note that this 5-second window includes a delay of about 2 seconds due to data packaging and transmission. REIS waits until two stations detect P waves for events inside the network, but four stations for events outside the network, so as to obtain reliable solutions. For earthquakes with hypocentral distances of less than 100 km, 55 percent are warned within 5 seconds and 87 percent within 10 seconds. Most events with long delays are small and are triggered by S-wave arrivals. About 80 percent of events have epicenter differences of less than 20 km relative to JMA's manually determined locations. Because of the existence of large lateral heterogeneity in seismic velocity, the difference depends
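
    The reporting rule quoted above (two triggered stations for events inside the network, four for events outside) can be sketched as follows; the station codes and detection times are invented, and the hypocenter and magnitude estimation steps are omitted.

      # Minimal sketch of the REIS trigger criterion described in the abstract.
      def ready_to_report(p_detections, event_inside_network):
          required = 2 if event_inside_network else 4
          return len(p_detections) >= required

      detections = [("STN_A", 12.31), ("STN_B", 12.95)]      # (station, P-arrival time in s)
      print(ready_to_report(detections, event_inside_network=True))    # True
      print(ready_to_report(detections, event_inside_network=False))   # False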

  8. The effect of chromatic and luminance information on reaction times.

    PubMed

    O'Donell, Beatriz M; Barraza, Jose F; Colombo, Elisa M

    2010-07-01

    We present a series of experiments exploring the effect of chromaticity on reaction time (RT) for a variety of stimulus conditions, including chromatic and luminance contrast, luminance, and size. The chromaticity of these stimuli was varied along a series of vectors in color space that included the two chromatic-opponent-cone axes, a red-green (L-M) axis and a blue-yellow [S - (L + M)] axis, and intermediate noncardinal orientations, as well as the luminance axis (L + M). For Weber luminance contrasts above 10-20%, RTs tend to the same asymptote, irrespective of chromatic direction. At lower luminance contrast, the addition of chromatic information shortens the RT. RTs are strongly influenced by stimulus size when the chromatic stimulus is modulated along the [S - (L + M)] pathway and by stimulus size and adaptation luminance for the (L-M) pathway. RTs are independent of stimulus size for stimuli larger than 0.5 deg. Data are modeled with a modified version of Piéron's formula with an exponent close to 2, in which the stimulus intensity term is replaced by a factor that considers the relative effects of chromatic and achromatic information, as indexed by the root-mean-square (RMS) cone contrast value at isoluminance and the Weber luminance contrast, respectively. The parameters of the model reveal how RT is linked to stimulus size, chromatic channels, and adaptation luminance and how they can be interpreted in terms of two chromatic mechanisms. This equation predicts that, for isoluminance, RTs for a stimulus lying on the S-cone pathway are higher than those for a stimulus lying on the L-M-cone pathway, for a given RMS cone contrast. The equation also predicts an asymptotic trend to the RT for an achromatic stimulus when the luminance contrast is sufficiently large.
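
    For reference, Piéron's formula in its standard form, and the kind of modification the abstract describes, can be written as follows; the additive weighting of the chromatic and achromatic terms is an illustrative assumption, not the authors' fitted model:

      RT = RT_0 + k\,I^{-\beta}, \qquad \beta \approx 2,

      RT = RT_0 + k\,\bigl(a\,C_{\mathrm{RMS}} + b\,C_{W}\bigr)^{-\beta},

    where C_{\mathrm{RMS}} is the RMS cone contrast at isoluminance and C_{W} is the Weber luminance contrast.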

  9. How Have Concepts of Informal Learning Developed over Time?

    ERIC Educational Resources Information Center

    Carliner, Saul

    2013-01-01

    Although the current interest in informal learning seems recent, performance improvement professionals have long had an interest in informal learning-the ways that people learn outside of formal structures. The earliest forms of learning for work were informal, including de facto and formal apprenticeship programs and the "school of…

  10. Life, Information, Entropy, and Time: Vehicles for Semantic Inheritance.

    PubMed

    Crofts, Antony R

    2007-01-01

    Attempts to understand how information content can be included in an accounting of the energy flux of the biosphere have led to the conclusion that, in information transmission, one component, the semantic content, or "the meaning of the message," adds no thermodynamic burden over and above costs arising from coding, transmission and translation. In biology, semantic content has two major roles. For all life forms, the message of the genotype encoded in DNA specifies the phenotype, and hence the organism that is tested against the real world through the mechanisms of Darwinian evolution. For human beings, communication through language and similar abstractions provides an additional supra-phenotypic vehicle for semantic inheritance, which supports the cultural heritages around which civilizations revolve. The following three postulates provide the basis for discussion of a number of themes that demonstrate some important consequences. (i) Information transmission through either pathway has thermodynamic components associated with data storage and transmission. (ii) The semantic content adds no additional thermodynamic cost. (iii) For all semantic exchange, meaning is accessible only through translation and interpretation, and has a value only in context. (1) For both pathways of semantic inheritance, translational and copying machineries are imperfect. As a consequence both pathways are subject to mutation and to evolutionary pressure by selection. Recognition of semantic content as a common component allows an understanding of the relationship between genes and memes, and a reformulation of Universal Darwinism. (2) The emergent properties of life are dependent on a processing of semantic content. The translational steps allow amplification in complexity through combinatorial possibilities in space and time. Amplification depends on the increased potential for complexity opened by 3D interaction specificity of proteins, and on the selection of useful variants by

  12. Real-Time Traffic Information for Emergency Evacuation Operations: Phase A Final Report

    SciTech Connect

    Franzese, Oscar; Zhang, Li; Mahmoud, Anas M.; Lascurain, Mary Beth; Wen, Yi

    2010-05-01

    is also equipped with its own power supply and a GPS (Global Positioning System) device to auto-determine its spatial location on the transportation network under surveillance. The system is capable of assessing traffic parameters by identifying and re-identifying vehicles in the traffic stream as those vehicles pass over the sensors. The system of sensors transmits, through wireless communication, real-time traffic information (travel time and other parameters) to a command and control center via an NTCIP (National Transportation Communications for ITS Protocol)-compatible interface. As an alternative, an existing NTCIP-compatible system accepts the real-time traffic information mentioned above and broadcasts the traffic information to emergency managers, the media and the public via the existing channels. A series of tests, both in a controlled environment and in the field, was conducted to study the feasibility of rapidly deploying the system of traffic sensors and to assess its ability to provide real-time traffic information during an emergency evacuation. The results of these tests indicated that the prototype sensors are reliable and accurate for the type of application that is the focus of this project.
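
    Travel-time estimation by re-identification, the core measurement described above, amounts to matching a vehicle signature seen at an upstream sensor with the same signature at a downstream sensor and differencing the detection times; the sketch below uses invented signatures and timestamps.

      def travel_times(upstream, downstream):
          # upstream/downstream: dicts mapping vehicle signature -> detection time (s).
          return {sig: downstream[sig] - t0
                  for sig, t0 in upstream.items() if sig in downstream}

      up = {"sigA": 100.0, "sigB": 104.5}
      down = {"sigA": 172.0, "sigB": 180.0, "sigC": 190.0}
      tt = travel_times(up, down)
      print(tt)                             # {'sigA': 72.0, 'sigB': 75.5}
      print(sum(tt.values()) / len(tt))     # mean segment travel time in seconds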

  13. Pricing of Claims in Discrete Time with Partial Information

    SciTech Connect

    Rognlien Dahl, Kristina

    2013-10-15

    We consider the pricing problem of a seller with delayed price information. By using Lagrange duality, a dual problem is derived, and it is proved that there is no duality gap. This gives a characterization of the seller's price of a contingent claim. Finally, we analyze the dual problem, and compare the prices offered by two sellers with delayed and full information respectively.
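
    In generic terms (a textbook statement of Lagrange duality, not the paper's specific market model), the duality relation referred to above is

      p^{*} \;=\; \inf_{x \in X}\, \sup_{\lambda \ge 0} L(x,\lambda), \qquad
      d^{*} \;=\; \sup_{\lambda \ge 0}\, \inf_{x \in X} L(x,\lambda), \qquad d^{*} \le p^{*},

    and the absence of a duality gap means d^{*} = p^{*}, so the seller's price can equivalently be characterized through the dual problem.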

  14. An Information Processing Approach to Skill Acquisition: Perception and Timing.

    ERIC Educational Resources Information Center

    Rothstein, Anne L.

    In order to understand learners and players in relation to environments typically found in sport, it is necessary to first understand the individual as an information processor who must sample information from the environment, interpret it, organize or select an appropriate motor response, and execute that response. One of the most difficult…

  15. The Soviet applied information sciences in a time of change

    SciTech Connect

    Bengston, J.; Cronin, R.R.; Davidson, R.B.

    1991-07-01

    The Foreign Applied Sciences Assessment Center (FASAC) conducts reviews of selected areas of foreign basic and applied science by US scientists who are technically expert and active in the fields reviewed. Several of the FASAC assessments of Soviet science have involved various aspects of the information sciences, including enabling technologies and applications, as well as the core information sciences. This report draws upon those FASAC assessment reports, the expert judgment of some of the authors of those reports, and other public sources to characterize the current state of the information sciences in the Soviet Union and the effects of information science capabilities upon other areas of Soviet science and technology. This report also provides estimates of the likely effect of the political and social reforms underway in the Soviet Union on future Soviet progress in the information sciences and, at a more general level, in science and technology. 41 refs., 7 tabs.

  16. Accurate measurement of the jitter time of GaAs photoconductive semiconductor switches triggered by a one-to-two optical fiber

    SciTech Connect

    Shi, Wei; Zhang, Lin; Gui, Huaimeng; Hou, Lei; Xu, Ming; Qu, Guanghui

    2013-04-15

    An improved method is proposed to measure the jitter time of photoconductive semiconductor switches (PCSSs). A one-to-two fiber is utilized to separate and guide the 1053 nm laser beam to trigger two identical 3-mm-gap GaAs PCSSs synchronously. The jitter time is derived from the time lags between the turn-on of the two switches using error transfer theory. At a bias voltage of 1 kV, the jitter time is measured as 14.41 ps, which is the lowest jitter of a GaAs PCSS reported so far.
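
    The error-transfer step can be illustrated as follows, under the assumption (suggested by the use of two identical PCSSs, though not stated explicitly in the abstract) that the two switches have independent, identically distributed turn-on jitter:

      \Delta t = t_1 - t_2, \qquad
      \operatorname{Var}(\Delta t) = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) = 2\sigma_{\mathrm{jitter}}^{2}
      \;\Longrightarrow\; \sigma_{\mathrm{jitter}} = \sigma_{\Delta t}/\sqrt{2}.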

  17. Accurate mass determination, quantification and determination of detection limits in liquid chromatography-high-resolution time-of-flight mass spectrometry: challenges and practical solutions.

    PubMed

    Vergeynst, Leendert; Van Langenhove, Herman; Joos, Pieter; Demeestere, Kristof

    2013-07-30

    Uniform guidelines for the data processing and validation of qualitative and quantitative multi-residue analysis using full-spectrum high-resolution mass spectrometry are scarce. Through systematic research, optimal mass accuracy and sensitivity are obtained after refining the post-processing of the HRMS data. For qualitative analysis, transforming the raw profile spectra to centroid spectra is recommended, resulting in a 2.3-fold improvement in the precision of the accurate mass determination of spectrum peaks. However, processing centroid data for quantitative purposes could lead to signal interruption when too narrow mass windows are applied for the construction of extracted ion chromatograms. Therefore, peak integration on the raw profile data is recommended. An optimal mass window width of 50 ppm, which is a trade-off between sensitivity and selectivity, was obtained for a TOF instrument providing a resolving power of 20,000 at full width at half maximum (FWHM). For the validation of HRMS analytical methods, widespread concepts such as signal-to-noise ratios for the determination of decision limits and detection capabilities have been shown to be not always applicable, because in some cases almost no noise can be detected anymore. A statistical methodology providing a reliable alternative is extended and applied. PMID:23856232
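
    The recommended ±50 ppm extraction window is a simple calculation; the sketch below (Python, with a hypothetical analyte m/z) shows how the window bounds would be obtained before integrating the raw profile signal.

      def eic_window(target_mz, ppm=50.0):
          # Half-width of the extracted-ion-chromatogram window in m/z units.
          half_width = target_mz * ppm * 1e-6
          return target_mz - half_width, target_mz + half_width

      lo, hi = eic_window(283.2637, ppm=50.0)   # hypothetical analyte m/z
      print(f"integrate profile signal between m/z {lo:.4f} and {hi:.4f}")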

  18. Transfer of Timing Information from RGC to LGN Spike Trains

    NASA Astrophysics Data System (ADS)

    Teich, Malvin C.; Lowen, Steven B.; Saleh, Bahaa E. A.; Kaplan, Ehud

    1998-03-01

    We have studied the firing patterns of retinal ganglion cells (RGCs) and their target lateral geniculate nucleus (LGN) cells. We find that clusters of spikes in the RGC neural firing pattern appear at the LGN output essentially unchanged, while isolated RGC firing events are more likely to be eliminated; the LGN action-potential sequence is therefore not merely a randomly deleted version of the RGC spike train. Employing information-theoretic techniques we developed for point processes (B. E. A. Saleh and M. C. Teich, Phys. Rev. Lett. 58, 2656-2659 (1987)), we are able to estimate the information efficiency of the LGN neuronal output, i.e., the proportion of the variation in the LGN firing pattern that carries information about its associated RGC input. A suitably modified integrate-and-fire neural model reproduces both the enhanced clustering in the LGN data (which accounts for the increased coefficient of variation) and the measured value of information efficiency, as well as mimicking the results of other observed statistical measures. Reliable information transmission therefore coexists with fractal fluctuations, which appear in RGC and LGN firing patterns (M. C. Teich, C. Heneghan, S. B. Lowen, T. Ozaki, and E. Kaplan, J. Opt. Soc. Am. A 14, 529-546 (1997)).
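
    A toy leaky integrate-and-fire relay reproduces the qualitative behaviour described above, passing clustered input spikes while dropping isolated ones; the parameters are purely illustrative and are not those of the authors' modified model.

      import math

      def relay_lif(rgc_spike_times, tau=10.0, weight=0.6, threshold=1.0):
          v, last_t, lgn_spikes = 0.0, 0.0, []
          for t in sorted(rgc_spike_times):
              v *= math.exp(-(t - last_t) / tau)   # leak between input spikes
              v += weight                          # excitatory kick from each RGC spike
              last_t = t
              if v >= threshold:
                  lgn_spikes.append(t)
                  v = 0.0                          # reset after an LGN spike
          return lgn_spikes

      # The cluster at 5-7 ms is relayed; the isolated spike at 60 ms is not.
      print(relay_lif([5.0, 6.0, 7.0, 60.0]))      # [6.0]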

  19. Three-dimensional dose evaluation system using real-time wind field information for nuclear accidents in Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Jay; Lu, Chung-Hsin; Chang, Shu-Jun; Yang, Yung-Muh; Chang, Bor-Jing; Teng, Jen-Hsin

    2006-09-01

    In Taiwan, the three operating nuclear power plants are all built along the coast over complex terrain. Dose estimates after a nuclear accident with releases of radioactive materials, therefore, cannot be accurately calculated using simple dispersion models. We developed a three-dimensional dose evaluation system, which incorporates real-time prognostic wind field information with three-dimensional numerical models to predict dose results. The proposed system consists of three models: a three-dimensional mesoscale atmospheric model (HOTMAC), a three-dimensional transport and diffusion model (RAPTAD), and a dose calculation model (DOSE). The whole-body dose and thyroid dose as well as dose rates can be rapidly estimated and displayed on the three-dimensional terrain model constructed by satellite images. The developed three-dimensional dose evaluation system could accurately forecast the dose results and has been used in the annual nuclear emergency response exercise to provide suggestions for protective measures.

  20. INFORMED CONSENT: THE MEDICAL AND LEGAL CHALLENGE OF OUR TIME

    PubMed Central

    Séllos Simões, Luiz Carlos

    2015-01-01

    Objective: To assess the real importance of obtaining informed consent, through an appropriate form, and its role in the outcome from civil liability claims. Methods: The wordings of the current Brazilian law and jurisprudence were compared with rulings from the State Court of the State of Rio de Janeiro, in 269 civil liability claims against healthcare professionals and hospitals. Results: Favorable and unfavorable outcomes (i.e. acquittals and convictions) were compared, and possible variations in the verdicts were discussed in relation to whether informed consent forms had been filled out or not. Conclusions: Obtaining informed consent, by means of appropriate forms, is still not a widespread practice in the Brazilian healthcare or judicial systems. It is recommended that this practice be adopted in the manner described in this paper, since this is prescribed in Brazilian law. PMID:27022541

  1. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    PubMed

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the crop's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aimed at improving strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that the expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and under osmotic stresses, and it provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts.
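
    The comparative delta-Ct approach mentioned above can be sketched as follows: for each candidate reference gene, the standard deviation of its Ct difference against every other candidate is computed across samples and then averaged, and lower mean values indicate more stable expression. The gene names follow the abstract, but the Ct values are invented for illustration.

      from statistics import mean, stdev

      def delta_ct_stability(ct):
          # ct: {gene: [Ct value per sample]}; returns genes sorted by stability.
          genes = list(ct)
          score = {}
          for g in genes:
              sds = [stdev([a - b for a, b in zip(ct[g], ct[other])])
                     for other in genes if other != g]
              score[g] = mean(sds)
          return sorted(score.items(), key=lambda kv: kv[1])

      ct_values = {
          "DBP":    [24.1, 24.3, 24.0, 24.2],
          "HISTH4": [20.5, 20.9, 20.4, 20.8],
          "GAPDH":  [18.2, 19.5, 17.9, 20.1],
      }
      print(delta_ct_stability(ct_values))   # most stable candidate listed first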

  2. A Basis for Time and Cost Evaluation of Information Systems.

    ERIC Educational Resources Information Center

    Korfhage, R. R.; DeLutis, T. G.

    A general model for information storage and retrieval (IS&R) systems is proposed. The system is selected from the set of all available IS&R components. These components define the system's users and data sources; the hardware, software, and personnel performing the actual storage and retrieval activities; and the funder who acts as a filter in the…

  3. Effect of Information Load and Time on Observational Learning

    ERIC Educational Resources Information Center

    Breslin, Gavin; Hodges, Nicola J.; Williams, A. Mark

    2009-01-01

    We examined whether altering the amount of and moment when visual information is presented affected observational learning for participants practicing a bowling skill. On Day 1, four groups practiced a cricket bowling action. Three groups viewed a full-body point-light model, the model's bowling arm, or between-limb coordination of the model's…

  4. Effect of information load and time on observational learning.

    PubMed

    Breslin, Gavin; Hodges, Nicola J; Williams, A Mark

    2009-09-01

    We examined whether altering the amount of and moment when visual information is presented affected observational learning for participants practicing a bowling skill. On Day 1, four groups practiced a cricket bowling action. Three groups viewed a full-body point-light model, the model's bowling arm, or between-limb coordination of the model's left and right wrists only. Following retention tests on Day 2, all participants practiced after viewing a full-body display. Retention was again tested on Day 3. Bowling accuracy improved in all four practice groups. Kinematics of the bowling arm became more like the model for the full-body and intralimb groups only. All groups improved on measures of interlimb coordination. Visual search data indicated that participants mainly focused their gaze on the model's bowling arm. These data lead to the suggestion that viewing "end-effector" information (i.e., information pertaining to the bowling arm) is an important perceptual constraint early in observational learning. Implicit manipulations designed to increase attention to other sources of information did not facilitate the learning process.

  5. Evaluation of comprehensive two-dimensional gas chromatography with accurate mass time-of-flight mass spectrometry for the metabolic profiling of plant-fungus interaction in Aquilaria malaccensis.

    PubMed

    Wong, Yong Foo; Chin, Sung-Tong; Perlmutter, Patrick; Marriott, Philip J

    2015-03-27

    To explore the possible obligate interactions between the phytopathogenic fungus and Aquilaria malaccensis that result in the generation of a complex array of secondary metabolites, we describe a comprehensive two-dimensional gas chromatography (GC × GC) method, coupled to accurate mass time-of-flight mass spectrometry (TOFMS), for the untargeted and comprehensive metabolic profiling of essential oils from naturally infected A. malaccensis trees. A polar/non-polar column configuration was employed, offering an improved separation pattern of components when compared to other column sets. Four different grades of the oils displayed quite different metabolic patterns, suggesting the evolution of a signalling relationship between the host tree (emergence of various phytoalexins) and fungi (activation of biotransformation). In total, ca. 550 peaks/metabolites were detected, of which 155 were tentatively identified, representing between 20.1% and 53.0% of the total ion count. These are distributed over the chemical families of monoterpenic and sesquiterpenic hydrocarbons, oxygenated monoterpenes and sesquiterpenes (comprising ketones, aldehydes, oxides, alcohols, lactones, keto-alcohols and diols), norterpenoids, diterpenoids, short chain glycols, carboxylic acids and others. The large number of metabolites detected, combined with the ease with which they are located in the 2D separation space, emphasises the importance of a comprehensive analytical approach for the phytochemical analysis of plant metabolomes. Furthermore, the potential of this methodology in grading agarwood oils by comparing the obtained metabolic profiles (pattern recognition for unique metabolite chemical families) is discussed. The phytocomplexity of the agarwood oils signified the production of a multitude of plant-fungus mediated secondary metabolites as chemical signals for natural ecological communication. To the best of our knowledge, this is the most complete

  6. Matrix-assisted laser desorption ionization-time of flight mass spectrometry can accurately differentiate Aeromonas dhakensis from A. hydrophila, A. caviae, and A. veronii.

    PubMed

    Chen, Po-Lin; Lee, Tai-Fen; Wu, Chi-Jung; Teng, Shih-Hua; Teng, Lee-Jene; Ko, Wen-Chien; Hsueh, Po-Ren

    2014-07-01

    Among 217 Aeromonas isolates identified by sequencing analysis of their rpoB genes, the accuracy rates of identification of A. dhakensis, A. hydrophila, A. veronii, and A. caviae were 96.7%, 90.0%, 96.7%, and 100.0%, respectively, by the cluster analysis of spectra generated by matrix-assisted laser desorption ionization-time of flight mass spectrometry.

  7. A 3D Time-Shared NOESY Experiment Designed to Provide Optimal Resolution for Accurate Assignment of NMR Distance Restraints in Large Proteins

    PubMed Central

    Mishra, Subrata H; Harden, Bradley J

    2014-01-01

    Structure determination of proteins by solution NMR has become an established method, but challenges increase steeply with the size of proteins. Notably spectral crowding and signal overlap impair the analysis of cross-peaks in NOESY spectra that provide distance restraints for structural models. An optimal spectral resolution can alleviate overlap but requires prohibitively long experimental time with existing methods. Here we present a time-shared 3D experiment optimized for large proteins that provides 15N and 13C dispersed NOESY spectra in a single measurement. NOESY correlations appear in the detected dimension and hence benefit from the highest resolution achievable of all dimensions without increase in experimental time. By design, this experiment is inherently optimal for non-uniform sampling acquisition when compared to current alternatives. Thus, 15N and 13C dispersed NOESY spectra with ultra-high resolution in all dimensions were acquired in parallel within about 4 days instead of 80 days for a 52 kDa monomeric protein at a concentration of 350 μM. PMID:25381567

  8. Development of a Rapid and Accurate Identification Method for Citrobacter Species Isolated from Pork Products Using a Matrix-Assisted Laser-Desorption Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF MS).

    PubMed

    Kwak, Hye-Lim; Han, Sun-Kyung; Park, Sunghoon; Park, Si Hong; Shim, Jae-Yong; Oh, Mihwa; Ricke, Steven C; Kim, Hae-Yeong

    2015-09-01

    Previous detection methods for Citrobacter are considered time consuming and laborious. In this study, we have developed a rapid and accurate detection method for Citrobacter species in pork products, using matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS). A total of 35 Citrobacter strains were isolated from 30 pork products and identified by both MALDI-TOF MS and 16S rRNA gene sequencing approaches. All isolates were identified to the species level by the MALDI-TOF MS, while 16S rRNA gene sequencing results could not discriminate them clearly. These results confirmed that MALDI-TOF MS is a more accurate and rapid detection method for the identification of Citrobacter species.

  9. Information Carried by Population Spike Times in the Whisker Sensory Cortex can be Decoded Without Knowledge of Stimulus Time

    PubMed Central

    Panzeri, Stefano; Diamond, Mathew E.

    2010-01-01

    Computational analyses have revealed that precisely timed spikes emitted by somatosensory cortical neuronal populations encode basic stimulus features in the rat's whisker sensory system. Efficient spike time based decoding schemes both for the spatial location of a stimulus and for the kinetic features of complex whisker movements have been defined. To date, these decoding schemes have been based upon spike times referenced to an external temporal frame – the time of the stimulus itself. Such schemes are limited by the requirement of precise knowledge of the stimulus time signal, and it is not clear whether stimulus times are known to rats making sensory judgments. Here, we first review studies of the information obtained from spike timing referenced to the stimulus time. Then we explore new methods for extracting spike train information independently of any external temporal reference frame. These proposed methods are based on the detection of stimulus-dependent differences in the firing time within a neuronal population. We apply them to a data set using single-whisker stimulation in anesthetized rats and find that stimulus site can be decoded based on the millisecond-range relative differences in spike times even without knowledge of stimulus time. If spike counts alone are measured over tens or hundreds of milliseconds rather than milliseconds, such decoders are much less effective. These results suggest that decoding schemes based on millisecond-precise spike times are likely to subserve robust and information-rich transmission of information in the somatosensory system. PMID:21423503
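
    A minimal sketch of a stimulus-clock-free decoder of the kind proposed above: the spike latencies in each trial are re-referenced to the earliest spike in the population, and the stimulus site whose template of relative latencies is nearest is selected. The templates and latencies are invented for illustration.

      def decode_relative(trial_latencies, templates):
          t0 = min(trial_latencies)
          rel = [t - t0 for t in trial_latencies]        # relative spike times
          def dist(tpl):
              return sum((a - b) ** 2 for a, b in zip(rel, tpl))
          return min(templates, key=lambda site: dist(templates[site]))

      templates = {"whisker_C1": [0.0, 2.0, 5.0], "whisker_D2": [0.0, 6.0, 1.0]}
      print(decode_relative([13.2, 15.1, 18.4], templates))   # -> whisker_C1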

  10. Delayed High School Starting Times. Information Capsule. Volume 0908

    ERIC Educational Resources Information Center

    Blazer, Christie

    2009-01-01

    Educators around the nation are considering pushing high school starting times back until later in the morning, based on evidence suggesting that amount of sleep and circadian rhythms play a part in adolescents' academic performance. While research confirms that adolescents do not get enough sleep and that insufficient sleep can negatively…

  11. Modeling Information Accumulation in Psychological Tests Using Item Response Times

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2015-01-01

    In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. The core of the model is the assumption that an unspecified monotone…
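
    For orientation, the linear transformation model in its usual survival-analysis form (not necessarily the exact parameterization used in the article) is

      h(T_i) = -\eta_i + \varepsilon_i,

    where h is an unspecified monotone increasing transformation of the response time T_i, \eta_i is the predictor built from the latent trait, and the distribution of \varepsilon_i selects the special case: an extreme-value error yields the proportional hazards model and a logistic error yields the proportional odds model.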

  12. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
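
    The binary projection itself is straightforward to compute; the sketch below (Python/NumPy, simulated returns) extracts the sign matrix and compares a simple coherence measure of the binary signature with the magnitude of the aggregate return, the kind of binary/non-binary relation discussed above.

      import numpy as np

      rng = np.random.default_rng(0)
      increments = rng.standard_normal((250, 30))    # days x stocks, simulated returns

      signs = np.sign(increments)                    # binary (+1/-1) projection
      frac_up = (signs > 0).mean(axis=1)             # fraction of stocks rising each day
      agg_return = increments.mean(axis=1)           # instantaneous aggregate return

      # Days on which most stocks move in the same direction tend to have larger
      # absolute aggregate returns.
      coherence = np.abs(frac_up - 0.5)
      print(np.corrcoef(coherence, np.abs(agg_return))[0, 1])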

  13. Recognizing the link between CKD and CVD in the primary care setting: accurate and early diagnosis for timely and appropriate intervention.

    PubMed

    Basile, Jan N

    2007-05-01

    Chronic kidney disease (CKD), which is becoming increasingly prevalent in the US and worldwide, eventually progresses to end-stage renal disease (ESRD), requiring renal replacement therapy. Diabetes and hypertension, the two leading causes of CKD, are themselves reaching near epidemic proportions. Hypertension can cause both the development and progression of CKD, and CKD is a significant risk factor for the development of cardiovascular disease. Indeed, CKD patients are more likely to die of cardiovascular complications than progress to ESRD. However, data indicate that early recognition and management of CKD can have a significant positive impact on disease outcome. This creates an important interventional opportunity for the primary care physician. This report describes the major risk factors and comorbidities associated with the development and progression of CKD and offers suggestions for timely diagnosis and management of CKD in the primary care setting.

  14. Usage of Near Real-Time Climate Information.

    NASA Astrophysics Data System (ADS)

    Changnon, Stanley A., Jr.; Wendland, Wayne M.; Vogel, John L.

    1987-09-01

    Through a computer-based system, weather data from Illinois are collected daily, checked, and summarized into various climatic products within hours after collection. This system was operated for two years to demonstrate system feasibility, determine user interest and product desires, and plan for statewide usage. This study focuses on usage of system products. The private sector (agribusiness, news media, and private industry) was the most frequent and persistent user group, suggesting user-pay as a possible approach for funding such a system. State and federal agencies, farmers, and extension agents also use the system, but primarily during weather stress periods. The characteristics of usage should help in the design and selection of products in other emerging state and regional systems for dissemination of climate data and information. The usage patterns also indicate how climate conditions impact various private and public sectors in a humid continental climate.

  15. Development of information-movement couplings in a rhythmical ball-bouncing task: from space- to time-related information.

    PubMed

    Bazile, C; Benguigui, N; Siegler, I A

    2016-01-01

    We studied the development of information-movement couplings in a ball-bouncing task with a special interest in how space- and time-related information is used by people of different ages. Participants from four age groups (children aged 7-8, 9-10 and 11-12 years, and adults) performed a virtual ball-bouncing task in which space- and time-related information were independently manipulated. Task performance and information-movement couplings were analyzed. Our results confirm a clear use of time-related information in adults, while children demonstrated a predominant relationship between space-related information and the period of movement. In the course of development, however, the children become progressively more capable of using time-related information in order to control the rhythmic ball-bouncing task. A second and weaker coupling, between ball height information and racket velocity at impact, also appears in the course of development. The data seem to show that the development of children follows the freezing-freeing-exploiting sequence proposed by Savelsbergh and Van der Kamp (Int J Sport Psychol 31:467-484, 2000), with a significant change in how information is used to control movement related to age.

  16. Accurate Detection and Quantification of the Fish Viral Hemorrhagic Septicemia virus (VHSv) with a Two-Color Fluorometric Real-Time PCR Assay

    PubMed Central

    Palsule, Vrushalee V.; Yeo, Jiyoun; Shepherd, Brian S.; Crawford, Erin L.; Stepien, Carol A.

    2013-01-01

    Viral Hemorrhagic Septicemia virus (VHSv) is one of the world's most serious fish pathogens, infecting >80 marine, freshwater, and estuarine fish species from Eurasia and North America. A novel and especially virulent strain – IVb – appeared in the Great Lakes in 2003, has killed many game fish species in a series of outbreaks in subsequent years, and shut down interstate transport of baitfish. Cell culture is the diagnostic method approved by the USDA-APHIS, which takes a month or longer, lacks sensitivity, and does not quantify the amount of virus. We thus present a novel, easy, rapid, and highly sensitive real-time quantitative reverse transcription PCR (qRT-PCR) assay that incorporates synthetic competitive template internal standards for quality control to circumvent false negative results. Results demonstrate high signal-to-analyte response (slope = 1.00±0.02) and a linear dynamic range that spans seven orders of magnitude (R2 = 0.99), ranging from 6 to 6,000,000 molecules. Infected fishes are found to harbor levels of virus that range up to 1,200,000 VHSv molecules per 10⁶ actb1 molecules, with 1,000 being a rough cut-off for clinical signs of disease. This new assay is rapid, inexpensive, and has significantly greater accuracy than other published qRT-PCR tests and traditional cell culture diagnostics. PMID:23977162

  17. Rapid and accurate detection of the CFTR gene mutation 1811+1.6 kbA>G by real-time fluorescence resonance energy transfer PCR.

    PubMed

    Reboul, Marie-Pierre; Higueret, Laurent; Biteau, Nicolas; Iron, Albert

    2005-10-01

    The CFTR gene mutation 1811+1.6 kbA>G has been reported as associated with a severe phenotype of cystic fibrosis with pancreatic insufficiency. This mutation has been identified as a rather common one in the South West of France and in the Iberian Peninsula. Because of the precise geographical origin of the subjects and its frequency, the mutation has to be investigated with accuracy. We have developed an original real-time Fluorescence Resonance Energy Transfer (FRET) PCR assay for genotyping the mutation 1811+1.6 kbA>G. It is based on the amplification of a region spanning the mutation with simultaneous detection of the amplicon by hybridization with a bi-probe, followed by a melting curve analysis. The results obtained are identical with those resulting from either restriction fragment length polymorphism analysis or sequencing. The distinction between the wild type and the mutation 1811+1.6 kbA>G is easy because the corresponding melting points show a difference of 6 or 9.5 degrees C depending on the associated SNP A/T located 16 bp downstream. We demonstrated that a FRET assay showed enough sensitivity to discriminate between two single nucleotide polymorphisms (SNPs) in the sequence of the sensor. In conclusion, this method is specific, fast, easy to perform, reproducible, inexpensive as it uses only one bi-probe, and well adapted to daily practice.

  18. Measurement of informal care: an empirical study into the valid measurement of time spent on informal caregiving.

    PubMed

    van den Berg, Bernard; Spauwen, Pol

    2006-05-01

    The incorporation of informal care into economic evaluations of health care is troublesome. The debate focuses on the valuation of time spent on informal caregiving, while time measurement, a related and maybe even more important issue, tends to be neglected. Valid time measurement is a necessary condition for the valuation of informal care. In this paper, two methods of time measurement are compared and evaluated: the diary, which is considered the gold standard, and the recall method, which is applied more often. The main objective of this comparison is to explore the validity of the measurement of time spent on providing informal care. In addition, this paper gives empirical evidence regarding the measurement of joint production and the separation between 'normal' housework and additional housework due to the care demands of the care recipients. Finally, the test-retest stability of the recall method is assessed. A total of 199 persons giving informal care to a heterogeneous population of care recipients completed the diary and the recall questionnaire. Corrected for joint production, informal caregivers spent almost 5.8 h a day on providing informal care. If one assumes that respondents take into account joint production when completing the recall questionnaire, the recall method is a valid instrument, compared to the diary, for measuring time spent on providing informal care. Otherwise, the recall method is likely to overestimate the time spent on providing informal care. Moreover, the recall method proves to be unstable over time. This could be due to learning effects from completing a diary.

  19. Rapid and accurate detection of bacteriophage activity against Escherichia coli O157:H7 by propidium monoazide real-time PCR.

    PubMed

    Liu, Hui; Niu, Yan D; Li, Jinquan; Stanford, Kim; McAllister, Tim A

    2014-01-01

    Conventional methods to determine the efficacy of bacteriophage (phage) for biocontrol of E. coli require several days, due to the need to culture bacteria. Furthermore, cell surface-attached phage particles may lyse bacterial cells during experiments, leading to an overestimation of phage activity. DNA-based real-time quantitative polymerase chain reaction (qPCR) is a fast, sensitive, and highly specific means of enumerating pathogens. However, qPCR may underestimate phage activity due to its inability to distinguish viable from nonviable cells. In this study, we evaluated the suitability of propidium monoazide (PMA), a microbial membrane-impermeable dye that inhibits amplification of extracellular DNA and DNA within dead or membrane-compromised cells, as a means of using qPCR to identify only intact E. coli cells that survive phage exposure. Escherichia coli O157:H7 strain R508N and 4 phages (T5-like, T1-like, T4-like, and O1-like) were studied. The results compared PMA-qPCR and direct plating and confirmed that PMA could successfully inhibit amplification of DNA from compromised/damaged E. coli O157:H7 cells. Compared to PMA-qPCR, direct plating overestimated (P < 0.01) phage efficacy, as cell surface-attached phage particles lysed E. coli O157:H7 during the plating process. Treatment of samples with PMA in combination with qPCR can therefore be considered beneficial when assessing the efficacy of bacteriophage for biocontrol of E. coli O157:H7.

  20. A system for accurate and automated injection of hyperpolarized substrate with minimal dead time and scalable volumes over a large range☆

    PubMed Central

    Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N.J.

    2014-01-01

    Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step by use of a peristaltic pump. For 100 μl to 10.000 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10.000 ml demand volumes (mean S.D. was 9 ul in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3–4 s post-injection trigger signal and at 9–12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump. PMID:24355621

  1. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    PubMed

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

    We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested most frequently used genes in the literature such as β-Tubulin, Histone H3, Actin, Elongation factor-1α, Glyceraldehyde-3-phosphate dehydrogenase, together with newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using geNorm and NormFinder software packages or by ΔCt method. geNorm analysis indicated three best performing genes as being sufficient for reliable normalization of RT-qPCR data. Suitable reference genes were different among sample groups, suggesting the importance of validation of gene expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting usefulness of these programs based on different algorithms. ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin; though the overall results were in good correlation with geNorm or NormFinder. Gene expression of two cold-inducible genes PpCBF2 and PpCBF4 were quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities were different between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggested that using the geometric mean value of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out
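
    Normalization against the geometric mean of three reference genes, the approach recommended above, can be sketched as follows; a PCR efficiency of 2 (perfect doubling per cycle) is assumed and the Ct values are invented.

      from statistics import geometric_mean

      def normalized_expression(ct_target, ct_refs):
          # Convert Ct values to relative quantities, then divide the target by the
          # normalization factor built from the reference genes.
          rq_target = 2.0 ** (-ct_target)
          nf = geometric_mean([2.0 ** (-ct) for ct in ct_refs])
          return rq_target / nf

      print(normalized_expression(26.4, [21.0, 22.5, 20.8]))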

  2. A system for accurate and automated injection of hyperpolarized substrate with minimal dead time and scalable volumes over a large range

    NASA Astrophysics Data System (ADS)

    Reynolds, Steven; Bucur, Adriana; Port, Michael; Alizadeh, Tooba; Kazan, Samira M.; Tozer, Gillian M.; Paley, Martyn N. J.

    2014-02-01

    Over recent years hyperpolarization by dissolution dynamic nuclear polarization has become an established technique for studying metabolism in vivo in animal models. Temporal signal plots obtained from the injected metabolite and daughter products, e.g. pyruvate and lactate, can be fitted to compartmental models to estimate kinetic rate constants. Modeling and physiological parameter estimation can be made more robust by consistent and reproducible injections through automation. An injection system previously developed by us was limited in the injectable volume to between 0.6 and 2.4 ml and injection was delayed due to a required syringe filling step. An improved MR-compatible injector system has been developed that measures the pH of the injected substrate, uses flow control to reduce dead volume within the injection cannula and can be operated over a larger volume range. The delay time to injection has been minimized by removing the syringe filling step by use of a peristaltic pump. For 100 μl to 10 ml, the volume range typically used for mice to rabbits, the average delivered volume was 97.8% of the demand volume. The standard deviation of delivered volumes was 7 μl for 100 μl and 20 μl for 10 ml demand volumes (mean S.D. was 9 μl in this range). In three repeat injections through a fixed 0.96 mm O.D. tube the coefficient of variation for the area under the curve was 2%. For in vivo injections of hyperpolarized pyruvate in tumor-bearing rats, signal was first detected in the input femoral vein cannula at 3-4 s post-injection trigger signal and at 9-12 s in tumor tissue. The pH of the injected pyruvate was 7.1 ± 0.3 (mean ± S.D., n = 10). For small injection volumes, e.g. less than 100 μl, the internal diameter of the tubing contained within the peristaltic pump could be reduced to improve accuracy. Larger injection volumes are limited only by the size of the receiving vessel connected to the pump.

  3. 76 FR 15052 - Proposed Information Collection (Time Record (Work-Study Program); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-18

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS AFFAIRS Proposed Information Collection (Time Record (Work-Study Program); Comment Request AGENCY... of automated collection techniques or the use of other forms of information technology. Title:...

  4. 77 FR 8324 - Applications for the Environment: Real-Time Information Synthesis (AERIS) User Needs Workshop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-14

    ... Applications for the Environment: Real-Time Information Synthesis (AERIS) User Needs Workshop; Notice of Public...: Real- Time Information Synthesis (AERIS) Program and solicit user needs for its Transformative Concepts... program is to generate and acquire environmentally-relevant real-time transportation data, and use...

  5. 23 CFR 511.313 - Metropolitan Area real-time information program supplement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Metropolitan Area real-time information program supplement. 511.313 Section 511.313 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT REAL-TIME SYSTEM MANAGEMENT INFORMATION PROGRAM Real-Time...

  6. 23 CFR 511.313 - Metropolitan Area real-time information program supplement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Metropolitan Area real-time information program supplement. 511.313 Section 511.313 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT REAL-TIME SYSTEM MANAGEMENT INFORMATION PROGRAM Real-Time...

  7. 23 CFR 511.313 - Metropolitan Area real-time information program supplement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Metropolitan Area real-time information program supplement. 511.313 Section 511.313 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT REAL-TIME SYSTEM MANAGEMENT INFORMATION PROGRAM Real-Time...

  8. 23 CFR 511.313 - Metropolitan Area real-time information program supplement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Metropolitan Area real-time information program supplement. 511.313 Section 511.313 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT REAL-TIME SYSTEM MANAGEMENT INFORMATION PROGRAM Real-Time...

  9. Hydrograph structure informed calibration in the frequency domain with time localization

    NASA Astrophysics Data System (ADS)

    Kumarasamy, K.; Belmont, P.

    2015-12-01

    Complex models with a large number of parameters are commonly used to estimate sediment yields and predict changes in sediment loads resulting from changes in management or conservation practice at large watershed (>2000 km2) scales. As sediment yield is a strongly non-linear function of channel (peak or mean) velocity or flow depth, it is critical to represent flows accurately. Calibration of such models (e.g., SWAT) generally involves adjusting several parameters to improve goodness-of-fit metrics such as the Nash-Sutcliffe Efficiency (NSE). However, such indicators only provide a global view of model performance, potentially obscuring the accuracy of the timing or magnitude of specific flows of interest. We describe an approach to streamflow calibration that greatly reduces the black-box nature of calibration when the response to a parameter adjustment is not clearly known. The Fourier transform or the short-time Fourier transform could also be used to characterize model performance in the frequency domain; however, the lack of time localization in a Fourier transform makes it of little use in a model calibration setting. Brief and sudden changes in signals (e.g., streamflow peaks) carry the most interesting information about parameter adjustments, and this information is completely lost in a transform without time localization. The wavelet transform captures the frequency content of the signal without sacrificing time localization and is applied to contrast changes in signal response to parameter adjustments. Here we employ the Mexican hat mother wavelet and apply a continuous wavelet transform to examine the signal in the frequency domain. Further, using the cross-wavelet spectrum, we examine the relationship between the two signals (prior to and after a parameter adjustment) in the time-scale plane (e.g., lower scales correspond to higher frequencies). The non-stationarity of
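
    To illustrate the time-localized frequency analysis described in this abstract, the sketch below applies a continuous wavelet transform with the Mexican hat mother wavelet to a synthetic hydrograph. It assumes the PyWavelets package (pywt) is available and uses invented flow data, not the authors' SWAT output; it only shows how short scales pick out brief flow peaks while long scales follow the seasonal baseflow.

```python
import numpy as np
import pywt

# Synthetic "hydrograph": seasonal baseflow plus two storm peaks (illustrative only).
t = np.arange(0, 365)                       # daily time steps
flow = 5 + 0.5 * np.sin(2 * np.pi * t / 365)
flow[100:110] += 20 * np.exp(-0.5 * ((np.arange(10) - 5) / 2) ** 2)
flow[250:260] += 35 * np.exp(-0.5 * ((np.arange(10) - 5) / 2) ** 2)

# Continuous wavelet transform with the Mexican hat ('mexh') mother wavelet.
# Small scales respond to brief, sudden features (storm peaks); large scales
# respond to the slowly varying seasonal baseflow.
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(flow, scales, 'mexh')

# Locate the time of the strongest short-scale (high-frequency) response.
peak_day = int(np.argmax(np.abs(coeffs[0])))
print(f"strongest short-scale response at day {peak_day}")
```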

  10. Intelligent Information Retrieval: Part IV. Testing the Timing of Two Information Retrieval Devices in a Naturalistic Setting.

    ERIC Educational Resources Information Center

    Cole, Charles

    2001-01-01

    Reports the results of two studies of undergraduates that tested an uncertainty expansion information retrieval device and an uncertainty reduction device in naturalistic settings, designed to be given at different stages of Kuhlthau's information search process. Concludes that the timing of the device interventions is crucial to their potential…

  11. Linear and nonlinear information flow based on time delayed mutual information method and its application to corticomuscular interaction

    PubMed Central

    Jin, Seung-Hyun; Lin, Peter; Hallett, Mark

    2010-01-01

    Objective To propose a model-free method for showing linear and nonlinear information flow based on time delayed mutual information (TDMI) by employing uni- and bi-variate surrogate tests, and to investigate whether nonlinear information flow contributes to corticomuscular (CM) interaction. Methods Using simulated data, we tested whether our method would successfully detect the direction of information flow and identify a relationship between two simulated time series. As an experimental application, we applied this method to investigate CM interaction during a right wrist extension task. Results The simulation tests show that we can correctly detect the direction of information flow and the relationship between two time series without prior knowledge of the dynamics of their generating systems. In the experimental data, we found both linear and nonlinear information flow from contralateral sensorimotor cortex to muscle. Conclusions Our method is a viable model-free measure of temporally varying causal interactions that is capable of distinguishing linear and nonlinear information flow. With respect to the experimental application, there are both linear and nonlinear information flows in CM interaction from contralateral sensorimotor cortex to muscle, which may reflect the motor command from brain to muscle. Significance This is the first study to show separate linear and nonlinear information flow in CM interaction. PMID:20044309
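
    The sketch below estimates time-delayed mutual information with a plain histogram estimator on synthetic, nonlinearly coupled series; it illustrates only the TDMI quantity itself and omits the uni- and bi-variate surrogate tests the authors use to separate linear from nonlinear flow. The data, bin counts, and lag range are assumptions for illustration.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def tdmi(x, y, max_lag=30, bins=16):
    """Mutual information between x(t) and y(t + lag) for lag = 0..max_lag."""
    return [mutual_information(x[:len(x) - lag or None], y[lag:], bins)
            for lag in range(max_lag + 1)]

# Synthetic example: y is a nonlinearly transformed, 10-step-delayed copy of x.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(np.tanh(x), 10) + 0.1 * rng.normal(size=5000)

curve = tdmi(x, y, max_lag=30)
print("peak TDMI at lag", int(np.argmax(curve)))  # expected near 10
```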

  12. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on a quadratic time-frequency distribution, has been used to combine information from several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that combining information from these sources will improve the accuracy of the respiratory rate estimate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signal are linearly coupled. In 38 spontaneously breathing subjects, 7 of whom were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with a median error of {0.00; 0.98} mHz ({0.00; 0.31}%) and an interquartile range of the error of {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was considerably lower than that obtained without combining the different respiration-related PPG features. PMID:24363777
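
    As a simplified stand-in for the cross time-frequency analysis described above, the sketch below uses ordinary magnitude-squared coherence (scipy.signal.coherence) between two synthetic respiration-modulated PPG features and reads the respiratory rate off the most coherent frequency. The sampling rate, modulation frequency, and noise levels are invented, and stationary coherence replaces the paper's quadratic time-frequency distributions.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic PPG-derived series sampled at 4 Hz: a pulse-amplitude and a
# pulse-width series, both modulated by a 0.25 Hz respiratory oscillation.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)
rng = np.random.default_rng(1)
amplitude = resp + 0.5 * rng.normal(size=t.size)
width = 0.8 * resp + 0.5 * rng.normal(size=t.size)

# Magnitude-squared coherence; the respiratory rate is estimated as the
# frequency at which the two respiration-related features are most coherent.
f, cxy = coherence(amplitude, width, fs=fs, nperseg=256)
print(f"estimated respiratory rate: {f[np.argmax(cxy)]:.3f} Hz")
```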

  13. Argon Cluster Sputtering Source for ToF-SIMS Depth Profiling of Insulating Materials: High Sputter Rate and Accurate Interfacial Information

    SciTech Connect

    Wang, Zhaoying; Liu, Bingwen; Zhao, Evan; Jin, Ke; Du, Yingge; Neeway, James J.; Ryan, Joseph V.; Hu, Dehong; Zhang, Hongliang; Hong, Mina; Le Guernic, Solenne; Thevuthasan, Suntharampillai; Wang, Fuyi; Zhu, Zihua

    2015-08-01

    For the first time, the use of an argon cluster ion sputtering source has been demonstrated to perform superiorly relative to traditional oxygen and cesium ion sputtering sources for ToF-SIMS depth profiling of insulating materials. The superior performance has been attributed to effective alleviation of surface charging. A simulated nuclear waste glass, SON68, and layered hole-perovskite oxide thin films were selected as model systems due to their fundamental and practical significance. Our study shows that if the size of analysis areas is same, the highest sputter rate of argon cluster sputtering can be 2-3 times faster than the highest sputter rates of oxygen or cesium sputtering. More importantly, high quality data and high sputter rates can be achieved simultaneously for argon cluster sputtering while this is not the case for cesium and oxygen sputtering. Therefore, for deep depth profiling of insulating samples, the measurement efficiency of argon cluster sputtering can be about 6-15 times better than traditional cesium and oxygen sputtering. Moreover, for a SrTiO3/SrCrO3 bi-layer thin film on a SrTiO3 substrate, the true 18O/16O isotopic distribution at the interface is better revealed when using the argon cluster sputtering source. Therefore, the implementation of an argon cluster sputtering source can significantly improve the measurement efficiency of insulating materials, and thus can expand the application of ToF-SIMS to the study of glass corrosion, perovskite oxide thin films, and many other potential systems.

  14. Time course of information representation of macaque AIP neurons in hand manipulation task revealed by information analysis.

    PubMed

    Sakaguchi, Yutaka; Ishida, Fumihiko; Shimizu, Takashi; Murata, Akira

    2010-12-01

    We used mutual information analysis of neuronal activity in the macaque anterior intraparietal area (AIP) to examine information processing during a hand manipulation task. The task was to reach-to-grasp a three-dimensional (3D) object after presentation of a go signal. Mutual information was calculated between the spike counts of individual neurons in 50-ms-wide time bins and six unique shape classifications or 15 one-versus-one classifications of these shapes. The spatiotemporal distribution of mutual information was visualized as a two-dimensional image ("information map") to better observe global profiles of information representation. In addition, a nonnegative matrix factorization technique was applied for extracting its structure. Our major finding was that the time course of mutual information differed significantly according to different classes of task-related neurons. This strongly suggests that different classes of neurons were engaged in different information processing stages in executing the hand manipulation task. On the other hand, our analysis revealed the heterogeneous nature of information representation of AIP neurons. For example, "information latency" (or information onset) varied among individual neurons even in the same neuron class and the same shape classification. Further, some neurons changed "information preference" (i.e., shape classification with the largest amount of information) across different task periods. These suggest that neurons encode different information in the different task periods. Taking the present result together with previous findings, we used a Gantt chart to propose a hypothetical scheme of the dynamic interactions between different types of AIP neurons.
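
    The per-bin quantity underlying the "information map" described above can be illustrated with a small sketch: mutual information between a neuron's spike count in one 50-ms bin and a six-way shape classification, computed here with scikit-learn on simulated Poisson counts whose tuning is entirely hypothetical.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# Hypothetical single-neuron data: spike counts in one 50-ms bin over 600 trials,
# with six object shapes (0..5); counts drawn higher for "preferred" shapes.
rng = np.random.default_rng(2)
shape = rng.integers(0, 6, size=600)
mean_count = np.array([1.0, 1.5, 2.0, 4.0, 6.0, 8.0])   # illustrative tuning
counts = rng.poisson(mean_count[shape])

# Mutual information (converted to bits) between the discrete spike count and
# the six-way shape classification for this time bin; repeating this for every
# 50-ms bin and stacking over time yields an information-map-like image.
mi_nats = mutual_info_score(shape, counts)
print(f"I(count; shape) = {mi_nats / np.log(2):.3f} bits")
```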

  15. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
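
    To make the contrast concrete, the toy calculation below compares an average (ratio-of-cost-to-treatment) overhead allocation with a simple activity-based allocation driven by nursing minutes; every figure is invented for illustration and none comes from the article.

```python
# Illustrative numbers only: compare average ("ratio-of-cost-to-treatment")
# allocation with a simple activity-based allocation for two treatments.
total_overhead = 100_000.0
treatments = {"A": 400, "B": 100}                 # number of treatments
nursing_minutes = {"A": 20, "B": 120}             # minutes of nursing per treatment

# Aggregate costing: every treatment gets the same share of overhead.
n_total = sum(treatments.values())
avg_cost = {k: total_overhead / n_total for k in treatments}

# Activity-based costing: overhead allocated by nursing minutes actually consumed.
total_minutes = sum(treatments[k] * nursing_minutes[k] for k in treatments)
abc_cost = {k: total_overhead * nursing_minutes[k] / total_minutes for k in treatments}

for k in treatments:
    print(f"treatment {k}: average ${avg_cost[k]:.2f} vs ABC ${abc_cost[k]:.2f}")
# The average basis overstates the cost of the low-intensity treatment A and
# understates the cost of the nursing-intensive treatment B.
```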

  16. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
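
    For orientation, the sketch below uses SciPy's PCHIP interpolator, the classical monotone piecewise cubic, to show the monotonicity-preserving behaviour discussed above; it does not implement the higher-order (third/fourth-order accurate) algorithms the abstract describes, and the data are invented.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a near-flat region; an unconstrained cubic spline would
# overshoot here, while a monotone piecewise cubic (PCHIP) does not.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.1, 5.0, 9.9, 10.0])

pchip = PchipInterpolator(x, y)
xx = np.linspace(0, 5, 501)
yy = pchip(xx)

# Monotonicity check: all finite differences of the interpolant are >= 0.
print("monotone:", bool(np.all(np.diff(yy) >= -1e-12)))
```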

  17. DELIVERING TIMELY ENVIRONMENTAL INFORMATION TO YOUR COMMUNITY: THE BOULDER AREA SUSTAINABILITY INFORMATION NETWORK

    EPA Science Inventory

    The Technology Transfer and Support Division of the EPA Office of Research and Development's (ORD's) National Risk Management Laboratory in conjunction with the Boulder Area Sustainability Information Network (BASIN) has developed a "how-to" handbook to allow other community orga...

  18. Identifying the Critical Time Period for Information Extraction when Recognizing Sequences of Play

    ERIC Educational Resources Information Center

    North, Jamie S.; Williams, A. Mark

    2008-01-01

    The authors attempted to determine the critical time period for information extraction when recognizing play sequences in soccer. Although efforts have been made to identify the perceptual information underpinning such decisions, no researchers have attempted to determine "when" this information may be extracted from the display. The authors…

  19. Creating Trails: Tool for Real-Time Assessment of Information Literacy Skills

    ERIC Educational Resources Information Center

    Schloman, Barbara F.; Gedeon, Julie A.

    2007-01-01

    This article describes Tool for Real-time Assessment of Information Literacy Skills (TRAILS), a freely available, online tool designed to measure the information literacy skills of high school students. It is based on information literacy competencies for ninth-graders found in the "Ohio Academic Content Standards" (Ohio Department of Education…

  20. Online Discussion Compensates for Suboptimal Timing of Supportive Information Presentation in a Digitally Supported Learning Environment

    ERIC Educational Resources Information Center

    Noroozi, Omid; Busstra, Maria C.; Mulder, Martin; Biemans, Harm J. A.; Tobi, Hilde; Geelen, Anouk; van't Veer, Pieter; Chizari, Mohammad

    2012-01-01

    This study used a sequential set-up to investigate the consecutive effects of timing of supportive information presentation (information before vs. information during the learning task clusters) in interactive digital learning materials (IDLMs) and type of collaboration (personal discussion vs. online discussion) in computer-supported…

  1. Cognitive tasks in information analysis: Use of event dwell time to characterize component activities

    SciTech Connect

    Sanquist, Thomas F.; Greitzer, Frank L.; Slavich, Antoinette L.; Littlefield, Rik J.; Littlefield, Janis S.; Cowley, Paula J.

    2004-09-28

    Technology-based enhancement of information analysis requires a detailed understanding of the cognitive tasks involved in the process. The information search and report production tasks of the information analysis process were investigated through evaluation of time-stamped workstation data gathered with custom software. Model tasks simulated the search and production activities, and a sample of actual analyst data was also evaluated. Task event durations were calculated on the basis of millisecond-level time stamps, and distributions were plotted for analysis. The data indicate that task event time shows a cyclic pattern of variation, with shorter event durations (< 2 sec) reflecting information search and filtering, and longer event durations (> 10 sec) reflecting information evaluation. Application of cognitive principles to the interpretation of task event time data provides a basis for developing “cognitive signatures” of complex activities, and can facilitate the development of technology aids for information-intensive tasks.
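
    The dwell-time analysis described above can be sketched as follows: recover event durations from millisecond time stamps and classify them against the < 2 s and > 10 s thresholds quoted in the abstract. The event-time data here are simulated from an assumed two-component mixture, not the analysts' workstation logs.

```python
import numpy as np

# Hypothetical millisecond time stamps of workstation events for one session:
# a mixture of many brief search/filter events and fewer long evaluation events.
rng = np.random.default_rng(3)
dwell_ms = np.concatenate([
    rng.exponential(800, size=400),      # quick scanning / filtering
    rng.exponential(20_000, size=60),    # sustained reading / evaluation
])
event_times = np.cumsum(dwell_ms)        # time stamps (ms)

# Recover dwell times from the time stamps and classify them.
durations_s = np.diff(np.concatenate([[0.0], event_times])) / 1000.0
short_frac = np.mean(durations_s < 2)
long_frac = np.mean(durations_s > 10)
print(f"{short_frac:.0%} of events < 2 s, {long_frac:.0%} of events > 10 s")
```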

  2. 76 FR 31682 - Agency Information Collection (Time Record (Work-Study Program)) Activity Under OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-01

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS AFFAIRS Agency Information Collection (Time Record (Work-Study Program)) Activity Under OMB Review AGENCY... INFORMATION: Title: Time Record (Work-Study Program), VA Form 22-8690. OMB Control Number: 2900-0379. Type...

  3. 12 CFR 1260.4 - Timing and form of information distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... 1260.4 Section 1260.4 Banks and Banking FEDERAL HOUSING FINANCE AGENCY FEDERAL HOME LOAN BANKS SHARING OF INFORMATION AMONG FEDERAL HOME LOAN BANKS (Eff. 1-6-14) § 1260.4 Timing and form of information... issued under § 1260.2(b) after the expiration of the applicable time period specified in §...

  4. 21 CFR 830.330 - Times for submission of unique device identification information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Times for submission of unique device identification information. 830.330 Section 830.330 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... Identification Database § 830.330 Times for submission of unique device identification information. (a)...

  5. Extracting information masked by the chaotic signal of a time-delay system.

    PubMed

    Ponomarenko, V I; Prokhorov, M D

    2002-08-01

    We further develop the method proposed by Bezruchko et al. [Phys. Rev. E 64, 056216 (2001)] for estimating the parameters of time-delay systems from time series. Using this method we demonstrate the possibility of message extraction for a communication system with nonlinear mixing of an information signal and the chaotic signal of a time-delay system. The message extraction procedure is illustrated using both numerical and experimental data and different kinds of information signals.

  6. Time to care? Health of informal older carers and time spent on health related activities: an Australian survey

    PubMed Central

    2013-01-01

    Background Little is known about the time spent on specific health related activities by older adult informal carers who assist people with chronic illness. Research has not yet addressed the association between carer health status and their care demands. Such information could inform policy and health system efforts to manage chronic illness. Methods We conducted an Australia wide survey using recall questionnaires to record time use. The study asked how much time is spent on “most days” for the most common activities like taking medication, self-treatment and testing, and how much time in the last month on less common activities like attending a physician or shopping associated with health needs. The survey was mailed to 5,000 members of National Seniors Australia; 2,500 registrants on the National Diabetes Services Scheme; and 3,100 members of the Australian Lung Foundation. A total of 2519 people responded, including 313 people who identified as informal carers. Statistical analysis was undertaken using Stata 11. Standard errors and confidence intervals were derived using bootstrapping techniques within Stata 11. Results Most carers (96.2%) had chronic illness themselves, and those with greater numbers of chronic illnesses were those who faced the greatest overall time demands. The top decile of carers devoted between 8.5 and 10 hours a day to personal and caring health related activities. Informal carers with chronic illness spent more time managing their own health than people with chronic illness who were not informal carers. These carers spent more time on caring for others than on caring for their own health. High levels of caring responsibility were associated with poorer reported carer health. Conclusions Policy and health care services will need to adapt to recognise and reduce the time burden on carers who themselves have chronic illness. More carefully targeted investment in the social infrastructure of formal care would free up carers for other

  7. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  8. The timing and sources of information for the adoption and implementation of production innovations

    NASA Technical Reports Server (NTRS)

    Ettlie, J. E.

    1976-01-01

    Two dimensions (personal-impersonal and internal-external) are used to characterize information sources as they become important during the interorganizational transfer of production innovations. The results of three studies are reviewed for the purpose of deriving a model of the timing and importance of different information sources and the utilization of new technology. Based on the findings of two retrospective studies, it was concluded that the pattern of information seeking behavior in user organizations during the awareness stage of adoption is not a reliable predictor of the eventual utilization rate. Using the additional findings of a real-time study, an empirical model of the relative importance of information sources for successful user organizations is presented. These results are extended and integrated into a theoretical model consisting of a time-profile of successful implementations and the relative importance of four types of information sources during seven stages of the adoption-implementation process.

  9. 75 FR 75725 - Financial Management Service; Proposed Collection of Information: Tax Time Card Account Pilot...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... Fiscal Service Financial Management Service; Proposed Collection of Information: Tax Time Card Account Pilot, Screening, Focus Groups, and Study AGENCY: Financial Management Service, Fiscal Service, Treasury. ACTION: Notice and request for comments. SUMMARY: The Financial Management Service, as part of...

  10. Waiting time information services: what are the implications of waiting list behaviour for their design?

    PubMed

    Cromwell, David; Griffiths, David

    2002-01-01

    In some countries, patients requiring elective surgery can access comparative waiting time information for various surgical units. What someone can deduce from this information will depend upon how the statistics are derived, and how waiting lists behave. However, empirical analyses of waiting list behaviour are scarce. This study analysed three years of waiting list data collected at one hospital in Sydney, Australia. The results highlight various issues that raise questions about using particular waiting time statistics to make inferences about patient waiting times. In particular, the results highlight the considerable variation in behaviour that can exist between surgeons in the same specialty, and that can occur over time.

  11. 18 CFR 701.204 - Time limits for WRC initial determinations regarding requests for information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 2 2013-04-01 2012-04-01 true Time limits for WRC initial determinations regarding requests for information. 701.204 Section 701.204 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COUNCIL ORGANIZATION Availability of Information §...

  12. 18 CFR 701.204 - Time limits for WRC initial determinations regarding requests for information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 2 2014-04-01 2014-04-01 false Time limits for WRC initial determinations regarding requests for information. 701.204 Section 701.204 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COUNCIL ORGANIZATION Availability of Information §...

  13. 18 CFR 701.204 - Time limits for WRC initial determinations regarding requests for information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Time limits for WRC initial determinations regarding requests for information. 701.204 Section 701.204 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COUNCIL ORGANIZATION Availability of Information §...

  14. 18 CFR 701.204 - Time limits for WRC initial determinations regarding requests for information.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Time limits for WRC initial determinations regarding requests for information. 701.204 Section 701.204 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COUNCIL ORGANIZATION Availability of Information §...

  15. 18 CFR 701.204 - Time limits for WRC initial determinations regarding requests for information.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Time limits for WRC initial determinations regarding requests for information. 701.204 Section 701.204 Conservation of Power and Water Resources WATER RESOURCES COUNCIL COUNCIL ORGANIZATION Availability of Information §...

  16. Accurate and Accidental Empathy.

    ERIC Educational Resources Information Center

    Chandler, Michael

    The author offers two controversial criticisms of what are rapidly becoming standard assessment procedures for the measurement of empathic skill. First, he asserts that assessment procedures which attend exclusively to the accuracy with which subjects are able to characterize other people's feelings provide little or no useful information about…

  17. Status and Future of a Real-time Global Flood Detection and Forecasting System Using Satellite Rainfall Information

    NASA Astrophysics Data System (ADS)

    Adler, R. F.; Wu, H.; Hong, Y.; Policelli, F.; Pierce, H.

    2011-12-01

    Over the last several years a Global Flood Monitoring System (GFMS) has been running in real-time to detect the occurrence of floods (see trmm.gsfc.nasa.gov and click on "Floods and Landslides"). The system uses 3-hr resolution composite rainfall analyses (TRMM Multi-satellite Precipitation Analysis [TMPA]) as input into a hydrological model that calculates water depth at each grid cell (0.25 degree latitude-longitude) over the tropics and mid-latitudes. These calculations can provide information useful to national and international agencies in understanding the location, intensity, timeline and impact on populations of these significant hazard events. The status of these flood calculations will be shown by case study examples and a statistical comparison against a global flood event database. The validation study indicates that results improve for longer duration (> 3 days) floods and that the statistics are impacted by the presence of dams, which are not accounted for in the model calculations. Limitations in the flood calculations that are related to the satellite rainfall estimates include space and time resolution limitations and underestimation of shallow orographic and monsoon system rainfall. The current quality of these flood estimations is at the level of being useful, but there is potential for significant improvement, mainly through improved and more timely satellite precipitation information and improvement in the hydrological models being used. NASA's Global Precipitation Measurement (GPM) program should lead to better precipitation analyses utilizing space-time interpolations that maintain accurate intensity distributions, along with methods to disaggregate the rain information. Further research should lead to improved rain estimation for shallow, orographic rainfall systems and some types of monsoon rainfall, a current problem area for satellite rainfall. Higher resolution flood models with accurate routing and regional calibration, and the use of satellite

  18. MedTime: a temporal information extraction system for clinical narratives.

    PubMed

    Lin, Yu-Kai; Chen, Hsinchun; Brown, Randall A

    2013-12-01

    Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge, and presented our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged f-measure of 0.88 in the recognition of both clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy in normalizing temporal expressions of dates, times, durations, and frequencies was 0.68. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high-performance temporal information extraction from clinical narratives.
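
    MedTime itself is a cascade of rule-based and machine-learning components; the toy sketch below only conveys the flavour of a rule layer that recognizes a few temporal expressions and normalizes relative ones against an anchor date. The patterns, the anchor date, and the assumption that "postoperative day" counts from admission are all hypothetical and do not reproduce the authors' system.

```python
import re
from datetime import date, timedelta

ADMISSION = date(2012, 3, 14)   # hypothetical anchor for relative expressions

RULES = [
    (re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b"),
     lambda m: date(int(m[3]), int(m[1]), int(m[2]))),
    (re.compile(r"\b(\d+) days? (before|after) admission\b", re.I),
     lambda m: ADMISSION + timedelta(days=int(m[1]) * (1 if m[2].lower() == "after" else -1))),
    (re.compile(r"\bpostoperative day (\d+)\b", re.I),
     lambda m: ADMISSION + timedelta(days=int(m[1]))),
]

def extract_dates(text):
    """Toy rule layer: find temporal expressions and normalize them to ISO dates."""
    found = []
    for pattern, normalize in RULES:
        for m in pattern.finditer(text):
            found.append((m.group(0), normalize(m).isoformat()))
    return found

note = "Seen 2 days after admission; follow-up on 03/20/2012, postoperative day 5."
print(extract_dates(note))
```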

  19. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  20. Comparison of Informal Care Time and Costs in Different Age-Related Dementias: A Review

    PubMed Central

    Costa, Nadège; Ferlicoq, Laura; Derumeaux-Burel, Hélène; Rapp, Thomas; Garnault, Valérie; Gillette-Guyonnet, Sophie; Andrieu, Sandrine; Vellas, Bruno; Lamure, Michel; Grand, Alain; Molinier, Laurent

    2013-01-01

    Objectives. Age-related dementia is a progressive degenerative brain syndrome whose prevalence increases with age. Dementias cause a substantial burden on society and on families who provide informal care. This study aims to review the relevant papers to compare informal care time and costs across different dementias. Methods. A bibliographic search was performed on an international medical literature database (MEDLINE). All studies that assessed the socioeconomic burden of different dementias were selected. Informal care time and costs were analyzed in three care settings by disease stage. Results. 21 studies met our criteria. Mean informal care time was 55.73 h per week for Alzheimer disease (AD) and 15.8 h per week for Parkinson disease (PD) (P = 0.0076), and the associated mean annual informal costs were $17,492 versus $3,284, respectively (P = 0.0393). Conclusion. There is a lack of data about informal care time and costs for dementias other than AD or PD. Overall, AD is more costly than PD in terms of informal care, $17,492 versus $3,284 per year, respectively. PMID:23509789

  1. Decreasing Interferences and Time Spent on Transferring Information on Changing Nursing Shifts.

    PubMed

    Sans Torres, Elisenda; Albaladejo, Jessica Rubio; Benítez, Manuela

    2016-01-01

    The exchange of clinical information on patients is a common component of nursing shift changes, where professionals have limited time to transfer this information. There is no standardized or structured methodology for transferring information, which increases the time required to complete it. Also, during the exchange, interruptions can disrupt communication among professionals, which can affect patient safety. In a descriptive study conducted over five months, the information transfer arrangement among nurses was changed in order to determine which interruptions increased the time spent on shift change and, therefore, decreased the safety of pediatric patients. The results obtained on the types of interruption led us to rethink the organization of pediatric patient care.

  2. Visualization of Time-Series Sensor Data to Inform the Design of Just-In-Time Adaptive Stress Interventions

    PubMed Central

    Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J. Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh

    2015-01-01

    We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566

  3. Time resolution dependence of information measures for spiking neurons: scaling and universality

    PubMed Central

    Marzen, Sarah E.; DeWeese, Michael R.; Crutchfield, James P.

    2015-01-01

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step toward that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicated by interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes. PMID:26379538
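
    A crude version of the time-resolution dependence discussed above can be sketched with a plug-in block-entropy rate computed on a binned Poisson spike train at two bin widths; the firing rate, block length, and estimator are assumptions, and this is far simpler than the τ-entropy-rate machinery used in the paper.

```python
import numpy as np
from collections import Counter

def block_entropy_rate(binary, block_len):
    """Plug-in block-entropy rate (bits per bin) from overlapping words."""
    words = [tuple(binary[i:i + block_len]) for i in range(len(binary) - block_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) / block_len

# Poisson spike train (20 Hz) binned at two time resolutions.
rng = np.random.default_rng(4)
spike_times = np.cumsum(rng.exponential(1 / 20.0, size=5_000))
for dt in (0.01, 0.002):                      # 10 ms and 2 ms bins
    edges = np.arange(0, spike_times[-1], dt)
    binned = (np.histogram(spike_times, bins=edges)[0] > 0).astype(int)
    print(f"dt = {dt * 1000:.0f} ms: ~{block_entropy_rate(binned, 8):.3f} bits/bin")
```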

  4. Negative chemical ionization gas chromatography coupled to hybrid quadrupole time-of-flight mass spectrometry and automated accurate mass data processing for determination of pesticides in fruit and vegetables.

    PubMed

    Besil, Natalia; Uclés, Samanta; Mezcúa, Milagros; Heinzen, Horacio; Fernández-Alba, Amadeo R

    2015-08-01

    Gas chromatography coupled to high resolution hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS), operating in negative chemical ionization (NCI) mode and combining full scan with MSMS experiments using accurate mass analysis, has been explored for the automated determination of pesticide residues in fruit and vegetables. Seventy compounds were included in this approach, 50 % of which are not approved under EU legislation. Overall, 76 % of the analytes could be identified at 1 μg kg(-1). Recovery studies were performed at three concentration levels (1, 5, and 10 μg kg(-1)). Seventy-seven percent of the pesticides detected at the lowest level yielded recoveries within the 70 %-120 % range, whereas 94 % could be quantified at 5 μg kg(-1) and 100 % at 10 μg kg(-1). Good repeatability, expressed as relative standard deviation (RSD <20 %), was obtained for all compounds. The main drawback of the method was the limited dynamic range observed for some analytes, which can be overcome either by diluting the sample or by lowering the injection volume. A home-made database was developed and applied to automatic accurate mass data processing. Measured mass accuracies of the generated ions were mostly less than 5 ppm for at least one diagnostic ion. When only one ion was obtained in single-stage NCI-MS, a representative product ion from the MSMS experiments was used as the identification criterion. A total of 30 real samples were analyzed, and 67 % of the samples were positive for 12 different pesticides in the range 1.0-1321.3 μg kg(-1). PMID:25694145
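
    The accurate-mass identification criterion mentioned above reduces to a parts-per-million error check; the tiny sketch below shows that arithmetic with made-up m/z values (not taken from the paper) against the ≤ 5 ppm threshold.

```python
# Mass accuracy in parts per million for an accurate-mass assignment.
def ppm_error(measured_mz, theoretical_mz):
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Illustrative values only: a hypothetical diagnostic ion near m/z 314.
measured, theoretical = 313.9590, 313.9594
err = ppm_error(measured, theoretical)
verdict = "pass" if abs(err) <= 5 else "fail"
print(f"mass error: {err:+.1f} ppm ({verdict} against the 5 ppm criterion)")
```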

  5. Information-theoretical analysis of time-correlated single-photon counting measurements of single molecules.

    PubMed

    Talaga, David S

    2009-04-30

    Time-correlated single photon counting allows luminescence lifetime information to be determined on a single molecule level. This paper develops a formalism to allow information theory analysis of the ability of luminescence lifetime measurements to resolve states in a single molecule. It analyzes the information content of the photon stream and the fraction of that information that is relevant to the state determination problem. Experimental losses of information due to instrument response, digitization, and different types of background are calculated and a procedure to determine the optimal value of experimental parameters is demonstrated. This paper shows how to use the information theoretical formalism to evaluate the number of photons required to distinguish dyes that differ only by lifetime. It extends this idea to include distinguishing molecular states that differ in the electron transfer quenching or resonant energy transfer and shows how the differences between the lifetime of signal and background can help distinguish the dye position in an excitation beam. PMID:19385684

  6. Economy with the time delay of information flow—The stock market case

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2012-02-01

    Any decision process requires information about the past and present state of the system, but in an economy acquiring and processing data is an expensive and time-consuming task. Therefore, the state of the system is often measured over some legal interval, analysed after the end of well-defined time periods, and the results are announced much later, before any strategic decision is envisaged. The roles of the various time delays therefore have to be examined carefully. Here, a model of a stock market coupled with an economy is investigated to emphasise the role of the time delay span on the information flow. It is shown that the larger the time delay, the more important the collective behaviour of agents, since one observes time oscillations in the absolute log-return autocorrelations.

  7. Contribution of spike timing to the information transmitted by HVC neurons.

    PubMed

    Huetz, Chloé; Del Negro, Catherine; Lebas, Nicolas; Tarroux, Philippe; Edeline, Jean-Marc

    2006-08-01

    In many species, neurons with highly selective stimulus-response properties characterize higher order sensory areas and/or sensory motor areas of the CNS. In the songbird nuclei, the responses of HVC (used as a proper name) neurons during playback of the bird's own song (BOS) are probably one of the most striking examples of selectivity for natural stimuli. We examined here to what extent spike-timing carries information about natural and time-reversed versions of the BOS. From a heterogeneous population of 107 HVC neurons recorded in long-day or short-day conditions, a standard indicator of stimulus preference based on spike-count (the d' index) indicates that a limited proportion of cells can be classified as selective for the BOS (20% with a |d'| > 1). In contrast, quantifying the information conveyed by spike trains with the metric-space method of J.D. Victor & K.P. Purpura [(1996) J. Neurophysiol., 76, 1310-1326] indicates that 62% of the cells display significant amounts of transmitted information, among which 77% are 'temporal cells'. 'Temporal cells' correspond to cells transmitting significant amounts of information when spike-timing is considered, whereas no information, or lower amounts of transmitted information, is obtained when only spike-count is considered. Computing a correlation index between spike trains [S. Schreiber et al. (2003) Neurocomputing, 52-54, 925-931] revealed that spike-timing reliability is higher for the forward than for the reverse BOS, whatever the day length and the cell type are. Cells classified as selective in terms of spike-counts (d' index) had greater amounts of transmitted information, but cells classified as non-selective (d' < 0.5) could also transmit significant amounts of information. Thus, information theory methods demonstrate that a much larger proportion of neurons than expected based on spike-count alone participate in the discrimination between stimuli.
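
    The metric-space analysis cited above rests on the Victor-Purpura spike-train distance; the sketch below is a straightforward dynamic-programming implementation of that distance on toy spike trains. It is not the authors' full transmitted-information calculation, and the spike times and cost parameter q are illustrative.

```python
import numpy as np

def victor_purpura(spikes_a, spikes_b, q):
    """Victor-Purpura spike-train distance with timing-cost parameter q (1/s).

    Edit operations: insert/delete a spike (cost 1) or shift a spike by
    dt seconds (cost q*|dt|). q -> 0 recovers a pure spike-count comparison;
    larger q makes precise spike timing matter more.
    """
    na, nb = len(spikes_a), len(spikes_b)
    d = np.zeros((na + 1, nb + 1))
    d[:, 0] = np.arange(na + 1)
    d[0, :] = np.arange(nb + 1)
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            shift = q * abs(spikes_a[i - 1] - spikes_b[j - 1])
            d[i, j] = min(d[i - 1, j] + 1,          # delete a spike of train A
                          d[i, j - 1] + 1,          # insert a spike of train B
                          d[i - 1, j - 1] + shift)  # move a spike in time
    return d[na, nb]

# Two toy spike trains (seconds): same counts, slightly different timing.
a = [0.10, 0.25, 0.40, 0.71]
b = [0.12, 0.26, 0.45, 0.70]
print(victor_purpura(a, b, q=0.0))   # count-only view: 0.0
print(victor_purpura(a, b, q=50.0))  # timing-sensitive view: > 0
```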

  8. Expansion and Compression of Time Correlate with Information Processing in an Enumeration Task.

    PubMed

    Wutz, Andreas; Shukla, Anuj; Bapi, Raju S; Melcher, David

    2015-01-01

    Perception of temporal duration is subjective and is influenced by factors such as attention and context. For example, unexpected or emotional events are often experienced as if time subjectively expands, suggesting that the amount of information processed in a unit of time can be increased. Time dilation effects have been measured with an oddball paradigm in which an infrequent stimulus is perceived to last longer than standard stimuli in the rest of the sequence. Likewise, time compression for the oddball occurs when the duration of the standard items is relatively brief. Here, we investigated whether the amount of information processing changes when time is perceived as distorted. On each trial, an oddball stimulus of varying numerosity (1-14 items) and duration was presented along with standard items that were either short (70 ms) or long (1050 ms). Observers were instructed to count the number of dots within the oddball stimulus and to judge its relative duration with respect to the standards on that trial. Consistent with previous results, oddballs were reliably perceived as temporally distorted: expanded for longer standard stimuli blocks and compressed for shorter standards. The occurrence of these distortions of time perception correlated with perceptual processing; i.e. enumeration accuracy increased when time was perceived as expanded and decreased with temporal compression. These results suggest that subjective time distortions are not epiphenomenal, but reflect real changes in sensory processing. Such short-term plasticity in information processing rate could be evolutionarily advantageous in optimizing perception and action during critical moments.

  9. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  10. Position information by signal analysis in real time from resistive anode microchannel plate detector

    NASA Astrophysics Data System (ADS)

    Saha, K.; Benmaimon, R.; Prabhakaran, A.; Rappaport, M. L.; Heber, O.; Schwalm, D.; Zajfman, D.

    2016-07-01

    Resistive anode microchannel plate detectors are extensively used for imaging photons, electrons and ions. We present a method to acquire position information from such detector systems by considering simple parameters of the signals produced by the resistive anode encoder. Our technique is easy to implement and computes position in real time during experiments. Position information can be obtained using our method without the need for dedicated position analyser units.
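
    The abstract does not give the position formula, so the sketch below only illustrates the conventional charge-division estimate used when a resistive anode is read out at four corners; the corner labelling and pulse-height values are hypothetical, and this is not necessarily the authors' signal-parameter method.

```python
def anode_position(qa, qb, qc, qd):
    """Charge-division position estimate from the four corner signals of a
    resistive anode (one common convention; corner labelling is illustrative)."""
    total = qa + qb + qc + qd
    x = (qb + qc) / total
    y = (qa + qb) / total
    return x, y   # normalized coordinates in [0, 1]

# Pulse-height values for one event (arbitrary units, illustrative only).
print(anode_position(qa=120.0, qb=80.0, qc=60.0, qd=140.0))
```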

  11. Measuring trends in performance across time: providing information to cancer patients.

    PubMed

    Fitch, Margaret I; McAndrew, Alison; Harth, Tamara

    2013-01-01

    Providing relevant, up-to-date information is identified as a quality standard of cancer care. Cancer programs need to be able to evaluate whether they are meeting the standard and to monitor their performance on an ongoing basis. Routine collection of clearly defined data, using reliable and valid measures, provides cancer program leaders with dependable information upon which to make decisions and monitor trends in performance over time. This article describes one cancer centre's experience in using standardized data collection regarding provision of patient information. The Cancer Patient Information Importance-Satisfaction Scale has been administered routinely in an outpatient setting over eight years. The profile we create from the data assists us in making informed decisions about patient education initiatives.

  12. Information-sharing tendency on Twitter and time evolution of tweeting

    NASA Astrophysics Data System (ADS)

    Kwon, H. W.; Kim, H. S.; Lee, K.; Choi, M. Y.

    2013-03-01

    While topics on Twitter may be categorized according to their predictability and sustainability, some topics have characteristics depending on the time scale. Here we propose a good measure for the transition of sustainability, which we call the information-sharing tendency, and find that the unpredictability on Twitter is provoked by the exposure of Twitter users to external environments, e.g., mass media and other social network services. In addition, it is demonstrated that the numbers of articles and comments on on-line newspapers serve as plausible measures of exposure. From such measures of exposure, the time evolution of tweeting can be described, when the information-sharing tendency is known.

  13. The timing of the human circadian clock is accurately represented by the core body temperature rhythm following phase shifts to a three-cycle light stimulus near the critical zone

    NASA Technical Reports Server (NTRS)

    Jewett, M. E.; Duffy, J. F.; Czeisler, C. A.

    2000-01-01

    A double-stimulus experiment was conducted to evaluate the phase of the underlying circadian clock following light-induced phase shifts of the human circadian system. Circadian phase was assayed by constant routine from the rhythm in core body temperature before and after a three-cycle bright-light stimulus applied near the estimated minimum of the core body temperature rhythm. An identical, consecutive three-cycle light stimulus was then applied, and phase was reassessed. Phase shifts to these consecutive stimuli were no different from those obtained in a previous study following light stimuli applied under steady-state conditions over a range of circadian phases similar to those at which the consecutive stimuli were applied. These data suggest that circadian phase shifts of the core body temperature rhythm in response to a three-cycle stimulus occur within 24 h following the end of the 3-day light stimulus and that this poststimulus temperature rhythm accurately reflects the timing of the underlying circadian clock.

  14. Novel accurate bacterial discrimination by MALDI-time-of-flight MS based on ribosomal proteins coding in S10-spc-alpha operon at strain level S10-GERMS.

    PubMed

    Tamura, Hiroto; Hotta, Yudai; Sato, Hiroaki

    2013-08-01

    Matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is one of the most widely used mass-based approaches for bacterial identification and classification because of the simple sample preparation and extremely rapid analysis within a few minutes. To establish the accurate MALDI-TOF MS bacterial discrimination method at strain level, the ribosomal subunit proteins coded in the S10-spc-alpha operon, which encodes half of the ribosomal subunit protein and is highly conserved in eubacterial genomes, were selected as reliable biomarkers. This method, named the S10-GERMS method, revealed that the strains of genus Pseudomonas were successfully identified and discriminated at species and strain levels, respectively; therefore, the S10-GERMS method was further applied to discriminate the pathovar of P. syringae. The eight selected biomarkers (L24, L30, S10, S12, S14, S16, S17, and S19) suggested the rapid discrimination of P. syringae at the strain (pathovar) level. The S10-GERMS method appears to be a powerful tool for rapid and reliable bacterial discrimination and successful phylogenetic characterization. In this article, an overview of the utilization of results from the S10-GERMS method is presented, highlighting the characterization of the Lactobacillus casei group and discrimination of the bacteria of genera Bacillus and Sphingopyxis despite only two and one base difference in the 16S rRNA gene sequence, respectively.

  15. Novel Accurate Bacterial Discrimination by MALDI-Time-of-Flight MS Based on Ribosomal Proteins Coding in S10-spc-alpha Operon at Strain Level S10-GERMS

    NASA Astrophysics Data System (ADS)

    Tamura, Hiroto; Hotta, Yudai; Sato, Hiroaki

    2013-08-01

    Matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is one of the most widely used mass-based approaches for bacterial identification and classification because of the simple sample preparation and extremely rapid analysis within a few minutes. To establish an accurate MALDI-TOF MS bacterial discrimination method at the strain level, the ribosomal subunit proteins coded in the S10-spc-alpha operon, which encodes half of the ribosomal subunit proteins and is highly conserved in eubacterial genomes, were selected as reliable biomarkers. This method, named the S10-GERMS method, revealed that the strains of genus Pseudomonas were successfully identified and discriminated at species and strain levels, respectively; therefore, the S10-GERMS method was further applied to discriminate the pathovar of P. syringae. The eight selected biomarkers (L24, L30, S10, S12, S14, S16, S17, and S19) allowed rapid discrimination of P. syringae at the strain (pathovar) level. The S10-GERMS method appears to be a powerful tool for rapid and reliable bacterial discrimination and successful phylogenetic characterization. In this article, an overview of the utilization of results from the S10-GERMS method is presented, highlighting the characterization of the Lactobacillus casei group and discrimination of the bacteria of genera Bacillus and Sphingopyxis despite differences of only two bases and one base in the 16S rRNA gene sequence, respectively.

  16. A Strategy for Functional Interpretation of Metabolomic Time Series Data in Context of Metabolic Network Information

    PubMed Central

    Nägele, Thomas; Fürtauer, Lisa; Nagler, Matthias; Weiszmann, Jakob; Weckwerth, Wolfram

    2016-01-01

    The functional connection of experimental metabolic time series data with biochemical network information is an important, yet complex, issue in systems biology. Frequently, experimental analysis of diurnal, circadian, or developmental dynamics of metabolism results in a comprehensive and multidimensional data matrix comprising information about metabolite concentrations, protein levels, and/or enzyme activities. While, irrespective of the type of organism, the experimental high-throughput analysis of the transcriptome, proteome, and metabolome has become a common part of many systems biological studies, functional data integration in a biochemical and physiological context is still challenging. Here, an approach is presented which addresses the functional connection of experimental time series data with biochemical network information which can be inferred, for example, from a metabolic network reconstruction. Based on a time-continuous and variance-weighted regression analysis of experimental data, metabolic functions, i.e., first-order derivatives of metabolite concentrations, were related to time-dependent changes in other biochemically relevant metabolic functions, i.e., second-order derivatives of metabolite concentrations. This finally revealed time points of perturbed dependencies in metabolic functions indicating a modified biochemical interaction. The approach was validated using previously published experimental data on a diurnal time course of metabolite levels, enzyme activities, and metabolic flux simulations. To support and ease the presented approach of functional time series analysis, a graphical user interface including a test data set and a manual is provided which can be run within the numerical software environment Matlab®. PMID:27014700
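
    A minimal Python sketch of the core computational step is given below: a variance-weighted smoothing spline stands in for the time-continuous regression, and its first and second derivatives approximate the "metabolic functions" discussed above. This is not the authors' Matlab tool; the time points, concentrations, standard deviations and smoothing settings are synthetic assumptions.

```python
# Minimal sketch (not the authors' Matlab implementation): variance-weighted
# spline regression of a metabolite time course, followed by first- and
# second-order derivatives. All data here are synthetic.
import numpy as np
from scipy.interpolate import UnivariateSpline

t    = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22], float)   # hours
conc = np.array([1.0, 1.2, 1.8, 2.5, 3.1, 3.4, 3.2, 2.7, 2.1, 1.6, 1.2, 1.0])
sd   = np.array([0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1, 0.1])

# variance-weighted regression: weights = 1/SD, smoothing factor ~ number of points
spline = UnivariateSpline(t, conc, w=1.0 / sd, k=4, s=len(t))

t_fine = np.linspace(t.min(), t.max(), 200)
d1 = spline.derivative(1)(t_fine)   # first-order derivative: net rate of change
d2 = spline.derivative(2)(t_fine)   # second-order derivative: change of that rate

# sign changes of the second derivative flag candidate time points of
# perturbed dependencies between metabolic functions
flagged = t_fine[:-1][np.sign(d2[:-1]) != np.sign(d2[1:])]
print("candidate perturbation times (h):", np.round(flagged, 1))
```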

  17. A Strategy for Functional Interpretation of Metabolomic Time Series Data in Context of Metabolic Network Information.

    PubMed

    Nägele, Thomas; Fürtauer, Lisa; Nagler, Matthias; Weiszmann, Jakob; Weckwerth, Wolfram

    2016-01-01

    The functional connection of experimental metabolic time series data with biochemical network information is an important, yet complex, issue in systems biology. Frequently, experimental analysis of diurnal, circadian, or developmental dynamics of metabolism results in a comprehensive and multidimensional data matrix comprising information about metabolite concentrations, protein levels, and/or enzyme activities. While, irrespective of the type of organism, the experimental high-throughput analysis of the transcriptome, proteome, and metabolome has become a common part of many systems biological studies, functional data integration in a biochemical and physiological context is still challenging. Here, an approach is presented which addresses the functional connection of experimental time series data with biochemical network information which can be inferred, for example, from a metabolic network reconstruction. Based on a time-continuous and variance-weighted regression analysis of experimental data, metabolic functions, i.e., first-order derivatives of metabolite concentrations, were related to time-dependent changes in other biochemically relevant metabolic functions, i.e., second-order derivatives of metabolite concentrations. This finally revealed time points of perturbed dependencies in metabolic functions indicating a modified biochemical interaction. The approach was validated using previously published experimental data on a diurnal time course of metabolite levels, enzyme activities, and metabolic flux simulations. To support and ease the presented approach of functional time series analysis, a graphical user interface including a test data set and a manual is provided which can be run within the numerical software environment Matlab®.

  18. Social contagion process in informal warning networks to understand evacuation timing behavior.

    PubMed

    Hasan, Samiul; Ukkusuri, Satish V

    2013-01-01

    Individual evacuation decisions are often characterized by the influence of one's social network, referred to as an informal warning network. In this article, a threshold model of social contagion, originally introduced in the network science literature, is proposed to characterize this social influence in the evacuation decision-making process, in particular the timing of the evacuation decision. Simulation models are developed to investigate the effects of community mixing patterns and the strength of ties on the timing of evacuation decisions.
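
    A minimal sketch of such a threshold ("social contagion") process on an informal warning network is shown below. The small-world network, the threshold distribution and the number of spontaneous evacuees are illustrative assumptions, not the parameters used in the article.

```python
# Threshold contagion on a toy informal warning network: a household evacuates
# once a sufficient fraction of its network neighbors has already evacuated.
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)        # community-like mixing
threshold = {v: random.uniform(0.1, 0.5) for v in G}  # share of evacuated
                                                      # neighbors needed to act
evacuated = set(random.sample(list(G.nodes), 10))     # spontaneous evacuees
evacuation_time = {v: 0 for v in evacuated}

t = 0
while True:
    t += 1
    newly = set()
    for v in G:
        if v in evacuated:
            continue
        nbrs = list(G[v])
        if nbrs and sum(u in evacuated for u in nbrs) / len(nbrs) >= threshold[v]:
            newly.add(v)
    if not newly:
        break
    evacuated |= newly
    evacuation_time.update({v: t for v in newly})

print("final evacuation share :", len(evacuated) / G.number_of_nodes())
print("last evacuation wave at:", max(evacuation_time.values()))
```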

  19. A brief description of the Medical Information Computer System (MEDICS). [real time minicomputer system

    NASA Technical Reports Server (NTRS)

    Moseley, E. C.

    1974-01-01

    The Medical Information Computer System (MEDICS) is a time shared, disk oriented minicomputer system capable of meeting storage and retrieval needs for the space- or non-space-related applications of at least 16 simultaneous users. At the various commercially available low cost terminals, the simple command and control mechanism and the generalized communication activity of the system permit multiple form inputs, real-time updating, and instantaneous retrieval capability with a full range of options.

  20. Poor Sleep Quality Predicts Deficient Emotion Information Processing over Time in Early Adolescence

    PubMed Central

    Soffer-Dudek, Nirit; Sadeh, Avi; Dahl, Ronald E.; Rosenblat-Stein, Shiran

    2011-01-01

    Study Objectives: There is deepening understanding of the effects of sleep on emotional information processing. Emotion information processing is a key aspect of social competence, which undergoes important maturational and developmental changes in adolescence; however, most research in this area has focused on adults. Our aim was to test the links between sleep and emotion information processing during early adolescence. Design: Sleep and facial information processing were assessed objectively during 3 assessment waves, separated by 1-year lags. Setting: Data were obtained in natural environments—sleep was assessed in home settings, and facial information processing was assessed at school. Participants: 94 healthy children (53 girls, 41 boys), aged 10 years at Time 1. Interventions: N/A Measurements and Results: Facial information processing was tested under neutral (gender identification) and emotional (emotional expression identification) conditions. Sleep was assessed in home settings using actigraphy for 7 nights at each assessment wave. Waking > 5 min was considered a night awakening. Using multilevel modeling, elevated night awakenings and decreased sleep efficiency significantly predicted poor performance only in the emotional information processing condition (e.g., b = −1.79, SD = 0.52, confidence interval: lower boundary = −2.82, upper boundary = −0.076, t(416.94) = −3.42, P = 0.001). Conclusions: Poor sleep quality is associated with compromised emotional information processing during early adolescence, a sensitive period in socio-emotional development. Citation: Soffer-Dudek N; Sadeh A; Dahl RE; Rosenblat-Stein S. Poor sleep quality predicts deficient emotion information processing over time in early adolescence. SLEEP 2011;34(11):1499-1508. PMID:22043121
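
    A schematic multilevel model in the spirit of the analysis is sketched below: repeated yearly assessments nested within children, with sleep measures predicting accuracy in the emotional condition. The data are synthetic and the variable names are assumptions; this is not the published model specification.

```python
# Schematic mixed-effects (multilevel) model: waves nested within children.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for child in range(94):                      # 94 children, 3 assessment waves
    child_effect = rng.normal(0, 3)          # child-level random intercept
    for wave in range(3):
        wakings = rng.poisson(3)             # night awakenings per night
        efficiency = rng.normal(95 - wakings, 2)
        accuracy = (80 + child_effect - 1.5 * wakings
                    + 0.2 * (efficiency - 95) + rng.normal(0, 2))
        rows.append((child, wave, wakings, efficiency, accuracy))
df = pd.DataFrame(rows, columns=["child_id", "wave", "night_wakings",
                                 "sleep_efficiency", "emo_accuracy"])

model = smf.mixedlm("emo_accuracy ~ night_wakings + sleep_efficiency + wave",
                    df, groups=df["child_id"])
print(model.fit().summary())
```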

  1. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  2. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    PubMed

    Imaizumi, Kazuo; Priebe, Nicholas J; Sharpee, Tatyana O; Cheung, Steven W; Schreiner, Christoph E

    2010-07-19

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
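
    The comparison of candidate codes rests on estimating the mutual information between the repetition-rate stimulus and a response variable (ISI, spike count, or timing precision). The sketch below uses a plug-in histogram estimator on synthetic responses; the recordings and the exact estimator used in the study are not reproduced.

```python
# Plug-in histogram estimate of mutual information between stimulus repetition
# rate and two toy response variables (ISI-like and count-like).
import numpy as np

rng = np.random.default_rng(0)
rates = np.repeat([2, 4, 8, 16, 32], 200)              # repetition rates (Hz)
isi = 1.0 / rates + rng.normal(0, 0.02, rates.size)    # toy inter-spike intervals
counts = rng.poisson(5 + 0.1 * rates)                  # toy spike counts (rate code)

def mutual_information(x, y, bins=10):
    """Mutual information in bits from a 2-D histogram (plug-in estimate)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

print("I(rate; ISI)   ~ %.2f bits" % mutual_information(rates, isi))
print("I(rate; count) ~ %.2f bits" % mutual_information(rates, counts))
```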

  3. Information-transmission rates in manual control of unstable systems with time delays.

    PubMed

    Lupu, Mircea F; Sun, Mingui; Wang, Fei-Yue; Mao, Zhi-Hong

    2015-01-01

    In analyzing the human-machine interaction (HMI), a human-centered approach is needed to address the potential and limitation of human control, especially in the control of high-order or unstable systems. However, there is no quantitative measure of the human performance or cognitive workload in these difficult HMI tasks. We propose to characterize the HMI as information flows quantified by the information-transmission rate in bits per second (b/s). Using information- and control-theoretic approaches, we derive the minimum rates of information transmission in manual control required by any deterministic controller to stabilize the feedback system. Furthermore, we suggest a method adopted from time-series analysis to estimate the information-transmission rate from human experiments. We show that the relationship between the empirically estimated information rates and the minimum bounds allows for the quantitative indication of the potential and limitation of human manual control. We illustrate our method in the control of an inverted pendulum with time delays.
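
    For orientation, the flavor of such a minimum-rate bound resembles the classical data-rate theorem of networked control; the delay-dependent bounds derived in the paper are more involved, so the following is only a schematic statement under idealized assumptions (linear time-invariant plant, no delay).

```latex
% Schematic data-rate bound (not the paper's exact delay-dependent result):
% any stabilizing coder/controller must transmit at least
\[
  R_{\min} \;=\; \sum_{i:\ |\lambda_i| > 1} \log_2 |\lambda_i|
  \qquad \text{bits per sample},
\]
% where the \lambda_i are the eigenvalues of the discrete-time plant. For a
% continuous-time inverted pendulum of length \ell linearized about the
% upright position (unstable pole at +\sqrt{g/\ell}), this becomes
\[
  R_{\min} \;=\; \frac{\sqrt{g/\ell}}{\ln 2}
  \qquad \text{bits per second}.
\]
```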

  4. TEMPI: probabilistic modeling time-evolving differential PPI networks with multiPle information

    PubMed Central

    Kim, Yongsoo; Jang, Jin-Hyeok; Choi, Seungjin; Hwang, Daehee

    2014-01-01

    Motivation: Time-evolving differential protein–protein interaction (PPI) networks are essential to understand serial activation of differentially regulated (up- or downregulated) cellular processes (DRPs) and their interplays over time. Despite developments in the network inference, current methods are still limited in identifying temporal transition of structures of PPI networks, DRPs associated with the structural transition and the interplays among the DRPs over time. Results: Here, we present a probabilistic model for estimating Time-Evolving differential PPI networks with MultiPle Information (TEMPI). This model describes probabilistic relationships among network structures, time-course gene expression data and Gene Ontology biological processes (GOBPs). By maximizing the likelihood of the probabilistic model, TEMPI estimates jointly the time-evolving differential PPI networks (TDNs) describing temporal transition of PPI network structures together with serial activation of DRPs associated with transiting networks. This joint estimation enables us to interpret the TDNs in terms of temporal transition of the DRPs. To demonstrate the utility of TEMPI, we applied it to two time-course datasets. TEMPI identified the TDNs that correctly delineated temporal transition of DRPs and time-dependent associations between the DRPs. These TDNs provide hypotheses for mechanisms underlying serial activation of key DRPs and their temporal associations. Availability and implementation: Source code and sample data files are available at http://sbm.postech.ac.kr/tempi/sources.zip. Contact: seungjin@postech.ac.kr or dhwang@dgist.ac.kr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161233

  5. An Understanding Information Management System for a Real-Time Interactive Distance Education Environment

    ERIC Educational Resources Information Center

    He, Aiguo

    2009-01-01

    A real-time interactive distance lecture is a joint work that should be accomplished by the effort of the lecturer and his students in remote sites. It is important for the lecturer to get understanding information from the students which cannot be efficiently collected by only using video/audio channels between the lecturer and the students. This…

  6. 16 CFR 803.21 - Additional information shall be supplied within reasonable time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Additional information shall be supplied within reasonable time. 803.21 Section 803.21 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF...

  7. 16 CFR 803.21 - Additional information shall be supplied within reasonable time.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Additional information shall be supplied within reasonable time. 803.21 Section 803.21 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF...

  8. OZONE MONITORING, MAPPING, AND PUBLIC OUTREACH: DELIVERING REAL-TIME OZONE INFORMATION TO YOUR COMMUNITY

    EPA Science Inventory

    The U.S. EPA had developed a handbook to help state and local government officials implement ozone monitoring, mapping, and outreach programs. The handbook, called Ozone Monitoring, Mapping, and Public Outreach: Delivering Real-Time Ozone Information to Your Community, provides ...

  9. Placement Decisions for First-Time-in-College Students Using the Computerized Placement Test. Information Capsule.

    ERIC Educational Resources Information Center

    Bashford, Joanne

    This information capsule explores the effectiveness of score ranges on the Computerized Placement Test (CPT), used to assess the skills of entry-level students at Miami-Dade Community College and place first-time-in-college students in classes. Data are provided for students entering in Fall terms 1996 and 1997 showing the number of students…

  10. Time and Technology: A Decade-Long Look at Humanists' Use of Electronic Information Technology.

    ERIC Educational Resources Information Center

    Wiberley, Stephen E., Jr.; Jones, William G.

    2000-01-01

    Describes a ten-year study of humanists that revealed the impact of temporal factors on their adoption of electronic information technology. Identifies four types of time that influenced humanists' behavior; discusses how electronic resources content affects whether a scholar adopts an electronic resource; and suggests how academic librarians can…

  11. Managing Information Technology as a Catalyst of Change. Track I: Leadership during Times of Change.

    ERIC Educational Resources Information Center

    CAUSE, Boulder, CO.

    Eight papers are presented from the 1993 CAUSE conference track on leadership challenges facing managers of information technology during times of change at colleges and universities. Papers include: (1) "ASURITE: How To Avoid Creating a Distributed Computing 'Tower of Babel'!" (Neil Armann and others), which discusses the Arizona State University…

  12. Solving Word Problems about Time: The Effects of Speed and Space Information.

    ERIC Educational Resources Information Center

    Senechal, Monique

    This study investigated how preadolescents and adolescents solve problems involving three temporal dimensions. Specifically examined was the question of whether speed and space information would influence the time judgments of 90 subjects 9, 12, and 15 years of age who solved 16 word problems describing the displacements of two cars. The problems…

  13. Just-in-Time Information Improved Decision-Making in Primary Care: A Randomized Controlled Trial

    PubMed Central

    McGowan, Jessie; Hogg, William; Campbell, Craig; Rowan, Margo

    2008-01-01

    Background The “Just-in-time Information” (JIT) librarian consultation service was designed to provide rapid information to answer primary care clinical questions during patient hours. This study evaluated whether information provided by librarians to answer clinical questions positively impacted time, decision-making, cost savings and satisfaction. Methods and Findings A randomized controlled trial (RCT) was conducted between October 2005 and April 2006. A total of 1,889 questions were sent to the service by 88 participants. The object of the randomization was a clinical question. Each participant had clinical questions randomly allocated to both intervention (librarian information) and control (no librarian information) groups. Participants were trained to send clinical questions via a hand-held device. The impact of the information provided by the service (or not provided by the service), additional resources and time required for both groups was assessed using a survey sent 24 hours after a question was submitted. The average time for JIT librarians to respond to all questions was 13.68 minutes/question (95% CI, 13.38 to 13.98). The average time for participants to respond to their control questions was 20.29 minutes/question (95% CI, 18.72 to 21.86). Using an impact assessment scale rating cognitive impact, participants rated 62.9% of information provided to intervention group questions as having a highly positive cognitive impact. They rated 14.8% of their own answers to control questions as having a highly positive cognitive impact, 44.9% as having a negative cognitive impact, and 24.8% as having no cognitive impact at all. In an exit survey measuring satisfaction, 86% (62/72 responses) of participants scored the service as having a positive impact on care and 72% (52/72) indicated that they would use the service frequently if it were continued. Conclusions In this study, providing timely information to clinical questions had a highly positive impact on decision

  14. Codimension-Two Bifurcation, Chaos and Control in a Discrete-Time Information Diffusion Model

    NASA Astrophysics Data System (ADS)

    Ren, Jingli; Yu, Liping

    2016-07-01

    In this paper, we present a discrete model to illustrate how two pieces of information interact on online social networks and investigate the dynamics of the discrete-time information diffusion model in three types: reverse type, intervention type and mutualistic type. It is found that the model has orbits with period 2, 4, 6, 8, 12, 16, 20, 30, quasiperiodic orbits, and undergoes a heteroclinic bifurcation near the 1:2 point, a homoclinic structure near the 1:3 resonance point and an invariant cycle bifurcated from the period-4 orbit near the 1:4 resonance point. Moreover, in order to regulate the information diffusion process and information security, we give two control strategies, the hybrid control method and a feedback controller of polynomial functions, to control chaos, the flip bifurcation, and the 1:2, 1:3 and 1:4 resonances, respectively.
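
    The hybrid control strategy mentioned above combines state feedback with parameter perturbation by iterating the convex combination x_{n+1} = a f(x_n) + (1 - a) x_n. The sketch below applies this form of control to a generic coupled two-dimensional map standing in for two interacting pieces of information; the map, its parameters and the control gain are illustrative assumptions, not the paper's diffusion model.

```python
# Illustrative sketch of the hybrid control idea for a discrete-time system:
# the controlled map iterates  x_{n+1} = a*f(x_n) + (1 - a)*x_n,  which can
# suppress a flip bifurcation. The coupled logistic map below is a generic
# stand-in for two interacting pieces of information, NOT the paper's model.
import numpy as np

def f(state, r=3.2, eps=0.05):
    """Toy coupled logistic map: two weakly interacting quantities in [0, 1]."""
    x, y = state
    return np.array([r * x * (1 - x) + eps * (y - x),
                     r * y * (1 - y) + eps * (x - y)])

def long_run_orbit(alpha, n=2000, keep=50):
    """Iterate the hybrid-controlled map; alpha = 1 is the uncontrolled map."""
    s = np.array([0.3, 0.2])
    tail = []
    for i in range(n):
        s = alpha * f(s) + (1 - alpha) * s
        if i >= n - keep:
            tail.append(s.copy())
    return np.array(tail)

for alpha in (1.0, 0.7):
    tail = long_run_orbit(alpha)
    distinct = len(np.unique(np.round(tail, 6), axis=0))
    print(f"alpha = {alpha}: ~{distinct} distinct state(s) in the long-run orbit")
```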

  15. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest event of M6.6 in the Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform allows an integrated system complete from data acquisition to catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit particular needs of the AEIC. The real-time earthquake locations and magnitudes are determined within 2-5 minutes of the event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of the occurrence, with information releases issued for significant events. This information is disseminated immediately via the AEIC website, the ANSS website via QDDS submissions, and through e-mail, cell phone and pager notifications, fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0. This information is posted on the public website. ShakeMaps are calculated in real-time, with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted for the critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  16. Time-based self-spacing techniques using cockpit display of traffic information during approach to landing in a terminal area vectoring environment

    NASA Technical Reports Server (NTRS)

    Williams, D. H.

    1983-01-01

    A simulation study was undertaken to evaluate two time-based self-spacing techniques for in-trail following during terminal area approach. An electronic traffic display was provided in the weather radarscope location. The displayed self-spacing cues allowed the simulated aircraft to follow and to maintain spacing on another aircraft which was being vectored by air traffic control (ATC) for landing in a high-density terminal area. Separation performance data indicate the information provided on the traffic display was adequate for the test subjects to accurately follow the approach path of another aircraft without the assistance of ATC. The time-based technique with a constant-delay spacing criterion produced the most satisfactory spacing performance. Pilot comments indicate the workload associated with the self-separation task was very high and that additional spacing command information and/or aircraft autopilot functions would be desirable for operational implementation of the self-spacing task.

  17. Real-Time Diffusion of Information on Twitter and the Financial Markets.

    PubMed

    Tafti, Ali; Zotti, Ryan; Jank, Wolfgang

    2016-01-01

    Do spikes in Twitter chatter about a firm precede unusual stock market trading activity for that firm? If so, Twitter activity may provide useful information about impending financial market activity in real-time. We study the real-time relationship between chatter on Twitter and the stock trading volume of 96 firms listed on the Nasdaq 100, during 193 days of trading in the period from May 21, 2012 to September 18, 2013. We identify observations featuring firm-specific spikes in Twitter activity, and randomly assign each observation to a ten-minute increment matching on the firm and a number of repeating time indicators. We examine the extent that unusual levels of chatter on Twitter about a firm portend an oncoming surge of trading of its stock within the hour, over and above what would normally be expected for the stock for that time of day and day of week. We also compare the findings from our explanatory model to the predictive power of Tweets. Although we find a compelling and potentially informative real-time relationship between Twitter activity and trading volume, our forecasting exercise highlights how difficult it can be to make use of this information for monetary gain.
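
    The spike-detection and matching design can be roughed out as follows: flag 10-minute windows with unusually high firm-specific tweet counts, then compare their trading volume against a firm / time-of-day / weekday baseline. The data below are synthetic, and the 3-sigma spike rule and column names are illustrative assumptions rather than the authors' exact specification.

```python
# Toy version of the matching idea on a synthetic firm x 10-minute-window panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
windows = pd.date_range("2013-06-03 09:30", periods=780, freq="10min")
firms = ["AAA", "BBB", "CCC"]                               # placeholder tickers
df = pd.DataFrame([(f, w, rng.poisson(20), rng.lognormal(10, 0.5))
                   for f in firms for w in windows],
                  columns=["firm", "window_start", "tweets", "volume"])
df["tod"] = df["window_start"].dt.strftime("%H:%M")
df["dow"] = df["window_start"].dt.dayofweek

# firm-specific spike flag: tweet count exceeds the firm mean by 3 SD
stats = df.groupby("firm")["tweets"].agg(mu="mean", sigma="std")
df = df.join(stats, on="firm")
df["spike"] = df["tweets"] > df["mu"] + 3 * df["sigma"]

# volume relative to the firm / time-of-day / weekday median
baseline = df.groupby(["firm", "tod", "dow"])["volume"].transform("median")
df["relative_volume"] = df["volume"] / baseline
print(df.groupby("spike")["relative_volume"].mean())
```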

  18. Real-Time Diffusion of Information on Twitter and the Financial Markets.

    PubMed

    Tafti, Ali; Zotti, Ryan; Jank, Wolfgang

    2016-01-01

    Do spikes in Twitter chatter about a firm precede unusual stock market trading activity for that firm? If so, Twitter activity may provide useful information about impending financial market activity in real-time. We study the real-time relationship between chatter on Twitter and the stock trading volume of 96 firms listed on the Nasdaq 100, during 193 days of trading in the period from May 21, 2012 to September 18, 2013. We identify observations featuring firm-specific spikes in Twitter activity, and randomly assign each observation to a ten-minute increment matching on the firm and a number of repeating time indicators. We examine the extent that unusual levels of chatter on Twitter about a firm portend an oncoming surge of trading of its stock within the hour, over and above what would normally be expected for the stock for that time of day and day of week. We also compare the findings from our explanatory model to the predictive power of Tweets. Although we find a compelling and potentially informative real-time relationship between Twitter activity and trading volume, our forecasting exercise highlights how difficult it can be to make use of this information for monetary gain. PMID:27504639

  19. Real-Time Diffusion of Information on Twitter and the Financial Markets

    PubMed Central

    Tafti, Ali; Zotti, Ryan; Jank, Wolfgang

    2016-01-01

    Do spikes in Twitter chatter about a firm precede unusual stock market trading activity for that firm? If so, Twitter activity may provide useful information about impending financial market activity in real-time. We study the real-time relationship between chatter on Twitter and the stock trading volume of 96 firms listed on the Nasdaq 100, during 193 days of trading in the period from May 21, 2012 to September 18, 2013. We identify observations featuring firm-specific spikes in Twitter activity, and randomly assign each observation to a ten-minute increment matching on the firm and a number of repeating time indicators. We examine the extent that unusual levels of chatter on Twitter about a firm portend an oncoming surge of trading of its stock within the hour, over and above what would normally be expected for the stock for that time of day and day of week. We also compare the findings from our explanatory model to the predictive power of Tweets. Although we find a compelling and potentially informative real-time relationship between Twitter activity and trading volume, our forecasting exercise highlights how difficult it can be to make use of this information for monetary gain. PMID:27504639

  20. Consensus of Euler-Lagrange systems networked by sampled-data information with probabilistic time delays.

    PubMed

    Ma, Chao; Shi, Peng; Zhao, Xudong; Zeng, Qingshuang

    2015-06-01

    This paper investigates the consensus problem of multiple Euler-Lagrange systems under directed topology. Unlike the common assumptions on continuous-time information exchanges, a more realistic sampled-data communication strategy is proposed with probabilistic occurrence of time-varying delays. Both the sampling period and the delays are assumed to be time-varying, which is more general in some practical situations. In addition, the relative coordinate derivative information is not required in the distributed controllers, so that the communication network burden can be further reduced. In particular, a distinct feature of the proposed scheme lies in the fact that it can effectively reduce the energy consumption. By employing stochastic analysis techniques, sufficient conditions are established to guarantee that consensus can be achieved. Finally, a numerical example is provided to illustrate the applicability and benefits of the theoretical results.

  1. Matrix-Assisted Laser Desorption Ionization–Time of Flight (MALDI-TOF) Mass Spectrometry Using the Vitek MS System for Rapid and Accurate Identification of Dermatophytes on Solid Cultures

    PubMed Central

    Monnin, Valérie; Girard, Victoria; Welker, Martin; Arsac, Maud; Cellière, Béatrice; Durand, Géraldine; Bosshard, Philipp P.; Farina, Claudio; Passera, Marco; Van Belkum, Alex; Petrini, Orlando; Tonolla, Mauro

    2014-01-01

    The objective of this research was to extend the Vitek MS fungal knowledge base version 2.0.0 to allow the robust identification of clinically relevant dermatophytes, using a variety of strains, incubation times, and growth conditions. First, we established a quick sample preparation method that yields reliable and reproducible identifications independently of the growth conditions. The Vitek MS V2.0.0 fungal knowledge base was then expanded using 134 well-characterized strains belonging to 17 species in the genera Epidermophyton, Microsporum, and Trichophyton. Cluster analysis based on mass spectrum similarity indicated good species discrimination independently of the culture conditions. We achieved a good separation of the subpopulations of the Trichophyton anamorph of Arthroderma benhamiae and of anthropophilic and zoophilic strains of Trichophyton interdigitale. Overall, the 1,130 mass spectra obtained for dermatophytes gave an estimated identification performance of 98.4%. The expanded fungal knowledge base was then validated using 131 clinical isolates of dermatophytes belonging to 13 taxa. For 8 taxa all strains were correctly identified, and for 3 the rate of successful identification was >90%; 75% (6/8) of the M. gypseum strains were correctly identified, whereas only 47% (18/38) of the African T. rubrum population (also called T. soudanense) were recognized accurately, with a large proportion of strains misidentified as T. violaceum, demonstrating the close relationship of these two taxa. The method of sample preparation was fast and efficient and the expanded Vitek MS fungal knowledge base reliable and robust, allowing reproducible dermatophyte identifications in the routine laboratory.

  2. Techniques for optimizing human-machine information transfer related to real-time interactive display systems

    NASA Technical Reports Server (NTRS)

    Granaas, Michael M.; Rhea, Donald C.

    1989-01-01

    In recent years, the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information have expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity of that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.

  3. On continuous-time two person full-information best choice problem with imperfect observation

    SciTech Connect

    Porosinski, Z.; Szajowski, K.

    1994-12-31

    A zero-sum game version of the continuous-time full-information best choice problem is considered. Two players sequentially observe a stream of iid random variables from a known continuous distribution, appearing according to some renewal process, with the object of choosing the largest one. The horizon of observation is a positive random variable independent of the observations. The observations of the random variables are imperfect: the players are informed only whether each observation is greater than or less than levels specified by them. The normal form of the game is derived. The Poisson horizon case is examined in detail.

  4. Information Gap Analysis: near real-time evaluation of disaster response

    NASA Astrophysics Data System (ADS)

    Girard, Trevor

    2014-05-01

    Disasters, such as major storm events or earthquakes, trigger an immediate response by the disaster management system of the nation in question. The quality of this response is a large factor in its ability to limit the impacts on the local population. Improving the quality of disaster response therefore reduces disaster impacts. Studying past disasters is a valuable exercise to understand what went wrong, identify measures which could have mitigated these issues, and make recommendations to improve future disaster planning and response. While such ex post evaluations can lead to improvements in the disaster management system, there are limitations. The main limitation that has influenced this research is that ex post evaluations do not have the ability to inform the disaster response being assessed for the obvious reason that they are carried out long after the response phase is over. The result is that lessons learned can only be applied to future disasters. In the field of humanitarian relief, this limitation has led to the development of real time evaluations. The key aspect of real time humanitarian evaluations is that they are completed while the operation is still underway. This results in findings being delivered at a time when they can still make a difference to the humanitarian response. Applying such an approach to the immediate disaster response phase requires an even shorter time-frame, as well as a shift in focus from international actors to the nation in question's government. As such, a pilot study was started and methodology developed, to analyze disaster response in near real-time. The analysis uses the information provided by the disaster management system within the first 0 - 5 days of the response. The data is collected from publicly available sources such as ReliefWeb and sorted under various categories which represent each aspect of disaster response. This process was carried out for 12 disasters. The quantity and timeliness of information

  5. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    SciTech Connect

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  6. Measuring information interactions on the ordinal pattern of stock time series.

    PubMed

    Zhao, Xiaojun; Shang, Pengjian; Wang, Jing

    2013-02-01

    The interactions among time series as individual components of complex systems can be quantified by measuring to what extent they exchange information among each other. In many applications, one focuses not on the original series but on its ordinal pattern. In such cases, trivial noise is more likely to be filtered out and the abrupt influence of extreme values is weakened. Cross-sample entropy and inner composition alignment have been introduced as prominent methods to estimate the information interactions of complex systems. In this paper, we modify both methods to detect the interactions among the ordinal patterns of stock return and volatility series, and we try to uncover the information exchanges across sectors in Chinese stock markets.
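
    Working on ordinal patterns amounts to replacing each window of m consecutive observations with the permutation that sorts it. The sketch below encodes two toy series this way and quantifies the information they share with a plain mutual-information estimate; the paper's modified cross-sample entropy and inner composition alignment measures are not reproduced here.

```python
# Ordinal-pattern encoding of two toy series plus a simple shared-information
# estimate over the resulting symbol sequences.
import numpy as np
from itertools import permutations

def ordinal_symbols(x, m=3):
    """Encode each length-m window of x by the index of its ordinal pattern."""
    index = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([index[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

def symbol_mutual_information(a, b):
    """Mutual information (bits) between two equal-length symbol sequences."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

rng = np.random.default_rng(2)
ret = rng.normal(size=1000)                       # toy return series
vol = np.abs(ret) + 0.1 * rng.normal(size=1000)   # toy volatility proxy
print("ordinal-pattern MI: %.3f bits" %
      symbol_mutual_information(ordinal_symbols(ret), ordinal_symbols(vol)))
```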

  7. Real Time Alert System: A Disease Management System Leveraging Health Information Exchange

    PubMed Central

    Anand, Vibha; Sheley, Meena E.; Xu, Shawn; Downs, Stephen M.

    2012-01-01

    Background Rates of preventive and disease management services can be improved by providing automated alerts and reminders to primary care providers (PCPs) using health information technology (HIT) tools. Methods: Using Adaptive Turnaround Documents (ATAD), an existing Health Information Exchange (HIE) infrastructure and office fax machines, we developed a Real Time Alert (RTA) system. RTA is a computerized decision support system (CDSS) that is able to deliver alerts to PCPs statewide for recommended services around the time of the patient visit. RTA is also able to capture structured clinical data from providers using existing fax technology. In this study, we evaluate RTA’s performance for alerting PCPs when their patients with asthma have an emergency room visit anywhere in the state. Results: Our results show that RTA was successfully able to deliver “just in time” patient-relevant alerts to PCPs across the state. Furthermore, of those ATADs faxed back and automatically interpreted by the RTA system, 35% reported finding the provided information helpful. The PCPs who reported finding information helpful also reported making a phone call, sending a letter or seeing the patient for follow-up care. Conclusions: We have successfully demonstrated the feasibility of electronically exchanging important patient-related information with the PCPs statewide. This is despite the lack of a link with their electronic health records. We have shown that using our ATAD technology, a PCP can be notified quickly of an important event such as a patient’s asthma-related emergency room admission so further follow-up can happen in near real time.

  8. Effect of the time window on the heat-conduction information filtering model

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Song, Wen-Jun; Hou, Lei; Zhang, Yi-Lu; Liu, Jian-Guo

    2014-05-01

    Recommendation systems have been proposed to uncover the potential tastes and preferences of users online; however, the effect of the time window on performance, which is critical for saving memory and reducing computational complexity, has not been examined. In this paper, by gradually expanding the time window, we investigate its impact on the heat-conduction information filtering model with ten similarity measures. The experimental results on the benchmark Netflix dataset indicate that by using only approximately 11.11% of the most recent rating records, the accuracy could be improved by an average of 33.16% and the diversity could be improved by 30.62%. In addition, the recommendation performance on the MovieLens dataset could be preserved by considering only approximately 10.91% of the most recent records. While improving recommendation performance, these findings have significant practical value, largely reducing the computational time and shortening the required data storage.
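
    The heat-conduction filtering step itself is a two-stage averaging over the user-item bipartite graph: scores "conduct" from the target user's items to users and back to items, with each receiving node averaging over its neighbors. The sketch below restricts this process to a recent time window on synthetic data; the data layout and the window rule are illustrative assumptions.

```python
# Heat-conduction scoring on a toy binary user-item matrix, with and without
# restricting the ratings to a recent time window.
import numpy as np

def heat_conduction_scores(R, user):
    """R: binary user-item matrix (1 = rated). Item scores for one user."""
    f = R[user].astype(float)                 # initial resource on items
    user_deg = np.maximum(R.sum(axis=1), 1)
    item_deg = np.maximum(R.sum(axis=0), 1)
    u = (R @ f) / user_deg                    # items -> users: average received
    return (R.T @ u) / item_deg               # users -> items: average received

rng = np.random.default_rng(0)
n_users, n_items = 200, 50
rated = (rng.random((n_users, n_items)) < 0.1).astype(int)     # toy ratings
stamps = rng.integers(0, 100, size=(n_users, n_items))         # toy rating times

recent = rated * (stamps >= 89)               # keep roughly the most recent 11%
print("top-5 items, full history :", np.argsort(-heat_conduction_scores(rated, 0))[:5])
print("top-5 items, recent window:", np.argsort(-heat_conduction_scores(recent, 0))[:5])
```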

  9. Dynamic behavior in two-route bus traffic system with real-time information

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    2014-11-01

    We study the dynamics of a two-route bus traffic system with two buses using a bus choice scenario. The two-route bus traffic system with real-time information does not coincide with the two-route vehicular traffic system, but it is similar to the vehicular case. The two-route bus traffic system is mimicked by a physical dynamic model, and its dynamics are described by a pair of coupled nonlinear maps. Each bus runs a neck-and-neck race with the other, and the motion of the two buses displays irregular behavior in a complex manner. We clarify the effect of real-time information on the bus traffic.

  10. On the implementation of a real-time information security architecture in frequency domain

    NASA Astrophysics Data System (ADS)

    Basu, Abhishek; Sarkar, Souvik; Sarkar, Subir Kumar

    2015-12-01

    This paper presents the real-time implementation of a watermarking-based information security architecture in the frequency domain. The scheme emphasises a human visual system (HVS)-supported watermarking approach using the wavelet-lifting technique. In addition to the HVS model, an image registration algorithm is introduced in order to increase the resiliency as well as the security of the recovered watermark estimate. The algorithmic steps, together with optimisation considerations for the real-time implementation on the TMS320CDSK6416/6713 fixed/floating point digital signal processor, are also presented.
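
    The wavelet-lifting ingredient can be illustrated with a one-level Haar lifting transform and a simple quantization-based embedding of watermark bits into approximation coefficients. The HVS weighting, the image registration step and the DSP implementation are not reproduced here; the quantization step size is an arbitrary stand-in for a perceptual strength.

```python
# One-level Haar lifting (predict/update) plus quantization-index embedding of
# a few watermark bits into the approximation coefficients.
import numpy as np

def haar_lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step: detail coefficients
    s = even + d / 2          # update step: running averages (approximation)
    return s, d

def haar_lift_inverse(s, d):
    even = s - d / 2
    odd = d + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

def embed(signal, bits, delta=8.0):
    s, d = haar_lift_forward(signal)
    for k, bit in enumerate(bits):               # shift s[k] inside its quantization cell
        q = np.round(s[k] / delta)
        s[k] = (q + (0.25 if bit else -0.25)) * delta
    return haar_lift_inverse(s, d)

def extract(signal, n_bits, delta=8.0):
    s, _ = haar_lift_forward(signal)
    return [(s[k] / delta - np.round(s[k] / delta)) > 0 for k in range(n_bits)]

host = np.random.default_rng(3).integers(0, 256, 64)   # toy 1-D host signal
marked = embed(host, [1, 0, 1, 1])
print("recovered bits:", extract(marked, 4))
print("max distortion:", np.abs(marked - host).max())
```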

  11. Time dependent Schrödinger equation for black hole evaporation: No information loss

    SciTech Connect

    Corda, Christian

    2015-02-15

    In 1976 S. Hawking claimed that “Because part of the information about the state of the system is lost down the hole, the final situation is represented by a density matrix rather than a pure quantum state”. This was the starting point of the popular “black hole (BH) information paradox”. In a series of papers, together with collaborators, we naturally interpreted BH quasi-normal modes (QNMs) in terms of quantum levels, discussing a model of an excited BH somewhat similar to the historical semi-classical Bohr model of the structure of the hydrogen atom. Here we explicitly write down, for the same model, a time dependent Schrödinger equation for the system composed of Hawking radiation and BH QNMs. The physical state and the corresponding wave function are written in terms of a unitary evolution matrix instead of a density matrix. Thus, the final state turns out to be a pure quantum state instead of a mixed one. Hence, Hawking’s claim is falsified, because BHs turn out to be well defined quantum mechanical systems, having ordered, discrete quantum spectra, which respect ’t Hooft’s assumption that Schrödinger equations can be used universally for all dynamics in the universe. As a consequence, information comes out in BH evaporation in terms of pure states in a unitary time dependent evolution. In Section 4 of this paper we show that the present approach also permits solving the entanglement problem connected with the information paradox.

  12. TIME Impact - a new user-friendly tuberculosis (TB) model to inform TB policy decisions.

    PubMed

    Houben, R M G J; Lalli, M; Sumner, T; Hamilton, M; Pedrazzoli, D; Bonsu, F; Hippner, P; Pillay, Y; Kimerling, M; Ahmedov, S; Pretorius, C; White, R G

    2016-01-01

    Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, predominantly affecting low- and middle-income countries (LMICs), where resources are limited. As such, countries need to be able to choose the most efficient interventions for their respective setting. Mathematical models can be valuable tools to inform rational policy decisions and improve resource allocation, but are often unavailable or inaccessible for LMICs, particularly in TB. We developed TIME Impact, a user-friendly TB model that enables local capacity building and strengthens country-specific policy discussions to inform support funding applications at the (sub-)national level (e.g. Ministry of Finance) or to international donors (e.g. the Global Fund to Fight AIDS, Tuberculosis and Malaria).TIME Impact is an epidemiological transmission model nested in TIME, a set of TB modelling tools available for free download within the widely-used Spectrum software. The TIME Impact model reflects key aspects of the natural history of TB, with additional structure for HIV/ART, drug resistance, treatment history and age. TIME Impact enables national TB programmes (NTPs) and other TB policymakers to better understand their own TB epidemic, plan their response, apply for funding and evaluate the implementation of the response.The explicit aim of TIME Impact's user-friendly interface is to enable training of local and international TB experts towards independent use. During application of TIME Impact, close involvement of the NTPs and other local partners also builds critical understanding of the modelling methods, assumptions and limitations inherent to modelling. This is essential to generate broad country-level ownership of the modelling data inputs and results. In turn, it stimulates discussions and a review of the current evidence and assumptions, strengthening the decision-making process in general.TIME Impact has been effectively applied in a variety of settings. In South Africa, it

  13. TIME Impact - a new user-friendly tuberculosis (TB) model to inform TB policy decisions.

    PubMed

    Houben, R M G J; Lalli, M; Sumner, T; Hamilton, M; Pedrazzoli, D; Bonsu, F; Hippner, P; Pillay, Y; Kimerling, M; Ahmedov, S; Pretorius, C; White, R G

    2016-03-24

    Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, predominantly affecting low- and middle-income countries (LMICs), where resources are limited. As such, countries need to be able to choose the most efficient interventions for their respective setting. Mathematical models can be valuable tools to inform rational policy decisions and improve resource allocation, but are often unavailable or inaccessible for LMICs, particularly in TB. We developed TIME Impact, a user-friendly TB model that enables local capacity building and strengthens country-specific policy discussions to inform support funding applications at the (sub-)national level (e.g. Ministry of Finance) or to international donors (e.g. the Global Fund to Fight AIDS, Tuberculosis and Malaria).TIME Impact is an epidemiological transmission model nested in TIME, a set of TB modelling tools available for free download within the widely-used Spectrum software. The TIME Impact model reflects key aspects of the natural history of TB, with additional structure for HIV/ART, drug resistance, treatment history and age. TIME Impact enables national TB programmes (NTPs) and other TB policymakers to better understand their own TB epidemic, plan their response, apply for funding and evaluate the implementation of the response.The explicit aim of TIME Impact's user-friendly interface is to enable training of local and international TB experts towards independent use. During application of TIME Impact, close involvement of the NTPs and other local partners also builds critical understanding of the modelling methods, assumptions and limitations inherent to modelling. This is essential to generate broad country-level ownership of the modelling data inputs and results. In turn, it stimulates discussions and a review of the current evidence and assumptions, strengthening the decision-making process in general.TIME Impact has been effectively applied in a variety of settings. In South Africa, it

  14. Evaluation of the occurrence and biodegradation of parabens and halogenated by-products in wastewater by accurate-mass liquid chromatography-quadrupole-time-of-flight-mass spectrometry (LC-QTOF-MS).

    PubMed

    González-Mariño, Iria; Quintana, José Benito; Rodríguez, Isaac; Cela, Rafael

    2011-12-15

    An assessment of the sewage occurrence and biodegradability of seven parabens and three halogenated derivatives of methyl paraben (MeP) is presented. Several wastewater samples were collected at three different wastewater treatment plants (WWTPs) during April and May 2010, concentrated by solid-phase extraction (SPE) and analysed by liquid chromatography-electrospray-quadrupole-time-of-flight mass spectrometry (LC-QTOF-MS). The performance of the QTOF system proved to be comparable to triple-quadrupole instruments in terms of quantitative capabilities, with good linearity (R(2) > 0.99 in the 5-500 ng mL(-1) range), repeatability (RSD < 5.6%) and LODs (0.3-4.0 ng L(-1) after SPE). MeP and n-propyl paraben (n-PrP) were the most frequently detected and the most abundant analytes in raw wastewater (0.3-10 μg L(-1)), in accordance with the data displayed in the bibliography and reflecting their wider use in cosmetic formulations. Samples were also evaluated in search for potential halogenated by-products of parabens, formed as a result of their reaction with residual chlorine contained in tap water. Monochloro- and dichloro-methyl paraben (ClMeP and Cl(2)MeP) were found and quantified in raw wastewater at levels between 0.01 and 0.1 μg L(-1). Halogenated derivatives of n-PrP could not be quantified due to the lack of standards; nevertheless, the monochlorinated species (ClPrP) was identified in several samples from its accurate precursor and product ions mass/charge ratios (m/z). Removal efficiencies of parabens and MeP chlorinated by-products in WWTPs exceeded 90%, with the lowest percentages corresponding to the latter species. This trend was confirmed by an activated sludge biodegradation batch test, where non-halogenated parabens had half-lives lower than 4 days, whereas halogenated derivatives of MeP turned out to be more persistent, with up to 10 days of half-life in the case of dihalogenated derivatives. A further stability test performed with raw wastewater

  15. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  16. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  17. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  18. "Right Time, Right Place" Health Communication on Twitter: Value and Accuracy of Location Information

    PubMed Central

    Burton, Scott H; Tanner, Kesler W; West, Joshua H; Barnes, Michael D

    2012-01-01

    Background Twitter provides various types of location data, including exact Global Positioning System (GPS) coordinates, which could be used for infoveillance and infodemiology (ie, the study and monitoring of online health information), health communication, and interventions. Despite its potential, Twitter location information is not well understood or well documented, limiting its public health utility. Objective The objective of this study was to document and describe the various types of location information available in Twitter. The different types of location data that can be ascertained from Twitter users are described. This information is key to informing future research on the availability, usability, and limitations of such location data. Methods Location data was gathered directly from Twitter using its application programming interface (API). The maximum tweets allowed by Twitter were gathered (1% of the total tweets) over 2 separate weeks in October and November 2011. The final dataset consisted of 23.8 million tweets from 9.5 million unique users. Frequencies for each of the location options were calculated to determine the prevalence of the various location data options by region of the world, time zone, and state within the United States. Data from the US Census Bureau were also compiled to determine population proportions in each state, and Pearson correlation coefficients were used to compare each state’s population with the number of Twitter users who enable the GPS location option. Results The GPS location data could be ascertained for 2.02% of tweets and 2.70% of unique users. Using a simple text-matching approach, 17.13% of user profiles in the 4 continental US time zones were able to be used to determine the user’s city and state. Agreement between GPS data and data from the text-matching approach was high (87.69%). Furthermore, there was a significant correlation between the number of Twitter users per state and the 2010 US Census state
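
    The state-level comparison reduces to correlating per-state counts of GPS-enabled Twitter users with census population counts; a minimal sketch is shown below. The numbers are synthetic stand-ins; real inputs would come from the Twitter API and US Census tables.

```python
# Pearson correlation between toy per-state GPS-enabled user counts and
# toy state populations.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
census_pop = rng.integers(600_000, 38_000_000, size=50)     # toy state populations
gps_share = rng.normal(2.7e-4, 5e-5, size=50)               # toy per-capita share
gps_users = (census_pop * gps_share).astype(int)

r, p = pearsonr(gps_users, census_pop)
print(f"Pearson r = {r:.2f} (p = {p:.2g}) across {len(census_pop)} states")
```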

  19. End-User Applications of Real-Time Earthquake Information in Europe

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Gasparini, P.; Giardini, D.; Zschau, J.; Filangieri, A. R.; Reakt Wp7 Team

    2011-12-01

    The primary objective of European FP7 project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction) is to improve the efficiency of real-time earthquake risk mitigation methods and their capability of protecting structures, infrastructures, and populations. REAKT aims to address the issues of real-time earthquake hazard and response from end-to-end, with efforts directed along the full spectrum of methodology development in earthquake forecasting, earthquake early warning, and real-time vulnerability systems, through optimal decision-making, and engagement and cooperation of scientists and end users for the establishment of best practices for use of real-time information. Twelve strategic test cases/end users throughout Europe have been selected. This diverse group of applications/end users includes civil protection authorities, railway systems, hospitals, schools, industrial complexes, nuclear plants, lifeline systems, national seismic networks, and critical structures. The scale of target applications covers a wide range, from two school complexes in Naples, to individual critical structures, such as the Rion Antirion bridge in Patras, and the Fatih Sultan Mehmet bridge in Istanbul, to large complexes, such as the SINES industrial complex in Portugal and the Thessaloniki port area, to distributed lifeline and transportation networks and nuclear plants. Some end-users are interested in in-depth feasibility studies for use of real-time information and development of rapid response plans, while others intend to install real-time instrumentation and develop customized automated control systems. From the onset, REAKT scientists and end-users will work together on concept development and initial implementation efforts using the data products and decision-making methodologies developed with the goal of improving end-user risk mitigation. The aim of this scientific/end-user partnership is to ensure that scientific efforts are applicable to operational

  20. Information mining over heterogeneous and high-dimensional time-series data in clinical trials databases.

    PubMed

    Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C

    2006-04-01

    An effective analysis of clinical trials data involves analyzing different types of data such as heterogeneous and high dimensional time series data. The current time series analysis methods generally assume that the series at hand have sufficient length to apply statistical techniques to them. Other ideal case assumptions are that data are collected in equal length intervals, and while comparing time series, the lengths are usually expected to be equal to each other. However, these assumptions are not valid for many real data sets, especially for the clinical trials data sets. In addition, the data sources are different from each other, the data are heterogeneous, and the sensitivity of the experiments varies by the source. Approaches for mining time series data need to be revisited, keeping the wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels; in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
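
    One hedged way to illustrate the general problem of comparing unequal-length, irregularly sampled analyte series is to resample them onto a common grid and cluster them with a correlation-based distance. This is a deliberate simplification, not the paper's algorithm, and the series below are synthetic.

      # Hedged sketch: cluster unequal-length series after resampling to a shared grid.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(0)
      grid = np.linspace(0.0, 1.0, 50)

      def make_series(trend, n_points):
          """Synthetic irregularly sampled series following a given trend."""
          t = np.sort(rng.uniform(0.0, 1.0, n_points))
          return t, trend(t) + 0.05 * rng.standard_normal(n_points)

      raw = {
          "analyte_A": make_series(np.sin, 25),
          "analyte_B": make_series(np.sin, 40),           # same trend, different length
          "analyte_C": make_series(lambda t: 1.0 - t, 30),  # different trend
      }

      # Resample every series onto the shared grid so lengths no longer matter.
      matrix = np.vstack([np.interp(grid, t, v) for t, v in raw.values()])

      # Correlation distance: small when two analytes move together over time.
      dist = pdist(matrix, metric="correlation")
      labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
      print({name: int(lab) for name, lab in zip(raw, labels)})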

  1. Value of information of repair times for offshore wind farm maintenance planning

    NASA Astrophysics Data System (ADS)

    Seyr, Helene; Muskulus, Michael

    2016-09-01

    A large contribution to the total cost of energy in offshore wind farms is due to maintenance costs. In recent years, research has therefore focused on lowering maintenance costs using different approaches. Decision support models for scheduling the maintenance exist already, dealing with different factors influencing the scheduling. Our contribution deals with the uncertainty in the repair times. Given the mean repair times for different turbine components, we make some assumptions regarding the underlying repair time distribution. We compare the results of a decision support model for the mean times to repair with those for the repair time distributions. Additionally, distributions with the same mean but different variances are compared under the same conditions. The value of lowering the uncertainty in the repair time is calculated, and we find that using distributions significantly decreases the availability when scheduling maintenance for multiple turbines in a wind park. Having detailed information about the repair time distribution may influence the results of maintenance modeling and might help identify cost factors.
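
    A hedged Monte Carlo sketch of the comparison described above: repair times taken as a fixed mean versus drawn from distributions sharing that mean but differing in variance. The failure count, mean repair time, and lognormal form are illustrative assumptions, not values from the study.

      # Hedged sketch: same mean repair time, different variances, different tails.
      import numpy as np

      rng = np.random.default_rng(42)
      mean_repair = 24.0            # hypothetical mean repair time (hours)
      n_failures = 10_000

      def lognormal_with_mean(mean, sigma):
          """Lognormal samples whose expectation equals `mean`."""
          mu = np.log(mean) - 0.5 * sigma**2
          return rng.lognormal(mu, sigma, n_failures)

      scenarios = {
          "deterministic mean": np.full(n_failures, mean_repair),
          "lognormal, low variance":  lognormal_with_mean(mean_repair, 0.3),
          "lognormal, high variance": lognormal_with_mean(mean_repair, 1.0),
      }
      for name, repairs in scenarios.items():
          # Summaries of the simulated repair-time draws for each scenario.
          print(f"{name:>26}: mean = {repairs.mean():6.1f} h, "
                f"95th pct = {np.percentile(repairs, 95):7.1f} h")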

  2. Representing Response-Time Information in Item Banks. Law School Admission Council Computerized Testing Report. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Schnipke, Deborah L.; Scrams, David J.

    The availability of item response times made possible by computerized testing represents an entirely new type of information about test items. This study explores the issue of how to represent response-time information in item banks. Empirical response-time distribution functions can be fit with statistical distribution functions with known…
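
    As a hedged illustration of storing a fitted distribution rather than raw response times, the sketch below fits a lognormal to synthetic response times; the distributional choice and every number are assumptions, not taken from the report.

      # Hedged sketch: summarize an item's response times by a fitted lognormal.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      observed_rt = rng.lognormal(mean=3.4, sigma=0.5, size=500)   # synthetic times (s)

      shape, loc, scale = stats.lognorm.fit(observed_rt, floc=0)
      print(f"fitted lognormal: sigma = {shape:.2f}, median = {scale:.1f} s")
      # An item bank could store just (sigma, median) instead of 500 raw times.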

  3. Will spin-relaxation times in molecular magnets permit quantum information processing?

    NASA Astrophysics Data System (ADS)

    Ardavan, Arzhang

    2007-03-01

    Certain computational tasks can be efficiently implemented using quantum logic, in which the information-carrying elements are permitted to exist in quantum superpositions. To achieve this in practice, a physical system that is suitable for embodying quantum bits (qubits) must be identified. Some proposed scenarios employ electron spins in the solid state, for example phosphorus donors in silicon, quantum dots, heterostructures and endohedral fullerenes, motivated by the long electron-spin relaxation times exhibited by these systems. An alternative electron-spin based proposal exploits the large number of quantum states and the non-degenerate transitions available in high spin molecular magnets. Although these advantages have stimulated vigorous research in molecular magnets, the key question of whether the intrinsic spin relaxation times are long enough has hitherto remained unaddressed. Using X-band pulsed electron spin resonance, we measure the intrinsic spin-lattice (T1) and phase coherence (T2) relaxation times in molecular nanomagnets for the first time. In Cr7M heterometallic wheels, with M = Ni and Mn, phase coherence relaxation is dominated by the coupling of the electron spin to protons within the molecule. In deuterated samples T2 reaches 3 μs at low temperatures, which is several orders of magnitude longer than the duration of spin manipulations, satisfying a prerequisite for the deployment of molecular nanomagnets in quantum information applications.

  4. Measuring time-varying information flow in scalp EEG signals: orthogonalized partial directed coherence.

    PubMed

    Omidvarnia, Amir; Azemi, Ghasem; Boashash, Boualem; O'Toole, John M; Colditz, Paul B; Vanhatalo, Sampsa

    2014-03-01

    This study aimed to develop a time-frequency method for measuring directional interactions over time and frequency from scalp-recorded electroencephalographic (EEG) signals in a way that is less affected by volume conduction and amplitude scaling. We modified the time-varying generalized partial directed coherence (tv-gPDC) method, by orthogonalization of the strictly causal multivariate autoregressive model coefficients, to minimize the effect of mutual sources. The novel measure, generalized orthogonalized PDC (gOPDC), was tested first using two simulated models with feature dimensions relevant to EEG activities. We then used the method for assessing event-related directional information flow from flash-evoked responses in neonatal EEG. For testing statistical significance of the findings, we followed a thresholding procedure driven by baseline periods in the same EEG activity. The results suggest that the gOPDC method 1) is able to remove common components akin to volume conduction effect in the scalp EEG, 2) handles the potential challenge with different amplitude scaling within multichannel signals, and 3) can detect directed information flow within a subsecond time scale in nonstationary multichannel EEG datasets. This method holds promise for estimating directed interactions between scalp EEG channels that are commonly affected by the confounding impact of mutual cortical sources.

  5. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model imprecision in addition to modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiment required to model them.

  6. Using Concurrent Cardiovascular Information to Augment Survival Time Data from Orthostatic Tilt Tests

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Fiedler, James; Lee, Stuart M. M.; Westby, Christian M.; Stenger, Michael B.; Platts, Steven H.

    2014-01-01

    Orthostatic Intolerance (OI) is the propensity to develop symptoms of fainting during upright standing. OI is associated with changes in heart rate, blood pressure, and other measures of cardiac function. Problem: NASA astronauts have shown increased susceptibility to OI on return from space missions. Current methods for counteracting OI in astronauts include fluid loading and the use of compression garments. The spread of the multivariate cardiovascular trajectory grows as OI increases, and pairwise comparisons at the same time point within subjects allow incorporation of pass/fail outcomes. Path length, convex hull area, and the determinant of the covariance matrix perform well as statistics summarizing this spread. Open issues include missing data, the need for many more time points per orthostatic tilt test (OTT) session for time-series analysis, the treatment of trends, and how to incorporate survival information.
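
    A minimal sketch of the three spread statistics named above (path length, convex hull area, and the covariance determinant), computed for a synthetic two-variable trajectory standing in for, e.g., heart rate versus blood pressure during a tilt test; the data are invented.

      # Hedged sketch: three summary statistics of trajectory spread.
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(7)
      traj = np.cumsum(rng.standard_normal((60, 2)), axis=0)   # synthetic 2-D trajectory

      path_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
      hull_area = ConvexHull(traj).volume          # in 2-D, .volume is the enclosed area
      cov_det = np.linalg.det(np.cov(traj, rowvar=False))

      print(f"path length = {path_length:.1f}, hull area = {hull_area:.1f}, |cov| = {cov_det:.2f}")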

  7. Geographic information systems for real-time environmental sensing at multiple scales

    NASA Astrophysics Data System (ADS)

    Esswein, Samuel Thomas

    The purpose of this investigation was to design, implement, and apply a real-time geographic information system for data intensive water resource research and management. The research presented is part of an ongoing, interdisciplinary research program supporting the development of the Intelligent River ® observation instrument. The objectives of this research were to 1) design and describe software architecture for a streaming environmental sensing information system, 2) implement and evaluate the proposed information system, and 3) apply the information system for monitoring, analysis, and visualization of an urban stormwater improvement project located in the City of Aiken, South Carolina, USA. This research contributes to the fields of software architecture and urban ecohydrology. The first contribution is a formal architectural description of a streaming environmental sensing information system. This research demonstrates the operation of the information system and provides a reference point for future software implementations. Contributions to urban ecohydrology are in three areas. First, a characterization of soil properties for the study region of the City of Aiken, SC is provided. The analysis includes an evaluation of spatial structure for soil hydrologic properties. Findings indicate no detectable structure at the scales explored during the study. The second contribution to ecohydrology comes from a long-term, continuous monitoring program for bioinfiltration basin structures located in the study area. Results include an analysis of soil moisture dynamics based on data collected at multiple depths with high spatial and temporal resolution. A novel metric is introduced to evaluate the long-term performance of bioinfiltration basin structures based on soil moisture observation data. Findings indicate a decrease in basin performance over time for the monitored sites. The third contribution to the field of ecohydrology is the development and application of a

  8. Time-resolved X-ray PIV measurements of hemodynamic information of real pulsatile blood flows

    NASA Astrophysics Data System (ADS)

    Park, Hanwook; Yeom, Eunseop; Lee, Sang Joon

    2015-11-01

    X-ray imaging techniques have been used to visualize various bio-fluid flow phenomena in a nondestructive manner. To obtain hemodynamic information related to circulatory vascular diseases, a time-resolved X-ray PIV technique with high temporal resolution was developed. In this study, to embody actual pulsatile blood flows in a circular conduit without changes in hemorheological properties, a bypass loop is established by connecting a microtube between the jugular vein and femoral artery of a rat. Biocompatible CO2 microbubbles are used as tracer particles. After mixing with whole blood, CO2 microbubbles are injected into the bypass loop. Particle images of the pulsatile blood flows in the bypass loop are consecutively captured by the time-resolved X-ray PIV system. The velocity field information is obtained with varying flow rate and pulsatility. To verify the feasibility of the use of CO2 microbubbles under in vivo conditions, the effects of the surrounding tissues are also investigated, because these effects are crucial for deteriorating the image contrast of CO2 microbubbles. Therefore, the velocity information of blood flows in the abdominal aorta is obtained to demonstrate the visibility and usefulness of CO2 microbubbles under ex vivo conditions. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2008-0061991).

  9. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of the aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences provided crucial structural information. The potential of combined positive and negative ionization modes, of full scan and fragmentation acquisition modes (TOF/TOF and HCD) by HRMS, and of high resolution and accurate mass was investigated in order to allow unequivocal determination of MCs. In addition, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore covered all MC analogs concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and a screening of the samples for other potential toxins and related peptides, such as anabaenopeptins, was performed. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs. PMID:26141269

  10. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of the aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences provided crucial structural information. The potential of combined positive and negative ionization modes, of full scan and fragmentation acquisition modes (TOF/TOF and HCD) by HRMS, and of high resolution and accurate mass was investigated in order to allow unequivocal determination of MCs. In addition, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore covered all MC analogs concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and a screening of the samples for other potential toxins and related peptides, such as anabaenopeptins, was performed. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs.

  11. Comprehensive temporal information detection from clinical text: medical events, time, and TLINK identification

    PubMed Central

    Sohn, Sunghwan; Wagholikar, Kavishwar B; Li, Dingcheng; Jonnalagadda, Siddhartha R; Tao, Cui; Komandur Elayavilli, Ravikumar; Liu, Hongfang

    2013-01-01

    Background Temporal information detection systems have been developed by the Mayo Clinic for the 2012 i2b2 Natural Language Processing Challenge. Objective To construct automated systems for EVENT/TIMEX3 extraction and temporal link (TLINK) identification from clinical text. Materials and methods The i2b2 organizers provided 190 annotated discharge summaries as the training set and 120 discharge summaries as the test set. Our Event system used a conditional random field classifier with a variety of features including lexical information, natural language elements, and medical ontology. The TIMEX3 system employed a rule-based method using regular expression pattern match and systematic reasoning to determine normalized values. The TLINK system employed both rule-based reasoning and machine learning. All three systems were built in an Apache Unstructured Information Management Architecture framework. Results Our TIMEX3 system performed the best (F-measure of 0.900, value accuracy 0.731) among the challenge teams. The Event system produced an F-measure of 0.870, and the TLINK system an F-measure of 0.537. Conclusions Our TIMEX3 system demonstrated good capability of regular expression rules to extract and normalize time information. Event and TLINK machine learning systems required well-defined feature sets to perform well. We could also leverage expert knowledge as part of the machine learning features to further improve TLINK identification performance. PMID:23558168
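
    A hedged sketch of rule-based TIMEX3-style extraction in the spirit described above: a couple of regular expressions find date expressions in clinical text and normalize them to ISO dates. The patterns are simplified illustrations, not the rules of the Mayo system.

      # Hedged sketch: regex-based date extraction and ISO normalization.
      import re
      from datetime import datetime

      PATTERNS = [
          (re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b"), "%m/%d/%Y"),
          (re.compile(r"\b([A-Z][a-z]+ \d{1,2}, \d{4})\b"), "%B %d, %Y"),
      ]

      def extract_timex(text):
          """Return (surface form, normalized ISO date) pairs found in `text`."""
          found = []
          for pattern, fmt in PATTERNS:
              for match in pattern.finditer(text):
                  surface = match.group(0)
                  found.append((surface, datetime.strptime(surface, fmt).date().isoformat()))
          return found

      note = "Admitted 03/02/2012 with chest pain; discharged on March 5, 2012."
      print(extract_timex(note))
      # [('03/02/2012', '2012-03-02'), ('March 5, 2012', '2012-03-05')]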

  12. Whisper: Tracing the Spatiotemporal Process of Information Diffusion in Real Time.

    PubMed

    Cao, Nan; Lin, Yu-Ru; Sun, Xiaohua; Lazer, D; Liu, Shixia; Qu, Huamin

    2012-12-01

    When and where is an idea dispersed? Social media, like Twitter, has been increasingly used for exchanging information, opinions and emotions about events that are happening across the world. Here we propose a novel visualization design, "Whisper", for tracing the process of information diffusion in social media in real time. Our design highlights three major characteristics of diffusion processes in social media: the temporal trend, social-spatial extent, and community response of a topic of interest. Such social, spatiotemporal processes are conveyed based on a sunflower metaphor whose seeds are often dispersed far away. In Whisper, we summarize the collective responses of communities on a given topic based on how tweets were retweeted by groups of users, through representing the sentiments extracted from the tweets, and tracing the pathways of retweets on a spatial hierarchical layout. We use an efficient flux line-drawing algorithm to trace multiple pathways so the temporal and spatial patterns can be identified even for a bursty event. A focused diffusion series highlights key roles such as opinion leaders in the diffusion process. We demonstrate how our design facilitates the understanding of when and where a piece of information is dispersed and what are the social responses of the crowd, for large-scale events including political campaigns and natural disasters. Initial feedback from domain experts suggests promising use for today's information consumption and dispersion in the wild.

  13. Timing matters! The neural signature of intuitive judgments differs according to the way information is presented.

    PubMed

    Horr, Ninja K; Braun, Christoph; Zander, Thea; Volz, Kirsten G

    2015-12-15

    One can conceive of intuition as the preliminary perception of coherence. Since this requires holistic perception, it is hypothesized that the underlying processing strategies depend on the possibility of obtaining all relevant information at once. The present study used magnetoencephalography (MEG) to investigate neural mechanisms underlying intuitive coherence perception when semantic concepts are presented all together (simultaneously) or one after the other (sequentially). With simultaneous presentation, absolute activation increases in the left OFC when participants recognize coherence. With sequential presentation, activation increases in the right OFC when participants conclude that there is no common associate between the words presented. Behavioral performance was similar in the two experiments. These results demonstrate that the way information is revealed over time changes the processing of intuitive coherence perception. We propose that such changes must be taken into account to disentangle the neural and behavioral mechanisms underlying different accounts of intuition and related phenomena. PMID:26529680

  14. Real time information from bedside monitors as part of a web-based patient record.

    PubMed

    Tachinardi, U; de Sà Rebelo, M; de Magalhães Oliveira, P P; Pilon, P E

    2001-01-01

    Traditional paper-based Medical Records, and even most of their digital counterparts, represent historical patient information. On the other hand, new generations of Point-of-Care devices can be connected to standard networks and deliver streams of real time data through an Intranet, or even the Internet. Vital signs provided by IP-based devices can then be viewed at remote stations. Merging both worlds, real time and historical, in the pursuit of a comprehensive EPR is the main challenge of the present project. The basic infrastructure is composed of three main components: an existing Web-based EPR viewing station1 (Web-EPR); a fully integrated HIS/PACS system1; and a monitoring network (Siemens Infinity Network 2). Communication between the components was obtained by developing interfaces based on both HL7 and Siemens protocols, the latter used only for waveforms. For the graphical display, a web-browser-based application for the streamed signals was developed and integrated into the existing Web-EPR. This addition expanded the Web-EPR capabilities, providing means to include real time signals and calculated parameters in the set of information already available. Some extra features of this project include one-way SMS messaging of the parameters, interactive WAP access, and DICOM compliant storage of signal waveforms.

  15. Brain response during the M170 time interval is sensitive to socially relevant information.

    PubMed

    Arviv, Oshrit; Goldstein, Abraham; Weeting, Janine C; Becker, Eni S; Lange, Wolf-Gero; Gilboa-Schechtman, Eva

    2015-11-01

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event related field component of MEG recording, like its EEG counterpart N170, was repeatedly shown to be associated with structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that the activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRC connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cues processing may assist in understanding the functioning of the social rank biobehavioral system.

  16. Time-Delayed Mutual Information of the Phase as a Measure of Functional Connectivity

    PubMed Central

    Wilmer, Andreas; de Lussanet, Marc; Lappe, Markus

    2012-01-01

    We propose a time-delayed mutual information of the phase for detecting nonlinear synchronization in electrophysiological data such as MEG. Palus already introduced the mutual information as a measure of synchronization [1]. To obtain estimates on small data-sets as reliably as possible, we adopt the numerical implementation as proposed by Kraskov and colleagues [2]. An embedding with a parametric time-delay allows a reconstruction of arbitrary nonstationary connective structures – so-called connectivity patterns – in a wide class of systems such as coupled oscillatory or even purely stochastic driven processes [3]. By using this method we do not need to make any assumptions about coupling directions, delay times, temporal dynamics, nonlinearities or underlying mechanisms. For verifying and refining the methods we generate synthetic data-sets by a mutual amplitude coupled network of Rössler oscillators with an a-priori known connective structure. This network is modified in such a way, that the power-spectrum forms a power law, which is also observed in electrophysiological recordings. The functional connectivity measure is tested on robustness to additive uncorrelated noise and in discrimination of linear mixed input data. For the latter issue a suitable de-correlation technique is applied. Furthermore, the compatibility to inverse methods for a source reconstruction in MEG such as beamforming techniques is controlled by dedicated dipole simulations. Finally, the method is applied on an experimental MEG recording. PMID:23028571
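
    A hedged sketch of a time-delayed mutual information of the phase, using a simple histogram estimator instead of the Kraskov k-nearest-neighbour estimator adopted in the paper; the coupled narrowband signals are synthetic.

      # Hedged sketch: MI between instantaneous phases of two signals at several lags.
      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      def phase_mi(x, y, lag, bins=16):
          """Histogram estimate of MI (nats) between phase(x[t]) and phase(y[t + lag])."""
          px = np.angle(hilbert(x))[: len(x) - lag]
          py = np.angle(hilbert(y))[lag:]
          joint, _, _ = np.histogram2d(px, py, bins=bins)
          pxy = joint / joint.sum()
          p1, p2 = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(p1, p2)[nz])))

      rng = np.random.default_rng(3)
      fs = 200                                             # Hz, illustrative
      b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
      drive = filtfilt(b, a, rng.standard_normal(6000))    # narrowband "source" activity
      x = drive + 0.05 * rng.standard_normal(6000)
      y = np.roll(drive, 40) + 0.05 * rng.standard_normal(6000)  # x leads y by 40 samples

      for lag in (0, 20, 40, 80):
          print(f"lag = {lag:2d} samples  phase MI = {phase_mi(x, y, lag):.3f} nats")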

  17. Analysing the information flow between financial time series. An improved estimator for transfer entropy

    NASA Astrophysics Data System (ADS)

    Marschinski, R.; Kantz, H.

    2002-11-01

    Following the recently introduced concept of transfer entropy, we attempt to measure the information flow between two financial time series, the Dow Jones and DAX stock indices. Being based on Shannon entropies, this model-free approach in principle allows us to detect statistical dependencies of all types, i.e. linear and nonlinear temporal correlations. However, when available data is limited and the expected effect is rather small, a straightforward implementation suffers badly from misestimation due to finite sample effects, making it basically impossible to assess the significance of the obtained values. We therefore introduce a modified estimator, called effective transfer entropy, which leads to improved results in such conditions. In the application, we then manage to confirm an information transfer on a time scale of one minute between the two financial time series. The different economic impact of the two indices is also recovered from the data. Numerical results are then interpreted, on the one hand, as the capability of one index to explain future observations of the other and, on the other hand, in terms of coupling strengths in the framework of a bivariate autoregressive stochastic model. Evidence is given for a nonlinear character of the coupling between the Dow Jones and DAX.
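
    A hedged sketch of a binned transfer entropy together with the shuffling correction that defines an effective transfer entropy; the coupling, bin count, and series below are invented stand-ins rather than market data.

      # Hedged sketch: binned transfer entropy with a shuffle-based baseline.
      import numpy as np

      def transfer_entropy(x, y, bins=3):
          """TE(y -> x): extra predictability of x[t+1] from y[t], beyond x[t]."""
          dx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
          dy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
          trip = np.stack([dx[1:], dx[:-1], dy[:-1]], axis=1)
          p_xyz, _ = np.histogramdd(trip, bins=(bins, bins, bins))
          p_xyz /= p_xyz.sum()
          p_xz = p_xyz.sum(axis=2, keepdims=True)      # p(x_{t+1}, x_t)
          p_yz = p_xyz.sum(axis=0, keepdims=True)      # p(x_t, y_t)
          p_z = p_xyz.sum(axis=(0, 2), keepdims=True)  # p(x_t)
          nz = p_xyz > 0
          return float(np.sum(p_xyz[nz] * np.log2((p_xyz * p_z)[nz] / (p_xz * p_yz)[nz])))

      rng = np.random.default_rng(5)
      y = rng.standard_normal(20_000)
      x = 0.4 * np.roll(y, 1) + rng.standard_normal(20_000)   # y drives x with lag 1

      raw = transfer_entropy(x, y)
      shuffled = transfer_entropy(x, rng.permutation(y))      # finite-sample baseline
      print(f"TE(y->x) = {raw:.4f} bits, effective TE = {raw - shuffled:.4f} bits")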

  18. Brain response during the M170 time interval is sensitive to socially relevant information.

    PubMed

    Arviv, Oshrit; Goldstein, Abraham; Weeting, Janine C; Becker, Eni S; Lange, Wolf-Gero; Gilboa-Schechtman, Eva

    2015-11-01

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event related field component of MEG recording, like its EEG counterpart N170, was repeatedly shown to be associated with structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that the activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRC connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cues processing may assist in understanding the functioning of the social rank biobehavioral system. PMID:26423664

  19. Development and Implementation of Real-Time Information Delivery Systems for Emergency Management

    NASA Technical Reports Server (NTRS)

    Wegener, Steve; Sullivan, Don; Ambrosia, Vince; Brass, James; Dann, R. Scott

    2000-01-01

    The disaster management community has an on-going need for real-time data and information, especially during catastrophic events. Currently, twin engine or jet aircraft with limited altitude and duration capabilities collect much of the data. Flight safety is also an issue. Clearly, much of the needed data could be delivered via over-the-horizon transfer through an uninhabited aerial vehicle (UAV) platform to mission managers at various locations on the ground. In fact, because of the ability to stay aloft for long periods of time, and to fly above dangerous situations, UAVs are ideally suited for disaster missions. There are numerous situations that can be considered disastrous for the human population. Some, such as fire or flood, can continue over a period of days. Disaster management officials rely on data from the site to respond in an optimum way with warnings, evacuations, rescue, relief, and to the extent possible, damage control. Although different types of disasters call for different types of response, most situations can be improved by having visual images and other remotely sensed data available. "Disaster management" actually comprises a number of activities, including disaster prevention and mitigation, emergency response planning, disaster management (real-time deployment of resources during an event), and disaster/risk modeling. All of these activities could benefit from real-time information, but a major focus for UAV-based technology is in real-time deployment of resources (i.e., emergency response teams), based on changing conditions at the location of the event. With all these potential benefits, it is desirable to demonstrate to user agencies the ability to perform disaster management missions as described. The following demonstration project is the first in a program designed to prove the feasibility of supporting disaster missions with UAV technology and suitable communications packages on-board. A several-year program is envisioned

  20. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  1. Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids

    PubMed Central

    Rodriguez, Francisco A.; Portnuff, Cory D. F.; Goupell, Matthew J.; Tollin, Daniel J.

    2016-01-01

    In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. PMID:27698258
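
    A minimal sketch of extracting interaural time and level differences (ITD, ILD) from a left/right signal pair by cross-correlation and RMS level comparison; the signals, delay, and attenuation are synthetic assumptions, not hearing-aid recordings.

      # Hedged sketch: estimate ITD via cross-correlation and ILD via RMS levels.
      import numpy as np

      fs = 48_000                                   # sample rate (Hz), illustrative
      rng = np.random.default_rng(11)
      noise = rng.standard_normal(fs // 10)         # 100 ms noise burst
      delay = 24                                    # samples, i.e. a 0.5 ms true ITD
      left = noise
      right = 0.7 * np.concatenate([np.zeros(delay), noise[:-delay]])  # delayed, quieter

      xcorr = np.correlate(right, left, mode="full")
      lag = int(np.argmax(xcorr)) - (len(left) - 1) # samples by which right lags left
      itd_ms = 1_000 * lag / fs
      ild_db = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
      print(f"estimated ITD = {itd_ms:.2f} ms, ILD (left re right) = {ild_db:.1f} dB")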

  2. The Utility of the Real-Time NASA Land Information System Data for Drought Monitoring Applications

    NASA Technical Reports Server (NTRS)

    White, Kristopher D.; Case, Jonathan L.

    2013-01-01

    Measurements of soil moisture are a crucial component for the proper monitoring of drought conditions. The large spatial variability of soil moisture complicates the problem. Unfortunately, in situ soil moisture observing networks typically consist of sparse point observations, and conventional numerical model analyses of soil moisture used to diagnose drought are of coarse spatial resolution. Decision support systems such as the U.S. Drought Monitor contain drought impact resolution on sub-county scales, which may not be supported by the existing soil moisture networks or analyses. The NASA Land Information System, which is run with 3 km grid spacing over the eastern United States, has demonstrated utility for monitoring soil moisture. Some of the more useful output fields from the Land Information System are volumetric soil moisture in the 0-10 cm and 40-100 cm layers, column-integrated relative soil moisture, and the real-time green vegetation fraction derived from MODIS (Moderate Resolution Imaging Spectroradiometer) swath data that are run within the Land Information System in place of the monthly climatological vegetation fraction. While these and other variables have primarily been used in local weather models and other operational forecasting applications at National Weather Service offices, the use of the Land Information System for drought monitoring has demonstrated utility for feedback to the Drought Monitor. Output from the Land Information System is currently being used at NWS Huntsville to assess soil moisture, and to provide input to the Drought Monitor. Since feedback to the Drought Monitor takes place on a weekly basis, weekly difference plots of column-integrated relative soil moisture are being produced by the NASA Short-term Prediction Research and Transition Center and analyzed to facilitate the process. In addition to the Drought Monitor, these data are used to assess drought conditions for monthly feedback to the Alabama Drought Monitoring

  3. Dynamics in two-elevator traffic system with real-time information

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    2013-12-01

    We study the dynamics of a traffic system with two elevators using an elevator-choice scenario. The two-elevator traffic system with real-time information is similar to the two-route vehicular traffic system. The dynamics of the two-elevator traffic system are described by a two-dimensional nonlinear map. An elevator runs a neck-and-neck race with the other elevator. The motion of the two elevators displays complex behavior such as quasi-periodicity. The return map of the two-dimensional map is a piecewise map.
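
    A toy iteration in the spirit of an elevator-choice map, not the paper's equations: each elevator's next round-trip time grows with the passengers it attracts, and arrivals favor the car that the real-time display currently shows as faster. All coefficients are invented.

      # Hedged toy two-dimensional map for two elevators sharing arriving passengers.
      import numpy as np

      alpha, beta, total = 2.0, 0.1, 30.0   # base trip time, delay per passenger, arrivals

      def step(t1, t2):
          """One iteration: displayed trip times decide how arrivals split between cars."""
          share1 = t2 / (t1 + t2)           # more riders pick the currently faster car
          n1, n2 = total * share1, total * (1.0 - share1)
          return alpha + beta * n1, alpha + beta * n2

      t1, t2 = 3.0, 4.0                     # arbitrary initial trip times
      history = []
      for _ in range(200):
          t1, t2 = step(t1, t2)
          history.append((t1, t2))

      trips = np.array(history[50:])        # discard the transient before summarizing
      print("car 1 trip-time range:", trips[:, 0].min().round(3), "-", trips[:, 0].max().round(3))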

  4. Stabilization of the stochastically forced equilibria for nonlinear discrete-time systems with incomplete information

    SciTech Connect

    Ryashko, Lev

    2015-11-30

    The problem of stabilizing the equilibrium of a stochastically forced nonlinear discrete-time system with incomplete information is considered. Our approach uses a regulator which synthesizes the required stochastic sensitivity of the equilibrium. Mathematically, this problem is reduced to the solution of some quadratic matrix equations. A description of attainability sets and algorithms for regulator design is given. The general results are applied to the suppression of unwanted large-amplitude oscillations around the equilibria of the stochastically forced Verhulst model with noisy observations.

  5. Information content of nonautonomous free fields in curved space-time

    SciTech Connect

    Parreira, J. E.; Nemes, M. C.; Fonseca-Romero, K. M.

    2011-03-15

    We show that it is possible to quantify the information content of a nonautonomous free field state in curved space-time. A covariance matrix is defined and it is shown that, for symmetric Gaussian field states, the matrix is connected to the entropy of the state. This connection is maintained throughout a quadratic nonautonomous (including possible phase transitions) evolution. Although particle-antiparticle correlations are dynamically generated, the evolution is isoentropic. If the current standard cosmological model for the inflationary period is correct, in absence of decoherence such correlations will be preserved, and could potentially lead to observable effects, allowing for a test of the model.

  6. From the River to You: USGS Real-Time Streamflow Information...from the National Streamflow Information Program

    USGS Publications Warehouse

    Nielsen, Joseph P.; Norris, J. Michael

    2007-01-01

    This Fact Sheet is one in a series that highlights information or recent research findings from the USGS National Streamflow Information Program (NSIP). The investigations and scientific results reported in this series require a nationally consistent streamgaging network with stable long-term monitoring sites and a rigorous program of data quality assurance, management, archiving, and synthesis. NSIP produces multipurpose, unbiased surface-water information that is readily accessible to all.

  7. Investigating the inner time properties of seismograms by using the Fisher Information Measure

    NASA Astrophysics Data System (ADS)

    Telesca, Luciano; Lovallo, Michele; Alcaz, Vasile; Ilies, Ion

    2014-09-01

    The time dynamics of seismograms of nine tectonic earthquakes which occurred in Vrancea (Romania), registered at three seismic stations located in Moldova, are analyzed by means of the informational approach of the Fisher Information Measure (FIM). Of the three seismic stations in Moldova, two (MILM and LEOM) are located within an area of high seismic hazard, while the third (SORM) lies in a less hazardous region. Our findings point to a clear discrimination of the two stations MILM and LEOM from SORM on the basis of the informational properties of the recorded seismograms corresponding to the same earthquakes. In particular, it is found that larger distance and lower azimuth characterize seismograms with lower FIM, which implies lower organization and higher disorder in the seismograms recorded by SORM with respect to those recorded by MILM and LEOM. The lower FIM revealed by the seismograms recorded by SORM may be related to the lower degree of seismic hazard in the area where that station is installed.
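
    A hedged sketch of a discrete Fisher Information Measure computed from a signal's amplitude histogram; the two synthetic traces merely stand in for more and less ordered seismograms, and the bin count is an arbitrary choice.

      # Hedged sketch: discrete FIM from the amplitude distribution of a trace.
      import numpy as np

      def fisher_information(signal, bins=50):
          """Discrete FIM: sum of (p[i+1] - p[i])^2 / p[i] over the amplitude histogram."""
          counts, _ = np.histogram(signal, bins=bins)
          p = counts / counts.sum()
          mask = p[:-1] > 0
          return float(np.sum((p[1:][mask] - p[:-1][mask]) ** 2 / p[:-1][mask]))

      rng = np.random.default_rng(9)
      t = np.linspace(0, 10, 5000)
      ordered = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)
      disordered = rng.standard_normal(t.size)

      # A more structured amplitude distribution typically yields a larger FIM.
      print(f"FIM ordered trace    = {fisher_information(ordered):.3f}")
      print(f"FIM disordered trace = {fisher_information(disordered):.3f}")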

  8. Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.

    PubMed

    Novich, Scott D; Eagleman, David M

    2015-10-01

    Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: they are all low bandwidth and do not require a fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvement in speech perception in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is this because the spatial and temporal properties of skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time or "intensity" (the coupled coding of vibration frequency and force) both far exceed the performance of only spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back, that is, the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80% correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses). Six centimeters is a greater distance than the inter-motor distances used in Experiment 1 (2.5 cm), which

  9. A real time pipeline to link meteorological information and TGFs detected by AGILE

    NASA Astrophysics Data System (ADS)

    Ursi, Alessandro; Tavani, Marco; Dietrich, Stefano; Marisaldi, Martino; Casella, Daniele; Sanò, Paolo; Petracca, Marco; Argan, Andrea

    2015-04-01

    Terrestrial Gamma-ray Flashes (TGFs) are brief bursts of high-energy photons associated with thunderstorm activity. We have developed a pipeline that links "real time" meteorological information to the TGFs detected by AGILE. We take advantage of the Meteosat Second Generation (MSG) satellite data to promptly identify the individual thunderstorm or mesoscale convective system possibly associated with the detected TGF event and to follow its evolution in space and time. Data from other meteorological satellites, for example the GPM mission, as well as ground measurements from lightning detection networks, can be integrated into the pipeline. This allows a prompt characterization of the ground meteorological conditions at TGF time, which will provide instrument-independent trigger validation, fill in a database for subsequent statistical analysis, and eventually, on a longer term perspective, serve as a real time alert system open to the community.

  10. Semiparametric Estimation of Treatment Effect with Time-Lagged Response in the Presence of Informative Censoring

    PubMed Central

    Lu, Xiaomin; Tsiatis, Anastasios A.

    2011-01-01

    In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates. PMID:21706378

  11. Semiparametric estimation of treatment effect with time-lagged response in the presence of informative censoring.

    PubMed

    Lu, Xiaomin; Tsiatis, Anastasios A

    2011-10-01

    In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient's response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates. PMID:21706378

  12. Dark matter vs. neutrinos: the effect of astrophysical uncertainties and timing information on the neutrino floor

    SciTech Connect

    Davis, Jonathan H.

    2015-03-09

    Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.

  13. The Development and Evaluation of the Climate Time Line Information Tool

    NASA Astrophysics Data System (ADS)

    McCaffrey, M. S.; Kowal, D.; Eakin, C. M.

    2002-12-01

    The Climate Time Line Information Tool or CTL (http://www.ngdc.noaa.gov/paleo/ctl) has been prototyped as a digital educational tool for conveying fundamental climatic processes and their human dimension for diverse audiences. Using a powers-of-ten approach to temporal scaling, the CTL website was developed through a CIRES Innovative Research Grant by Mark McCaffrey at the National Climatic Data Center's Paleoclimatology Program and Dan Kowal at the National Geophysical Data Center. CTL was specifically designed as an interdisciplinary tool for conveying information about weather and climatic processes, such as the diurnal, annual and orbital cycles and ENSO. Moreover, the web site explores potential connections between climatic variability and human development over the past 100,000 years. Evaluation of the prototype examined issues of usability and navigation of the site as well as how its content and framework served the needs of undergraduate, middle and high school students, geoscience educators, and climate experts. The development and evaluation of the Climate Time Line provide a case study for other geoscience researchers and educators on: i) how objectives were set by developers; ii) how evaluators were involved in assessing the prototype; iii) the variety of evaluative methods available to test the viability of the product; and iv) how results from the evaluation can be used to finalize the prototype.

  14. Mapping emotions through time: how affective trajectories inform the language of emotion.

    PubMed

    Kirkland, Tabitha; Cunningham, William A

    2012-04-01

    The words used to describe emotions can provide insight into the basic processes that contribute to emotional experience. We propose that emotions arise partly from interacting evaluations of one's current affective state, previous affective state, predictions for how these may change in the future, and the experienced outcomes following these predictions. These states can be represented and inferred from neural systems that encode shifts in outcomes and make predictions. In two studies, we demonstrate that emotion labels are reliably differentiated from one another using only simple cues about these affective trajectories through time. For example, when a worse-than-expected outcome follows the prediction that something good will happen, that situation is labeled as causing anger, whereas when a worse-than-expected outcome follows the prediction that something bad will happen, that situation is labeled as causing sadness. Emotion categories are more differentiated when participants are required to think categorically than when participants have the option to consider multiple emotions and degrees of emotions. This work indicates that information about affective movement through time and changes in affective trajectory may be a fundamental aspect of emotion categories. Future studies of emotion must account for the dynamic way that we absorb and process information.

  15. Dark matter vs. neutrinos: the effect of astrophysical uncertainties and timing information on the neutrino floor

    SciTech Connect

    Davis, Jonathan H.

    2015-03-01

    Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.

  16. The effect of viewing time, time to encounter, and practice on perception of aircraft separation on a cockpit display of traffic information

    NASA Technical Reports Server (NTRS)

    Oconnor, S.; Palmer, E. A.; Baty, D.; Jago, S.

    1980-01-01

    The concept of a cockpit display of traffic information (CDTI) includes the integration of air traffic, navigation, and other pertinent information in a single electronic display in the cockpit. Two studies were conducted to develop a clear and concise display format for use in later full-mission simulator evaluations of the CDTI concept. Subjects were required to monitor a CDTI for specified periods of time and to make perceptual judgments concerning the future position of a single intruder aircraft in relation to their own aircraft. Experimental variables included: type of predictor information displayed on the two aircraft symbols; time to encounter point; length of time subjects viewed the display; amount of practice; and type of encounter (straight or turning). Results show that length of viewing time had little or no effect on performance; time to encounter influenced performance with the straight predictor but not with the curved predictor; and that learning occurred under all conditions.

  17. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139

  18. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  19. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  20. Non-targeted analysis of electronics waste by comprehensive two-dimensional gas chromatography combined with high-resolution mass spectrometry: Using accurate mass information and mass defect analysis to explore the data.

    PubMed

    Ubukata, Masaaki; Jobst, Karl J; Reiner, Eric J; Reichenbach, Stephen E; Tao, Qingping; Hang, Jiliang; Wu, Zhanpin; Dane, A John; Cody, Robert B

    2015-05-22

    Comprehensive two-dimensional gas chromatography (GC×GC) and high-resolution mass spectrometry (HRMS) offer the best possible separation of their respective techniques. Recent commercialization of combined GC×GC-HRMS systems offers new possibilities for the analysis of complex mixtures. However, such experiments yield enormous data sets that require new informatics tools to facilitate the interpretation of the rich information content. This study reports on the analysis of dust obtained from an electronics recycling facility by using GC×GC in combination with a new high-resolution time-of-flight (TOF) mass spectrometer. New software tools for (non-traditional) Kendrick mass defect analysis were developed in this research and greatly aided in the identification of compounds containing chlorine and bromine, elements that feature in most persistent organic pollutants (POPs). In essence, the mass defect plot serves as a visual aid from which halogenated compounds are recognizable on the basis of their mass defect and isotope patterns. Mass chromatograms were generated based on specific ions identified in the plots as well as regions of the plot predominantly occupied by halogenated contaminants. Tentative identification was aided by database searches, complementary electron-capture negative ionization experiments and elemental composition determinations from the exact mass data. These included known and emerging flame retardants, such as polybrominated diphenyl ethers (PBDEs), hexabromobenzene, tetrabromobisphenol A and tris(1-chloro-2-propyl) phosphate (TCPP), as well as other legacy contaminants such as polychlorinated biphenyls (PCBs) and polychlorinated terphenyls (PCTs).
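
    To illustrate the mass-defect analysis described above, here is a minimal Kendrick mass defect sketch (a generic illustration of the technique, not the authors' software). The base unit is configurable so that non-traditional repeating units can be substituted for the usual CH2, and the example m/z values are hypothetical.

```python
# Kendrick mass defect (KMD) with a configurable base unit. Homologous series
# sharing the chosen repeating unit fall on horizontal lines in a KMD plot.
def kendrick_mass_defect(mz, base_nominal=14.0, base_exact=14.01565):
    """Return (Kendrick mass, Kendrick mass defect) for one accurate m/z.

    Defaults use the traditional CH2 base unit; non-traditional analyses
    swap in other repeating units.
    """
    kendrick_mass = mz * (base_nominal / base_exact)
    nominal_mass = round(kendrick_mass)
    return kendrick_mass, nominal_mass - kendrick_mass

# Example: a few accurate masses (hypothetical values for illustration).
for mz in (403.7804, 563.6222, 641.5327):
    km, kmd = kendrick_mass_defect(mz)
    print(f"m/z {mz:10.4f}  Kendrick mass {km:10.4f}  KMD {kmd:+.4f}")
```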

  1. Today and Tomorrow of the Real-time Earthquake Information Equipment

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.

    2003-12-01

    UrEDAS, the Urgent Earthquake Detection and Alarm System, provides real-time early earthquake detection and alarm. Although the system currently operates mostly for railways, such as the Shinkansen and subway lines, it is not limited to the railroad field. For example, one local government has implemented a tsunami warning system using real-time estimated earthquake parameters, such as magnitude and location, distributed by UrEDAS. UrEDAS is characterized by serial processing that does not require storage of the seismic waveform. For this reason, its data processing procedure hardly changes between normal operation and an earthquake, so the system does not fail during an earthquake. UrEDAS also does not require a network; it is an autonomous, distributed system that is robust against natural disasters and cyber-terrorism. On 26 May 2003, the Sanriku-Minami earthquake of Mj 7.0 occurred. It was so large that a maximum acceleration of about 600 Gal was observed along the Shinkansen line, and 22 columns of the rigid-frame reinforced-concrete (RC) viaducts were severely cracked. The earthquake occurred during the Shinkansen's business hours. As expected, the "Compact UrEDAS" units along the coastline issued the early P-wave alarm before the destructive ground motion arrived, and the validity of this system was proved for the first time. Where many faults exist, UrEDAS has an accuracy problem, especially for the epicentral azimuth; observations have continued under such unfavorable conditions in an effort to shorten the calculation time and improve the accuracy. UrEDAS has also been examined as a means of distributing earthquake information via the Internet. At the time of the Colima, Mexico earthquake in January 2003, UrEDAS in Mexico City detected the earthquake over one minute before the strong motion arrived and sent information to the persons concerned. The above systems are large

  2. On recovering distributed IP information from inductive source time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Kang, Seogi; Oldenburg, Douglas W.

    2016-10-01

    We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: (1) Obtain a 3-D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. (2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. (3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (1) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps

  3. On recovering distributed IP information from inductive source time domain electromagnetic data

    NASA Astrophysics Data System (ADS)

    Kang, Seogi; Oldenburg, Douglas W.

    2016-07-01

    We develop a procedure to invert time domain induced polarization (IP) data for inductive sources. Our approach is based upon the inversion methodology in conventional electrical IP (EIP), which uses a sensitivity function that is independent of time. However, significant modifications are required for inductive source IP (ISIP) because electric fields in the ground do not achieve a steady state. The time-history for these fields needs to be evaluated and then used to define approximate IP currents. The resultant data, either a magnetic field or its derivative, are evaluated through the Biot-Savart law. This forms the desired linear relationship between data and pseudo-chargeability. Our inversion procedure has three steps: 1) Obtain a 3D background conductivity model. We advocate, where possible, that this be obtained by inverting early-time data that do not suffer significantly from IP effects. 2) Decouple IP responses embedded in the observations by forward modelling the TEM data due to a background conductivity and subtracting these from the observations. 3) Use the linearized sensitivity function to invert data at each time channel and recover pseudo-chargeability. Post-interpretation of the recovered pseudo-chargeabilities at multiple times allows recovery of intrinsic Cole-Cole parameters such as time constant and chargeability. The procedure is applicable to all inductive source survey geometries but we focus upon airborne time domain EM (ATEM) data with a coincident-loop configuration because of the distinctive negative IP signal that is observed over a chargeable body. Several assumptions are adopted to generate our linearized modelling but we systematically test the capability and accuracy of the linearization for ISIP responses arising from different conductivity structures. On test examples we show: (a) our decoupling procedure enhances the ability to extract information about existence and location of chargeable targets directly from the data maps; (b
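
    As an illustration of step 3), the sketch below performs a Tikhonov-regularized linear inversion of one time channel for pseudo-chargeability. It is a generic least-squares sketch under stated assumptions, not the authors' code: the sensitivity matrix, data, and regularization parameter are placeholders, whereas in practice the sensitivity comes from the background conductivity model and the Biot-Savart evaluation described above.

```python
# Tikhonov-regularized inversion of a single time channel for
# pseudo-chargeability, given a linearized sensitivity matrix J.
import numpy as np

def invert_time_channel(J, d, beta=1e-2):
    """Solve min_m ||J m - d||^2 + beta ||m||^2 for pseudo-chargeability m."""
    n = J.shape[1]
    lhs = J.T @ J + beta * np.eye(n)
    rhs = J.T @ d
    return np.linalg.solve(lhs, rhs)

# Toy example: 50 data, 30 model cells, random sensitivity, synthetic data.
rng = np.random.default_rng(0)
J = rng.normal(size=(50, 30))
m_true = np.zeros(30)
m_true[10:15] = 0.1                   # a small chargeable block
d = J @ m_true + 0.01 * rng.normal(size=50)

m_rec = invert_time_channel(J, d)
print("recovered pseudo-chargeability (cells 10-14):", np.round(m_rec[10:15], 3))
```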

  4. Planck 2015 results. VII. High Frequency Instrument data processing: Time-ordered information and beams

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J. J.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    The Planck High Frequency Instrument (HFI) has observed the full sky at six frequencies (100, 143, 217, 353, 545, and 857 GHz) in intensity and at four frequencies in linear polarization (100, 143, 217, and 353 GHz). In order to obtain sky maps, the time-ordered information (TOI) containing the detector and pointing samples must be processed and the angular response must be assessed. The full mission TOI is included in the Planck 2015 release. This paper describes the HFI TOI and beam processing for the 2015 release. HFI calibration and map making are described in a companion paper. The main pipeline has been modified since the last release (2013 nominal mission in intensity only), by including a correction for the nonlinearity of the warm readout and by improving the model of the bolometer time response. The beam processing is an essential tool that derives the angular response used in all the Planck science papers, and we report an improvement in the effective beam window function uncertainty of more than a factor of 10 relative to the 2013 release. Noise correlations introduced by the pipeline filtering function are assessed using dedicated simulations. Angular cross-power spectra using data sets that are decorrelated in time are immune to the main systematic effects.

  5. 30 CFR 48.3 - Training plans; time of submission; where filed; information required; time for approval; method...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... operator shall submit to the District Manager the following information: (1) The company name, mine name... provisions of sections 104 and 110 of the Act and other related regulations until that established deadline... shall submit a plan for approval by May 1, 2006, unless extended by MSHA. (2) In the case of a new...

  6. Real time explosive hazard information sensing, processing, and communication for autonomous operation

    SciTech Connect

    Versteeg, Roelof J.; Few, Douglas A.; Kinoshita, Robert A.; Johnson, Douglas; Linda, Ondrej

    2015-12-15

    Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine the likelihood of the presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.

  7. Real time explosive hazard information sensing, processing, and communication for autonomous operation

    DOEpatents

    Versteeg, Roelof J; Few, Douglas A; Kinoshita, Robert A; Johnson, Doug; Linda, Ondrej

    2015-02-24

    Methods, computer readable media, and apparatuses provide robotic explosive hazard detection. A robot intelligence kernel (RIK) includes a dynamic autonomy structure with two or more autonomy levels between operator intervention and robot initiative. A mine sensor and processing module (ESPM) operating separately from the RIK perceives environmental variables indicative of a mine using subsurface perceptors. The ESPM processes mine information to determine the likelihood of the presence of a mine. A robot can autonomously modify behavior responsive to an indication of a detected mine. The behavior is modified between detection of mines, detailed scanning and characterization of the mine, developing mine indication parameters, and resuming detection. Real-time messages are passed between the RIK and the ESPM. A combination of ESPM-bound messages and RIK-bound messages causes the robot platform to switch between modes including a calibration mode, the mine detection mode, and the mine characterization mode.
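
    A minimal sketch of the mode switching described in the two records above: a toy state machine that moves the platform between the calibration, detection, and characterization modes in response to messages exchanged between the RIK and the ESPM. The message names are hypothetical; the patent does not specify them.

```python
# Toy mode-switching state machine driven by RIK/ESPM messages.
from enum import Enum, auto

class Mode(Enum):
    CALIBRATION = auto()
    DETECTION = auto()
    CHARACTERIZATION = auto()

def next_mode(current: Mode, message: str) -> Mode:
    """Map an incoming message to the platform's next mode."""
    if message == "calibration_complete" and current is Mode.CALIBRATION:
        return Mode.DETECTION
    if message == "mine_likely" and current is Mode.DETECTION:
        return Mode.CHARACTERIZATION      # pause sweep, scan the target in detail
    if message == "characterization_done" and current is Mode.CHARACTERIZATION:
        return Mode.DETECTION             # resume the detection sweep
    return current                        # unrecognized message: stay put

mode = Mode.CALIBRATION
for msg in ("calibration_complete", "mine_likely", "characterization_done"):
    mode = next_mode(mode, msg)
    print(msg, "->", mode.name)
```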

  8. The research of security and real-time based on the control information network

    NASA Astrophysics Data System (ADS)

    Su, Xiao-hui; Xu, Shu-Ping

    2013-03-01

    To address the security issues, the construction of a Web-based remote motor control server is studied and a scheme of two server programs based on virtual server technology is proposed. The safety and application scenarios of the scheme are described, and an encryption algorithm that improves authentication accuracy is proposed. To address the real-time problem, a new control method based on neural network predictive control is proposed to compensate for the random delays in closed-loop networked control systems. Experimental verification and system simulation were completed. These methods provide a new way to integrate the control network and the information network, and lay a solid foundation for the further development of control networks.

  9. Confirming PICC tip position during insertion with real-time information.

    PubMed

    Barton, Andrew

    Peripherally inserted central catheters (PICCs) play a fundamental role in patient care in a variety of clinical and healthcare settings. Tip location is important for both safety and efficacy. New technologies may offer the possibility of safer, more efficient and more effective insertion. A prospective evaluation was carried out of a system providing real-time information on the tip’s location, direction, and depth during insertion in a total of 488 patients at a single centre (65 patients in the initial study, plus follow-on case series reports in 423 patients). No tip malpositions were reported and, as a result, the institution has been able to waive the requirement for confirmatory chest X-ray after PICC insertion, thus minimising the delay before the PICC can be used and increasing staff and patient confidence in the procedure.

  10. Confirming PICC tip position during insertion with real-time information.

    PubMed

    Barton, Andrew

    2016-01-27

    Peripherally inserted central catheters (PICCs) play a fundamental role in patient care in a variety of clinical and healthcare settings. Tip location is important for both safety and efficacy. New technologies may offer the possibility of safer, more efficient and more effective insertion. A prospective evaluation was carried out of a system providing real-time information on the tip's location, direction, and depth during insertion in a total of 488 patients at a single centre (65 patients in the initial study, plus follow-on case series reports in 423 patients). No tip malpositions were reported and, as a result, the institution has been able to waive the requirement for confirmatory chest X-ray after PICC insertion, thus minimising the delay before the PICC can be used and increasing staff and patient confidence in the procedure.

  11. Real time data acquisition: recommendations for the Medical Information Bus (MIB).

    PubMed

    Gardner, R M; Hawley, W L; East, T D; Oniki, T A; Young, H F

    Care of the acutely ill patient requires rapid acquisition, recording and communications of data. In the modern hospital it is not unusual for a patient to be connected to several monitoring and recording devices simultaneously. Each of these devices is typically made by a different manufacturer who may specialize in one sort of measurement, for example, pulse oximetry. Most of the modern monitoring and recording devices are micro-processor based and have communication capabilities. Unfortunately, there is no operable standard communication technology available from all devices. In addition different clinical staff (physicians, nurses, or respiratory therapists) may be responsible for collecting data. As a result there is a need to develop methods, standards, and strategies for timely and automatic collection of data from these monitoring and recording devices. We report on more than 5 years of clinical experience of automated ICU data collection using a prototype of the Medical Information Bus (MIB). PMID:1820414

  12. Real time data acquisition: experience with the Medical Information Bus (MIB).

    PubMed

    Gardner, R M; Hawley, W L; East, T D; Oniki, T A; Young, H F

    1991-01-01

    Care of the acutely ill patient requires rapid acquisition, recording and communications of data. In the modern hospital it is not unusual for a patient to be connected to several monitoring and recording devices simultaneously. Each of these devices is typically made by a different manufacturer who may specialize in one sort of measurement, for example, pulse oximetry. Most of the modern monitoring and recording devices are micro-processor based and have communications capabilities. Unfortunately, there is no operable standard communications technology available from all devices. In addition different clinical staff (physicians, nurses, or respiratory therapists) may be responsible for collecting data. As a result there is a need to develop methods, standards, and strategies for timely and automatic collection of data from these monitoring and recording devices. We report on more than 5 years of clinical experience of automated ICU data collection using a prototype of the Medical Information Bus (MIB). PMID:1807719

  13. Attractor reconstruction from the time series of information entropy of seismic kinetics process

    NASA Astrophysics Data System (ADS)

    Stakhovsky, I. R.

    2016-09-01

    The attractor is reconstructed from the time series of the information entropy of the seismic kinetics process. It is shown that the seismic kinetics process is governed by three order parameters and is characterized by a strange attractor in the three-dimensional phase space. The Dq-spectrum of the multifractal measure induced by the attractor, which describes the topological structure of the latter, is obtained. The monofractal dimension of the attractor is Dq(0) = 2.31…, and the correlation dimension is Dq(2) = 2.16…. The estimate of the largest Lyapunov exponent of the attractor is λ1 = 0.331…. The positive sign of the largest Lyapunov exponent suggests that the attractor is chaotic and the behavior of the phase trajectory is unpredictable.
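
    Attractor reconstruction from a scalar record is conventionally done by time-delay (Takens) embedding; the sketch below shows that generic technique (not the paper's code) on a toy series standing in for the entropy record, with illustrative choices of delay and embedding dimension.

```python
# Time-delay embedding of a scalar time series into a 3-D phase space.
import numpy as np

def delay_embed(x, dim=3, delay=1):
    """Return the delay-embedded trajectory of a 1-D series x."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# Toy series standing in for the information-entropy time series.
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.5 * np.sin(0.31 * t)

traj = delay_embed(x, dim=3, delay=25)   # points in the reconstructed phase space
print(traj.shape)                        # (3950, 3)
```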

  14. Face Adaptation Effects: Reviewing the Impact of Adapting Information, Time, and Transfer

    PubMed Central

    Strobach, Tilo; Carbon, Claus-Christian

    2013-01-01

    The ability to adapt is essential to live and survive in an ever-changing environment such as the human ecosystem. Here we review the literature on adaptation effects of face stimuli to give an overview of existing findings in this area, highlight gaps in its research literature, initiate new directions in face adaptation research, and help to design future adaptation studies. Furthermore, this review should lead to better understanding of the processing characteristics as well as the mental representations of face-relevant information. The review systematizes studies at a behavioral level with respect to a framework that includes three dimensions representing the major characteristics of studies in this field of research. These dimensions comprise (1) the specificity of adapting face information, e.g., identity, gender, or age aspects of the material to be adapted to, (2) aspects of timing (e.g., the sustainability of adaptation effects), and (3) transfer relations between face images presented during adaptation and adaptation tests (e.g., images of the same or different identities). The review concludes with options for how to combine findings across different dimensions to demonstrate the relevance of our framework for future studies. PMID:23760550

  15. Real-time soil compaction monitoring through pad strain measurements: modeling to inform strain gage placement

    NASA Astrophysics Data System (ADS)

    Kimmel, Shawn C.; Mooney, Michael A.

    2011-04-01

    Soil compaction monitoring is critical to earthwork projects, including roadways, earth dams, and levees. Current methods require a halt of production, and provide at best sparse coverage. A system is proposed for static pad foot soil compaction to provide real-time feedback at higher spatial resolutions through machine integrated sensors. The system is composed of pad sensors that measure total normal force and contact stress distribution (CSD), laser sensors that measure soil deflection, and GPS to spatially reference measurements. By combining these measurements, soil stiffness and potentially modulus can be determined. This paper discusses the development of the force and CSD sensing pad. The concept is to instrument individual pads with strain gages to determine loading conditions. Modeling is used to inform strain gage positioning through pad strain behavior analysis of different simulated soil conditions. The finite element analysis (FEA) of a Caterpillar pad is discussed, including formulation and rationale for the various model parameters. The loading parameters are explained, including the range of force magnitudes experienced throughout compaction and the CSD elicited by various soils. The results of this analysis are presented, and show that pad strain is sensitive to both force magnitude and CSD. Specific strain trends are identified in the sidewall and bottom face of the pad which are particularly sensitive to the loading variables. Strain gage placements are proposed that capture the identified trends, thereby providing definitive information on total normal force and CSD.

  16. Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird.

    PubMed

    Shepard, Emily L C; Lambertucci, Sergio A; Vallmitjana, Diego; Wilson, Rory P

    2011-01-01

    Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three-dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered.

  17. Energy Beyond Food: Foraging Theory Informs Time Spent in Thermals by a Large Soaring Bird

    PubMed Central

    Shepard, Emily L. C.; Lambertucci, Sergio A.; Wilson, Rory P.

    2011-01-01

    Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three-dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered. PMID:22087301

  18. Energy beyond food: foraging theory informs time spent in thermals by a large soaring bird.

    PubMed

    Shepard, Emily L C; Lambertucci, Sergio A; Vallmitjana, Diego; Wilson, Rory P

    2011-01-01

    Current understanding of how animals search for and exploit food resources is based on microeconomic models. Although widely used to examine feeding, such constructs should inform other energy-harvesting situations where theoretical assumptions are met. In fact, some animals extract non-food forms of energy from the environment, such as birds that soar in updraughts. This study examined whether the gains in potential energy (altitude) followed efficiency-maximising predictions in the world's heaviest soaring bird, the Andean condor (Vultur gryphus). Animal-attached technology was used to record condor flight paths in three-dimensions. Tracks showed that time spent in patchy thermals was broadly consistent with a strategy to maximise the rate of potential energy gain. However, the rate of climb just prior to leaving a thermal increased with thermal strength and exit altitude. This suggests higher rates of energetic gain may not be advantageous where the resulting gain in altitude would lead to a reduction in the ability to search the ground for food. Consequently, soaring behaviour appeared to be modulated by the need to reconcile differing potential energy and food energy distributions. We suggest that foraging constructs may provide insight into the exploitation of non-food energy forms, and that non-food energy distributions may be more important in informing patterns of movement and residency over a range of scales than previously considered. PMID:22087301

  19. Display Provides Pilots with Real-Time Sonic-Boom Information

    NASA Technical Reports Server (NTRS)

    Haering, Ed; Plotkin, Ken

    2013-01-01

    Supersonic aircraft generate shock waves that move outward and extend to the ground. As a cone of pressurized air spreads across the landscape along the flight path, it creates a continuous sonic boom along the flight track. Several factors can influence sonic booms: weight, size, and shape of the aircraft; its altitude and flight path; and weather and atmospheric conditions. This technology allows pilots to control the impact of sonic booms. A software system displays the location and intensity of shock waves caused by supersonic aircraft. This technology can be integrated into cockpits or flight control rooms to help pilots minimize sonic boom impact in populated areas. The system processes vehicle and flight parameters as well as data regarding current atmospheric conditions. The display provides real-time information regarding sonic boom location and intensity, enabling pilots to make the necessary flight adjustments to control the timing and location of sonic booms. This technology can be used on current-generation supersonic aircraft, which generate loud sonic booms, as well as future- generation, low-boom aircraft, anticipated to be quiet enough for populated areas.

  20. Shannon information entropy for assessing space-time variability of rainfall and streamflow in semiarid region.

    PubMed

    Rodrigues da Silva, Vicente de P; Belo Filho, Adelgcio F; Rodrigues Almeida, Rafaela S; de Holanda, Romildo Morant; da Cunha Campos, João Hugo Baracuy

    2016-02-15

    The principle of maximum entropy can provide a consistent basis for analyzing water resources and geophysical processes in general. In this paper, we propose to assess the space-time variability of rainfall and streamflow in the northeastern region of Brazil using the Shannon entropy. Mean values of marginal and relative entropies were computed for a 10-year period from 189 stations in the study area, and entropy maps were then constructed for delineating annual and seasonal characteristics of rainfall and streamflow. The Mann-Kendall test was used to evaluate the long-term trend in marginal entropy as well as relative entropy for two sample stations. A high degree of similarity was found between rainfall and streamflow, particularly during the dry season. Both rainfall and streamflow variability can satisfactorily be obtained in terms of marginal entropy as a comprehensive measure of the regional uncertainty of these hydrological events. The Shannon entropy produced spatial patterns which led to a better understanding of rainfall and streamflow characteristics throughout the northeastern region of Brazil. The total relative entropy indicated that rainfall and streamflow carried the same information content at annual and rainy season time scales. PMID:26657379
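
    A minimal sketch of the marginal (Shannon) entropy computation for a single station, assuming a simple histogram discretization of the series; the binning and the synthetic rainfall record are illustrative, not the authors' exact procedure.

```python
# Marginal Shannon entropy of a rainfall series estimated from a histogram.
import numpy as np

def marginal_entropy(series, bins=10):
    """Shannon entropy (in bits) of a series estimated from a histogram."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                         # ignore empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

# Toy rainfall series (mm/month) standing in for one station's record.
rng = np.random.default_rng(1)
rainfall = rng.gamma(shape=2.0, scale=40.0, size=120)   # 10 years, monthly

print(f"marginal entropy: {marginal_entropy(rainfall):.2f} bits")
```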

  1. Evacuation time estimate for total pedestrian evacuation using a queuing network model and volunteered geographic information

    NASA Astrophysics Data System (ADS)

    Kunwar, Bharat; Simini, Filippo; Johansson, Anders

    2016-02-01

    Estimating city evacuation time is a nontrivial problem due to the interaction between thousands of individual agents, giving rise to various collective phenomena, such as bottleneck formation, intermittent flow, and stop-and-go waves. We present a mean field approach to draw relationships between road network spatial attributes, the number of evacuees, and the resultant evacuation time estimate (ETE). Using volunteered geographic information, we divide 50 United Kingdom cities into a total of 704 catchment areas (CAs), which we define as an area where all agents share the same nearest exit node. 90% of the agents are within ≈6,847 m of CA exit nodes with ≈13,778 agents/CA. We establish a characteristic flow rate from catchment area attributes (population, distance to exit node, and exit node width) and a mean flow rate in a free-flow regime by simulating total evacuations using an agent-based "queuing network" model. We use these variables to determine a relationship between catchment area attributes and resultant ETEs. This relationship could enable emergency planners to make a rapid appraisal of evacuation strategies and help support decisions in the run-up to a crisis.
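
    For orientation, the sketch below gives a generic capacity-style evacuation time estimate for one catchment area, combining free-flow travel time with discharge time at the exit node. It is not the paper's fitted relationship; the walking speed, flow rate, and exit width are assumed values.

```python
# Simple capacity-style ETE for one catchment area.
def evacuation_time_estimate(population, distance_m, exit_width_m,
                             walk_speed=1.34, flow_rate=1.3):
    """ETE (s) ~ farthest travel time + queueing time at the exit bottleneck.

    walk_speed : free-flow walking speed (m/s), assumed
    flow_rate  : characteristic exit flow (persons per metre of width per s), assumed
    """
    travel_time = distance_m / walk_speed
    discharge_time = population / (flow_rate * exit_width_m)
    return travel_time + discharge_time

# Figures of the same order as the catchment-area averages quoted above.
ete = evacuation_time_estimate(population=13778, distance_m=6847, exit_width_m=7.0)
print(f"ETE ~ {ete / 3600:.1f} h")
```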

  2. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  3. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  4. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, indeed have made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms, without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis;15-17 however, the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  5. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
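
    A minimal sketch of the kind of mapping decision such a performance model informs: contiguous 1-D partitioning of per-cell workloads across processors using prefix sums. This is a generic balancing heuristic under stated assumptions, not the paper's model.

```python
# Contiguous 1-D partitioning of per-cell workloads across processors.
import numpy as np

def partition_1d(workloads, nprocs):
    """Assign contiguous blocks of cells to processors by workload prefix sums."""
    prefix = np.cumsum(workloads)
    total = prefix[-1]
    # Cut where the running total crosses k/nprocs of the total work.
    targets = total * np.arange(1, nprocs) / nprocs
    cuts = np.searchsorted(prefix, targets)
    return np.split(np.arange(len(workloads)), cuts)

# Toy irregular grid: 20 cells with non-uniform work per cell.
work = np.array([1, 3, 2, 5, 1, 1, 4, 2, 2, 6, 1, 1, 3, 2, 4, 1, 2, 5, 1, 2])
for rank, cells in enumerate(partition_1d(work, nprocs=4)):
    print(f"proc {rank}: cells {cells.tolist()}, work {int(work[cells].sum())}")
```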

  6. 30 CFR 48.3 - Training plans; time of submission; where filed; information required; time for approval; method...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... subject to evaluation by the District Manager to determine the effectiveness of the training programs. If... submit a revised program of instruction for 30 CFR 75.1502, shall also submit a revised training plan... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Training plans; time of submission; where...

  7. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  8. Accurate calculation of the absolute free energy of binding for drug molecules

    PubMed Central

    Aldeghi, Matteo; Heifetz, Alexander; Bodkin, Michael J.; Knapp, Stefan

    2016-01-01

    Accurate prediction of binding affinities has been a central goal of computational chemistry for decades, yet remains elusive. Despite good progress, the required accuracy for use in a drug-discovery context has not been consistently achieved for drug-like molecules. Here, we perform absolute free energy calculations based on a thermodynamic cycle for a set of diverse inhibitors binding to bromodomain-containing protein 4 (BRD4) and demonstrate that a mean absolute error of 0.6 kcal mol–1 can be achieved. We also show that a similar level of accuracy (1.0 kcal mol–1) can be achieved in a pseudo-prospective approach. Bromodomains are epigenetic mark readers that recognize acetylation motifs and regulate gene transcription, and are currently being investigated as therapeutic targets for cancer and inflammation. The unprecedented accuracy offers the exciting prospect that the binding free energy of drug-like compounds can be predicted for pharmacologically relevant targets. PMID:26798447
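
    As a small illustration of how an accuracy figure like the one quoted above is computed (with hypothetical numbers, not the paper's data), the sketch below compares calculated and experimental binding free energies for a set of inhibitors and reports the mean absolute error.

```python
# Mean absolute error between calculated and experimental binding free energies.
import numpy as np

dG_calc = np.array([-8.1, -9.4, -7.2, -10.3, -6.8])   # kcal/mol, hypothetical
dG_exp  = np.array([-8.7, -9.0, -7.9, -10.1, -6.2])   # kcal/mol, hypothetical

mae = np.mean(np.abs(dG_calc - dG_exp))
print(f"mean absolute error = {mae:.2f} kcal/mol")
```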

  9. Barriers Over Time to Full Implementation of Health Information Exchange in the United States

    PubMed Central

    2014-01-01

    Background Although health information exchanges (HIE) have existed since their introduction by President Bush in his 2004 State of the Union Address, and despite monetary incentives earmarked in 2009 by the Health Information Technology for Economic and Clinical Health (HITECH) Act, adoption of HIE has been sparse in the United States. Research has been conducted to explore the concept of HIE and its benefit to patients, but viable business plans for their existence are rare, and so far, no research has been conducted on the dynamic nature of barriers over time. Objective The aim of this study is to map the barriers mentioned in the literature to illustrate the effect, if any, of barriers discussed with respect to the HITECH Act from 2009 to the early months of 2014. Methods We conducted a systematic literature review from CINAHL, PubMed, and Google Scholar. The search criteria primarily focused on studies. Each article was read by at least two of the authors, and a final set was established for evaluation (n=28). Results The 28 articles identified 16 barriers. Cost and efficiency/workflow accounted for 15% and 13%, respectively, of all instances of barriers mentioned in the literature. The years 2010 and 2011 were the most plentiful years when barriers were discussed, with 75% and 69% of all barriers listed, respectively. Conclusions The frequency of barriers mentioned in the literature demonstrates the mindfulness of users, developers, and both local and national government. The broad conclusion is that public policy masks the effects of some barriers, while revealing others. However, a deleterious effect can be inferred when the public funds are exhausted. Public policy will need to leverage incentives to overcome many of the barriers, such as cost and impediments to competition. Process improvement managers need to optimize the efficiency of current practices at the point of care. Developers will need to work with users to ensure tools that use HIE resources work into

  10. The Online GVP/USGS Weekly Volcanic Activity Report: Providing Timely Information About Worldwide Volcanism

    NASA Astrophysics Data System (ADS)

    Mayberry, G. C.; Guffanti, M. C.; Luhr, J. F.; Venzke, E. A.; Wunderman, R. L.

    2001-12-01

    The awesome power and intricate inner workings of volcanoes have made them a popular subject with scientists and the general public alike. About 1500 known volcanoes have been active on Earth during the Holocene, approximately 50 of which erupt per year. With so much activity occurring around the world, often in remote locations, it can be difficult to find up-to-date information about current volcanism from a reliable source. To satisfy the desire for timely volcano-related information the Smithsonian Institution and US Geological Survey combined their strengths to create the Weekly Volcanic Activity Report. The Smithsonian's Global Volcanism Program (GVP) has developed a network of correspondents while reporting worldwide volcanism for over 30 years in their monthly Bulletin of the Global Volcanism Network. The US Geological Survey's Volcano Hazards Program studies and monitors volcanoes in the United States and responds (upon invitation) to selected volcanic crises in other countries. The Weekly Volcanic Activity Report is one of the most popular sites on both organization's websites. The core of the Weekly Volcanic Activity Report is the brief summaries of current volcanic activity around the world. In addition to discussing various types of volcanism, the summaries also describe precursory activity (e.g. volcanic seismicity, deformation, and gas emissions), secondary activity (e.g. debris flows, mass wasting, and rockfalls), volcanic ash hazards to aviation, and preventative measures. The summaries are supplemented by links to definitions of technical terms found in the USGS photoglossary of volcano terms, links to information sources, and background information about reported volcanoes. The site also includes maps that highlight the location of reported volcanoes, an archive of weekly reports sorted by volcano and date, and links to commonly used acronyms. Since the Weekly Volcanic Activity Report's inception in November 2000, activity has been reported at

  11. Lateralization of High-Frequency Clicks Based on Interaural Time: Additivity of Information across Frequency

    NASA Astrophysics Data System (ADS)

    Wenzel, Elizabeth Marie

    Lateralization performance based on interaural differences of time (IDTs) was measured for trains of Gaussian clicks which varied in spectral content. In the first experiment, thresholds (ΔIDTs) were measured as a function of the number of clicks in the train (n = 1 to 32), the interclick interval (ICI = 2.5 or 5 ms), and the spectral content (1 vs. 2 or 4 carriers). Subjects' performance was compared to perfect statistical summation, which predicts slopes of -0.50 when log ΔIDT is plotted against log n. The results showed that increasing the spectral content of the clicks decreased the intercepts of the log-log functions (decreased thresholds) while having little effect on their slopes. Shortening the ICIs caused the slopes of the functions to decrease in absolute value. To estimate the bandwidth of frequency-interaction in lateralization, d′ values were measured for clicks with constant IDTs (n = 1) with a fixed carrier (FF = 4000, 5200, 6000 or 7200 Hz), both alone and combined with a second click whose carrier (F) varied from 3500 to 8500 Hz. Performance in combined conditions was compared to independent summation of the information carried by the two frequency-bands. Performance improved as the separation between F and FF increased until the level predicted by independence was reached. The final experiment investigated the interaction of frequency content with IDT. d′ values were measured as a function of the IDT in clicks with carriers of 5200, 6000 or 7200 Hz, both alone and combined with a 4000-Hz click with a fixed IDT. Performance in combined conditions was again compared to independent additivity. The improvement with frequency was explained by an increase in the number of samples of the IDT reaching the binaural centers due to spread of excitation along the basilar membrane. Less than independent summation was explained by correlation between overlapping bands which reduced the amount of information exciting independent channels. The data also suggest that
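
    A minimal illustration of the perfect-summation prediction cited above: thresholds fall as n^(-1/2), giving a slope of -0.50 on log-log axes. The single-click threshold used below is hypothetical.

```python
# Perfect statistical summation: threshold(n) = threshold(1) / sqrt(n).
import numpy as np

n = np.array([1, 2, 4, 8, 16, 32])
threshold_1 = 60.0                      # hypothetical single-click threshold (microseconds)
predicted = threshold_1 / np.sqrt(n)    # perfect-summation prediction

slope = np.polyfit(np.log10(n), np.log10(predicted), 1)[0]
print("predicted thresholds (us):", np.round(predicted, 1))
print(f"log-log slope: {slope:.2f}")    # -0.50 by construction
```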

  12. NDU Knowledge Net: A Web-Enabled Just-In-Time Information Service for Continuing Education.

    ERIC Educational Resources Information Center

    Alden, Jay

    This paper describes the development of a web-enabled information service for constituents of the Information Resources Management College (National Defense University, Washington, DC). The constituents of the College, who include graduates, current students, and prospective students, typically work in the Chief Information Officer (CIO) office of…

  13. Classification of Physical Activity: Information to Artificial Pancreas Control Systems in Real Time.

    PubMed

    Turksoy, Kamuran; Paulino, Thiago Marques Luz; Zaharieva, Dessi P; Yavelberg, Loren; Jamnik, Veronica; Riddell, Michael C; Cinar, Ali

    2015-11-01

    Physical activity has a wide range of effects on glucose concentrations in type 1 diabetes (T1D) depending on the type (ie, aerobic, anaerobic, mixed) and duration of activity performed. This variability in glucose responses to physical activity makes the development of artificial pancreas (AP) systems challenging. Automatic detection of exercise type and intensity, and its classification as aerobic or anaerobic would provide valuable information to AP control algorithms. This can be achieved by using a multivariable AP approach where biometric variables are measured and reported to the AP at high frequency. We developed a classification system that identifies, in real time, the exercise intensity and its reliance on aerobic or anaerobic metabolism and tested this approach using clinical data collected from 5 persons with T1D and 3 individuals without T1D in a controlled laboratory setting using a variety of common types of physical activity. The classifier had an average sensitivity of 98.7% for physiological data collected over a range of exercise modalities and intensities in these subjects. The classifier will be added as a new module to the integrated multivariable adaptive AP system to enable the detection of aerobic and anaerobic exercise for enhancing the accuracy of insulin infusion strategies during and after exercise.
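
    A minimal sketch of real-time activity classification from biometric features, using synthetic data and a nearest-centroid rule rather than the study's classifier; the feature set and class centres are assumptions for illustration only.

```python
# Nearest-centroid classification of short biometric windows into
# rest / aerobic / anaerobic classes.
import numpy as np

rng = np.random.default_rng(2)
classes = ["rest", "aerobic", "anaerobic"]
centres = {"rest": (70, 0.1), "aerobic": (130, 0.6), "anaerobic": (160, 0.9)}

# Synthetic training windows: (heart rate in bpm, normalized activity energy).
X_train = np.vstack([rng.normal(centres[c], (8, 0.05), size=(50, 2)) for c in classes])
y_train = np.repeat(classes, 50)

centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}

def classify(window):
    """Return the class whose centroid is nearest to the feature window."""
    return min(centroids, key=lambda c: np.linalg.norm(window - centroids[c]))

print(classify(np.array([75, 0.12])))    # -> rest
print(classify(np.array([155, 0.85])))   # -> anaerobic
```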

  14. Temporal weighting of binaural information at low frequencies: Discrimination of dynamic interaural time and level differences.

    PubMed

    Diedesch, Anna C; Stecker, G Christopher

    2015-07-01

    The importance of sound onsets in binaural hearing has been addressed in many studies, particularly at high frequencies, where the onset of the envelope may carry much of the useful binaural information. Some studies suggest that sound onsets might play a similar role in the processing of binaural cues [e.g., fine-structure interaural time differences (ITD)] at low frequencies. This study measured listeners' sensitivity to ITD and interaural level differences (ILD) present in early (i.e., onset) and late parts of 80-ms pure tones of 250-, 500-, and 1000-Hz frequency. Following previous studies, tones carried static interaural cues or dynamic cues that peaked at sound onset and diminished to zero at sound offset or vice versa. Although better thresholds were observed in static than dynamic conditions overall, ITD discrimination was especially impaired, regardless of frequency, when cues were not available at sound onset. Results for ILD followed a similar pattern at 1000 Hz; at lower frequencies, ILD thresholds did not differ significantly between dynamic-cue conditions. The results support the "onset" hypothesis of Houtgast and Plomp [(1968). J. Acoust. Soc. Am. 44, 807-812] for ITD discrimination, but not necessarily ILD discrimination, in low-frequency pure tones.
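
    As a rough illustration of the dynamic-cue stimuli described above (an ITD that is largest at sound onset and diminishes to zero at offset, or the reverse), the sketch below synthesizes such a tone with NumPy. The parameter values are assumptions for the example, not the exact stimuli of the cited study, and the amplitude ramps normally applied at onset and offset are omitted for brevity.

```python
import numpy as np

def dynamic_itd_tone(freq_hz=500.0, dur_s=0.08, itd_peak_s=200e-6,
                     fs=48_000, onset_weighted=True):
    """Binaural pure tone whose ITD ramps linearly between itd_peak_s and 0.

    onset_weighted=True puts the largest ITD at onset and none at offset;
    False reverses the ramp. Returns an (n_samples, 2) stereo array.
    """
    t = np.arange(int(dur_s * fs)) / fs
    ramp = 1.0 - t / dur_s if onset_weighted else t / dur_s
    itd = itd_peak_s * ramp                          # time-varying delay (s)
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * (t - itd))  # delay the fine structure
    return np.column_stack([left, right])
```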

  15. Rollout and Turnoff (ROTO) Guidance and Information Displays: Effect on Runway Occupancy Time in Simulated Low-Visibility Landings

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith

    2001-01-01

    This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system, which includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs): 300 and 1200 ft. For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.

  16. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
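
    The quantity being measured reduces to a step height across the graphene edge in the AFM topograph. As a simplified illustration only (not the PeakForce-based protocol of this work, which also controls the imaging force and accounts for the substrate adsorbate layer), a step height could be estimated from a single line profile as follows; the percentile-based split is an assumption made for this sketch.

```python
import numpy as np

def step_height_nm(profile_nm):
    """Estimate the step height from an AFM line profile crossing an edge.

    Splits samples into 'substrate' and 'flake' groups at the midpoint of
    the 5th and 95th height percentiles and returns the difference of the
    group medians, which is robust to isolated spikes in the profile.
    """
    profile_nm = np.asarray(profile_nm, dtype=float)
    lo, hi = np.percentile(profile_nm, [5, 95])
    cut = 0.5 * (lo + hi)
    substrate = profile_nm[profile_nm < cut]
    flake = profile_nm[profile_nm >= cut]
    return float(np.median(flake) - np.median(substrate))
```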

  17. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  18. The JPL Tropical Cyclone Information System: Methods for Creating Near Real-Time Science Data Portals

    NASA Astrophysics Data System (ADS)

    Knosp, B.; Li, P.; Vu, Q.; Hristova-Veleva, S. M.; Turk, F. J.; Shen, T.; Poulsen, W. L.; Lambrigtsen, B.

    2013-12-01

    The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data related to tropical cyclones. The JPL TCIS was made public in 2008 and initially served as a data and plot archive for past storms. More recently, the TCIS has expanded its functionality to provide near real-time (NRT) data portals for specific science field campaigns, such as the Genesis and Rapid Intensification Processes (GRIP) campaign in 2010 and the ongoing Hurricane and Severe Storm Sentinel (HS3) campaign. These NRT portals allow campaign team members to look at current conditions in the geographical domain of interest. Creating the NRT portals has been particularly challenging due to (1) the wide breadth of data that needs to be collected, (2) the number of data product plots that need to be served to the user, (3) the mechanics of the search and discovery tools, and (4) the issue of how to display multiple data plots at once in a meaningful way. Recently, the TCIS team has been working to redevelop the NRT portals with these challenges in mind. The new architecture we created allows for configurable mission portals that can be created on the fly. In addition to a new database that handles portal configuration, these updated NRT portals also support an improved navigation method that allows users to see what data is available, as well as a resizable visualization area based on the user's client. The integration of the NRT portal with the NASA Earth Observing System Simulators Suite (NEOS3) and a set of new online data analysis tools allows users to compare observations and model outputs directly and perform statistical analysis with multiple datasets. In this poster, we will present the methods and practices we used to create configurable portals, gather and plot science data with low latencies, design a navigation scheme that supports multiple
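
    The abstract describes, but does not specify, the schema behind the configurable mission portals. A minimal sketch of what such a configuration record and on-the-fly registration might look like is given below; every field name and value is invented for illustration and is not the actual TCIS schema.

```python
from dataclasses import dataclass, field

@dataclass
class PortalConfig:
    """One near real-time campaign portal (hypothetical fields)."""
    campaign: str        # field-campaign short name
    bbox: tuple          # (lon_min, lat_min, lon_max, lat_max) in degrees
    start_date: str      # ISO date the portal begins collecting data
    data_products: list = field(default_factory=list)

def register_portal(registry, cfg):
    """Store a configuration so the portal can be built on the fly."""
    registry[cfg.campaign] = cfg

# Usage: register a portal covering a hypothetical Atlantic domain.
portals = {}
register_portal(portals, PortalConfig(
    campaign="ExampleCampaign",
    bbox=(-100.0, 0.0, -10.0, 50.0),
    start_date="2013-08-01",
    data_products=["satellite overpasses", "model forecast tracks"],
))
```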

  19. 18 CFR 701.206 - Time limit for WRC final determinations regarding requests for information appealed by the...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 18, Conservation of Power and Water Resources, Volume 2 (revised as of April 1, 2013). Water Resources Council, Council Organization, Availability of Information. § 701.206: Time limit for WRC final determinations regarding requests for information appealed by the requester from an initial adverse determination.

  20. 18 CFR 701.206 - Time limit for WRC final determinations regarding requests for information appealed by the...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 18, Conservation of Power and Water Resources, Volume 2 (revised as of April 1, 2014). Water Resources Council, Council Organization, Availability of Information. § 701.206: Time limit for WRC final determinations regarding requests for information appealed by the requester from an initial adverse determination.