Sample records for "accurate timing improves"

  1. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting 577 CALIBRATING GPS WITH TWSTFT FOR ACCURATE TIME TRANSFER Z. Jiang and... primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...

  2. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    DTIC Science & Technology

    2005-09-01

    develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our...consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectile...Recently, the time-accurate technique was used to obtain improved results for Magnus moment and roll damping moment of a spinning projectile at transonic

  3. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning

    PubMed Central

    Silva, Susana F.; Domingues, José Paulo

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, improving greatly the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, providing that offline processing is allowed. PMID:29599938
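
As context for the rapid lifetime determination (RLD) approach evaluated in the abstract above, the classic two-gate RLD estimator uses two equal-width gates separated by a delay Δt and recovers the lifetime as τ = Δt / ln(D0/D1). The Python sketch below is a generic illustration of that estimator only, not the authors' HiLo or IRF-deconvolution pipeline; the synthetic decay and gate settings are assumptions made for demonstration.

```python
import numpy as np

def rld_two_gate(d0, d1, gate_separation):
    """Two-gate rapid lifetime determination (RLD).

    d0, d1 : integrated photon counts in two equal-width gates, the second
             delayed by `gate_separation` (same time units as the lifetime).
    """
    d0 = np.asarray(d0, dtype=float)
    d1 = np.asarray(d1, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        return gate_separation / np.log(d0 / d1)

# Synthetic single-exponential decay, tau = 2.5 ns, two adjacent 2.5 ns gates.
tau_true, dt = 2.5, 2.5            # ns
t = np.linspace(0.0, 10.0, 1001)   # ns
decay = np.exp(-t / tau_true)
g0 = (t >= 0.0) & (t < dt)
g1 = (t >= dt) & (t < 2 * dt)
d0 = np.trapz(decay[g0], t[g0])    # counts collected in the first gate
d1 = np.trapz(decay[g1], t[g1])    # counts collected in the delayed gate
print(rld_two_gate(d0, d1, dt))    # ~2.5 ns for an ideal, noise-free decay
```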

  4. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    PubMed

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, improving greatly the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, providing that offline processing is allowed.

  5. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system was developed which includes GPS, a 1 MC frequency source, and a self-made clock system. The one-second (PPS) signal from GPS is used to synchronize the clock system, and the information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  6. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
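
To illustrate the kind of Padé-based (compact) stencil the abstract refers to, the sketch below compares the standard second-order pointwise stencil with the fourth-order compact stencil for the 1D model problem u'' + k²u = 0 on a uniform grid, whose exact solution with the chosen boundary data is sin(kx). This is a textbook scheme written only for illustration; it is not the paper's weighted-average, nonuniform-grid formulation, and the wavenumber and grid sizes are arbitrary assumptions.

```python
import numpy as np

def helmholtz_1d(k, n, compact=False):
    """Solve u'' + k^2 u = 0 on [0, 1], u(0)=0, u(1)=sin(k); exact u = sin(k x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    if compact:
        # Fourth-order compact (Numerov/Pade-type) stencil for the homogeneous problem.
        off = 1.0 / h**2 + k**2 / 12.0
        diag = -2.0 / h**2 + 10.0 * k**2 / 12.0
    else:
        # Standard second-order pointwise stencil.
        off = 1.0 / h**2
        diag = -2.0 / h**2 + k**2
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = off
        if i < n - 1:
            A[i, i + 1] = off
    b[-1] = -off * np.sin(k)          # known boundary value u(1) moved to the RHS
    u = np.linalg.solve(A, b)
    return np.max(np.abs(u - np.sin(k * x[1:-1])))

k = 20.0
for n in (50, 100, 200):
    print(n, helmholtz_1d(k, n), helmholtz_1d(k, n, compact=True))
```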

  7. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
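
The idea of converging an implicit physical-time discretization with an explicit multi-stage Runge-Kutta pseudo-time iteration ("dual time stepping") can be shown on a tiny model problem. The sketch below uses BDF2 in physical time and a four-stage pseudo-time march on a stiff two-component linear ODE; local time stepping, residual smoothing and multigrid are omitted, and all parameter values are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def f(u):
    # Model "flow residual": one stiff mode and one slow mode.
    return np.array([-1000.0 * u[0], -1.0 * u[1]])

def dual_time_step(u_n, u_nm1, dt, dtau=2.0e-3, n_pseudo=1000):
    """One BDF2 physical step, converged by explicit 4-stage RK pseudo-time iterations."""
    def residual(u):
        # BDF2 residual: (3u - 4u^n + u^{n-1}) / (2 dt) - f(u) = 0 at convergence
        return (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt) - f(u)

    u = u_n.copy()
    alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)          # multistage coefficients
    for _ in range(n_pseudo):
        u0 = u
        for a in alphas:
            u = u0 - a * dtau * residual(u)       # march du/dtau = -R(u) toward R = 0
    return u

dt = 0.1                       # far above the explicit stability limit of the stiff mode
exact = lambda t: np.array([np.exp(-1000.0 * t), np.exp(-1.0 * t)])
u_nm1, u_n = exact(0.0), exact(dt)
for step in range(2, 6):
    u_n, u_nm1 = dual_time_step(u_n, u_nm1, dt), u_n
    print(f"t = {step * dt:.1f}: slow mode {u_n[1]:.5f} vs exact {exact(step * dt)[1]:.5f}")
```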

  8. Influence of accurate and inaccurate 'split-time' feedback upon 10-mile time trial cycling performance.

    PubMed

    Wilson, Mathew G; Lane, Andy M; Beedie, Chris J; Farooq, Abdulaziz

    2012-01-01

    The objective of the study is to examine the impact of accurate and inaccurate 'split-time' feedback upon a 10-mile time trial (TT) performance and to quantify power output into a practically meaningful unit of variation. Seven well-trained cyclists completed four randomised bouts of a 10-mile TT on a SRM™ cycle ergometer. TTs were performed with (1) accurate performance feedback, (2) without performance feedback, (3) and (4) false negative and false positive 'split-time' feedback showing performance 5% slower or 5% faster than actual performance. There were no significant differences in completion time, average power output, heart rate or blood lactate between the four feedback conditions. There were significantly lower (p < 0.001) average [Formula: see text] (ml min(-1)) and [Formula: see text] (l min(-1)) scores in the false positive (3,485 ± 596; 119 ± 33) and accurate (3,471 ± 513; 117 ± 22) feedback conditions compared to the false negative (3,753 ± 410; 127 ± 27) and blind (3,772 ± 378; 124 ± 21) feedback conditions. Cyclists spent a greater amount of time in a '20 watt zone' 10 W either side of average power in the negative feedback condition (fastest) than the accurate feedback (slowest) condition (39.3 vs. 32.2%, p < 0.05). There were no significant differences in the 10-mile TT performance time between accurate and inaccurate feedback conditions, despite significantly lower average [Formula: see text] and [Formula: see text] scores in the false positive and accurate feedback conditions. Additionally, cycling with a small variation in power output (10 W either side of average power) produced the fastest TT. Further psycho-physiological research should examine the mechanism(s) why lower [Formula: see text] and [Formula: see text] scores are observed when cycling in a false positive or accurate feedback condition compared to a false negative or blind feedback condition.

  9. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need
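
The core of the reconstruction described above is that each single-axis accelerometer at position r_i with sensing direction n_i measures n_i · (a_cm + α × r_i + ω × (ω × r_i)); once the centripetal term is evaluated from the previous angular-velocity estimate, the six readings become a linear system in the six unknowns (a_cm, α). The sketch below sets up and solves that system for one time step with an invented, well-conditioned sensor layout; it is a generic rigid-body reconstruction, not the authors' specific algorithm or the HIT-system geometry.

```python
import numpy as np

# Illustrative layout (assumed): a co-located triaxial cluster at the reference
# point plus three offset tangential sensors. r_i in metres, n_i unit directions.
r = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
n = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])

def reconstruct(meas, omega_prev):
    """Recover linear acceleration a_cm and angular acceleration alpha for one sample.

    Each reading is n_i . (a_cm + alpha x r_i + omega x (omega x r_i)); with omega
    taken from the previous time step, the system is linear in (a_cm, alpha).
    """
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for i in range(6):
        A[i, :3] = n[i]                          # n_i . a_cm
        A[i, 3:] = np.cross(r[i], n[i])          # n_i . (alpha x r_i) = alpha . (r_i x n_i)
        b[i] = meas[i] - n[i] @ np.cross(omega_prev, np.cross(omega_prev, r[i]))
    x = np.linalg.solve(A, b)
    return x[:3], x[3:]

# Synthetic check: choose a motion state, generate the six readings, recover the state.
a_true = np.array([20.0, -5.0, 9.81])            # m/s^2
alpha_true = np.array([300.0, -100.0, 50.0])     # rad/s^2
omega = np.array([10.0, 2.0, -4.0])              # rad/s (from the previous step)
meas = np.array([n[i] @ (a_true + np.cross(alpha_true, r[i])
                         + np.cross(omega, np.cross(omega, r[i]))) for i in range(6)])
a_cm, alpha = reconstruct(meas, omega)
print(np.allclose(a_cm, a_true), np.allclose(alpha, alpha_true))   # True True
```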

  10. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the stand point of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  11. Accurate Sample Time Reconstruction of Inertial FIFO Data.

    PubMed

    Stieber, Sebastian; Dorsch, Rainer; Haubelt, Christian

    2017-12-13

    In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved, compared to single sample acquisition.
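
A simple way to picture the forward-only reconstruction described above: each FIFO read returns a batch of samples plus the sensor's internal timer value, and the effective sample period is estimated from consecutive timer readings rather than from the host clock or the nominal data rate. The sketch below is an illustrative, generic version of that idea (not the paper's exact algorithm); the field names, the 24-bit timer width and the timer tick are assumptions.

```python
from dataclasses import dataclass
from typing import List

TIMER_WRAP = 1 << 24        # assumed 24-bit internal sensor timer
TIMER_TICK_US = 39.0625     # assumed timer resolution in microseconds

@dataclass
class FifoBatch:
    n_samples: int          # samples read from the FIFO in this burst
    timer: int              # sensor-internal timer value latched at the read

def reconstruct_timestamps(batches: List[FifoBatch]) -> List[float]:
    """Assign a timestamp (us, sensor time base) to every sample, forward-only."""
    timestamps = []
    t_prev = batches[0].timer
    t_abs = 0.0
    for batch in batches[1:]:
        # Elapsed sensor time since the previous read, handling timer wrap-around.
        ticks = (batch.timer - t_prev) % TIMER_WRAP
        elapsed = ticks * TIMER_TICK_US
        period = elapsed / batch.n_samples          # effective period of this burst
        timestamps.extend(t_abs + (i + 1) * period for i in range(batch.n_samples))
        t_abs += elapsed
        t_prev = batch.timer
    return timestamps

# Example: two reads of a nominal 100 Hz sensor whose true data rate drifts slightly.
batches = [FifoBatch(0, 1000), FifoBatch(25, 1000 + 6410), FifoBatch(25, 1000 + 12800)]
ts = reconstruct_timestamps(batches)
print(ts[0], ts[24], ts[-1])   # reconstructed sample times in microseconds
```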

  12. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
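
Link correlation in this setting is usually expressed through conditional packet-reception probabilities over the same broadcasts: given synchronized reception traces for two receivers, how often one succeeds when the other does. The sketch below is a generic estimator that blends a long-term average with a short-term window purely to illustrate the concept; it is not the LACE algorithm from the paper, and the window and weighting are arbitrary assumptions.

```python
import numpy as np

def conditional_prr(rx_a, rx_b):
    """P(receiver B gets a packet | receiver A got it), from boolean reception traces."""
    rx_a = np.asarray(rx_a, dtype=bool)
    rx_b = np.asarray(rx_b, dtype=bool)
    if rx_a.sum() == 0:
        return float("nan")
    return (rx_a & rx_b).sum() / rx_a.sum()

def blended_estimate(rx_a, rx_b, window=32, weight_short=0.5):
    """Blend long-term and short-term conditional PRR (illustrative weighting)."""
    long_term = conditional_prr(rx_a, rx_b)
    short_term = conditional_prr(rx_a[-window:], rx_b[-window:])
    return weight_short * short_term + (1.0 - weight_short) * long_term

# Example traces: 200 broadcasts; the two receivers fail together in bursts,
# so the conditional PRR is higher than receiver B's unconditional PRR.
rng = np.random.default_rng(0)
burst = rng.random(200) < 0.8              # shared channel condition
rx_a = burst & (rng.random(200) < 0.95)
rx_b = burst & (rng.random(200) < 0.95)
print(conditional_prr(rx_a, rx_b), rx_b.mean(), blended_estimate(rx_a, rx_b))
```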

  14. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  15. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.

  16. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yield a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 μm and 1.55 μm along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using the simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 μm range is also shown.
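
As a rough orientation for the magnitudes involved in the second experiment above (100 km fibre, 1 nm wavelength spacing near 1.55 μm), the delay difference between the two wavelengths scales as Δτ ≈ D · Δλ · L. The snippet below evaluates this with an assumed standard single-mode dispersion coefficient of 17 ps/(nm·km); that coefficient is a typical value, not one quoted in the abstract.

```python
# Delay difference between two wavelengths caused by chromatic dispersion:
#   delta_tau = D * delta_lambda * L
D = 17.0            # ps/(nm*km), assumed dispersion near 1550 nm (typical G.652 value)
delta_lambda = 1.0  # nm, wavelength spacing from the abstract
L = 100.0           # km, fibre length from the abstract

delta_tau_ps = D * delta_lambda * L
print(f"expected delay difference: {delta_tau_ps:.0f} ps (~{delta_tau_ps / 1000:.1f} ns)")
# ~1700 ps, i.e. a ~1.7 ns asymmetry that must be measured to sub-nanosecond accuracy
```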

  17. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with simulated times ranging from a minimum of 0.2 seconds to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% scale wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  18. Helicopter flight dynamics simulation with a time-accurate free-vortex wake model

    NASA Astrophysics Data System (ADS)

    Ribera, Maria

    This dissertation describes the implementation and validation of a coupled rotor-fuselage simulation model with a time-accurate free-vortex wake model capable of capturing the response to maneuvers of arbitrary amplitude. The resulting model has been used to analyze different flight conditions, including both steady and transient maneuvers. The flight dynamics model is based on a system of coupled nonlinear rotor-fuselage differential equations in first-order, state-space form. The rotor model includes flexible blades, with coupled flap-lag-torsion dynamics and swept tips; the rigid body dynamics are modeled with the non-linear Euler equations. The free wake models the rotor flow field by tracking the vortices released at the blade tips. Their behavior is described by the equations of vorticity transport, which is approximated using finite differences, and solved using a time-accurate numerical scheme. The flight dynamics model can be solved as a system of non-linear algebraic trim equations to determine the steady state solution, or integrated in time in response to pilot-applied controls. This study also implements new approaches to reduce the prohibitive computational costs associated with such complex models without losing accuracy. The mathematical model was validated for trim conditions in level flight, turns, climbs and descents. The results obtained correlate well with flight test data, both in level flight as well as turning and climbing and descending flight. The swept tip model was also found to improve the trim predictions, particularly at high speed. The behavior of the rigid body and the rotor blade dynamics were also studied and related to the aerodynamic load distributions obtained with the free wake induced velocities. The model was also validated in a lateral maneuver from hover. The results show improvements in the on-axis prediction, and indicate a possible relation between the off-axis prediction and the lack of rotor-body interaction

  19. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
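
The refinement sensor mentioned above, the undivided second difference of a solution variable, is easy to state: a cell is flagged when |u_{i+1} - 2u_i + u_{i-1}| exceeds a threshold, with no division by the mesh spacing. The sketch below shows this in one dimension for an arbitrary threshold; it is only a schematic of the sensor, not of OVERFLOW's adaption machinery.

```python
import numpy as np

def flag_cells(u, threshold):
    """Flag interior cells whose undivided second difference exceeds `threshold`."""
    d2 = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])   # undivided: no 1/h^2 factor
    flags = np.zeros(u.shape, dtype=bool)
    flags[1:-1] = d2 > threshold
    return flags

# Example: a smeared step (stand-in for a shock or shear layer) on a coarse grid.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.02)
flags = flag_cells(u, threshold=0.05)
print(f"{flags.sum()} of {len(x)} cells flagged, around x = "
      f"{x[flags].min():.2f}..{x[flags].max():.2f}")
```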

  20. Accurate seismic phase identification and arrival time picking of glacial icequakes

    NASA Astrophysics Data System (ADS)

    Jones, G. A.; Doyle, S. H.; Dow, C.; Kulessa, B.; Hubbard, A.

    2010-12-01

    A catastrophic lake drainage event was monitored continuously using an array of six 4.5 Hz, three-component geophones in the Russell Glacier catchment, Western Greenland. Many thousands of events and arrival time phases (e.g., P- or S-wave) were recorded, often with events occurring simultaneously but at different locations. In addition, different styles of seismic events were identified, ranging from 'classical' tectonic earthquakes to tremors usually observed in volcanic regions. The presence of such a diverse and large dataset provides insight into the complex system of lake drainage. One of the most fundamental steps in seismology is the accurate identification of a seismic event and its associated arrival times. However, the collection of such a large and complex dataset makes the manual identification of a seismic event and picking of the arrival time phases time-consuming, with variable results. To overcome the issues of consistency and manpower, a number of different methods have been developed including short-term and long-term averages, spectrograms, wavelets, polarisation analyses, higher order statistics and auto-regressive techniques. Here we propose an automated procedure which establishes the phase type and accurately determines the arrival times. The procedure combines a number of different automated methods to achieve this, and is applied to the recently acquired lake drainage data. Accurate identification of events and their arrival time phases are the first steps in gaining a greater understanding of the extent of the deformation and the mechanism of such drainage events. A good knowledge of the propagation pathway of lake drainage meltwater through a glacier will have significant consequences for interpretation of glacial and ice sheet dynamics.
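
Of the automated approaches listed in the abstract, the short-term/long-term average (STA/LTA) ratio is the simplest event trigger and illustrates the idea of automatic arrival detection. The sketch below is a basic STA/LTA on a synthetic trace; it illustrates that one ingredient only, not the combined phase-identification procedure the authors propose, and the sampling rate, windows and threshold are assumed values.

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
    """Classic STA/LTA characteristic function on the signal energy |x|^2."""
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    i = np.arange(nlta - 1, len(energy))                 # samples with a full LTA window
    sta = (csum[i + 1] - csum[i + 1 - nsta]) / nsta      # trailing short-term mean energy
    lta = (csum[i + 1] - csum[i + 1 - nlta]) / nlta      # trailing long-term mean energy
    return i, sta / np.maximum(lta, 1e-12)

fs = 100.0                                               # Hz, assumed sampling rate
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, int(60 * fs))               # 60 s of background noise
onset = int(30 * fs)
trace[onset:onset + 500] += 8.0 * rng.normal(0.0, 1.0, 500)   # 5 s synthetic "icequake"
idx, ratio = sta_lta(trace, fs)
trigger = idx[np.argmax(ratio > 4.0)]                    # first sample exceeding threshold
print(f"triggered at t = {trigger / fs:.2f} s (true onset at 30.00 s)")
```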

  1. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  2. Improving Efficiency Using Time-Driven Activity-Based Costing Methodology.

    PubMed

    Tibor, Laura C; Schultz, Stacy R; Menaker, Ronald; Weber, Bradley D; Ness, Jay; Smith, Paula; Young, Phillip M

    2017-03-01

    The aim of this study was to increase efficiency in MR enterography using a time-driven activity-based costing methodology. In February 2015, a multidisciplinary team was formed to identify the personnel, equipment, space, and supply costs of providing outpatient MR enterography. The team mapped the current state, completed observations, performed timings, and calculated costs associated with each element of the process. The team used Pareto charts to understand the highest cost and most time-consuming activities, brainstormed opportunities, and assessed impact. Plan-do-study-act cycles were developed to test the changes, and run charts were used to monitor progress. The process changes consisted of revising the workflow associated with the preparation and administration of glucagon, with completed implementation in November 2015. The time-driven activity-based costing methodology allowed the radiology department to develop a process to more accurately identify the costs of providing MR enterography. The primary process modification was reassigning responsibility for the administration of glucagon from nurses to technologists. After implementation, the improvements demonstrated success by reducing non-value-added steps and cost by 13%, staff time by 16%, and patient process time by 17%. The saved process time was used to augment existing examination time slots to more accurately accommodate the entire enterographic examination. Anecdotal comments were captured to validate improved staff satisfaction within the multidisciplinary team. This process provided a successful outcome to address daily workflow frustrations that could not previously be improved. A multidisciplinary team was necessary to achieve success, in addition to the use of a structured problem-solving approach. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
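
Time-driven activity-based costing, as used above, needs only two estimates per resource: a capacity cost rate (cost per minute of available capacity) and the time each process step consumes on that resource; the cost of a step is their product. The toy numbers below are invented solely to show the arithmetic and do not come from the study.

```python
# Time-driven activity-based costing: cost of a step = capacity cost rate x time used.
# All rates and times below are invented for illustration; they are not the study's data.
rates_per_min = {"technologist": 1.10, "nurse": 1.40, "mr_scanner": 6.50}   # $/min

process_map = [
    # (step, resource, minutes)
    ("patient prep and glucagon administration", "technologist", 12),
    ("MR enterography acquisition",              "mr_scanner",   45),
    ("acquisition staffing",                     "technologist", 45),
    ("post-processing and send to PACS",         "technologist", 10),
]

total = 0.0
for step, resource, minutes in process_map:
    cost = rates_per_min[resource] * minutes
    total += cost
    print(f"{step:42s} {minutes:3d} min  ${cost:7.2f}")
print(f"{'total cost per examination':42s}          ${total:7.2f}")

# The kind of comparison that motivated the workflow change: the same 12-minute
# glucagon step costed with a nurse versus a technologist as the resource.
print(f"glucagon step: ${rates_per_min['nurse'] * 12:.2f} (nurse) -> "
      f"${rates_per_min['technologist'] * 12:.2f} (technologist)")
```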

  3. Accurate and efficient calculation of response times for groundwater flow

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
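
The heuristic mentioned at the end of the abstract can be made concrete for the simplest case: for one-dimensional flow on a domain of length L with fixed-head boundaries, the slowest eigenmode decays like exp(-π²Dt/L²), so the time to come within a fraction ε of steady state is roughly ln(1/ε)·L²/(π²D), i.e. proportional to L²/D. The snippet below just evaluates that estimate for assumed values of L and D; it is a back-of-the-envelope check, not the moment-based estimator of the paper.

```python
import math

L = 100.0      # m, assumed domain length
D = 1.0e-2     # m^2/s, assumed aquifer diffusivity
eps = 0.01     # call the flow "effectively steady" when the transient has decayed to 1%

# Slowest-mode decay rate for a 1D domain with fixed boundaries: pi^2 D / L^2
t_response = math.log(1.0 / eps) * L**2 / (math.pi**2 * D)
print(f"response time ~ {t_response:.3e} s  "
      f"(= {math.log(1.0 / eps) / math.pi**2:.3f} * L^2/D)")
```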

  4. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.

  5. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    PubMed

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

    To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log - T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in x-direction and 19.3 m/s in y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration is 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (∆t = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
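
The structure of the beam-delivery-time model described above is additive: BDT is the sum of layer-switch times, per-spot magnet preparation/verification and scan-move times, and spot delivery times set by the spill rate. The sketch below assembles those terms using the experimentally determined parameters quoted in the abstract; the example field layout (numbers of layers, spots and MU) is invented for illustration, and spill recharge and extraction-time limits are ignored.

```python
# Parameters reported in the abstract (synchrotron spot-scanning system).
LAYER_SWITCH_S = 1.91          # s, average energy-layer switch time
MAGNET_PREP_S = 1.93e-3        # s, magnet preparation and verification per spot
SCAN_SPEED_X = 5.9             # m/s
SCAN_SPEED_Y = 19.3            # m/s
SPILL_RATE_MU_S = 8.7          # MU/s, proton spill rate

def beam_delivery_time(layers):
    """layers: list of energy layers; each layer is a list of spots (x_m, y_m, mu)."""
    t = 0.0
    for i, layer in enumerate(layers):
        if i > 0:
            t += LAYER_SWITCH_S                       # energy change between layers
        prev = None
        for x, y, mu in layer:
            if prev is not None:
                dx, dy = abs(x - prev[0]), abs(y - prev[1])
                t += max(dx / SCAN_SPEED_X, dy / SCAN_SPEED_Y)   # spot-to-spot move
            t += MAGNET_PREP_S                        # magnet prep / verification
            t += mu / SPILL_RATE_MU_S                 # spot irradiation at the spill rate
            prev = (x, y)
    return t

# Invented example field: 3 energy layers, a 10 x 10 grid of 0.02 MU spots, 5 mm pitch.
layer = [(0.005 * ix, 0.005 * iy, 0.02) for iy in range(10) for ix in range(10)]
print(f"estimated BDT: {beam_delivery_time([layer] * 3):.1f} s")
```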

  6. Towards an accurate real-time locator of infrasonic sources

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, the amplitude at sensor recordings and even manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method based on the computation of the posterior probability density function (PPDF) of the source location, as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with a likelihood function (LF) of observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a decrease in computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate algorithms, provided the adequate APDF and LF are used. Then, we suggest using summation instead of integration in a general PPDF calculation for increased robustness, but this leads us to the 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied for the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of a linear mean travel time for the four first infrasonic phases in the overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability

  7. Improved Real-Time Monitoring Using Multiple Expert Systems

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Angelino, Robert; Quan, Alan G.; Veregge, John; Childs, Cynthia

    1993-01-01

    Monitor/Analyzer of Real-Time Voyager Engineering Link (MARVEL) computer program implements combination of techniques of both conventional automation and artificial intelligence to improve monitoring of complicated engineering system. Designed to support ground-based operations of Voyager spacecraft, also adapted to other systems. Enables more-accurate monitoring and analysis of telemetry, enhances productivity of monitoring personnel, reduces required number of such personnel by performing routine monitoring tasks, and helps ensure consistency in face of turnover of personnel. Programmed in C language and includes commercial expert-system software shell also written in C.

  8. Accurate frequency and time dissemination in the optical domain

    NASA Astrophysics Data System (ADS)

    Khabarova, K. Yu; Kalganova, E. S.; Kolachevsky, N. N.

    2018-02-01

    The development of the optical frequency comb technique has enabled a wide use of atomic optical clocks by allowing frequency conversion from the optical to the radio frequency range. Today, the fractional instability of such clocks has reached the record eighteen-digit level, two orders of magnitude better than for cesium fountains representing the primary frequency standard. This is paralleled by the development of techniques for transferring accurate time and optical frequency signals, including fiber links. With this technology, the fractional instability of transferred frequency can be lowered to below 10^-18 with an averaging time of 1000 s for a 1000 km optical link. At a distance of 500 km, a time signal uncertainty of 250 ps has been achieved. Optical links allow comparing optical clocks and creating a synchronized time and frequency standard network at a new level of precision. Prospects for solving new problems arise, including the determination of the gravitational potential, the measurement of the continental Sagnac effect, and precise tests of fundamental theories.

  9. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to the third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  10. Remote magnetic navigation for accurate, real-time catheter positioning and ablation in cardiac electrophysiology procedures.

    PubMed

    Filgueiras-Rama, David; Estrada, Alejandro; Shachar, Josh; Castrejón, Sergio; Doiny, David; Ortega, Marta; Gang, Eli; Merino, José L

    2013-04-21

    New remote navigation systems have been developed to improve current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrium flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid changes (msec) in the magnetic field magnitude and a very flexible magnetized catheter allow real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate.

  11. Remote Magnetic Navigation for Accurate, Real-time Catheter Positioning and Ablation in Cardiac Electrophysiology Procedures

    PubMed Central

    Filgueiras-Rama, David; Estrada, Alejandro; Shachar, Josh; Castrejón, Sergio; Doiny, David; Ortega, Marta; Gang, Eli; Merino, José L.

    2013-01-01

    New remote navigation systems have been developed to improve current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrium flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid changes (msec) in the magnetic field magnitude and a very flexible magnetized catheter allow real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate. PMID:23628883

  12. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  13. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing requirements, precise time intervals should be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm, 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for a 25 km G655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and is suited to a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
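
The 125 μs figure quoted above is close to a simple propagation estimate: the one-way time of flight is t = n·L/c, where n is the fibre group index. The snippet below evaluates this for 25 km with an assumed group index of about 1.468, a typical value near 1550 nm rather than one stated in the abstract.

```python
C = 299_792_458.0      # m/s, speed of light in vacuum
n_group = 1.468        # assumed fibre group index near 1550 nm
L = 25_000.0           # m, fibre length from the abstract

t_flight = n_group * L / C
print(f"one-way time of flight ~ {t_flight * 1e6:.1f} us")   # ~122 us vs 125 us measured
```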

  14. Temporal binning of time-correlated single photon counting data improves exponential decay fits and imaging speed

    PubMed Central

    Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.

    2016-01-01

    Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
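
The temporal binning that gives the speed-up above is simply a reduction of the 256-channel decay histogram to coarser time bins by summing adjacent channels before fitting. The sketch below shows the rebinning step and a crude single-exponential tail fit on synthetic data; the channel counts follow the abstract (256 reduced to 42), but the decay parameters and the fitting approach are illustrative assumptions, not SPCImage's algorithm.

```python
import numpy as np

def rebin(decay, factor):
    """Sum adjacent time channels; trailing channels that do not fill a bin are dropped."""
    n = (len(decay) // factor) * factor
    return decay[:n].reshape(-1, factor).sum(axis=1)

rng = np.random.default_rng(0)
n_channels, t_range_ns, tau_ns, n_photons = 256, 12.5, 2.0, 700
t = (np.arange(n_channels) + 0.5) * t_range_ns / n_channels
p = np.exp(-t / tau_ns)
p /= p.sum()
decay = rng.multinomial(n_photons, p)             # Poisson-like photon histogram

binned = rebin(decay, 6)                          # 256 channels -> 42 coarse bins
t_binned = rebin(t, 6) / 6.0                      # mean time of each coarse bin

# Crude single-exponential estimate: weighted linear fit of log(counts) versus time.
mask = binned > 0
coeffs = np.polyfit(t_binned[mask], np.log(binned[mask]), 1, w=np.sqrt(binned[mask]))
print(f"estimated lifetime: {-1.0 / coeffs[0]:.2f} ns (true {tau_ns} ns)")
```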

  15. Capsule-odometer: a concept to improve accurate lesion localisation.

    PubMed

    Karargyris, Alexandros; Koulaouzidis, Anastasios

    2013-09-21

    In order to improve lesion localisation in small-bowel capsule endoscopy, a modified capsule design has been proposed incorporating localisation and - in theory - stabilization capabilities. The proposed design consists of a capsule fitted with protruding wheels attached to a spring-mechanism. This would act as a miniature odometer, leading to more accurate lesion localization information in relation to the onset of the investigation (spring expansion e.g., pyloric opening). Furthermore, this capsule could allow stabilization of the recorded video as any erratic, non-forward movement through the gut is minimised. Three-dimensional (3-D) printing technology was used to build a capsule prototype. Thereafter, miniature wheels were also 3-D printed and mounted on a spring which was attached to conventional capsule endoscopes for the purpose of this proof-of-concept experiment. In vitro and ex vivo experiments with porcine small-bowel are presented herein. Further experiments have been scheduled.

  16. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  17. Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1991-01-01

    Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.

  18. Improved management of radiotherapy departments through accurate cost data.

    PubMed

    Kesteloot, K; Lievens, Y; van der Schueren, E

    2000-06-01

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated, based on accreditation norms for radiotherapy departments set forth in the Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They respectively represent around 3, 30, 50, 4 and 13% of the total costs, irrespective of the department size. The average cost per patient lowers with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations of patient load do not affect the cost picture significantly due to a small impact of variable costs. With larger increases in patient load however, additional equipment and/or staff will become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming a 20% less than full time availability of personnel; that due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; that by changing the clinical life-time of buildings and equipment with unchanged interest rate, a 5% reduction of total costs and cost per patient can be calculated. More sophisticated equipment will not have a very large impact on the cost (+/-4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on the Belgian legislation, are not outrageous is shown by replacing them by the USA Blue book recommendations. Depending on the department size, costs in

  19. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  20. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^-10 to supersonic speeds.

  1. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Roehl, U.; Westerhold, T.; Laskar, J.

    2012-12-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  2. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Westerhold, Thomas; Röhl, Ursula; Laskar, Jacques

    2012-06-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ˜54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  3. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; boundaries between these geological units therefore correspond to dramatic faunal and/or floral turnovers, and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  4. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require the minimum use of computer memory and computational times. Unstructured flow solvers typically require more computer memory than a structured flow solver due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver to first decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and modified near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher

  5. A time-accurate finite volume method valid at all flow velocities

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1993-01-01

    A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and that it exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on the Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil

  6. Decision tree for accurate infection timing in individuals newly diagnosed with HIV-1 infection.

    PubMed

    Verhofstede, Chris; Fransen, Katrien; Van Den Heuvel, Annelies; Van Laethem, Kristel; Ruelle, Jean; Vancutsem, Ellen; Stoffels, Karolien; Van den Wijngaert, Sigi; Delforge, Marie-Luce; Vaira, Dolores; Hebberecht, Laura; Schauvliege, Marlies; Mortier, Virginie; Dauwe, Kenny; Callens, Steven

    2017-11-29

    There is today no gold-standard method to accurately determine the time passed since infection at HIV diagnosis. Infection timing and incidence measurement are, however, essential to better monitor the dynamics of local epidemics and the effect of prevention initiatives. Three methods for infection timing were evaluated using 237 serial samples from documented seroconversions and 566 cross-sectional samples from newly diagnosed patients: identification of antibodies against the HIV p31 protein in INNO-LIA, Sedia™ BED CEIA and Sedia™ LAg-Avidity EIA. A multi-assay decision tree for infection timing was developed. Clear differences in recency window between BED CEIA, LAg-Avidity EIA and p31 antibody presence were observed, with a switch from recent to long-term infection a median of 169.5, 108.0 and 64.5 days after collection of the pre-seroconversion sample, respectively. BED showed high reliability for identification of long-term infections, while LAg-Avidity is highly accurate for identification of recent infections. Using BED as the initial assay to identify long-term infections, and LAg-Avidity as a confirmatory assay for those classified as recent infections by BED, exploits the strengths of both while reducing the workload. The short recency window of p31 antibodies allows very early infections to be discriminated from early infections based on this marker. BED recent infection results not confirmed by LAg-Avidity are considered to reflect a period more distant from the infection time. False recency predictions in this group can be minimized by elimination of patients with a CD4 count of less than 100 cells/mm3 or without p31 antibodies. For the 566 cross-sectional samples, the outcome of the decision tree confirmed the infection timing based on the results of all 3 markers but reduced the overall cost from 13.2 USD to 5.2 USD per sample. A step-wise multi-assay decision tree allows accurate timing of the HIV infection at diagnosis at affordable effort and cost and can be an important
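
    The step-wise logic described above maps naturally onto a short branching function. The sketch below is an illustrative encoding of that logic only; the boolean assay inputs, the handling of the CD4 filter, and the category labels are assumptions made for illustration, not the published algorithm's exact rules.

```python
def classify_infection(bed_recent, lag_recent, p31_present, cd4_count):
    """Illustrative encoding of the step-wise multi-assay infection-timing logic.
    Inputs are the 'recent' calls of the BED CEIA and LAg-Avidity EIA, the presence
    of anti-p31 antibodies (INNO-LIA) and the CD4 count (cells/mm3).
    Category labels are hypothetical."""
    if not bed_recent:
        return "long-term"                        # BED is reliable for long-term calls
    if lag_recent:
        # LAg-Avidity confirms the recent call; p31 separates very early from early
        return "very early" if not p31_present else "early"
    # BED 'recent' not confirmed by LAg-Avidity: a period more distant from infection;
    # filter likely false-recent calls before accepting the label
    if cd4_count < 100 or not p31_present:
        return "excluded (likely false recent)"
    return "recent, unconfirmed by LAg"

print(classify_infection(bed_recent=True, lag_recent=True, p31_present=False, cd4_count=520))
```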

  7. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
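
    The underlying idea, finding the lag that maximizes the cross-correlation between the two SO2 time series and dividing the instrument separation by that lag, can be sketched in a few lines. The processing below is illustrative and simplified relative to the paper's method.

```python
import numpy as np

def plume_speed(so2_upwind, so2_downwind, sample_dt_s, separation_m):
    """Estimate plume speed from two time-synchronized SO2 column-amount series
    recorded a known distance apart (illustrative sketch only)."""
    a = np.asarray(so2_upwind, float) - np.mean(so2_upwind)
    b = np.asarray(so2_downwind, float) - np.mean(so2_downwind)
    xcorr = np.correlate(b, a, mode="full")          # lag of downwind relative to upwind
    lag_samples = int(np.argmax(xcorr)) - (len(a) - 1)
    lag_s = lag_samples * sample_dt_s
    return separation_m / lag_s if lag_s > 0 else float("nan")
```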

  8. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. This approach relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  9. Time-driven Activity-based Costing More Accurately Reflects Costs in Arthroplasty Surgery.

    PubMed

    Akhavan, Sina; Ward, Lorrayne; Bozic, Kevin J

    2016-01-01

    Cost estimates derived from traditional hospital cost accounting systems have inherent limitations that restrict their usefulness for measuring process and quality improvement. Newer approaches such as time-driven activity-based costing (TDABC) may offer more precise estimates of true cost, but to our knowledge, the differences between this TDABC and more traditional approaches have not been explored systematically in arthroplasty surgery. The purposes of this study were to compare the costs associated with (1) primary total hip arthroplasty (THA); (2) primary total knee arthroplasty (TKA); and (3) three surgeons performing these total joint arthroplasties (TJAs) as measured using TDABC versus traditional hospital accounting (TA). Process maps were developed for each phase of care (preoperative, intraoperative, and postoperative) for patients undergoing primary TJA performed by one of three surgeons at a tertiary care medical center. Personnel costs for each phase of care were measured using TDABC based on fully loaded labor rates, including physician compensation. Costs associated with consumables (including implants) were calculated based on direct purchase price. Total costs for 677 primary TJAs were aggregated over 17 months (January 2012 to May 2013) and organized into cost categories (room and board, implant, operating room services, drugs, supplies, other services). Costs derived using TDABC, based on actual time and intensity of resources used, were compared with costs derived using TA techniques based on activity-based costing and indirect costs calculated as a percentage of direct costs from the hospital decision support system. Substantial differences between cost estimates using TDABC and TA were found for primary THA (USD 12,982 TDABC versus USD 23,915 TA), primary TKA (USD 13,661 TDABC versus USD 24,796 TA), and individually across all three surgeons for both (THA: TDABC = 49%-55% of TA total cost; TKA: TDABC = 53%-55% of TA total cost). Cost
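
    The core TDABC arithmetic described above (time consumed by each resource multiplied by that resource's capacity cost rate, plus consumables at direct purchase price) can be sketched in a few lines. Everything below uses hypothetical figures and resource names, not the study's data.

```python
# Minimal time-driven activity-based costing (TDABC) sketch. All rates, times,
# and the consumables figure are hypothetical placeholders, not the study's data.

def capacity_cost_rate(annual_cost_usd, practical_capacity_minutes):
    """Cost per minute of supplying a resource at its practical capacity."""
    return annual_cost_usd / practical_capacity_minutes

rates = {                                    # USD per minute (hypothetical)
    "surgeon":    capacity_cost_rate(900_000, 80_000),
    "or_nurse":   capacity_cost_rate(120_000, 100_000),
    "pacu_nurse": capacity_cost_rate(110_000, 100_000),
}

# process map for one episode of care: (resource, minutes consumed in that phase)
process_map = [("surgeon", 110), ("or_nurse", 140), ("pacu_nurse", 90)]

personnel_cost = sum(rates[res] * minutes for res, minutes in process_map)
consumables = 5_800                          # e.g. implant at direct purchase price
print(round(personnel_cost + consumables, 2))
```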

  10. AERONET Version 3 Release: Providing Significant Improvements for Multi-Decadal Global Aerosol Database and Near Real-Time Validation

    NASA Technical Reports Server (NTRS)

    Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason

    2016-01-01

    Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.

  11. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  12. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and

  13. A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Liu, Nan-Suey

    1992-01-01

    A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
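
    The record does not state which limiter or flux form the code uses; purely as a generic illustration of the TVD idea, the sketch below advances linear advection one step with a minmod-limited MUSCL reconstruction, which prevents the total variation of the solution from growing and so suppresses spurious oscillations near discontinuities.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, which keeps the update TVD."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advect_step(u, c):
    """One step of a minmod-limited MUSCL scheme for u_t + a*u_x = 0 with a > 0,
    CFL number 0 < c <= 1, periodic boundaries (illustrative only)."""
    du_l = u - np.roll(u, 1)            # backward differences
    du_r = np.roll(u, -1) - u           # forward differences
    slope = minmod(du_l, du_r)
    # reconstructed value at the i+1/2 interface (upwind side for positive speed)
    u_face = u + 0.5 * (1.0 - c) * slope
    flux = c * u_face
    return u - (flux - np.roll(flux, 1))
```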

  14. Real-Time and Accurate Identification of Single Oligonucleotide Photoisomers via an Aerolysin Nanopore.

    PubMed

    Hu, Zheng-Li; Li, Zi-Yuan; Ying, Yi-Lun; Zhang, Junji; Cao, Chan; Long, Yi-Tao; Tian, He

    2018-04-03

    Identification of the configuration for the photoresponsive oligonucleotide plays an important role in the ingenious design of DNA nanomolecules and nanodevices. Due to the limited resolution and sensitivity of present methods, it remains a challenge to determine the accurate configuration of photoresponsive oligonucleotides, much less a precise description of their photoconversion process. Here, we used an aerolysin (AeL) nanopore-based confined space for real-time determination and quantification of the absolute cis/trans configuration of each azobenzene-modified oligonucleotide (Azo-ODN) with a single molecule resolution. The two completely separated current distributions with narrow peak widths at half height (<0.62 pA) are assigned to cis/trans-Azo-ODN isomers, respectively. Due to the high current sensitivity, each isomer of Azo-ODN could be undoubtedly identified, which gives the accurate photostationary conversion values of 82.7% for trans-to-cis under UV irradiation and 82.5% for cis-to-trans under vis irradiation. Further real-time kinetic evaluation reveals that the photoresponsive rate constants of Azo-ODN from trans-to-cis and cis-to-trans are 0.43 and 0.20 min^-1, respectively. This study will promote the sophisticated design of photoresponsive ODN to achieve an efficient and applicable photocontrollable process.
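
    As a worked illustration of how a reported rate constant and photostationary fraction combine, the sketch below assumes a simple first-order relaxation toward the photostationary state, starting from an all-trans population; the functional form and starting condition are assumptions for illustration, not taken from the paper.

```python
import math

def cis_fraction(t_min, k_per_min=0.43, f_pss=0.827):
    """First-order approach to the photostationary state under UV irradiation,
    using the reported rate constant and photostationary value (illustrative model;
    assumes an all-trans starting population)."""
    return f_pss * (1.0 - math.exp(-k_per_min * t_min))

# fraction converted to cis after 5 and 10 minutes of UV irradiation
print(round(cis_fraction(5), 3), round(cis_fraction(10), 3))
```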

  15. Renal contrast-enhanced MR angiography: timing errors and accurate depiction of renal artery origins.

    PubMed

    Schmidt, Maria A; Morgan, Robert

    2008-10-01

    To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.

  16. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL's below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machine's CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.
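
    For reference, one common statement of the MacCormack 2-4 predictor-corrector (the Gottlieb-Turkel form) for u_t + f_x = 0, specialized here to linear advection with periodic boundaries, is sketched below; the direction of the one-sided differences is typically alternated between steps, and the Jameson-type artificial-viscosity terms studied in the record are omitted. Treat this as an illustrative sketch of the scheme rather than the code used in the paper.

```python
import numpy as np

def mac24_step(u, a, dt, dx):
    """One MacCormack 2-4 (Gottlieb-Turkel type) step for u_t + a*u_x = 0 with
    periodic boundaries. Illustrative sketch only; stable for CFL <= 2/3, and the
    forward/backward biasing is usually alternated on successive steps."""
    lam = a * dt / dx
    # predictor: one-sided forward difference
    dfwd = 7.0 * (np.roll(u, -1) - u) - (np.roll(u, -2) - np.roll(u, -1))
    ustar = u - lam / 6.0 * dfwd
    # corrector: one-sided backward difference applied to the predicted values
    dbwd = 7.0 * (ustar - np.roll(ustar, 1)) - (np.roll(ustar, 1) - np.roll(ustar, 2))
    return 0.5 * (u + ustar - lam / 6.0 * dbwd)
```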

  17. Development of an Improved Time Varying Loudness Model with the Inclusion of Binaural Loudness Summation

    NASA Astrophysics Data System (ADS)

    Charbonneau, Jeremy

    As the perceived quality of a product is becoming more important in the manufacturing industry, more emphasis is being placed on accurately predicting the sound quality of everyday objects. This study was undertaken to improve upon current prediction techniques with regard to the psychoacoustic descriptor of loudness and an improved binaural summation technique. The feasibility of this project was first investigated through a loudness matching experiment involving thirty-one subjects and pure tones of constant sound pressure level. A dependence of binaural summation on frequency was observed which had previously not been a subject of investigation in the reviewed literature. A follow-up investigation was carried out with forty-eight volunteers and pure tones of constant sensation level. Contrary to existing theories in literature the resulting loudness matches revealed an amplitude versus frequency relationship which confirmed the perceived increase in loudness when a signal was presented to both ears simultaneously as opposed to one ear alone. The resulting trend strongly indicated that the higher the frequency of the presented signal, the greater the increase in observed binaural summation. The results from each investigation were summarized into a single binaural summation algorithm and inserted into an improved time-varying loudness model. Using experimental techniques, it was demonstrated that the updated binaural summation algorithm was a considerable improvement over the state of the art approach for predicting the perceived binaural loudness. The improved function retained the ease of use from the original model while additionally providing accurate estimates of diotic listening conditions from monaural WAV files. It was clearly demonstrated using a validation jury test that the revised time-varying loudness model was a significant improvement over the previously standardized approach.

  18. Improved clinical documentation leads to superior reportable outcomes: An accurate representation of patient's clinical status.

    PubMed

    Elkbuli, Adel; Godelman, Steven; Miller, Ashley; Boneva, Dessy; Bernal, Eileen; Hai, Shaikh; McKenney, Mark

    2018-05-01

    Clinical documentation can be underappreciated. Trauma Centers (TCs) are now routinely evaluated for quality performance. TCs with poor documentation may not accurately reflect actual injury burden or comorbidities, which can impact the accuracy of mortality measures. Markers exist to adjust crude death rates for injury severity: observed over expected deaths (O/E) adjust for injury; Case Mix Index (CMI) reflects disease burden, and Severity of Illness (SOI) measures organ dysfunction. We aim to evaluate the impact of implementing a Clinical Documentation Improvement Program (CDIP) on reported outcomes. We reviewed 2 years of prospectively collected data for trauma patients during the implementation of CDIP. A two-group prospective observational study design was used to evaluate the pre-implementation and the post-implementation phase of improved clinical documentation. T-tests and chi-squared tests were used, with significance defined as p < 0.05. In the pre-implementation period, there were 49 deaths out of 1419 (3.45%), while the post-implementation period had 38 deaths out of 1454 (2.61%) (non-significant). There was, however, a significant difference between O/E ratios. In the pre-phase the O/E was 1.36, versus 0.70 in the post-phase (p < 0.001). The two groups also differed on CMI, with a pre-group mean of 2.48 and a post-group mean of 2.87 (p < 0.001), indicating higher injury burden in the post-group. SOI started at 2.12 and significantly increased to 2.91, signifying more organ system dysfunction (p < 0.018). Improved clinical documentation results in improved accuracy of measures of mortality, injury severity, and comorbidities and a more accurate reflection in O/E mortality ratios, CMI, and SOI. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  19. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  20. In-Band Asymmetry Compensation for Accurate Time/Phase Transport over Optical Transport Network

    PubMed Central

    Siu, Sammy; Hu, Hsiu-fang; Lin, Shinn-Yan; Liao, Chia-Shu; Lai, Yi-Liang

    2014-01-01

    The demands on precise time/phase synchronization have been increasing recently with the next generation of telecommunication networks. This paper studies the issues that are relevant to distributing accurate time/phase over an optical transport network (OTN). Each node and link can introduce asymmetry, which degrades the achievable time/phase accuracy over the network. In order to achieve better accuracy, protocol-level full timing support is used (e.g., Telecom-Boundary clock). Due to chromatic dispersion, the use of different wavelengths causes fiber link delay asymmetry. The analytical result indicates that this introduces a significant time error (i.e., phase offset) of up to 0.3397 ns/km in the C-band or 0.3943 ns/km in the L-band, depending on the wavelength spacing. With the scheme proposed in this paper, the fiber link delay asymmetry can be compensated relying on the mean fiber link delay estimated by the Telecom-Boundary clock, while the OTN control plane is responsible for processing the fiber link delay asymmetry to determine the asymmetry compensation in the timing chain.
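
    The chromatic-dispersion effect described above reduces to simple arithmetic: the delay asymmetry per kilometre is roughly the dispersion coefficient times the wavelength spacing, and a two-way protocol misattributes half of that asymmetry as a time error. The sketch below uses an assumed typical C-band dispersion coefficient and a hypothetical wavelength spacing; it is not the paper's computation and will not reproduce its exact figures.

```python
def two_way_time_error_ns_per_km(wavelength_spacing_nm, dispersion_ps_per_nm_km=17.0):
    """Time error (phase offset) per km when the two directions of a link use
    wavelengths separated by wavelength_spacing_nm. The dispersion coefficient is
    an assumed typical C-band value; a two-way protocol attributes half of the
    resulting delay asymmetry to each direction (illustrative only)."""
    delay_asymmetry_ps_per_km = dispersion_ps_per_nm_km * wavelength_spacing_nm
    return 0.5 * delay_asymmetry_ps_per_km / 1000.0   # convert ps to ns

print(two_way_time_error_ns_per_km(40.0))             # hypothetical 40 nm spacing
```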

  1. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    NASA Astrophysics Data System (ADS)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

    This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using a particle swarm optimization (PSO) algorithm combined with an analysis of the system hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time, and provides good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. At the end of this work, simulation and experimental results are employed to demonstrate the effectiveness of the proposed sliding mode control strategy.

  2. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most often used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), where the main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique, which basically performs a second bootstrap loop, or resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performances of the calibrated confidence intervals are examined with Monte Carlo simulations, and compared with the performances of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models
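
    A minimal sketch of the uncalibrated building block, a pairwise moving block bootstrap with a standard-error-based Student's t interval, is given below; the calibration described above would wrap this in a second bootstrap loop. The block length, the degrees-of-freedom choice, and other details are assumptions for illustration, not PearsonT3's exact implementation.

```python
import numpy as np
from scipy.stats import t as student_t

def block_bootstrap_corr_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, seed=0):
    """Pairwise moving block bootstrap, standard-error-based Student's t CI for
    Pearson's correlation (uncalibrated sketch; illustrative only)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r_hat = np.corrcoef(x, y)[0, 1]
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        # draw block start indices and resample both series with the same blocks
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = r_boot.std(ddof=1)
    tq = student_t.ppf(1 - alpha / 2, df=n - 2)   # df choice is an assumption
    return r_hat, r_hat - tq * se, r_hat + tq * se
```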

  3. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  4. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.

  5. Improvement of Galilean refractive beam shaping system for accurately generating near-diffraction-limited flattop beam with arbitrary beam size.

    PubMed

    Ma, Haotong; Liu, Zejin; Jiang, Pengzhi; Xu, Xiaojun; Du, Shaojun

    2011-07-04

    We propose and demonstrate an improvement of the conventional Galilean refractive beam shaping system for accurately generating a near-diffraction-limited flattop beam with arbitrary beam size. Based on a detailed study of the refractive beam shaping system, we found that the conventional Galilean beam shaper works well only for magnifying beam shaping. Taking the transformation of an input beam with a Gaussian irradiance distribution into a target beam with a high-order Fermi-Dirac flattop profile as an example, the shaper works well only under the condition that the input and target beam sizes satisfy R_0 ≥ 1.3 w_0. For the improvement, the shaper is regarded as the combination of magnifying and demagnifying beam shaping systems. The surface and phase distributions of the improved Galilean beam shaping system are derived based on geometric and Fourier optics. By using the improved Galilean beam shaper, the accurate transformation of an input beam with a Gaussian irradiance distribution into a target beam with a flattop irradiance distribution is realized. The irradiance distribution of the output beam is coincident with that of the target beam and the corresponding phase distribution is maintained. The propagation performance of the output beam is greatly improved. Studies of the influences of beam size and beam order on the improved Galilean beam shaping system show that the restriction on beam size has been greatly reduced. This improvement can also be used to redistribute an input beam with a complicated irradiance distribution into an output beam with a complicated irradiance distribution.
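
    The geometric design step behind such a shaper can be illustrated with an energy-conserving ray mapping between the Gaussian input and the flattop target. The sketch below assumes a simple Fermi-Dirac form I(r) = 1 / (1 + exp[beta*(r/R0 - 1)]) and illustrative parameter values; it is not the paper's derivation of the lens surfaces or phase profiles.

```python
import numpy as np

def fermi_dirac(r, R0, beta=20.0):
    """Fermi-Dirac flattop irradiance profile (functional form assumed here)."""
    return 1.0 / (1.0 + np.exp(beta * (r / R0 - 1.0)))

def ray_mapping(w0=1.0, R0=1.3, r_max=4.0, n=4000):
    """Energy-conserving mapping r_in -> r_out that reshapes a Gaussian input of
    waist w0 into a Fermi-Dirac flattop of radius R0 (geometric sketch only)."""
    r = np.linspace(0.0, r_max, n)
    I_in = np.exp(-2.0 * (r / w0) ** 2)
    I_out = fermi_dirac(r, R0)
    # encircled-energy fractions of the input and target beams
    E_in = np.cumsum(I_in * r); E_in /= E_in[-1]
    E_out = np.cumsum(I_out * r); E_out /= E_out[-1]
    # each input ray is sent to the target radius enclosing the same energy fraction
    return r, np.interp(E_in, E_out, r)

r_in, r_out = ray_mapping()
```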

  6. Improvement of real-time seismic magnitude estimation by combining seismic and geodetic instrumentation

    NASA Astrophysics Data System (ADS)

    Goldberg, D.; Bock, Y.; Melgar, D.

    2017-12-01

    Rapid seismic magnitude assessment is a top priority for earthquake and tsunami early warning systems. For the largest earthquakes, seismic instrumentation tends to underestimate the magnitude, leading to an insufficient early warning, particularly in the case of tsunami evacuation orders. GPS instrumentation provides more accurate magnitude estimates using near-field stations, but is not sensitive enough to detect the first seismic wave arrivals, thereby limiting solution speed. By optimally combining collocated seismic and GPS instruments, we demonstrate improved solution speed of earthquake magnitude for the largest seismic events. We present a real-time implementation of magnitude-scaling relations that adapts to the length of the recording, reflecting the observed evolution of ground motion with time.

  7. PET optimization for improved assessment and accurate quantification of 90Y-microsphere biodistribution after radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martí-Climent, Josep M., E-mail: jmmartic@unav.es; Prieto, Elena; Elosúa, César

    2014-09-15

    Purpose: 90Y-microspheres are widely used for the radioembolization of metastatic liver cancer or hepatocellular carcinoma and there is a growing interest for imaging 90Y-microspheres with PET. The aim of this study is to evaluate the performance of a current generation PET/CT scanner for 90Y imaging and to optimize the PET protocol to improve the assessment and the quantification of 90Y-microsphere biodistribution after radioembolization. Methods: Data were acquired on a Biograph mCT-TrueV scanner with time of flight (TOF) and point spread function (PSF) modeling. Spatial resolution was measured with a 90Y point source. Sensitivity was evaluated using the NEMA 70 cm line source filled with 90Y. To evaluate the count rate performance, 90Y vials with activity ranging from 3.64 to 0.035 GBq were measured in the center of the field of view (CFOV). The energy spectrum was evaluated. Image quality with different reconstructions was studied using the Jaszczak phantom containing six hollow spheres (diameters: 31.3, 28.1, 21.8, 16.1, 13.3, and 10.5 mm), filled with a 207 kBq/ml 90Y concentration and a 5:1 sphere-to-background ratio. Acquisition time was adjusted to simulate the quality of a realistic clinical PET acquisition of a patient treated with SIR-Spheres®. The developed methodology was applied to ten patients after SIR-Spheres® treatment acquiring a 10 min per bed PET. Results: The energy spectrum showed the 90Y bremsstrahlung radiation. The 90Y transverse resolution, with filtered backprojection reconstruction, was 4.5 mm in the CFOV and degraded to 5.0 mm at 10 cm off-axis. 90Y absolute sensitivity was 0.40 kcps/MBq in the center of the field of view. Tendency of true and random rates as a function of the 90Y activity could be accurately described using linear and quadratic models, respectively. Phantom studies demonstrated that, due to low count statistics in 90Y

  8. Higher-order accurate space-time schemes for computational astrophysics—Part I: finite volume methods

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2017-12-01

    As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes for computational astrophysics. The algorithmic needs of computational astrophysics are indeed very special. The methods need to be robust and preserve the positivity of density and pressure. Relativistic flows should remain sub-luminal. These requirements place additional pressures on a computational astrophysics code, which are usually not felt by a traditional fluid dynamics code. Hence the need for a specialized review. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes. At third order, they are most similar to piecewise parabolic method schemes, which are also included. DG schemes evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time with the result that the spatial and temporal accuracies must be matched. This is realized with the help of strong stability preserving Runge-Kutta schemes and ADER (Arbitrary DERivative in space and time) schemes, both of which are also described. The emphasis of this review is on computer-implementable ideas, not necessarily on the underlying theory.
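
    As a concrete example of the reconstruction step that underlies the WENO schemes discussed above, a minimal sketch of the classic fifth-order (Jiang-Shu) left-biased interface reconstruction for a scalar quantity is given below; flux splitting, boundary handling, and the time integrator are omitted, and this is an illustrative sketch rather than any particular production code.

```python
def weno5_left(um2, um1, u0, up1, up2, eps=1e-6):
    """Fifth-order WENO (Jiang-Shu) reconstruction of u at the i+1/2 interface
    from cell averages u_{i-2..i+2} (left-biased, scalar, illustrative sketch)."""
    # candidate third-order reconstructions on the three sub-stencils
    p0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    p1 = (-um1 + 5*u0 + 2*up1) / 6.0
    p2 = (2*u0 + 5*up1 - up2) / 6.0
    # smoothness indicators
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2
    # nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s
```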

  9. Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Rana, Vijay K.; Rudin, Stephen; Bednarek, Daniel R.

    2014-03-01

    We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans, which consisted of larger triangular elements, resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced, so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is then calculated once for each vertex point, rather than once for every triangle in which that vertex appears. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 while increasing the file size by a factor of only 1.3. This allows a much greater number of smaller triangular elements and improves the resolution of the patient graphic without compromising the real-time performance of the DTS, and also gives a smoother graphic display for better visualization of the dose distribution.
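
    The reformatting step described above amounts to converting a "triangle soup" into an indexed mesh with a shared vertex list, so that dose is evaluated once per unique vertex. The sketch below is illustrative; the function names, the rounding tolerance used to merge coincident vertices, and the dose callback are assumptions, not the DTS implementation.

```python
def index_mesh(triangles, tol=6):
    """Convert a 'triangle soup' (each triangle a tuple of three (x, y, z) points)
    into a unique vertex list plus index triples (illustrative sketch)."""
    vertex_index, vertices, faces = {}, [], []
    for tri in triangles:
        face = []
        for p in tri:
            key = tuple(round(c, tol) for c in p)    # merge coincident vertices
            if key not in vertex_index:
                vertex_index[key] = len(vertices)
                vertices.append(key)
            face.append(vertex_index[key])
        faces.append(tuple(face))
    return vertices, faces

def skin_dose(vertices, dose_at_point):
    """Evaluate dose once per unique vertex; dose_at_point is a hypothetical
    callback that maps a 3D skin point to its accumulated dose."""
    return [dose_at_point(v) for v in vertices]
```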

  10. Mass spectrometry in Earth sciences: the precise and accurate measurement of time.

    PubMed

    Schaltegger, Urs; Wotzlaw, Jörn-Frederik; Ovtcharova, Maria; Chiaradia, Massimo; Spikings, Richard

    2014-01-01

    Precise determination of the isotopic compositions of a variety of elements is a widely applied tool in the Earth sciences. Isotope ratios are used to quantify rates of geological processes that occurred during the previous 4.5 billion years, and also at the present time. An outstanding application is geochronology, which utilizes the production of radiogenic daughter isotopes by the radioactive decay of parent isotopes. Geochronological tools, involving isotopic analysis of selected elements from the smallest volumes of minerals by thermal ionization mass spectrometry, provide precise and accurate measurements of time throughout the geological history of our planet over nine orders of magnitude, from the accretion of the proto-planetary disk to the timing of the last glaciation. This article summarizes the recent efforts of the Isotope Geochemistry, Geochronology and Thermochronology research group at the University of Geneva to advance the U-Pb geochronological tool to achieve unprecedented precision and accuracy, and presents two examples of its application to two significant open questions in Earth sciences: what are the triggers and timescales of volcanic supereruptions, and what were the causes of mass extinctions in the geological past, driven by global climatic and environmental deterioration?

  11. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study.

    PubMed

    Kenngott, Hannes Götz; Preukschas, Anas Amin; Wagner, Martin; Nickel, Felix; Müller, Michael; Bellemann, Nadine; Stock, Christian; Fangerau, Markus; Radeleff, Boris; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter; Maier-Hein, Lena; Müller-Stich, Beat Peter

    2018-06-01

    Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery and especially for timely decisions in emergency cases, a fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. In this project, AR was realized in a surgical setting by fusing a 3D-representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally feasibility was evaluated in one male volunteer. In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In a porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible within a male volunteer. Mobile, real-time, and point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human model shown in this study. Consequently, AR following similar implementation proved robust and accurate enough to be evaluated in clinical trials assessing accuracy, robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at patient

  12. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  13. Near real time, accurate, and sensitive microbiological safety monitoring using an all-fibre spectroscopic fluorescence system

    NASA Astrophysics Data System (ADS)

    Vanholsbeeck, F.; Swift, S.; Cheng, M.; Bogomolny, E.

    2013-11-01

    Enumeration of microorganisms is an essential microbiological task for many industrial sectors and research fields. Various tests for detection and counting of microorganisms are used today. However, most current methods to enumerate bacteria either require long incubation times for limited accuracy, or use complicated protocols along with bulky equipment. We have developed an accurate, all-fibre spectroscopic system to measure the fluorescence signal in-situ. In this paper, we examine the potential of this setup for near real time bacteria enumeration in aquatic environments. The concept is based on the well-known phenomenon that the fluorescence quantum yields of some nucleic acid stains significantly increase upon binding with the nucleic acids of microorganisms; the fluorescence signal increase can therefore be correlated to the amount of nucleic acid present in the sample. In addition, we have used GFP-labeled organisms. Our results show that we are able to detect a wide range of bacteria concentrations without dilution or filtration (1-10^8 CFU/ml) using the different optical probes we designed. This high sensitivity is due to efficient light delivery with an appropriate collection volume and in situ fluorescence detection, as well as the use of a sensitive CCD spectrometer. By monitoring the laser power, we can account for laser fluctuations while measuring the fluorescence signal, which also improves the system accuracy. A synchronized laser shutter allows us to achieve a high SNR with minimal integration time, thereby reducing the photobleaching effect. In summary, we conclude that our optical setup may offer a robust method for near real time bacterial detection in aquatic environments.

  14. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    PubMed

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  15. Final priority; technical assistance to improve state data capacity--National Technical Assistance Center to improve state capacity to accurately collect and report IDEA data. Final priority.

    PubMed

    2013-05-20

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority under the Technical Assistance to Improve State Data Capacity program. The Assistant Secretary may use this priority for competitions in fiscal year (FY) 2013 and later years. We take this action to focus attention on an identified national need to provide technical assistance (TA) to States to improve their capacity to meet the data collection and reporting requirements of the Individuals with Disabilities Education Act (IDEA). We intend this priority to establish a TA center to improve State capacity to accurately collect and report IDEA data (Data Center).

  16. Improving productivity through more effective time management.

    PubMed

    Arnold, Edwin; Pulich, Marcia

    2004-01-01

    Effective time management has become increasingly important for managers as they seek to accomplish objectives in today's organizations, which have been restructured for efficiency while employing fewer people. Managers can improve their ability to manage time effectively by examining their attitudes toward time, analyzing time-wasting behaviors, and developing better time management skills. Managers can improve their performance and promotion potential with more effective time utilization. Strategies for improving time management skills are presented.

  17. An improved thin film approximation to accurately determine the optical conductivity of graphene from infrared transmittance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, J. W.; Bol, A. A.; Sanden, M. C. M. van de

    2014-07-07

    This work presents an improved thin film approximation to extract the optical conductivity from infrared transmittance in a simple yet accurate way. This approximation takes into account the incoherent reflections from the backside of the substrate. These reflections are shown to have a significant effect on the extracted optical conductivity and hence on derived parameters such as carrier mobility and density. By excluding the backside reflections, the error for these parameters for typical chemical vapor deposited (CVD) graphene on a silicon substrate can be as high as 17% and 45% for the carrier mobility and density, respectively. For the mid- and near-infrared, the approximation can be simplified such that the real part of the optical conductivity is extracted without the need for a parameterization of the optical conductivity. This direct extraction is shown for Fourier transform infrared (FTIR) transmittance measurements of CVD graphene on silicon in the photon energy range of 370–7000 cm⁻¹. From the real part of the optical conductivity, the carrier density, mobility, and number of graphene layers are determined, but also residue, originating from the graphene transfer, is detected. FTIR transmittance analyzed with the improved thin film approximation is shown to be a non-invasive, easy, and accurate measurement and analysis method for assessing the quality of graphene and can be used for other 2-D materials.
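
    For orientation, the sketch below shows the basic thin-film (sheet-conductance) extraction of the real optical conductivity from a transmittance ratio, i.e., the simple form that neglects the incoherent backside reflections this article corrects for. The substrate index, transmittance value, and the Tinkham-type formula used here are assumptions for illustration, not the improved approximation of the paper.

```python
# Sketch: baseline thin-film extraction of the real sheet conductivity from
# T = T(film + substrate) / T(bare substrate). This simple form neglects the
# incoherent backside reflections that the article above shows are needed for
# accurate results; it is a baseline, not the improved method.
import numpy as np

Z0 = 376.730  # vacuum impedance, ohm

def sheet_conductivity_real(T_ratio, n_substrate):
    """Real sheet conductivity (S), assuming a thin film with mostly real sigma."""
    return (1.0 + n_substrate) / Z0 * (1.0 / np.sqrt(T_ratio) - 1.0)

# Hypothetical mid-infrared data point for graphene on silicon (n ~ 3.42)
sigma_s = sheet_conductivity_real(T_ratio=0.975, n_substrate=3.42)
sigma_univ = np.pi / (137.036 * Z0)   # universal conductivity e^2/(4*hbar), ~6.08e-5 S
print(f"sigma_s = {sigma_s:.2e} S  (~{sigma_s / sigma_univ:.2f} x universal conductivity)")
```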

  18. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Penalties associated with the failure to submit timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES...

  19. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Penalties associated with the failure to submit timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES...

  20. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Penalties associated with the failure to submit timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES Submission of...

  1. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Penalties associated with the failure to submit timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES Submission of...

  2. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Penalties associated with the failure to submit timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) PAYMENT FOR PART B MEDICAL AND OTHER HEALTH SERVICES...

  3. A time-driven activity-based costing model to improve health-care resource use in Mirebalais, Haiti.

    PubMed

    Mandigo, Morgan; O'Neill, Kathleen; Mistry, Bipin; Mundy, Bryan; Millien, Christophe; Nazaire, Yolande; Damuse, Ruth; Pierre, Claire; Mugunga, Jean Claude; Gillies, Rowan; Lucien, Franciscka; Bertrand, Karla; Luo, Eva; Costas, Ainhoa; Greenberg, Sarah L M; Meara, John G; Kaplan, Robert

    2015-04-27

    In resource-limited settings, efficiency is crucial to maximise resources available for patient care. Time driven activity-based costing (TDABC) estimates costs directly from clinical and administrative processes used in patient care, thereby providing valuable information for process improvements. TDABC is more accurate and simpler than traditional activity-based costing because it assigns resource costs to patients based on the amount of time clinical and staff resources are used in patient encounters. Other costing approaches use somewhat arbitrary allocations that provide little transparency into the actual clinical processes used to treat medical conditions. TDABC has been successfully applied in European and US health-care settings to facilitate process improvements and new reimbursement approaches, but it has not been used in resource-limited settings. We aimed to optimise TDABC for use in a resource-limited setting to provide accurate procedure and service costs, reliably predict financing needs, inform quality improvement initiatives, and maximise efficiency. A multidisciplinary team used TDABC to map clinical processes for obstetric care (vaginal and caesarean deliveries, from triage to post-partum discharge) and breast cancer care (diagnosis, chemotherapy, surgery, and support services, such as pharmacy, radiology, laboratory, and counselling) at Hôpital Universitaire de Mirebalais (HUM) in Haiti. The team estimated the direct costs of personnel, equipment, and facilities used in patient care based on the amount of time each of these resources was used. We calculated inpatient personnel costs by allocating provider costs per staffed bed, and assigned indirect costs (administration, facility maintenance and operations, education, procurement and warehouse, bloodbank, and morgue) to various subgroups of the patient population. This study was approved by the Partners in Health/Zanmi Lasante Research Committee. The direct cost of an uncomplicated vaginal
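
    The TDABC principle described above reduces to multiplying the minutes each resource is used in a mapped care process by that resource's capacity cost rate and summing. The sketch below illustrates this with invented steps, times, and rates; it is not based on the HUM process maps or costs.

```python
# Sketch: the core TDABC calculation: cost = (minutes a resource is used) x
# (that resource's capacity cost rate per minute), summed over the process map.
# Steps, times, and rates below are invented placeholders, not HUM figures.
process_map = [
    # (step, resource, minutes used)
    ("triage",           "nurse",          15),
    ("delivery",         "midwife",        90),
    ("delivery",         "delivery_room",  90),
    ("post-partum stay", "inpatient_bed", 720),
]

capacity_cost_per_min = {   # total resource cost / minutes of available capacity
    "nurse": 0.10, "midwife": 0.15, "delivery_room": 0.25, "inpatient_bed": 0.02,
}

total = sum(minutes * capacity_cost_per_min[res] for _, res, minutes in process_map)
print(f"TDABC direct cost of the mapped care cycle: ${total:.2f}")
```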

  4. Facilitating process changes in meal delivery and radiological testing to improve inpatient insulin timing using six sigma method.

    PubMed

    Yamamoto, J Jay; Malatestinic, Bill; Lehman, Angela; Juneja, Rattan

    2010-01-01

    The objective of this project was to improve the timing of inpatient insulin administration related to meal delivery and the scheduling of radiology tests by Lean Six Sigma method. A multidisciplinary hospital team and a Six Sigma team from a pharmaceutical manufacturer collaborated to evaluate food delivery and radiology scheduling processes related to the timing of insulin administration. Key factors leading to problems within each system were addressed to improve the efficiency of each process while improving the timeliness of glucose testing and insulin administration. Standardizing the food delivery schedule and utilizing scorecards to track on-time meal deliveries to the floor enabled nursing to more accurately administer insulin in coordination with the delivery of meals. Increasing communication and restricting the scheduling of inpatient procedures during mealtimes reduced disruptions to insulin administration. Data at 6 months postimplementation demonstrated that the institution met goals for most primary outcome metrics including increasing on-time meal delivery and the proportion of patients taking insulin scheduled for radiology tests during appropriate times. By implementing the recommendations identified via Lean Six Sigma, this collaborative effort improved the timing of inpatient insulin administration related to meal delivery and radiology testing.

  5. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics

    PubMed Central

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-01-01

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516

  6. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
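
    As a rough illustration of energy-ratio picking, the sketch below computes a standard MER attribute and applies a moving-average filter to each MER point before taking the maximum, echoing the smoothing step described for IMER. The additional third-window term of IMER is omitted, and the window lengths and synthetic trace are arbitrary choices.

```python
# Sketch: a standard modified-energy-ratio (MER) picker with IMER-style
# moving-average smoothing of the MER attribute. Windows and data are illustrative.
import numpy as np

def mer_pick(x, window, smooth=5):
    x = np.asarray(x, dtype=float)
    n = len(x)
    mer = np.zeros(n)
    for i in range(window, n - window):
        e_post = np.sum(x[i:i + window] ** 2)          # energy after sample i
        e_pre  = np.sum(x[i - window:i] ** 2) + 1e-12   # energy before sample i
        mer[i] = (e_post / e_pre * abs(x[i])) ** 3
    kernel = np.ones(smooth) / smooth                   # moving-average filter on each MER point
    mer_smooth = np.convolve(mer, kernel, mode="same")
    return int(np.argmax(mer_smooth)), mer_smooth

# Synthetic trace: noise followed by an arrival near sample 500
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.2, 1000)
trace[500:] += np.sin(2 * np.pi * 0.03 * np.arange(500)) * np.exp(-np.arange(500) / 150)
pick, _ = mer_pick(trace, window=50)
print("picked first-arrival sample:", pick)
```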

  7. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    PubMed

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.

  8. Accurate Drift Time Determination by Traveling Wave Ion Mobility Spectrometry: The Concept of the Diffusion Calibration.

    PubMed

    Kune, Christopher; Far, Johann; De Pauw, Edwin

    2016-12-06

    Ion mobility spectrometry (IMS) is a gas phase separation technique, which relies on differences in collision cross section (CCS) of ions. Ionic clouds of unresolved conformers overlap if the CCS difference is below the instrumental resolution expressed as CCS/ΔCCS. The experimental arrival time distribution (ATD) peak is then a superimposition of the various contributions weighted by their relative intensities. This paper introduces a strategy for accurate drift time determination using traveling wave ion mobility spectrometry (TWIMS) of poorly resolved or unresolved conformers. This method implements through a calibration procedure the link between the peak full width at half-maximum (fwhm) and the drift time of model compounds for wide range of settings for wave heights and velocities. We modified a Gaussian equation, which achieves the deconvolution of ATD peaks where the fwhm is fixed according to our calibration procedure. The new fitting Gaussian equation only depends on two parameters: The apex of the peak (A) and the mean drift time value (μ). The standard deviation parameter (correlated to fwhm) becomes a function of the drift time. This correlation function between μ and fwhm is obtained using the TWIMS calibration procedure which determines the maximum instrumental ion beam diffusion under limited and controlled space charge effect using ionic compounds which are detected as single conformers in the gas phase. This deconvolution process has been used to highlight the presence of poorly resolved conformers of crown ether complexes and peptides leading to more accurate CCS determinations in better agreement with quantum chemistry predictions.
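
    The fitting idea above can be sketched as a Gaussian whose width is not a free parameter but a calibrated function of the drift time, leaving only the apex A and mean drift time mu to fit. The linear fwhm(mu) calibration and the synthetic arrival-time distribution below are placeholders; a real deconvolution of overlapping conformers would fit a sum of such constrained Gaussians.

```python
# Sketch: fitting an arrival-time-distribution peak with only apex A and mean
# drift time mu free, while the width is fixed by a calibration function fwhm(mu).
# The calibration law and data are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

def fwhm_from_calibration(mu):
    return 0.05 * mu + 0.10          # hypothetical fwhm vs drift time (ms)

def constrained_gaussian(t, A, mu):
    sigma = fwhm_from_calibration(mu) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Synthetic ATD and two-parameter fit
t = np.linspace(4.0, 8.0, 400)
atd = constrained_gaussian(t, 1000.0, 6.2) + np.random.default_rng(1).normal(0, 10, t.size)
(A_fit, mu_fit), _ = curve_fit(constrained_gaussian, t, atd, p0=[800.0, 6.0])
print(f"apex ~ {A_fit:.0f} counts, drift time ~ {mu_fit:.3f} ms")
```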

  9. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  10. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    PubMed Central

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-01-01

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. PMID:18502191

  11. Ideas for Future GPS Timing Improvements

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now has higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in the process of carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper will present an operational perspective on several ideas for improving timing in GPS. These ideas include the desire for improving MCS - US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman Filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.

  12. Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.

    PubMed

    Pearce, John A

    2015-12-01

    The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented
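
    A minimal sketch of the idea: keep the constant-rate Arrhenius kinetics but delay their onset by a temperature-dependent shoulder time, so the surviving fraction stays at 1 until the delay has elapsed. The frequency factor, activation energy, and delay law below are invented placeholders, not coefficients fitted to any survival data.

```python
# Sketch: a constant-rate Arrhenius survival model with the temperature-dependent
# time delay described above. All coefficients are illustrative placeholders.
import numpy as np

R = 8.314          # J/(mol K)
A_FREQ = 1.0e98    # 1/s, frequency factor (hypothetical)
EA = 6.3e5         # J/mol, activation energy (hypothetical)

def t_delay(T_kelvin):
    """Hypothetical shoulder duration: longer delay at milder hyperthermic temperatures."""
    return max(0.0, 80.0 - 1.5 * (T_kelvin - 316.15))   # seconds

def surviving_fraction(t_seconds, T_kelvin):
    k = A_FREQ * np.exp(-EA / (R * T_kelvin))            # Arrhenius rate
    t_eff = max(0.0, t_seconds - t_delay(T_kelvin))      # delay shifts the onset of decay
    return np.exp(-k * t_eff)                            # S = 1 during the shoulder

for T_c in (43.0, 50.0, 55.0):
    print(f"{T_c} C, 30 min: S = {surviving_fraction(1800.0, T_c + 273.15):.3f}")
```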

  13. Improving data management practices in the Portuguese HIV/AIDS surveillance system during a time of public sector austerity.

    PubMed

    Shivaji, Tara; Cortes Martins, Helena

    2015-01-01

    In a climate of public sector austerity, the demand for accurate information about disease epidemiology rises as health program managers try to align spending to health needs. A policy of case re-notification to improve HIV information quality resulted in a nine-fold increase in the number of case reports received in 2013 by the Portuguese HIV surveillance office. We used value stream mapping to introduce improvements to data processing practices, identify and reduce waste. Two cycles of improvement were trialled. Before intervention, processing time was nine minutes and 28 seconds (95%CI 8:53-10:58) per report. Two months post intervention, it was six minutes and 34 seconds (95% CI 6:25-6:43). One year after the start of the project, processing time was five minutes and 20 seconds (95% CI 1:46-8:52).

  14. Improving data management practices in the Portuguese HIV/AIDS surveillance system during a time of public sector austerity

    PubMed Central

    Shivaji, Tara; Cortes Martins, Helena

    2015-01-01

    In a climate of public sector austerity, the demand for accurate information about disease epidemiology rises as health program managers try to align spending to health needs. A policy of case re-notification to improve HIV information quality resulted in a nine-fold increase in the number of case reports received in 2013 by the Portuguese HIV surveillance office. We used value stream mapping to introduce improvements to data processing practices, identify and reduce waste. Two cycles of improvement were trialled. Before intervention, processing time was nine minutes and 28 seconds (95%CI 8:53–10:58) per report. Two months post intervention, it was six minutes and 34 seconds (95% CI 6:25–6:43). One year after the start of the project, processing time was five minutes and 20 seconds (95% CI 1:46–8:52). PMID:26734448

  15. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) currently is involved with two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  16. Improved method for rapid and accurate isolation and identification of Streptococcus mutans and Streptococcus sobrinus from human plaque samples.

    PubMed

    Villhauer, Alissa L; Lynch, David J; Drake, David R

    2017-08-01

    Mutans streptococci (MS), specifically Streptococcus mutans (SM) and Streptococcus sobrinus (SS), are bacterial species frequently targeted for investigation due to their role in the etiology of dental caries. Differentiation of S. mutans and S. sobrinus is an essential part of exploring the role of these organisms in disease progression and the impact of the presence of either/both on a subject's caries experience. Of vital importance to the study of these organisms is an identification protocol that allows us to distinguish between the two species in an easy, accurate, and timely manner. While conducting a 5-year birth cohort study in a Northern Plains American Indian tribe, the need for a more rapid procedure for isolating and identifying high volumes of MS was recognized. We report here on the development of an accurate and rapid method for MS identification. Accuracy, ease of use, and material and time requirements for morphological differentiation on selective agar, biochemical tests, and various combinations of PCR primers were compared. The final protocol included preliminary identification based on colony morphology followed by PCR confirmation of species identification using primers targeting regions of the glucosyltransferase (gtf) genes of SM and SS. This method of isolation and identification was found to be highly accurate, more rapid than the previous methodology used, and easily learned. It resulted in more efficient use of both time and material resources. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. An Accurate Absorption-Based Net Primary Production Model for the Global Ocean

    NASA Astrophysics Data System (ADS)

    Silsbe, G.; Westberry, T. K.; Behrenfeld, M. J.; Halsey, K.; Milligan, A.

    2016-02-01

    As a vital living link in the global carbon cycle, understanding how net primary production (NPP) varies through space, time, and across climatic oscillations (e.g. ENSO) is a key objective in oceanographic research. The continual improvement of ocean observing satellites and data analytics now presents greater opportunities for advanced understanding and characterization of the factors regulating NPP. In particular, the emergence of spectral inversion algorithms now permits accurate retrievals of the phytoplankton absorption coefficient (aΦ) from space. As NPP is the efficiency with which absorbed energy is converted into carbon biomass, aΦ measurements circumvent chlorophyll-based empirical approaches by permitting direct and accurate measurements of phytoplankton energy absorption. It has long been recognized, and perhaps underappreciated, that NPP and phytoplankton growth rates display muted variability when normalized to aΦ rather than chlorophyll. Here we present a novel absorption-based NPP model that parameterizes the underlying physiological mechanisms behind this muted variability, and apply this physiological model to the global ocean. Through a comparison against field data from the Hawaii and Bermuda Ocean Time Series, we demonstrate how this approach yields more accurate NPP measurements than other published NPP models. By normalizing NPP to satellite estimates of phytoplankton carbon biomass, this presentation also explores the seasonality of phytoplankton growth rates across several oceanic regions. Finally, we discuss how future advances in remote-sensing (e.g. hyperspectral satellites, LIDAR, autonomous profilers) can be exploited to further improve absorption-based NPP models.

  18. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  19. Wireless Time Tracking Improves Productivity at CSU Long Beach.

    ERIC Educational Resources Information Center

    Charmack, Scott; Walsh, Randy

    2002-01-01

    Describes California State University Long Beach's implementation of new maintenance management software, which integrated maintenance, inventory control, and key control and allows technicians to enter and receive information through handheld wireless devices for more accurate time accounting. The school estimates a 10 percent increase in…

  20. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

    Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This

  1. Accurate Mars Express orbits to improve the determination of the mass and ephemeris of the Martian moons

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Lainey, V.; Le Maistre, S.; Marty, J. C.; Dehant, V.; Pätzold, M.; Van Hoolst, T.; Häusler, B.

    2008-05-01

    The determination of the ephemeris of the Martian moons has benefited from observations of their plane-of-sky positions derived from images taken by cameras onboard spacecraft orbiting Mars. Images obtained by the Super Resolution Camera (SRC) onboard Mars Express (MEX) have been used to derive moon positions relative to Mars on the basis of a fit of a complete dynamical model of their motion around Mars. Since these positions are computed from the relative position of the spacecraft when the images are taken, those positions need to be known as accurately as possible. An accurate MEX orbit is obtained by fitting two years of tracking data of the Mars Express Radio Science (MaRS) experiment onboard MEX. The average accuracy of the orbits has been estimated to be around 20-25 m. From these orbits, we have re-derived the positions of Phobos and Deimos at the epoch of the SRC observations and compared them with the positions derived by using the MEX orbits provided by the ESOC navigation team. After a fit of the orbital model of Phobos and Deimos, the gain in precision in the Phobos position is roughly 30 m, corresponding to the estimated gain of accuracy of the MEX orbits. A new solution for the GM of the Martian moons has also been obtained from the accurate MEX orbits, which is consistent with previous solutions and, for Phobos, is more precise than the solution from the Mars Global Surveyor (MGS) and Mars Odyssey (ODY) tracking data. It will be further improved with data from closer MEX-Phobos encounters (at a distance of less than 300 km). This study also demonstrates the advantage of combining observations of the moon positions from a spacecraft and from the Earth to assess the real accuracy of the spacecraft orbit. In turn, the natural satellite ephemerides can be improved and contribute to a better knowledge of the origin and evolution of the Martian moons.

  2. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the feedback is delayed. With accurate information, travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases the capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful for improving the efficiency in terms of capacity, oscillation, and the gap by which the system deviates from equilibrium.
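
    The boundedly rational threshold can be sketched as a simple choice rule: if the reported travel-time difference is below BR the traveler is indifferent, otherwise the better-reported route is taken. The toy simulation below uses delayed feedback and a made-up linear congestion law purely for illustration; it is not the authors' model.

```python
# Sketch: two-route choice with a boundedly rational threshold BR and delayed
# feedback. The congestion law and all numbers are illustrative assumptions.
import random

def choose_route(reported_t1, reported_t2, BR):
    if abs(reported_t1 - reported_t2) < BR:
        return random.choice((1, 2))          # indifferent within the threshold
    return 1 if reported_t1 < reported_t2 else 2

# Travelers see the previous step's travel times, not the current ones
t_prev = [10.0, 10.0]
for step in range(5):
    flows = [0, 0]
    for _ in range(1000):                      # 1000 travelers per step
        flows[choose_route(t_prev[0], t_prev[1], BR=2.0) - 1] += 1
    # simple linear congestion: travel time grows with the flow on each route
    t_prev = [5.0 + 0.01 * flows[0], 5.0 + 0.01 * flows[1]]
    print(f"step {step}: flows={flows}, travel times={t_prev}")
```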

  3. Investigation of digital timing resolution and further improvement by using constant fraction signal time marker slope for fast scintillator detectors

    NASA Astrophysics Data System (ADS)

    Singh, Kundan; Siwal, Davinder

    2018-04-01

    A digital timing algorithm is explored for fast scintillator detectors, viz. LaBr3, BaF2, and BC501A. Signals were collected with CAEN 250 mega samples per second (MSPS) and 500 MSPS digitizers. The zero crossing time markers (TM) were obtained with a standard digital constant fraction timing (DCF) method. Accurate timing information is obtained using cubic spline interpolation of the DCF transient-region sample points. To get the best time-of-flight (TOF) resolution, an optimization of the DCF parameters (delay and constant fraction) is performed for each pair of detectors: (BaF2-LaBr3), (BaF2-BC501A), and (LaBr3-BC501A). In addition, the slope information of the interpolated DCF signal is extracted at the TM position. This information gives new insight into the broadening of the TOF obtained for a given detector pair. A pair of signals having a small relative slope and small interpolation deviations at the TM leads to minimum time broadening. However, the tailing in the TOF spectra is dictated by the interplay between the interpolation error and the slope variations. The best TOF resolution, achieved at the optimum DCF parameters, can be further improved by using the slope parameter. Guided by the relative slope parameter, an event selection can be imposed which leads to a reduction in TOF broadening. While the method sets a trade-off between timing response and coincidence efficiency, it provides an improvement in TOF. With the proposed method, the improvements in TOF resolution (FWHM) for the aforementioned detector pairs are 25% (0.69 ns), 40% (0.74 ns), and 53% (0.6 ns), respectively, with 250 MSPS, and 12% (0.37 ns), 33% (0.72 ns), and 35% (0.69 ns), respectively, with 500 MSPS digitizers. For the same detector pairs, the event survival probabilities are 57%, 58%, and 51%, respectively, with 250 MSPS, and 63%, 57%, and 68% with 500 MSPS digitizers.
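
    A minimal sketch of the digital constant-fraction marker described above: build the bipolar DCF trace from an attenuated and a delayed copy of the sampled pulse, locate its zero crossing, and refine both the crossing time and the slope at the marker with a local cubic spline. The fraction, delay, sampling rate, and synthetic pulse shape are assumptions for illustration.

```python
# Sketch: DCF time marker and slope from a sampled pulse via cubic-spline
# interpolation of the transient region. Parameters and pulse are illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

def dcf_time_marker(x, dt, fraction=0.3, delay_samples=4):
    """Return (zero-crossing time, slope at the crossing) of the DCF trace."""
    y = np.zeros_like(x)
    y[delay_samples:] = fraction * x[delay_samples:] - x[:-delay_samples]  # f*x(t) - x(t - d)
    i = np.where((y[:-1] < 0) & (y[1:] >= 0))[0][0]                        # first upward crossing
    t = np.arange(len(y)) * dt
    spline = CubicSpline(t[i - 2:i + 3], y[i - 2:i + 3])                   # interpolate the transient region
    t0 = spline.roots(extrapolate=False)[0]
    return t0, float(spline(t0, 1))                                        # derivative = slope at the marker

# Synthetic negative detector pulse sampled at 500 MSPS (dt = 2 ns), rising near t = 40 ns
dt = 2.0
t = np.arange(0.0, 200.0, dt)
pulse = -np.heaviside(t - 40.0, 1.0) * (np.exp(-(t - 40.0) / 30.0) - np.exp(-(t - 40.0) / 3.0))
t_marker, slope = dcf_time_marker(pulse, dt)
print(f"DCF time marker at {t_marker:.2f} ns, slope {slope:.4f} per ns")
```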

  4. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph; Rojahn, Josh

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady state with symmetric boundary conditions and geometries. The trajectory points at issue were in the transonic regime, at 0° and 5° angle of attack, with the Abort Motors firing, with and without the Attitude Control Motors (ACM) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces that were large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the AB and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.

  5. Accurate procedure for deriving UTI at a submilliarcsecond accuracy from Greenwich Sidereal Time or from the stellar angle

    NASA Astrophysics Data System (ADS)

    Capitaine, N.; Gontier, A.-M.

    1993-08-01

    Present observations using modern astrometric techniques are supposed to provide the Earth orientation parameters, and therefore UT1, with an accuracy better than ±1 mas. In practice, UT1 is determined through the intermediary of Greenwich Sidereal Time (GST), using both the conventional relationship between Greenwich Mean Sidereal Time (GMST) and UT1 (Aoki et al. 1982) and the so-called "equation of the equinoxes" limited to the first order terms with respect to the nutation quantities. This highly complex relation between sidereal time and UT1 is not accurate at the milliarcsecond level, which gives rise to spurious terms of milliarcsecond amplitude in the derived UT1. A more complete relationship between GST and UT1 has been recommended by Aoki & Kinoshita (1983) and Aoki (1991) taking into account the second order terms in the difference between GST and GMST, the largest one having an amplitude of 2.64 mas and an 18.6 yr period. This paper explains how this complete expansion of GST implicitly uses the concept of "nonrotating origin" (NRO) as proposed by Guinot in 1979 and would, therefore, provide a more accurate value of UT1 and consequently of the Earth's angular velocity. This paper shows, moreover, that such a procedure would be simplified and conceptually clarified by the explicit use of the NRO as previously proposed (Guinot 1979; Capitaine et al. 1986). The two corresponding options (implicit or explicit use of the NRO) are shown to be equivalent for defining the specific Earth's angle of rotation and then UT1. The consequences of the use of such an accurate procedure, which has been proposed in the new IERS standards (McCarthy 1992a) instead of the usual one, are estimated for the practical derivation of UT1.
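
    For reference, the conventional GMST(UT1) relation cited above (Aoki et al. 1982) can be evaluated as in the sketch below for 0h UT1. The coefficients are quoted from memory and should be verified against the original paper or the IERS Conventions before any precise use.

```python
# Sketch: conventional GMST at 0h UT1 (Aoki et al. 1982 form); coefficients
# quoted from memory, for illustration only.
def gmst_0h_ut1_seconds(jd_ut1_0h):
    """Greenwich Mean Sidereal Time at 0h UT1, in seconds of sidereal time."""
    T_u = (jd_ut1_0h - 2451545.0) / 36525.0          # Julian centuries of UT1 from J2000.0
    gmst = (24110.54841
            + 8640184.812866 * T_u
            + 0.093104 * T_u ** 2
            - 6.2e-6 * T_u ** 3)
    return gmst % 86400.0

gmst = gmst_0h_ut1_seconds(2451545.5)                 # 0h UT1 on 2000 January 2
h, rem = divmod(gmst, 3600.0)
m, s = divmod(rem, 60.0)
print(f"GMST = {int(h):02d}h {int(m):02d}m {s:06.3f}s")
```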

  6. Invited article: Time accurate mass flow measurements of solid-fueled systems.

    PubMed

    Olliges, Jordan D; Lilly, Taylor C; Joslyn, Thomas B; Ketsdever, Andrew D

    2008-10-01

    A novel diagnostic method is described that utilizes a thrust stand mass balance (TSMB) to directly measure time-accurate mass flow from a solid-fuel thruster. The accuracy of the TSMB mass flow measurement technique was demonstrated in three ways including the use of an idealized numerical simulation, verifying a fluid mass calibration with high-speed digital photography, and by measuring mass loss in more than 30 hybrid rocket motor firings. Dynamic response of the mass balance was assessed through weight calibration and used to derive spring, damping, and mass moment of inertia coefficients for the TSMB. These dynamic coefficients were used to determine the mass flow rate and total mass loss within an acrylic and gaseous oxygen hybrid rocket motor firing. Intentional variations in the oxygen flow rate resulted in corresponding variations in the total propellant mass flow as expected. The TSMB was optimized to determine mass losses of up to 2.5 g and measured total mass loss to within 2.5% of that calculated by a NIST-calibrated digital scale. Using this method, a mass flow resolution of 0.0011 g/s or 2% of the average mass flow in this study has been achieved.

  7. Invited Article: Time accurate mass flow measurements of solid-fueled systems

    NASA Astrophysics Data System (ADS)

    Olliges, Jordan D.; Lilly, Taylor C.; Joslyn, Thomas B.; Ketsdever, Andrew D.

    2008-10-01

    A novel diagnostic method is described that utilizes a thrust stand mass balance (TSMB) to directly measure time-accurate mass flow from a solid-fuel thruster. The accuracy of the TSMB mass flow measurement technique was demonstrated in three ways including the use of an idealized numerical simulation, verifying a fluid mass calibration with high-speed digital photography, and by measuring mass loss in more than 30 hybrid rocket motor firings. Dynamic response of the mass balance was assessed through weight calibration and used to derive spring, damping, and mass moment of inertia coefficients for the TSMB. These dynamic coefficients were used to determine the mass flow rate and total mass loss within an acrylic and gaseous oxygen hybrid rocket motor firing. Intentional variations in the oxygen flow rate resulted in corresponding variations in the total propellant mass flow as expected. The TSMB was optimized to determine mass losses of up to 2.5 g and measured total mass loss to within 2.5% of that calculated by a NIST-calibrated digital scale. Using this method, a mass flow resolution of 0.0011 g/s or 2% of the average mass flow in this study has been achieved.

  8. Accurate LC Peak Boundary Detection for 16 O/ 18 O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998
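
    One way to realize a consistency check of intensity patterns, loosely in the spirit of the boundary detection described above, is to correlate each scan's isotopic intensity pattern with the pattern at the peak apex and trim scans whose correlation falls below a threshold. The threshold, toy intensity matrix, and simulated interference below are illustrative assumptions, not the published algorithm.

```python
# Sketch: trimming LC peak boundaries by pattern consistency with the apex scan.
# Threshold and toy data are illustrative only.
import numpy as np

def trim_peak(intensity, apex_index, min_corr=0.9):
    """intensity: (n_scans, n_isotopes) matrix across a candidate elution range."""
    reference = intensity[apex_index]                       # pattern at the peak apex
    corr = np.array([np.corrcoef(row, reference)[0, 1] for row in intensity])
    left = apex_index
    while left > 0 and corr[left - 1] >= min_corr:
        left -= 1
    right = apex_index
    while right < len(corr) - 1 and corr[right + 1] >= min_corr:
        right += 1
    return left, right                                       # retained scan range

rng = np.random.default_rng(2)
intensity = np.outer(np.hanning(20), [1.0, 0.6, 0.25, 0.08]) + rng.normal(0.01, 0.002, (20, 4))
intensity[:4] += rng.uniform(0.2, 0.5, size=(4, 4))          # a co-eluting interference up front
print("retained scan range:", trim_peak(intensity, apex_index=10))
```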

  9. Picture Memory Improves with Longer On Time and Off Time

    ERIC Educational Resources Information Center

    Tversky, Barbara; Sherman, Tracy

    1975-01-01

    Both recognition and recall of pictures improve as picture presentation time increases and as the time between pictures increases. This experiment was compared with an earlier one by Shaffer and Shiffrin (1972). (Editor/RK)

  10. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
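
    The symmetric rank-one update mentioned above maintains a Hessian approximation from successive gradient differences without exact second derivatives. The sketch below shows the standard SR1 formula with the usual skip rule for ill-conditioned updates; the test function and points are arbitrary.

```python
# Sketch: the symmetric rank-one (SR1) Hessian update used to avoid repeated
# exact Hessian evaluations. The test function is an arbitrary example.
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """B: current Hessian approximation; s = x_new - x_old; y = grad_new - grad_old."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                                   # skip update when it is ill-conditioned
    return B + np.outer(r, r) / denom

def grad(x):                                       # gradient of f(x) = x0^2 + 2*x1^2 + x0^4
    return np.array([2 * x[0] + 4 * x[0] ** 3, 4 * x[1]])

x_old = np.array([1.0, 1.0]); x_new = np.array([0.8, 0.6])
B = np.eye(2)                                      # initial Hessian guess
B = sr1_update(B, x_new - x_old, grad(x_new) - grad(x_old))
print("updated Hessian approximation:\n", B)
```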

  11. Determination of the presence or absence of sulfur materials in drywall using direct analysis in real time in conjunction with an accurate-mass time-of-flight mass spectrometer.

    PubMed

    Curtis, Matthew E; Jones, Patrick R; Sparkman, O David; Cody, Robert B

    2009-11-01

    Based on the concern about the presence of sulfur materials being in drywall (wallboard), a quick and reliable test to confirm the presence or absence of these materials using direct analysis in real time (DART) mass spectrometry in conjunction with an accurate-mass time-of-flight (TOF) mass spectrometer has been developed and is described here.

  12. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    PubMed

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC-controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient having care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014
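
    The reconciliation logic can be sketched as signing and summing ADC transactions per patient and drug, then comparing the balance with the totals documented in the AIMS and flagging any difference. Transaction types, drugs, and amounts below are invented; the real interface parsing and report layout are not reproduced.

```python
# Sketch: per-patient reconciliation of ADC transaction balances against AIMS
# documentation, flagging discrepancies. All data are invented placeholders.
from collections import defaultdict

adc_transactions = [   # (patient, drug, type, mg); vend/transfer add, waste/return subtract
    ("P1", "fentanyl", "vend", 250), ("P1", "fentanyl", "waste", 50),
    ("P2", "midazolam", "vend", 4),  ("P2", "midazolam", "return", 2),
]
aims_documented = {("P1", "fentanyl"): 200, ("P2", "midazolam"): 1}

SIGN = {"vend": 1, "transfer": 1, "waste": -1, "return": -1}

balance = defaultdict(float)
for patient, drug, kind, mg in adc_transactions:
    balance[(patient, drug)] += SIGN[kind] * mg

for key, adc_mg in sorted(balance.items()):
    aims_mg = aims_documented.get(key, 0)
    flag = "" if adc_mg == aims_mg else "  <-- discrepancy"
    print(f"{key[0]} {key[1]}: ADC balance {adc_mg} mg, AIMS {aims_mg} mg{flag}")
```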

  13. Improving the Depth-Time Fit of Holocene Climate Proxy Measures by Increasing Coherence with a Reference Time-Series

    NASA Astrophysics Data System (ADS)

    Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.

    2007-12-01

    An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series when only a modest number of 14C dates are available. The method presented uses multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies

  14. It's About Time: How Accurate Can Geochronology Become?

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Baldwin, S.; Caffee, M. W.; Gehrels, G. E.; Schoene, B.; Shuster, D. L.; Singer, B. S.

    2015-12-01

    As isotope ratio precisions have improved to as low as ±1 ppm, geochronologic precision has remained essentially unchanged. This largely reflects the nature of radioactivity, whereby the parent decays into a different chemical species, thus putting as much emphasis on determining inter-element ratios as isotopic ones. Even the best current accuracy grows into errors of >0.6 m.y. during the Paleozoic - a span of time equal to ¼ of the Pleistocene. If we are to understand the nature of Paleozoic species variation and climate change at anything like the Cenozoic, we need a 10x improvement in accuracy. The good news is that there is no physical impediment to realizing this. There are enough Pb* atoms in the outer few μm of a Paleozoic zircon grown moments before eruption to permit ±0.01% accuracy in the U-Pb system. What we need are the resources to synthesize the spikes, enhance ionization yields, exploit microscale sampling, and improve knowledge of λ correspondingly. Despite advances in geochronology over the past 40 years (multicollection, multi-isotope spikes, in situ dating), our ability to translate a daughter atom into a detected ion has remained at the level of 1% or so. This means that a ~10² increase in signal can be achieved before we approach a physical limit. Perhaps the most promising approach is the use of broad spectrum lasers that can ionize all neutrals. Radical new approaches to providing mass separation of such signals are emerging, including trapped ion cyclotron resonance and multi-turn, sputtered neutral TOF spectrometers capable of mass resolutions in excess of 10⁵. These innovations hold great promise in geochronology but are largely being developed for cosmochemistry. This may make sense at first glance as cosmochemists are classically atom-limited (IDPs, stardust) but can be a misperception as the outer few μm of a zircon may represent no more mass than a stardust mote. To reach the fundamental limits of geochronologic signals we need to

  15. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  16. Simultaneous transmission of accurate time, stable frequency, data, and sensor system over one fiber with ITU 100 GHz grid

    NASA Astrophysics Data System (ADS)

    Horvath, Tomas; Munster, Petr; Vojtech, Josef; Velc, Radek; Oujezsky, Vaclav

    2018-01-01

    Optical fiber is the most used medium for current telecommunication networks. Besides data transmission, special advanced applications such as accurate time or stable frequency transfer are becoming more common, especially in research and education networks. In addition, new applications like distributed sensing are of interest to ISPs because such sensing enables a new service: protection of the fiber infrastructure. Transmitting all applications over a single fiber can be very cost efficient, but it is necessary to evaluate possible interactions before deploying the service in a real application, especially if the standard 100 GHz grid is considered. We performed laboratory measurements of the simultaneous transmission of 100 G data based on the DP-QPSK modulation format, accurate time, stable frequency, and a sensing system based on phase-sensitive OTDR through two types of optical fiber, G.655 and G.653. These fibers are less common than G.652 fiber, but thanks to their slightly higher nonlinear character, they are suitable for simulating the worst case that can arise in a real network.

  17. Noise-free accurate count of microbial colonies by time-lapse shadow image analysis.

    PubMed

    Ogawa, Hiroyuki; Nasu, Senshi; Takeshige, Motomu; Funabashi, Hisakage; Saito, Mikako; Matsuoka, Hideaki

    2012-12-01

    Microbial colonies in food matrices could be counted accurately by a novel noise-free method based on time-lapse shadow image analysis. An agar plate containing many clusters of microbial colonies and/or meat fragments was trans-illuminated to project their 2-dimensional (2D) shadow images onto a color CCD camera. The 2D shadow images of every cluster distributed within a 3-mm thick agar layer were captured in focus simultaneously by means of a multiple focusing system, and were then converted to 3-dimensional (3D) shadow images. By time-lapse analysis of the 3D shadow images, it was determined whether each cluster comprised single or multiple colonies or a meat fragment. The analytical precision was high enough to distinguish a microbial colony from a meat fragment, to recognize an oval image as two colonies contacting each other, and to detect microbial colonies hidden under a food fragment. The detection of hidden colonies is its outstanding advantage in comparison with other systems. The present system attained accurate counts even for fewer than 5 colonies and is therefore of practical importance. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In the current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple-reference-frames approach for a complete turbine stage with two different interface models: a steady circumferential-average approach called a mixing plane model, and a time-accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds-Averaged Navier-Stokes (RANS) finite volume solver with a standard k-ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure-side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure-side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement

  19. Real-Time Utilization of STSS for Improved Collision Risk Management

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Fero, R.; Smith, T.; Southworth, J.; Wysack, J.

    2012-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. The increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. Key areas of SSA include improved tracking of smaller objects more frequently, determining the intent of non-cooperative maneuvering spacecraft, identifying all potential high-risk conjunction events, and leveraging non-traditional sensors in support of the SSA mission. As the size of the space object catalog grows, the demand for more tracking capacity increases. One solution is to exploit existing sensors that are primarily dedicated to other mission areas. This paper presents details regarding the utilization of the Missile Defense Agency's (MDA) space-based asset Space Tracking Surveillance System (STSS) for operational SSA. Shown are the steps and analysis items that were performed to prepare STSS for real-time utilization during high-interest conjunction events. Our work includes: (1) STSS debris tracking capability; (2) orbit estimation and data fusion between STSS raw observations and JSpOC state data; (3) orbit geometry for MDA assets; and (4) development of the STSS tasking ConOps. Several operational examples are included.

  20. Development of improved enzyme-based and lateral flow immunoassays for rapid and accurate serodiagnosis of canine brucellosis.

    PubMed

    Cortina, María E; Novak, Analía; Melli, Luciano J; Elena, Sebastián; Corbera, Natalia; Romero, Juan E; Nicola, Ana M; Ugalde, Juan E; Comerci, Diego J; Ciocchini, Andrés E

    2017-09-01

    Brucellosis is a widespread zoonotic disease caused by Brucella spp. Brucella canis is the etiological agent of canine brucellosis, a disease that can lead to sterility in bitches and dogs, causing important economic losses in breeding kennels. Early and accurate diagnosis of canine brucellosis is central to controlling the disease and lowering the risk of transmission to humans. Here, we develop and validate enzyme and lateral flow immunoassays for improved serodiagnosis of canine brucellosis using the B. canis rough lipopolysaccharide (rLPS) as antigen. The method used to obtain the rLPS allowed us to produce more homogeneous batches of the antigen, which facilitated the standardization of the assays. To validate the assays, 284 serum samples obtained from naturally infected dogs and healthy animals were analyzed. For the B. canis-iELISA and B. canis-LFIA the diagnostic sensitivity was 98.6%, and the specificity 99.5% and 100%, respectively. We propose the implementation of the B. canis-LFIA as a screening test in combination with the highly accurate laboratory g-iELISA. The B. canis-LFIA is a rapid, accurate and easy-to-use test, characteristics that make it ideal for the serological surveillance of canine brucellosis in the field or in veterinary laboratories. Finally, a blind study including 1040 serum samples obtained from urban dogs showed a prevalence higher than 5%, highlighting the need for new diagnostic tools for a more effective control of the disease in dogs and therefore a reduced risk of transmission of this zoonotic pathogen to humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract the three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems to improve the accuracy of 3D teeth models and at the same time improve the quality of the construction units to help patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing and fast & accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and weighted objectives evaluation chart. Reconstruction results and accuracy evaluation are presented on digitizing different teeth models.

  2. Improving arrival time identification in transient elastography

    NASA Astrophysics Data System (ADS)

    Klein, Jens; McLaughlin, Joyce; Renzi, Daniel

    2012-04-01

    In this paper, we improve the first step in the arrival time algorithm used for shear wave speed recovery in transient elastography. In transient elastography, a shear wave is initiated at the boundary and the interior displacement of the propagating shear wave is imaged with an ultrasound ultra-fast imaging system. The first step in the arrival time algorithm finds the arrival times of the shear wave by cross correlating displacement time traces (the time history of the displacement at a single point) with a reference time trace located near the shear wave source. The second step finds the shear wave speed from the arrival times. In performing the first step, we observe that the wave pulse decorrelates as it travels through the medium, which leads to inaccurate estimates of the arrival times and ultimately to blurring and artifacts in the shear wave speed image. In particular, wave ‘spreading’ accounts for much of this decorrelation. Here we remove most of the decorrelation by allowing the reference wave pulse to spread during the cross correlation. This dramatically improves the images obtained from arrival time identification. We illustrate the improvement of this method on phantom and in vivo data obtained from the laboratory of Mathias Fink at ESPCI, Paris.
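
    A minimal sketch of the central idea, letting the reference pulse spread during cross correlation, assuming discretely sampled displacement time traces; the set of spread factors, the interpolation scheme, and the function names are illustrative assumptions rather than the authors' implementation.

        # Arrival-time estimate that allows the reference pulse to "spread": the
        # reference trace is stretched by several candidate factors and the lag of
        # the best-matching stretched copy is taken as the arrival time.
        import numpy as np

        def arrival_time(trace, reference, dt, spreads=(1.0, 1.2, 1.5, 2.0)):
            best_score, best_time = -np.inf, 0.0
            t_ref = np.arange(len(reference)) * dt
            for s in spreads:
                # resample the reference pulse onto a time axis stretched by s
                t_stretched = np.arange(0.0, t_ref[-1] * s, dt)
                ref_s = np.interp(t_stretched / s, t_ref, reference)
                ref_s /= np.linalg.norm(ref_s) + 1e-12
                corr = np.correlate(trace, ref_s, mode="full")
                if corr.max() > best_score:
                    lag = int(np.argmax(corr)) - (len(ref_s) - 1)
                    best_score, best_time = corr.max(), lag * dt
            return best_time   # arrival time in the same units as dt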

  3. Regional Seismic Travel-Time Prediction, Uncertainty, and Location Improvement in Western Eurasia

    NASA Astrophysics Data System (ADS)

    Flanagan, M. P.; Myers, S. C.

    2004-12-01

    sample WENA1.0 and therefore provide an unbiased assessment of location performance. A statistically significant sample is achieved by generating 500 location realizations based on 5 events with location accuracy between 1 km and 5 km. Each realization is a randomly selected event with location determined by randomly selecting 5 stations from the available network. In 340 cases (68% of the instances), locations are improved, and the average mislocation is reduced from 31 km to 26 km. Preliminary tests of uncertainty estimates suggest that our uncertainty model produces location uncertainty ellipses that are representative of location accuracy. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small-magnitude events in a regional monitoring context. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-CONF-206386.

  4. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
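
    A minimal sketch of a MUSIC-style pseudospectrum search for the heart-rate line, assuming an evenly sampled cardiac signal; the signal-subspace dimension, snapshot length and search band are illustrative assumptions rather than the values used in the paper.

        # MUSIC pseudospectrum estimate of the dominant cardiac frequency.
        import numpy as np

        def music_heart_rate(x, fs, p=2, m=40):
            x = np.asarray(x, float) - np.mean(x)
            # sample autocorrelation matrix built from overlapping snapshots
            snapshots = np.array([x[i:i + m] for i in range(len(x) - m)])
            R = snapshots.T @ snapshots / len(snapshots)
            _, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
            noise = eigvecs[:, : m - p]           # noise subspace (drop p signal vectors)
            freqs = np.linspace(0.7, 3.0, 500)    # plausible heart-rate band, Hz
            k = np.arange(m)
            pseudo = [1.0 / np.linalg.norm(noise.conj().T @ np.exp(2j * np.pi * f / fs * k)) ** 2
                      for f in freqs]
            return 60.0 * freqs[int(np.argmax(pseudo))]   # beats per minute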

  5. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Rojahn, Josh; Ruf, Joe

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed as steady-state, three-dimensional computations with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 degree angles of attack, with the Abort Motors firing with and without the Attitude Control Motors (ACM). In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and the subsequent interaction with the abort and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.

  6. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient dose for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
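
    A minimal sketch of one common way to turn a projection attenuation profile into an effective (water-equivalent) diameter, assuming each detector channel has already been converted to a water-equivalent path length; the exact attenuation-based procedure used in the paper may differ.

        # Effective diameter as the diameter of the water cylinder with the same
        # water-equivalent cross-sectional area as the projected patient.
        import numpy as np

        def effective_diameter(water_equiv_thickness_cm, channel_width_cm):
            area_cm2 = np.sum(water_equiv_thickness_cm) * channel_width_cm
            return 2.0 * np.sqrt(area_cm2 / np.pi)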

  7. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, the practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of the deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC and existing typical DVC algorithms are first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  8. Accurate calibration of a molecular beam time-of-flight mass spectrometer for on-line analysis of high molecular weight species.

    PubMed

    Apicella, B; Wang, X; Passaro, M; Ciajolo, A; Russo, C

    2016-10-15

    Time-of-Flight (TOF) mass spectrometry is a powerful analytical technique, provided that an accurate calibration with standard molecules in the same m/z range as the analytes is performed. Calibration over a very large m/z range is a difficult task, particularly in studies focusing on the detection of high molecular weight clusters of different molecules or high molecular weight species. External calibration is the most common procedure used for TOF mass spectrometric analysis in the gas phase and, generally, the only available standards are mixtures of noble gases, covering a small mass range for calibration, up to m/z 136 (the highest-mass isotope of xenon). In this work, an accurate calibration of a Molecular Beam Time-of-Flight Mass Spectrometer (MB-TOFMS) is presented, based on the use of water clusters up to m/z 3000. The advantages of calibrating an MB-TOFMS with water clusters for the detection of analytes with masses above those of traditional calibrants such as noble gases were quantitatively shown by statistical calculations. A comparison of the water cluster and noble gas calibration procedures in attributing the masses of a test mixture extending up to m/z 800 is also reported. In the case of the analysis of combustion products, another important feature of water cluster calibration was shown, namely the possibility of using the clusters as an "internal standard" formed directly from the combustion water under suitable experimental conditions. The water cluster calibration of an MB-TOFMS gives rise to a ten-fold reduction in error compared to the traditional calibration with noble gases. The consequent improvement in mass accuracy in the calibration of an MB-TOFMS has important implications in various fields where the detection of high molecular mass species is required. In combustion products analysis, it is also possible to obtain a new calibration spectrum before the acquisition of each spectrum by only modifying some operating conditions. Copyright © 2016
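
    A minimal sketch of a water-cluster TOF calibration, assuming the usual linear relation between flight time and the square root of m/z and a protonated (H2O)nH+ cluster series; the cluster range and the placeholder flight times below are illustrative, not measured values.

        # Fit t = a + b*sqrt(m/z) on a water-cluster series and invert it to assign
        # m/z to arbitrary peak times.
        import numpy as np

        n = np.arange(5, 160, 5)                      # cluster sizes spanning the range
        mz_clusters = 18.0106 * n + 1.00728           # m/z of (H2O)nH+
        t_peaks = 0.35 + 1.27 * np.sqrt(mz_clusters)  # placeholder measured flight times

        b, a = np.polyfit(np.sqrt(mz_clusters), t_peaks, 1)

        def mz_from_time(t):
            return ((t - a) / b) ** 2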

  9. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  10. Improvements in Low-cost Ultrasonic Measurements of Blood Flow in "by-passes" Using Narrow & Broad Band Transit-time Procedures

    NASA Astrophysics Data System (ADS)

    Ramos, A.; Calas, H.; Diez, L.; Moreno, E.; Prohías, J.; Villar, A.; Carrillo, E.; Jiménez, A.; Pereira, W. C. A.; Von Krüger, M. A.

    Ischemic cardiopathy is an important cause of death, but revascularization of the coronary arteries (by-pass operation) is a useful solution that reduces the associated morbidity and improves patients' quality of life. During these surgeries, the flow in the coronary vessels must be measured using non-invasive ultrasonic methods, known as transit time flow measurements (TTFM), which are currently the most accurate option. TTFM is a common intra-operative tool, in conjunction with classic Doppler velocimetry, to check the quality of these surgical processes for implanting grafts in parallel with the coronary arteries. This work shows important improvements achieved in flow metering, obtained in our research laboratories (CSIC, ICIMAF, COPPE) and tested under real surgical conditions at Cardiocentro-HHA, for both narrowband (NB) and broadband (BB) regimes, by applying the results of a CYTED multinational project (Ultrasonic & computational systems for cardiovascular diagnostics). Mathematical models and phantoms were created to accurately evaluate flow measurements under laboratory conditions before our new electronic designs and low-cost implementations, which improve previous TTFM systems and include analog detection, acquisition and post-processing, and a portable PC. Both regimes (NB and BB), with complementary performance under different conditions, were considered. Finally, specific software was developed to offer facilities to surgeons during their interventions.

  11. Stability improvement of an operational two-way satellite time and frequency transfer system

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Jiun; Fujieda, Miho; Takiguchi, Hiroshi; Tseng, Wen-Hung; Tsao, Hen-Wai

    2016-04-01

    To keep national time scales accurately coherent with Coordinated Universal Time (UTC), many national metrology institutes (NMIs) use two-way satellite time and frequency transfer (TWSTFT) to continuously measure the time difference with other NMIs over an international baseline. Some NMIs have ultra-stable clocks with stability better than 10⁻¹⁶. However, current operational TWSTFT can only provide a frequency uncertainty of 10⁻¹⁵ and a time uncertainty of 1 ns, which is inadequate. The uncertainty is dominated by the short-term stability and the diurnals, i.e. the measurement variation with a period of one day. The aim of this work is to improve the stability of operational TWSTFT systems without additional transmission, bandwidth or increase in signal power. A software-defined receiver (SDR) comprising a high-resolution correlator and successive interference cancellation, used in an open-loop configuration as the TWSTFT receiver, reduces the time deviation from 140 ps to 73 ps at an averaging time of 1 h, and occasionally suppresses diurnals. To study the source of the diurnals, TWSTFT is performed using a 2 × 2 earth station (ES) array. Consequently, some ESs sensitive to temperature variation are identified, and the diurnals are significantly reduced by employing insensitive ESs. Hence, operational TWSTFT using the proposed SDR with insensitive ESs achieves a time deviation of 41 ps at 1 h, and 80 ps for averaging times from 1 h to 20 h.
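
    For reference, a minimal sketch of the standard time-deviation (TDEV) estimator quoted above, assuming evenly spaced time-difference data x[i] with basic measurement interval tau0; this is the textbook statistic, not code from the paper.

        # TDEV at averaging factor m (averaging time tau = m*tau0).
        import numpy as np

        def tdev(x, tau0, m):
            x = np.asarray(x, float)
            N = len(x)
            terms = []
            for j in range(N - 3 * m + 1):
                i = np.arange(j, j + m)
                terms.append(np.sum(x[i + 2 * m] - 2.0 * x[i + m] + x[i]))
            terms = np.asarray(terms)
            sigma_x = np.sqrt(np.mean(terms ** 2) / (6.0 * m ** 2))
            return m * tau0, sigma_x   # (averaging time, TDEV)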

  12. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) has become higher and higher, so the maintenance demands on LRFs are also rising. According to the dominant idea of "time analog spatial distance" in simulated testing of pulsed range finders, the key to distance-simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit and an external delay circuit that can compensate the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated testing of high-precision pulsed range finders.
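
    A minimal sketch of the "time analog spatial distance" mapping, assuming the simulated delay is split into coarse counter ticks plus a fine residual delay; the 100 MHz coarse-clock frequency is an illustrative assumption.

        # A simulated range R corresponds to a round-trip delay 2R/c; the coarse part
        # is handled by a counter and the residual by the fine delay circuit.
        C = 299_792_458.0      # speed of light, m/s
        CLOCK_HZ = 100e6       # assumed coarse-counter frequency

        def delay_for_range(range_m):
            delay_s = 2.0 * range_m / C
            ticks = int(delay_s * CLOCK_HZ)          # coarse delay in clock periods
            fine_s = delay_s - ticks / CLOCK_HZ      # residual for the fine delay stage
            return ticks, fine_s

        # e.g. a 3000 m target needs ~20.01 us: 2001 coarse ticks plus ~3.8 ns fine delay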

  14. Properties of quantum systems via diagonalization of transition amplitudes. II. Systematic improvements of short-time propagation

    NASA Astrophysics Data System (ADS)

    Vidanović, Ivana; Bogojević, Aleksandar; Balaž, Antun; Belić, Aleksandar

    2009-12-01

    In this paper, building on a previous analysis [I. Vidanović, A. Bogojević, and A. Belić, preceding paper, Phys. Rev. E 80, 066705 (2009)] of exact diagonalization of the space-discretized evolution operator for the study of properties of nonrelativistic quantum systems, we present a substantial improvement to this method. We apply the recently introduced effective-action approach for obtaining a short-time expansion of the propagator up to very high orders to calculate the matrix elements of the space-discretized evolution operator. This improves by many orders of magnitude the previously used approximations for the discretized matrix elements and allows us to numerically obtain large numbers of accurate energy eigenvalues and eigenstates using numerical diagonalization. We illustrate this approach on several one- and two-dimensional models. The quality of the numerically calculated higher-order eigenstates is assessed by comparison with the semiclassical cumulative density of states.

  15. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    PubMed

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in its current Excel format. The automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we have developed new software called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by the conventional HLAMatchmaker or by the automated EpHLA method. Users testing these two methods were asked to record: (i) the time required for completion of the analysis (in minutes); (ii) the number of eplets obtained for class I and class II HLA molecules; (iii) the categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) the determination of AMMs based on eplet reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. In particular, the EpHLA software was faster and more reliable than, but equally accurate as, the conventional method in defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients as well as in many other applications.

  16. Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.

    PubMed

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.
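
    A minimal sketch of the consistency check, assuming each scan inside the elution range yields an observed isotopic envelope that can be compared with the theoretical one; the correlation measure and the 0.95 threshold are illustrative assumptions, not the authors' exact criterion.

        # Flag scans whose observed isotopic envelope stays consistent with the
        # expected envelope; scans that fail are treated as corrupted by a
        # co-eluting species and excluded from quantification.
        import numpy as np

        def consistent_scans(scan_envelopes, expected_envelope, min_corr=0.95):
            corr = [np.corrcoef(env, expected_envelope)[0, 1] for env in scan_envelopes]
            return np.asarray(corr) >= min_corr     # boolean mask of scans to keep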

  17. Accurate simulation of backscattering spectra in the presence of sharp resonances

    NASA Astrophysics Data System (ADS)

    Barradas, N. P.; Alves, E.; Jeynes, C.; Tosaki, M.

    2006-06-01

    In elastic backscattering spectrometry, the shape of the observed spectrum due to resonances in the nuclear scattering cross-section is influenced by many factors. If the energy spread of the beam before interaction is larger than the resonance width, then a simple convolution with the energy spread on exit and with the detection system resolution will lead to a calculated spectrum with a resonance much sharper than the observed signal. Also, the yield from a thin layer will not be calculated accurately. We have developed an algorithm for the accurate simulation of backscattering spectra in the presence of sharp resonances. Albeit approximate, the algorithm leads to dramatic improvements in the quality and accuracy of the simulations. It is simple to implement and leads to only small increases of the calculation time, being thus suitable for routine data analysis. We show different experimental examples, including samples with roughness and porosity.

  18. Providing accurate near real-time fire alerts for Protected Areas through NASA FIRMS: Opportunities and Challenges

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Wong, M.; Murphy, K. J.

    2013-12-01

    The NASA Fire Information for Resource Management System (FIRMS) is at the forefront of providing global near real-time (NRT) MODIS thermal anomaly / hotspot location data to end-users. FIRMS serves the data via an interactive Web GIS named Web Fire Mapper, downloads of NRT active fire data, archive data downloads for MODIS hotspots dating back to 1999, and a hotspot email alert system. The FIRMS Email Alerts system has been successfully alerting users to fires in their area of interest in near real-time and/or via daily and weekly email summaries, with an option to receive MODIS hotspot data as a text file (CSV) attachment. Currently, there are more than 7000 email alert subscriptions from more than 100 countries. Specifically, the email alerts system is designed to generate and send an email alert for any region or area on the globe, with a special focus on providing alerts for protected areas worldwide. For many protected areas, email alerts are particularly useful for early fire detection, monitoring ongoing fires, and allocating resources to protect wildlife and natural resources of particular value. For protected areas, FIRMS uses the World Database on Protected Areas (WDPA) supplied by the United Nations Environment Program - World Conservation Monitoring Centre (UNEP-WCMC). Maintaining the most up-to-date, accurate boundary geometry for the protected areas for the email alerts is a challenge, as the WDPA is continuously updated due to changing boundaries and the merging or delisting of certain protected areas. Because of this dynamic nature of the protected areas database, the FIRMS protected areas database is frequently out of date with respect to the most current version of the WDPA database. To maintain the most up-to-date boundary information for protected areas and to comply with the WDPA terms and conditions, FIRMS needs to constantly update its database of protected areas. Currently, FIRMS strives to keep its database up to date by downloading the most recent

  19. Improving Patient Satisfaction with Waiting Time

    ERIC Educational Resources Information Center

    Eilers, Gayleen M.

    2004-01-01

    Waiting times are a significant component of patient satisfaction. A patient satisfaction survey performed in the author's health center showed that students rated waiting time lowest of the listed categories, with "A" ratings of 58% overall, 63% for scheduled appointments, and 41% for the walk-in clinic. The center used a quality improvement process and…

  20. Enhanced Time Out: An Improved Communication Process.

    PubMed

    Nelson, Patricia E

    2017-06-01

    An enhanced time out is an improved communication process initiated to prevent such surgical errors as wrong-site, wrong-procedure, or wrong-patient surgery. The enhanced time out at my facility mandates participation from all members of the surgical team and requires designated members to respond to specified time out elements on the surgical safety checklist. The enhanced time out incorporated at my facility expands upon the safety measures from the World Health Organization's surgical safety checklist and ensures that all personnel involved in a surgical intervention perform a final check of relevant information. Initiating the enhanced time out at my facility was intended to improve communication and teamwork among surgical team members and provide a highly reliable safety process to prevent wrong-site, wrong-procedure, and wrong-patient surgery. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  1. A knowledge-based potential with an accurate description of local interactions improves discrimination between native and near-native protein conformations.

    PubMed

    Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco

    2007-01-01

    The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.

  2. The New Aptima HCV Quant Dx Real-time TMA Assay Accurately Quantifies Hepatitis C Virus Genotype 1-6 RNA.

    PubMed

    Chevaliez, Stéphane; Dubernet, Fabienne; Dauvillier, Claude; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-06-01

    Sensitive and accurate hepatitis C virus (HCV) RNA detection and quantification is essential for the management of chronic hepatitis C therapy. Currently available platforms and assays are usually batched and require at least 5 hours of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed Aptima HCV Quant Dx assay, which eliminates the need for batch processing and automates all aspects of nucleic acid testing in a single step, to accurately detect and quantify HCV RNA in a large series of patients infected with different HCV genotypes. The limit of detection was estimated to be 2.3 IU/mL. The specificity of the assay was 98.6% (95% confidence interval: 96.1%-99.5%). Intra-assay and inter-assay coefficients of variation ranged from 0.09% to 5.61%, and 1.05% to 3.65%, respectively. The study of serum specimens from patients infected with HCV genotypes 1 to 6 showed a satisfactory relationship between HCV RNA levels measured by the Aptima HCV Quant Dx assay and both real-time PCR comparators (the Abbott RealTime HCV and Cobas AmpliPrep/Cobas TaqMan HCV Test, version 2.0, assays). The new Aptima HCV Quant Dx assay is rapid, sensitive, reasonably specific and reproducible, and accurately quantifies HCV RNA in serum samples from patients with chronic HCV infection, including patients on antiviral treatment. The Aptima HCV Quant Dx assay can thus be confidently used to detect and quantify HCV RNA in both clinical trials with new anti-HCV drugs and clinical practice in Europe and the US. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Improved centroid moment tensor analyses in the NIED AQUA (Accurate and QUick Analysis system for source parameters)

    NASA Astrophysics Data System (ADS)

    Kimura, H.; Asano, Y.; Matsumoto, T.

    2012-12-01

    The rapid determination of hypocentral parameters and their transmission to the public are valuable components of disaster mitigation. We have operated an automatic system for this purpose—termed the Accurate and QUick Analysis system for source parameters (AQUA)—since 2005 (Matsumura et al., 2006). In this system, the initial hypocenter, the moment tensor (MT), and the centroid moment tensor (CMT) solutions are automatically determined and posted on the NIED Hi-net Web site (www.hinet.bosai.go.jp). This paper describes improvements made to the AQUA to overcome limitations that became apparent after the 2011 Tohoku Earthquake (05:46:17, March 11, 2011 in UTC). The improvements included the processing of NIED F-net velocity-type strong motion records, because NIED F-net broadband seismographs are saturated for great earthquakes such as the 2011 Tohoku Earthquake. These velocity-type strong motion seismographs provide unsaturated records not only for the 2011 Tohoku Earthquake, but also for recording stations located close to the epicenters of M>7 earthquakes. We used 0.005-0.020 Hz records for M>7.5 earthquakes, in contrast to the 0.01-0.05 Hz records employed in the original system. The initial hypocenters determined based on arrival times picked by using seismograms recorded by NIED Hi-net stations can have large errors in terms of magnitude and hypocenter location, especially for great earthquakes or earthquakes located far from the onland Hi-net network. The size of the 2011 Tohoku Earthquake was initially underestimated in the AQUA to be around M5 at the initial stage of rupture. Numerous aftershocks occurred at the outer rise east of the Japan trench, where a great earthquake is anticipated to occur. Hence, we modified the system to repeat the MT analyses assuming a larger size, for all earthquakes for which the magnitude was initially underestimated. We also broadened the search range of centroid depth for earthquakes located far from the onland Hi

  4. If Time Is Brain Where Is the Improvement in Prehospital Time after Stroke?

    PubMed Central

    Pulvers, Jeremy N.; Watson, John D. G.

    2017-01-01

    Despite the availability of thrombolytic and endovascular therapy for acute ischemic stroke, many patients are ineligible due to delayed hospital arrival. The identification of factors related to either early or delayed hospital arrival may reveal potential targets of intervention to reduce prehospital delay and improve access to time-critical thrombolysis and clot retrieval therapy. Here, we have reviewed studies reporting on factors associated with either early or delayed hospital arrival after stroke, together with an analysis of stroke onset to hospital arrival times. Much effort in the stroke treatment community has been devoted to reducing door-to-needle times with encouraging improvements. However, this review has revealed that the median onset-to-door times and the percentage of stroke patients arriving before the logistically critical 3 h have shown little improvement in the past two decades. Major factors affecting prehospital time were related to emergency medical pathways, stroke symptomatology, patient and bystander behavior, patient health characteristics, and stroke treatment awareness. Interventions addressing these factors may prove effective in reducing prehospital delay, allowing prompt diagnosis, which in turn may increase the rates and/or efficacy of acute treatments such as thrombolysis and clot retrieval therapy and thereby improve stroke outcomes. PMID:29209269

  5. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8

  6. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing, which has lower measurement performance and a larger error, can no longer meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. Firstly, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB software. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and by fitting the parameters, and a timing accuracy of 4.63 ps is achieved.
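
    A minimal sketch of the leading-edge idea described above, assuming a sampled echo waveform and a fixed threshold: amplifying the signal by several gains moves the threshold crossing earlier, and for a linearly rising edge the crossing time is linear in 1/gain, so extrapolating to infinite gain estimates the true edge. The gain set, threshold handling and fit model are illustrative assumptions rather than the authors' exact processing chain.

        # Extrapolate the fixed-threshold crossing time to infinite gain.
        # Assumes every amplified copy of the echo actually crosses the threshold.
        import numpy as np

        def threshold_crossing(t, v, thresh):
            idx = int(np.argmax(v >= thresh))        # first sample above threshold
            t0, t1, v0, v1 = t[idx - 1], t[idx], v[idx - 1], v[idx]
            return t0 + (thresh - v0) * (t1 - t0) / (v1 - v0)   # linear interpolation

        def leading_edge_time(t, v, thresh, gains=(1, 2, 4, 8, 16)):
            crossings = [threshold_crossing(t, g * v, thresh) for g in gains]
            slope, intercept = np.polyfit(1.0 / np.asarray(gains, float), crossings, 1)
            return intercept                         # crossing time as gain -> infinity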

  7. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana

    2011-08-19

    Highlights: Mitochondrial dysfunction is central to many diseases of oxidative stress. 95% of the mitochondrial genome is duplicated in the nuclear genome. Dilution of untreated genomic DNA leads to dilution bias. Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
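
    A minimal sketch of the usual Mt/N arithmetic behind such qPCR measurements, assuming comparable amplification efficiencies for the mitochondrial and single-copy nuclear amplicons; the Ct values are placeholders and the exact normalization used in the paper may differ.

        # Relative mitochondrial DNA content from the Ct difference between a unique
        # mitochondrial amplicon and a unique single-copy nuclear amplicon.
        ct_mito = 14.2       # placeholder Ct, mitochondrial target
        ct_nuclear = 24.9    # placeholder Ct, single-copy nuclear target

        delta_ct = ct_nuclear - ct_mito
        mt_per_cell = 2 * 2 ** delta_ct   # factor 2 for the diploid nuclear genome
        print(f"~{mt_per_cell:.0f} mtDNA copies per diploid genome")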

  8. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.
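
    For orientation, one generic incremental-pressure projection step of the kind analyzed here, written in LaTeX with illustrative notation (u velocity, p pressure, phi pressure correction, nu viscosity); the specific discretizations, pressure updates and boundary treatments studied in the paper differ in their details.

        % advance an intermediate velocity, then project onto divergence-free fields
        \frac{u^{*} - u^{n}}{\Delta t}
            = -\left( u \cdot \nabla u \right)^{n+1/2}
              - \nabla p^{\,n-1/2}
              + \nu \, \nabla^{2} \frac{u^{*} + u^{n}}{2},
        \qquad
        \nabla^{2} \phi = \frac{\nabla \cdot u^{*}}{\Delta t},
        \qquad
        u^{n+1} = u^{*} - \Delta t \, \nabla \phi,
        \qquad
        p^{\,n+1/2} = p^{\,n-1/2} + \phi

    The last relation is the simplest variant of the global pressure-update formula referred to in the abstract.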

  9. Numerical Analysis and Improved Algorithms for Lyapunov-Exponent Calculation of Discrete-Time Chaotic Systems

    NASA Astrophysics Data System (ADS)

    He, Jianbin; Yu, Simin; Cai, Jianping

    2016-12-01

    Lyapunov exponents are an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic or not. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, according to the eigenvalue method, more accurate calculations of the Lyapunov exponents are obtained as the number of iterations increases, and the limits exist. However, due to the finite precision of computers and other reasons, the results may overflow, be unrecognizable, or be inaccurate, which can be stated as follows: (1) the number of iterations cannot be too large, otherwise the simulation result appears as an error message of NaN or Inf; (2) even if the error message of NaN or Inf does not appear, then with increasing iterations all computed Lyapunov exponents get close to the largest Lyapunov exponent, which leads to inaccurate results; (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms via QR orthogonal decomposition and SVD orthogonal decomposition approaches so as to solve the above-mentioned problems. Finally, some examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
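
    A minimal sketch of the QR re-orthonormalization approach for a discrete-time map, using the Henon map as a stand-in example; the map, its parameters and the iteration counts are illustrative and not taken from the paper.

        # QR-based Lyapunov exponents: re-orthonormalizing the accumulated Jacobian
        # product at every step avoids overflow and the collapse of all exponents
        # onto the largest one.
        import numpy as np

        def henon(state, a=1.4, b=0.3):
            x, y = state
            return np.array([1.0 - a * x * x + y, b * x])

        def henon_jacobian(state, a=1.4, b=0.3):
            x, _ = state
            return np.array([[-2.0 * a * x, 1.0],
                             [b, 0.0]])

        def lyapunov_qr(n_iter=10000, transient=1000):
            state = np.array([0.1, 0.1])
            for _ in range(transient):               # discard the transient
                state = henon(state)
            Q = np.eye(2)
            log_sums = np.zeros(2)
            for _ in range(n_iter):
                Q, R = np.linalg.qr(henon_jacobian(state) @ Q)
                log_sums += np.log(np.abs(np.diag(R)))
                state = henon(state)
            return log_sums / n_iter                 # exponents per iteration

        print(lyapunov_qr())   # roughly [0.42, -1.62] for the standard parameters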

  10. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    NASA Astrophysics Data System (ADS)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is accurate to the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use as inputs the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimate than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimate. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be carried out in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.

  11. Improving TWSTFT short-term stability by network time transfer.

    PubMed

    Tseng, Wen-Hung; Lin, Shinn-Yan; Feng, Kai-Ming; Fujieda, M; Maeno, H

    2010-01-01

    Two-way satellite time and frequency transfer (TWSTFT) is one of the major techniques to compare the atomic time scales between timing laboratories. As more and more TWSTFT measurements have been performed, the large number of point-to-point 2-way time transfer links has grown into a complex network. For future improvement of the TWSTFT performance, it is important to reduce the measurement noise of the TWSTFT results. One method is using TWSTFT network time transfer. The Asia-Pacific network is an exceptional case of simultaneous TWSTFT measurements. Some indirect links through relay stations show better short-term stabilities than the direct link because the measurement noise may be neutralized in a simultaneous measurement network. In this paper, the authors propose a feasible method to improve the short-term stability by combining the direct and indirect links in the network. Through comparisons of time deviation (TDEV), the results of network time transfer exhibit clearly improved short-term stabilities. For the links used to compare 2 hydrogen masers, the average gain of TDEV at an averaging time of 1 h is 22%. As TWSTFT short-term stability can be improved by network time transfer, the network may allow a larger number of simultaneously transmitting stations.
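
    The core idea of combining direct and indirect links, and judging the result by its time deviation, can be sketched as follows. The relay-path construction, the equal weighting, and the plain TDEV estimator below are simplifying assumptions for illustration; the paper's actual combination scheme and uncertainty treatment are not reproduced here.

```python
import numpy as np

def indirect_link(x_ac, x_cb):
    """Form an indirect A-B time difference through relay station C:
    [UTC(A) - UTC(C)] + [UTC(C) - UTC(B)] = UTC(A) - UTC(B)."""
    return np.asarray(x_ac) + np.asarray(x_cb)

def combine_links(x_direct, x_indirects):
    """Equal-weight average of the direct link and the indirect links
    (the weighting here is an assumption, not the paper's scheme)."""
    return np.mean(np.vstack([x_direct] + list(x_indirects)), axis=0)

def tdev(x, m):
    """Time deviation at averaging factor m for equally spaced phase data x,
    using the standard estimator based on averaged second differences."""
    x = np.asarray(x, dtype=float)
    n_terms = x.size - 3 * m + 1
    if n_terms < 1:
        raise ValueError("series too short for this averaging factor")
    acc = 0.0
    for n in range(n_terms):
        inner = np.sum(x[n + 2 * m:n + 3 * m] - 2.0 * x[n + m:n + 2 * m] + x[n:n + m])
        acc += inner ** 2
    return np.sqrt(acc / (6.0 * m ** 2 * n_terms))
```

    Comparing tdev(x_direct, m) with tdev(combine_links(x_direct, [indirect_link(x_ac, x_cb)]), m), at the averaging factor corresponding to 1 h, mirrors the comparison reported above.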

  12. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short- and long-term behavior of the error. It is proved that, in two space dimensions, no local-in-time, constant-coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable-coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant-coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  13. Enabling high grayscale resolution displays and accurate response time measurements on conventional computers.

    PubMed

    Li, Xiangrui; Lu, Zhong-Lin

    2012-02-29

    Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ (1) and DataPixx (2) use the Digital Visual Interface (DVI) output from graphics cards and high resolution (14 or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher (3) described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network (4) and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect

  14. The application of intraoperative transit time flow measurement to accurately assess anastomotic quality in sequential vein grafting

    PubMed Central

    Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong

    2013-01-01

    OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp at the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named the flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in a significantly lower mean graft blood flow (P < 0.05) and, in contrast, a significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses were improved substantially after revision. CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to

  15. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  16. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  17. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of the energetic runaway beam at the same time.

  18. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    NASA Astrophysics Data System (ADS)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning is becoming more and more popular; in particular, intelligent navigation systems that rely on GPS and dead-reckoning technology are developing quickly for a large future market in China. In this paper a practical combined positioning model, GPS/DR/MM, is put forward, which integrates GPS, Gyro, Vehicle Speed Sensor (VSS) and digital navigation maps to provide accurate and real-time position for intelligent navigation systems. The model is designed for automotive navigation systems and uses a Kalman filter to improve positioning and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology is then used to provide map coordinates for display. To illustrate the validity of the model, several experiments on integrated GPS/DR positioning in an intelligent navigation system are presented, supporting the conclusion that the Kalman-filter-based GPS/DR integrated positioning approach is necessary, feasible and efficient for intelligent navigation applications. Certainly, this combined positioning model, like any other model, cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.
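
    A heavily simplified sketch of the Kalman-filter fusion step is shown below: a one-dimensional constant-velocity filter that blends noisy GPS position fixes with a dead-reckoning speed estimate. The state model, the noise parameters, and the one-dimensional simplification are assumptions made for illustration; the system described in the paper is two-dimensional and additionally performs map matching.

```python
import numpy as np

def kalman_gps_dr(gps_pos, dr_speed, dt, q=0.5, r_gps=5.0, r_dr=0.1):
    """1-D constant-velocity Kalman filter fusing GPS positions with DR speed.

    gps_pos, dr_speed : sequences of measurements at a fixed interval dt
    q, r_gps, r_dr    : illustrative process/measurement noise values (assumptions)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # state transition (pos, vel)
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    H_gps = np.array([[1.0, 0.0]])                               # GPS observes position
    H_dr = np.array([[0.0, 1.0]])                                # DR observes speed

    x = np.array([gps_pos[0], dr_speed[0]], dtype=float)
    P = np.eye(2) * 10.0
    estimates = []
    for z_pos, z_vel in zip(gps_pos, dr_speed):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # sequential updates: GPS position fix, then DR speed
        for H, z, r in ((H_gps, z_pos, r_gps), (H_dr, z_vel, r_dr)):
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```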

  19. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  20. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

    By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions will become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well-fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells. With

  1. QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.

    PubMed

    Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus

    2018-03-01

    Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low concentrated molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB1 < k). The improved analytical 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    PubMed

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
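
    The bookkeeping behind such a state-based model can be sketched as follows: the energy of a timeslot is the sum, over CPU and radio states, of the time spent in each state times that state's current draw and the supply voltage. The state names, current values, and slot composition below are illustrative placeholders, not the calibrated OpenMote figures reported by the authors.

```python
# Hedged sketch of a state-based TSCH energy model (all numbers are assumptions).

STATE_CURRENT_MA = {      # assumed per-state current draw in milliamps
    "cpu_active": 6.0,
    "radio_rx": 20.0,
    "radio_tx": 24.0,
    "sleep": 0.002,
}
SUPPLY_VOLTAGE = 3.0      # volts (assumed)

def slot_energy_mj(state_durations_ms):
    """Energy of one TSCH timeslot in millijoules from per-state durations (ms).

    mA * ms * V gives microjoules; dividing by 1000 converts to millijoules.
    """
    return sum(STATE_CURRENT_MA[state] * t_ms * SUPPLY_VOLTAGE / 1000.0
               for state, t_ms in state_durations_ms.items())

# e.g. a hypothetical transmit slot: CPU work, transmit, listen for the ACK, sleep
tx_slot = {"cpu_active": 2.0, "radio_tx": 4.0, "radio_rx": 1.0, "sleep": 3.0}
print(f"{slot_energy_mj(tx_slot):.3f} mJ per slot")
```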

  3. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware

    PubMed Central

    Municio, Esteban; Van de Velde, Bruno; Latré, Steven

    2018-01-01

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks. PMID:29393900

  4. The New Aptima HBV Quant Real-Time TMA Assay Accurately Quantifies Hepatitis B Virus DNA from Genotypes A to F

    PubMed Central

    Dauvillier, Claude; Dubernet, Fabienne; Poveda, Jean-Dominique; Laperche, Syria; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-01-01

    ABSTRACT Sensitive and accurate hepatitis B virus (HBV) DNA detection and quantification are essential to diagnose HBV infection, establish the prognosis of HBV-related liver disease, and guide the decision to treat and monitor the virological response to antiviral treatment and the emergence of resistance. Currently available HBV DNA platforms and assays are generally designed for batching multiple specimens within an individual run and require at least one full day of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed, fully automated, one-step Aptima HBV Quant assay to accurately detect and quantify HBV DNA in a large series of patients infected with different HBV genotypes. The limit of detection of the assay was estimated to be 4.5 IU/ml. The specificity of the assay was 100%. Intra-assay and interassay coefficients of variation ranged from 0.29% to 5.07% and 4.90% to 6.85%, respectively. HBV DNA levels from patients infected with HBV genotypes A to F measured with the Aptima HBV Quant assay strongly correlated with those measured by two commercial real-time PCR comparators (Cobas AmpliPrep/Cobas TaqMan HBV test, version 2.0, and Abbott RealTime HBV test). In conclusion, the Aptima HBV Quant assay is sensitive, specific, and reproducible and accurately quantifies HBV DNA in plasma samples from patients with chronic HBV infections of all genotypes, including patients on antiviral treatment with nucleoside or nucleotide analogues. The Aptima HBV Quant assay can thus confidently be used to detect and quantify HBV DNA in both clinical trials with new anti-HBV drugs and clinical practice. PMID:28202793

  5. Average chewing pattern improvements following Disclusion Time reduction.

    PubMed

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine if reducing long Disclusion Time to short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD were assessed with Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), the time to peak contraction (p < 0.000004), the time to 50% peak contraction (p < 0.00001), and in the decreased number of silent periods per side (right p < 0.0000002; left p < 0.0000006). Post-ICAGD ACP changes were also significant; the terminal chewing position became closer to centric occlusion (p < 0.002), the maximum and average chewing velocities increased (p < 0.002; p < 0.00005), the opening and closing times, the cycle time, and the occlusal contact time all decreased (p < 0.004-0.0001). The average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week's time of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.
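
    The statistical comparison described above is a standard paired pre/post test; the sketch below shows such a test with SciPy on made-up numbers (the values are hypothetical placeholders, not the study's data).

```python
# Minimal sketch of a Student's paired t-test at alpha = 0.05 on Disclusion Time
# measured before and after an intervention; the numbers are hypothetical.
import numpy as np
from scipy import stats

pre_dt  = np.array([2.3, 1.9, 2.4, 2.0, 1.8])   # seconds, made-up pre-treatment values
post_dt = np.array([0.5, 0.4, 0.6, 0.4, 0.3])   # seconds, made-up post-treatment values

t_stat, p_value = stats.ttest_rel(pre_dt, post_dt)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}, significant = {p_value < 0.05}")
```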

  6. Accurate measurements of cross-plane thermal conductivity of thin films by dual-frequency time-domain thermoreflectance (TDTR)

    NASA Astrophysics Data System (ADS)

    Jiang, Puqing; Huang, Bin; Koh, Yee Kan

    2016-07-01

    Accurate measurements of the cross-plane thermal conductivity Λcross of a high-thermal-conductivity thin film on a low-thermal-conductivity (Λs) substrate (e.g., Λcross/Λs > 20) are challenging, due to the low thermal resistance of the thin film compared with that of the substrate. In principle, Λcross could be measured by time-domain thermoreflectance (TDTR), using a high modulation frequency fh and a large laser spot size. However, with one TDTR measurement at fh, the uncertainty of the TDTR measurement is usually high due to low sensitivity of TDTR signals to Λcross and high sensitivity to the thickness hAl of Al transducer deposited on the sample for TDTR measurements. We observe that in most TDTR measurements, the sensitivity to hAl only depends weakly on the modulation frequency f. Thus, we performed an additional TDTR measurement at a low modulation frequency f0, such that the sensitivity to hAl is comparable but the sensitivity to Λcross is near zero. We then analyze the ratio of the TDTR signals at fh to that at f0, and thus significantly improve the accuracy of our Λcross measurements. As a demonstration of the dual-frequency approach, we measured the cross-plane thermal conductivity of a 400-nm-thick nickel-iron alloy film and a 3-μm-thick Cu film, both with an accuracy of ˜10%. The dual-frequency TDTR approach is useful for future studies of thin films.

  7. Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations

    DTIC Science & Technology

    2011-09-30

    Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations. Curtis D. Mobley, Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005. Related documentation: EcoLight-S 1.0 Users’ Guide and Technical Documentation, Sequoia Scientific, Inc., Bellevue, WA, 38 pages; Mobley, C. D., 2011, Fast light calculations

  8. Accurate Region-of-Interest Recovery Improves the Measurement of the Cell Migration Rate in the In Vitro Wound Healing Assay.

    PubMed

    Bedoya, Cesar; Cardona, Andrés; Galeano, July; Cortés-Mancera, Fabián; Sandoz, Patrick; Zarzycki, Artur

    2017-12-01

    The wound healing assay is widely used for the quantitative analysis of highly regulated cellular events. In this assay, a wound is deliberately produced on a confluent cell monolayer, and then the rate of wound reduction (WR) is characterized by processing images of the same regions of interest (ROIs) recorded at different time intervals. In this method, sharp-image ROI recovery is indispensable to compensate for displacements of the cell cultures due either to the exploration of multiple sites of the same culture or to transfers from the microscope stage to a cell incubator. ROI recovery is usually done manually and, although a low-magnification microscope objective (10x) is generally used, repositioning imperfections constitute a major source of errors detrimental to the WR measurement accuracy. We address this ROI recovery issue by using pseudoperiodic patterns fixed onto the cell culture dishes, allowing the easy localization of ROIs and the accurate quantification of positioning errors. The method is applied to a tumor-derived cell line, and the WR rates are measured by means of two different image processing software packages. Sharp ROI recovery based on the proposed method is found to improve significantly the accuracy of the WR measurement and the positioning under the microscope.

  9. Dual-view inverted selective plane illumination microscopy (diSPIM) with improved background rejection for accurate 3D digital pathology

    NASA Astrophysics Data System (ADS)

    Hu, Bihe; Bolus, Daniel; Brown, J. Quincy

    2018-02-01

    Current gold-standard histopathology for cancerous biopsies is destructive, time consuming, and limited to 2D slices, which do not faithfully represent true 3D tumor micro-morphology. Light sheet microscopy has emerged as a powerful tool for 3D imaging of cancer biospecimens. Here, we utilize the versatile dual-view inverted selective plane illumination microscopy (diSPIM) to render digital histological images of cancer biopsies. Dual-view architecture enabled more isotropic resolution in X, Y, and Z; and different imaging modes, such as adding electronic confocal slit detection (eCSD) or structured illumination (SI), can be used to improve degraded image quality caused by background signal of large, scattering samples. To obtain traditional H&E-like images, we used DRAQ5 and eosin (D&E) staining, with 488nm and 647nm laser illumination, and multi-band filter sets. Here, phantom beads and a D&E stained buccal cell sample have been used to verify our dual-view method. We also show that via dual view imaging and deconvolution, more isotropic resolution has been achieved for optical cleared human prostate sample, providing more accurate quantitation of 3D tumor architecture than was possible with single-view SPIM methods. We demonstrate that the optimized diSPIM delivers more precise analysis of 3D cancer microarchitecture in human prostate biopsy than simpler light sheet microscopy arrangements.

  10. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
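
    As a generic illustration of what a high-order, single-step explicit propagation looks like in practice (not one of the algorithm families presented in the report), the sketch below advances the 1-D linear advection equation with a fourth-order central difference in space and classical RK4 in time on a periodic grid.

```python
import numpy as np

def advect(u, c, dx, dt, n_steps):
    """Propagate u_t + c u_x = 0 with 4th-order central differences and RK4.

    A generic high-order explicit scheme for illustration only; it is not
    one of the algorithm families described in the report.
    """
    def dudt(u):
        # 4th-order central difference of u_x on a periodic grid
        ux = (-np.roll(u, -2) + 8 * np.roll(u, -1)
              - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)
        return -c * ux

    for _ in range(n_steps):
        k1 = dudt(u)
        k2 = dudt(u + 0.5 * dt * k1)
        k3 = dudt(u + 0.5 * dt * k2)
        k4 = dudt(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return u

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u0 = np.sin(x)
u = advect(u0.copy(), c=1.0, dx=x[1] - x[0], dt=0.01, n_steps=628)  # ~one period
print(np.max(np.abs(u - u0)))   # small error after one period of propagation
```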

  11. Alternative predictors in chaotic time series

    NASA Astrophysics Data System (ADS)

    Alves, P. R. L.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2017-06-01

    In the scheme of reconstruction, non-polynomial predictors improve the forecast from chaotic time series. Algebraic manipulation in the Maple environment is the basis for obtaining accurate predictors. Beyond the different prediction times, the optional arguments of the computational routines optimize the execution and the analysis of global mappings.

  12. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existing cloaking algorithms, do not require all users to report their locations all the time, and can generate a smaller ASR. PMID:24605060
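
    The grid-ID-based cloaking idea can be sketched as a simple greedy expansion: starting from the querier's cell, grow a block of cells until at least K reported users are covered, without ever handling exact coordinates. This expansion rule and the example reports are illustrative simplifications, not the algorithms of the paper.

```python
from collections import Counter

def cloak_region(user_cell, reported_cells, k, grid_size):
    """Grow a square block of grid cells around the querier's cell until it
    covers at least k reported users. Works only on cell IDs (row, col), never
    on exact coordinates, mirroring the non-exposure idea described above.
    """
    counts = Counter(reported_cells)
    r0, c0 = user_cell
    radius = 0
    while radius <= grid_size:
        cells = [(r, c)
                 for r in range(max(0, r0 - radius), min(grid_size, r0 + radius + 1))
                 for c in range(max(0, c0 - radius), min(grid_size, c0 + radius + 1))]
        if sum(counts[cell] for cell in cells) >= k:
            return cells
        radius += 1
    raise ValueError("fewer than k users on the whole grid")

# hypothetical reports: each user sends only the ID of the grid cell it occupies
reports = [(3, 3), (3, 4), (4, 3), (7, 7), (2, 2)]
print(cloak_region((3, 3), reports, k=3, grid_size=10))
```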

  13. An improved tri-tube cryogenic gravel sampler.

    Treesearch

    Fred H. Everest; Carl E. McLemore; John F. Ward

    1980-01-01

    The tri-tube cryogenic gravel sampler has been improved, and accessories have been developed that increase its reliability and safety of operation, reduce core extraction time, and allow accurate partitioning of cores into subsamples. The improved tri-tube sampler is one of the most versatile and efficient substrate sampling tools yet developed.

  14. Efficient preloading of the ventricles by a properly timed atrial contraction underlies stroke work improvement in the acute response to cardiac resynchronization therapy

    PubMed Central

    Hu, Yuxuan; Gurev, Viatcheslav; Constantino, Jason; Trayanova, Natalia

    2013-01-01

    Background The acute response to cardiac resynchronization therapy (CRT) has been shown to be due to three mechanisms: resynchronization of ventricular contraction, efficient preloading of the ventricles by a properly timed atrial contraction, and mitral regurgitation reduction. However, the contribution of each of the three mechanisms to the acute response of CRT, specifically stroke work improvement, has not been quantified. Objective The goal of this study was to use an MRI-based anatomically accurate 3D model of failing canine ventricular electromechanics to quantify the contribution of each of the three mechanisms to stroke work improvement and identify the predominant mechanisms. Methods An MRI-based electromechanical model of the failing canine ventricles assembled previously by our group was further developed and modified. Three different protocols were used to dissect the contribution of each of the three mechanisms to stroke work improvement. Results Resynchronization of ventricular contraction did not lead to significant stroke work improvement. Efficient preloading of the ventricles by a properly timed atrial contraction was the predominant mechanism underlying stroke work improvement. Stroke work improvement peaked at an intermediate AV delay, as it allowed ventricular filling by atrial contraction to occur at a low diastolic LV pressure but also provided adequate time for ventricular filling before ventricular contraction. Diminution of mitral regurgitation by CRT led to stroke work worsening instead of improvement. Conclusion Efficient preloading of the ventricles by a properly timed atrial contraction is responsible for significant stroke work improvement in the acute CRT response. PMID:23928177

  15. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    NASA Astrophysics Data System (ADS)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well

  16. Accurate T1 mapping of short T2 tissues using a three-dimensional ultrashort echo time cones actual flip angle imaging-variable repetition time (3D UTE-Cones AFI-VTR) method.

    PubMed

    Ma, Ya-Jun; Lu, Xing; Carl, Michael; Zhu, Yanchun; Szeverenyi, Nikolaus M; Bydder, Graeme M; Chang, Eric Y; Du, Jiang

    2018-08-01

    To develop an accurate T1 measurement method for short T2 tissues using a combination of a 3-dimensional ultrashort echo time cones actual flip angle imaging technique and a variable repetition time technique (3D UTE-Cones AFI-VTR) on a clinical 3T scanner. First, the longitudinal magnetization mapping function of the excitation pulse was obtained with the 3D UTE-Cones AFI method, which provided information about excitation efficiency and B1 inhomogeneity. Then, the derived mapping function was substituted into the VTR fitting to generate accurate T1 maps. Numerical simulation and phantom studies were carried out to compare the AFI-VTR method with a B1-uncorrected VTR method, a B1-uncorrected variable flip angle (VFA) method, and a B1-corrected VFA method. Finally, the 3D UTE-Cones AFI-VTR method was applied to bovine bone samples (N = 6) and healthy volunteers (N = 3) to quantify the T1 of cortical bone. Numerical simulation and phantom studies showed that the 3D UTE-Cones AFI-VTR technique provides more accurate measurement of the T1 of short T2 tissues than the B1-uncorrected VTR and VFA methods or the B1-corrected VFA method. The proposed 3D UTE-Cones AFI-VTR method showed a mean T1 of 240 ± 25 ms for bovine cortical bone and 218 ± 10 ms for the tibial midshaft of human volunteers, respectively, at 3 T. The 3D UTE-Cones AFI-VTR method can provide accurate T1 measurements of short T2 tissues such as cortical bone. Magn Reson Med 80:598-608, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
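
    The VTR fitting step can be illustrated with a deliberately simplified signal model, S(TR) = S0 * (1 - exp(-TR/T1)), which assumes ideal excitation; the method described above instead substitutes the AFI-derived longitudinal-magnetization mapping function into the fit, which is not reproduced here. The repetition times, noise level, and the 240 ms ground-truth value (chosen to echo the reported bovine result) are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def vtr_model(tr_ms, s0, t1_ms):
    """Simplified variable-TR signal model assuming ideal excitation."""
    return s0 * (1.0 - np.exp(-tr_ms / t1_ms))

tr_ms = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])       # repetition times (ms)
signal = vtr_model(tr_ms, 100.0, 240.0)                         # synthetic "short T2" data
signal += np.random.default_rng(0).normal(0.0, 0.5, tr_ms.size) # synthetic noise

(fit_s0, fit_t1), _ = curve_fit(vtr_model, tr_ms, signal, p0=(signal.max(), 100.0))
print(f"fitted T1 = {fit_t1:.0f} ms")
```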

  17. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.

  18. Lean intervention improves patient discharge times, improves emergency department throughput and reduces congestion.

    PubMed

    Beck, Michael J; Okerblom, Davin; Kumar, Anika; Bandyopadhyay, Subhankar; Scalzi, Lisabeth V

    2016-12-01

    To determine if a lean intervention improved emergency department (ED) throughput and reduced ED boarding by improving patient discharge efficiency from a tertiary care children's hospital. The study was conducted at a tertiary care children's hospital to study the impact that lean changes made to an inpatient pediatric service line had on ED efficiency. Discharge times from the general pediatrics service were compared with those of patients discharged from all other pediatric subspecialty services. The intervention was multifaceted. First, team staffing reconfiguration permitted all discharge work to be done at the patient's bedside using a new discharge checklist. The intervention also incorporated an afternoon interdisciplinary huddle to work on the following day's discharges. Retrospectively, we determined the impact this had on median times of discharge order entry, patient discharge, and the percent of patients discharged before noon. As a marker of ED throughput, we determined the median hour of day at which admitted patients left the ED to move to their hospital bed. As a marker of ED congestion, we determined median boarding times. For the general pediatrics service line, the median discharge order entry time decreased from 1:43pm to 11:28am (p < 0.0001) and the median time of discharge decreased from 3:25pm to 2:25pm (p < 0.0001). The percent of patients discharged before noon increased from 14.0% to 26.0% (p < 0.0001). The discharge metrics remained unchanged for the pediatric subspecialty services group. Median ED boarding time decreased by 49 minutes (p < 0.0001). As a result, the median time of day admitted patients were discharged from the ED was advanced from 5 PM to 4 PM. Lean principles implemented by one hospital service line improved patient discharge times, enhanced ED throughput, and reduced ED boarding times.

  19. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  20. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    NASA Astrophysics Data System (ADS)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out for the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can readily be used in practical applications.
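
    Composite (compound) Simpson integration itself is standard; a minimal sketch is given below, applied to a stand-in integrand along a quarter-circle corner path. The integrand and limits are placeholders for illustration and do not reproduce the paper's electromagnetic model.

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on n subintervals (n must be even)."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # end points weight 1, odd interior points weight 4, even interior points weight 2
    return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

# Illustrative use only: integrate a stand-in force-density integrand over the
# quarter-circle angular extent of a corner coil segment (0 to pi/2).
force = composite_simpson(lambda t: np.cos(t) * np.exp(-0.1 * t), 0.0, np.pi / 2, 100)
print(force)
```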

  1. An improved time-varying mesh stiffness model for helical gear pairs considering axial mesh force component

    NASA Astrophysics Data System (ADS)

    Wang, Qibin; Zhao, Bo; Fu, Yang; Kong, Xianguang; Ma, Hui

    2018-06-01

    An improved time-varying mesh stiffness (TVMS) model of a helical gear pair is proposed, in which the total mesh stiffness contains not only the common transverse tooth bending stiffness, transverse tooth shear stiffness, transverse tooth radial compressive stiffness, transverse gear foundation stiffness and Hertzian contact stiffness, but also the axial tooth bending stiffness, axial tooth torsional stiffness and axial gear foundation stiffness proposed in this paper. In addition, a rapid TVMS calculation method is proposed. Considering each stiffness component, the TVMS can be calculated by the integration along the tooth width direction. Then, three cases are applied to validate the developed model. The results demonstrate that the proposed analytical method is accurate, effective and efficient for helical gear pairs and the axial mesh stiffness should be taken into consideration in the TVMS of a helical gear pair. Finally, influences of the helix angle on TVMS are studied. The results show that the improved TVMS model is effective for any helix angle and the traditional TVMS model is only effective under a small helix angle.

  2. Improvement of a picking algorithm real-time P-wave detection by kurtosis

    NASA Astrophysics Data System (ADS)

    Ishida, H.; Yamada, M.

    2016-12-01

    Earthquake early warning (EEW) requires fast and accurate P-wave detection. The current EEW system in Japan uses the STA/LTA algorithm (Allen, 1978) to detect P-wave arrivals. However, some stations did not trigger during the 2011 Great Tohoku Earthquake due to the emergent onset. In addition, the accuracy of P-wave detection is very important: on August 1, 2016, the EEW system issued a false alarm of M9 in the Tokyo region due to thunder noise. To solve these problems, we use a P-wave detection method based on kurtosis statistics. It detects the change in the statistical distribution of the waveform amplitude. This method was developed recently (Saragiotis et al., 2002) and has been used for off-line analysis such as building seismic catalogs. To apply this method to EEW, we need to remove an acausal calculation and enable real-time processing. Here, we propose a real-time P-wave detection method using kurtosis statistics with a noise filter. To avoid false triggering by noise, we incorporated a simple filter to classify seismic signal and noise. Following Kong et al. (2016), we used the interquartile range and zero cross rate for the classification. The interquartile range is an amplitude measure equal to the middle 50% of amplitudes in a certain time window. The zero cross rate is a simple frequency measure that counts the number of times the signal crosses the zero baseline. A discriminant function including these measures was constructed by linear discriminant analysis. To test this kurtosis method, we used strong motion records for 62 earthquakes between April 2005 and July 2015 that recorded a seismic intensity of 6-lower or greater on the JMA intensity scale. Records with hypocentral distance < 200 km were used for the analysis. An attached figure shows the error in P-wave detection time for the STA/LTA and kurtosis methods against manual picks. It shows that the median error is 0.13 sec for STA/LTA and 0.035 sec for the kurtosis method. The kurtosis method tends to be
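
    A rough sketch of a causal (real-time usable) kurtosis picker with a simple interquartile-range / zero-cross-rate noise screen is given below. The window length, thresholds, and the hand-rolled screen are illustrative assumptions and differ from the study's linear discriminant analysis.

```python
import numpy as np
from scipy.stats import kurtosis

def pick_p_arrival(trace, fs, win_sec=1.0, threshold=3.0):
    """Sliding-window kurtosis picker: declare a P arrival where the kurtosis of
    the preceding window jumps, i.e. where the amplitude distribution stops
    looking Gaussian. Uses only past samples, so it can run in real time.
    """
    n_win = int(win_sec * fs)
    k = np.array([kurtosis(trace[i - n_win:i]) for i in range(n_win, trace.size)])
    jumps = np.where(np.diff(k) > threshold)[0]
    return (jumps[0] + n_win) / fs if jumps.size else None

def looks_like_signal(trace, iqr_min=0.05, zcr_max=0.4):
    """Crude noise screen combining interquartile range and zero cross rate,
    in the spirit of the classification described above (thresholds assumed)."""
    iqr = np.percentile(trace, 75) - np.percentile(trace, 25)
    zcr = np.mean(np.abs(np.diff(np.sign(trace))) > 0)
    return iqr > iqr_min and zcr < zcr_max
```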

  3. In pursuit of accurate timekeeping: Liverpool and Victorian electrical horology.

    PubMed

    Ishibashi, Yuto

    2014-10-01

    This paper explores how nineteenth-century Liverpool became such an advanced city with regard to public timekeeping, and the wider impact of this on the standardisation of time. From the mid-1840s, local scientists and municipal bodies in the port city were engaged in improving the ways in which accurate time was communicated to ships and the general public. As a result, Liverpool was the first British city to witness the formation of a synchronised clock system, based on an invention by Robert Jones. His method gained a considerable reputation in the scientific and engineering communities, which led to its subsequent replication at a number of astronomical observatories such as Greenwich and Edinburgh. As a further key example of developments in time-signalling techniques, this paper also focuses on the time ball established in Liverpool by the Electric Telegraph Company in collaboration with George Biddell Airy, the Astronomer Royal. This is a particularly significant development because, as the present paper illustrates, one of the most important technologies in measuring the accuracy of the Greenwich time signal took shape in the experimental operation of the time ball. The inventions and knowledge which emerged from the context of Liverpool were vital to the transformation of public timekeeping in Victorian Britain.

  4. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  5. Carbene footprinting accurately maps binding sites in protein-ligand and protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Manzi, Lucio; Barrow, Andrew S.; Scott, Daniel; Layfield, Robert; Wright, Timothy G.; Moses, John E.; Oldham, Neil J.

    2016-11-01

    Specific interactions between proteins and their binding partners are fundamental to life processes. The ability to detect protein complexes, and map their sites of binding, is crucial to understanding basic biology at the molecular level. Methods that employ sensitive analytical techniques such as mass spectrometry have the potential to provide valuable insights with very little material and on short time scales. Here we present a differential protein footprinting technique employing an efficient photo-activated probe for use with mass spectrometry. Using this methodology the location of a carbohydrate substrate was accurately mapped to the binding cleft of lysozyme, and in a more complex example, the interactions between a 100 kDa, multi-domain deubiquitinating enzyme, USP5 and a diubiquitin substrate were located to different functional domains. The much improved properties of this probe make carbene footprinting a viable method for rapid and accurate identification of protein binding sites utilizing benign, near-UV photoactivation.

  6. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    NASA Astrophysics Data System (ADS)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high-resolution camera for recording the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and tiring procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software made for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages presumes the services of a specialist.

  7. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time

    PubMed Central

    Wang, Zhihua; Yang, Zhaochu; Dong, Tao

    2017-01-01

    Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people’s activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of a “smart clothing” system. PMID:28208620

  8. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time.

    PubMed

    Wang, Zhihua; Yang, Zhaochu; Dong, Tao

    2017-02-10

    Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of a "smart clothing" system.

  9. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

    The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by imposing the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequency levels of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.

  10. Validation of a technique for accurate fine-wire electrode placement into posterior gluteus medius using real-time ultrasound guidance.

    PubMed

    Hodges, P W; Kippers, V; Richardson, C A

    1997-01-01

    Fine-wire electromyography is primarily utilised for the recording of activity of the deep musculature; however, due to the location of these muscles, accurate electrode placement is difficult. Real-time ultrasound imaging (RTUI) of muscle tissue has been used for the guidance of the needle insertion for the placement of electrodes into the muscles of the abdominal wall. The validity of RTUI guidance of needle insertion into the deep muscles has not been determined. A cadaveric study was conducted to evaluate the accuracy with which RTUI can be used to guide fine-wire electrode placement, using the posterior fibres of gluteus medius (PGM) as an example. Pilot studies revealed that the ultrasound resolution of cadaveric tissue is markedly reduced, making it impossible to directly evaluate the technique; therefore, three studies were conducted. An initial study involved the demarcation of the anatomical boundaries of PGM using RTUI to define a technique based on an anatomical landmark that was consistent with the in vivo RTUI guided needle placement technique. This anatomical landmark was then used as the guide for the cadaveric needle insertion. Once the needle was positioned, 0.05 ml of dye was introduced and the specimen dissected. The dye was accurately placed in PGM in 100% of the specimens. Finally, fine-wire electrodes were inserted into the PGM of five volunteers and manoeuvres performed indicating the accuracy of placement. This study supports the use of ultrasound imaging for the accurate guidance of needle insertion for fine-wire and needle EMG electrodes.

  11. A New More Accurate Calibration for TIMED/GUVI

    NASA Astrophysics Data System (ADS)

    Schaefer, R. K.; Aiello, J.; Wolven, B. C.; Paxton, L. J.; Romeo, G.; Zhang, Y.

    2017-12-01

    The Global UltraViolet Imager (GUVI - http://guvi.jhuapl.edu) on NASA's TIMED spacecraft has the longest continuous set of observations of the Earth's ionosphere and thermosphere, spanning more than one solar cycle (2001-2017). As such, it represents an important dataset for understanding the dynamics of the Ionosphere-Thermosphere system. The entire dataset has been reprocessed and released as a new version (13) of GUVI data products. This is a complete re-examination of the calibration elements, including better calibrated radiances, better geolocation, and better background subtraction. Details can be found on the GUVI website (http://guvitimed.jhuapl.edu/guvi-Calib_Prod). The radiances (except for the LBH long band) in version 13 are within 10% of the original archival radiances, and so most of the derived products are little changed from their original versions. The LBH long band was redefined in the on-board instrument color tables on Nov. 2, 2004, to better limit contamination from Nitric Oxide emission, but this was not updated in ground processing until now. Version 13 LBH Long has 19% smaller radiances than the old calibrated products for post-11/2/2004 data. GUVI auroral products are the only ones that use LBHL; the ratio (LBH long)/(LBH short) is used to gauge the amount of intervening oxygen absorption. We will show several examples of the difference between new and old auroral products. Overall, version 13 represents a big improvement in the calibration, geolocation, and background of the GUVI UV data products, allowing for the cleanest UV data for analysis of the ionosphere-thermosphere-aurora. An updated "Using GUVI Data Tutorial" will be available from the GUVI webpage to help users navigate to the data they need. Data products are displayed as daily summary and Google Earth files that can be browsed through the Cesium tool on the GUVI website, or the image files can be downloaded and viewed through the Google Earth app. The image below shows gridded 135.6 nm radiances.

  12. Rapid and accurate identification of Mycobacterium tuberculosis complex and common non-tuberculous mycobacteria by multiplex real-time PCR targeting different housekeeping genes.

    PubMed

    Nasr Esfahani, Bahram; Rezaei Yazdi, Hadi; Moghim, Sharareh; Ghasemian Safaei, Hajieh; Zarkesh Esfahani, Hamid

    2012-11-01

    Rapid and accurate identification of mycobacteria isolates from primary culture is important for timely and appropriate antibiotic therapy. Conventional methods for identification of Mycobacterium species based on biochemical tests need several weeks and may remain inconclusive. In this study, a novel multiplex real-time PCR was developed for rapid identification of the Mycobacterium genus, Mycobacterium tuberculosis complex (MTC) and the most common non-tuberculosis mycobacteria species, including M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae, in three reaction tubes under the same PCR conditions. Genetic targets for primer design included the 16S rDNA gene, the dnaJ gene, the gyrB gene and the internal transcribed spacer (ITS). The multiplex real-time PCR was set up with reference Mycobacterium strains and was subsequently tested with 66 clinical isolates. Results of the multiplex real-time PCR were analyzed with melting curves, and the melting temperatures (Tm) of the Mycobacterium genus, MTC, and each of the non-tuberculosis Mycobacterium species were determined. Multiplex real-time PCR results were compared with amplification and sequencing of the 16S-23S rDNA ITS for identification of Mycobacterium species. Sensitivity and specificity of the designed primers were each 100% for MTC, M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae. Sensitivity and specificity of the designed primer for the genus Mycobacterium were 96% and 100%, respectively. According to the obtained results, we conclude that this multiplex real-time PCR with melting curve analysis and these novel primers can be used for rapid and accurate identification of the genus Mycobacterium, MTC, and the most common non-tuberculosis Mycobacterium species.

  13. Screening of 485 Pesticide Residues in Fruits and Vegetables by Liquid Chromatography-Quadrupole-Time-of-Flight Mass Spectrometry Based on TOF Accurate Mass Database and QTOF Spectrum Library.

    PubMed

    Pang, Guo-Fang; Fan, Chun-Lin; Chang, Qiao-Ying; Li, Jian-Xun; Kang, Jian; Lu, Mei-Ling

    2018-03-22

    This paper uses the LC-quadrupole-time-of-flight MS technique to evaluate the MS behavioral characteristics of 485 pesticides under different conditions and develops an accurate mass database and spectrum library. A high-throughput screening and confirmation method has been developed for the 485 pesticides in fruits and vegetables. Through the optimization of parameters such as accurate mass number, retention time window, ionization forms, etc., the method has improved the accuracy of pesticide screening, thus avoiding the occurrence of false-positive and false-negative results. The method features a full scan of fragments, with 80% of pesticides having more than 10 qualitative points, which helps increase pesticide qualitative accuracy. The abundant differences between fragment categories help realize the effective separation and qualitative identification of isomeric pesticides. Four different fruits and vegetables - apples, grapes, celery, and tomatoes - were chosen to evaluate the efficiency of the method at three fortification levels of 5, 10, and 20 μg/kg, and satisfactory results were obtained. With this method, a national survey of pesticide residues was conducted between 2012 and 2015 for 12 551 samples of 146 different fruits and vegetables collected from 638 sampling points in 284 counties across 31 provincial capitals/cities directly under the central government, which provided scientific data backup for ensuring pesticide residue safety of the fruits and vegetables consumed daily by the public. Meanwhile, the big data statistical analysis of the new technique also further proves it to be of high speed, high throughput, high accuracy, high reliability, and high informatization.

  14. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. II. Measurement of negative radical ions using porphyrin and fullerene standard reference materials.

    PubMed

    Shao, Zhecheng; Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2010-10-30

    A method for the accurate mass measurement of negative radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. This is an extension to our previously described method for the accurate mass measurement of positive radical ions (Griffiths NW, Wyatt MF, Kean SD, Graham AE, Stein BK, Brenton AG. Rapid Commun. Mass Spectrom. 2010; 24: 1629). The porphyrin standard reference materials (SRMs) developed for positive mode measurements cannot be observed in negative ion mode, so fullerene and fluorinated porphyrin compounds were identified as effective SRMs. The method is of immediate practical use for the accurate mass measurement of functionalised fullerenes, for which negative ion MALDI-TOFMS is the principal mass spectrometry characterisation technique. This was demonstrated by the accurate mass measurement of six functionalised C(60) compounds. Copyright © 2010 John Wiley & Sons, Ltd.

  15. SU-D-18C-05: Variable Bolus Arterial Spin Labeling MRI for Accurate Cerebral Blood Flow and Arterial Transit Time Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, M; Jung, Y

    2014-06-01

    Purpose: Arterial spin labeling (ASL) is an MRI perfusion imaging method from which quantitative cerebral blood flow (CBF) maps can be calculated. Acquisition with variable post-labeling delays (PLD) and variable TRs allows for arterial transit time (ATT) mapping and leads to more accurate CBF quantification with a scan time saving of 48%. In addition, T1 and M0 maps can be obtained without a separate scan. In order to accurately estimate ATT and T1 of brain tissue from the ASL data, variable labeling durations were introduced, entitled variable-bolus ASL. Methods: All images were collected on a healthy subject with a 3T Siemens Skyra scanner. Variable-bolus pseudo-continuous ASL (PCASL) images were collected with 7 TI times ranging from 100 to 4300 ms in increments of 700 ms, with TR ranging from 1000 to 5200 ms. All boluses were 1600 ms when the TI allowed; otherwise the bolus duration was 100 ms shorter than the TI. All TI times were interleaved to reduce sensitivity to motion. Voxel-wise T1 and M0 maps were estimated using a linear least squares fitting routine from the average signal at each TI time. Then pairwise subtraction of each label/control pair and averaging for each TI time was performed. CBF and ATT maps were created using the standard model by Buxton et al. with a nonlinear fitting routine using the T1 tissue map. Results: CBF maps insensitive to ATT were produced along with ATT maps. Both maps show patterns and averages consistent with the literature. The T1 map also shows typical T1 contrast. Conclusion: It has been demonstrated that variable-bolus ASL produces CBF maps free from the errors due to ATT and tissue T1 variations and provides M0, T1, and ATT maps which have potential utility. This is accomplished with a single scan in a feasible scan time (under 6 minutes) with low sensitivity to motion.

  16. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
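
    As a rough illustration of the trajectory-fit idea described above, the sketch below fits a low-order polynomial to the tracked position of one interrogation window over several frames and takes the instantaneous velocity as the analytic time derivative at the central frame. The frame times and tracked positions are hypothetical, and the actual method tracks the window pattern by cross-correlation rather than taking positions as given.

```python
import numpy as np

# Hypothetical tracked centre positions of one interrogation window
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) * 1e-3   # frame times [s] (assumed)
x = np.array([10.0, 10.9, 12.1, 13.6, 15.4])      # tracked x-position [px]
y = np.array([5.0, 5.2, 5.5, 5.9, 6.4])           # tracked y-position [px]

order = 2                                          # high-order (nonlinear) trajectory fit
px = np.polyfit(t, x, order)                       # least-squares polynomial fit in time
py = np.polyfit(t, y, order)

t_mid = t[len(t) // 2]                             # evaluate at the central frame
u = np.polyval(np.polyder(px), t_mid)              # velocity = d(position)/dt
v = np.polyval(np.polyder(py), t_mid)
print(f"u = {u:.0f} px/s, v = {v:.0f} px/s")
```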

  17. Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, D. E.; Anderson, G. L.; Macon, D. J.

    2002-01-01

    An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.

  18. Time-Driven Activity-Based Costing in Emergency Medicine.

    PubMed

    Yun, Brian J; Prabhakar, Anand M; Warsh, Jonathan; Kaplan, Robert; Brennan, John; Dempsey, Kyle E; Raja, Ali S

    2016-06-01

    Value in emergency medicine is determined by both patient-important outcomes and the costs associated with achieving them. However, measuring true costs is challenging. Without an understanding of costs, emergency department (ED) leaders will be unable to determine which interventions might improve value for their patients. Although ongoing research may determine which outcomes are meaningful, an accurate costing system is also needed. This article reviews current costing mechanisms in the ED and their pitfalls. It then describes how time-driven activity-based costing may be superior to these current costing systems. Time-driven activity-based costing, in addition to being a more accurate costing system, can be used for process improvements in the ED. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
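
    The core time-driven activity-based costing relation is simple enough to state directly: the cost assigned to an activity is the capacity cost rate of the resource multiplied by the time the activity consumes. The sketch below illustrates that relation with entirely hypothetical ED figures; it is not drawn from the article.

```python
# Time-driven activity-based costing: activity cost = capacity cost rate x time used.
# All numbers below are hypothetical and for illustration only.

def capacity_cost_rate(cost_of_capacity_supplied, practical_capacity_minutes):
    """Cost per minute of a resource (e.g., an ED nurse or a treatment bay)."""
    return cost_of_capacity_supplied / practical_capacity_minutes

rate = capacity_cost_rate(cost_of_capacity_supplied=120_000.0,   # $ per period
                          practical_capacity_minutes=80_000.0)   # minutes per period

minutes_per_activity = {"triage": 6.0, "physician assessment": 18.0, "discharge": 9.0}
cost_per_activity = {a: m * rate for a, m in minutes_per_activity.items()}
print(cost_per_activity)   # {'triage': 9.0, 'physician assessment': 27.0, 'discharge': 13.5}
```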

  19. Decision peptide-driven: a free software tool for accurate protein quantification using gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry.

    PubMed

    Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L

    2010-09-15

    The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling; and (4) matrix assisted laser desorption ionization time of flight mass spectrometry, MALDI analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the searching of the peptides that must be used for quantification from a week to just some minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) to allow those peptides to be used as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work, the DPD software is presented and explained with the quantification of the protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
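
    The comparison the DPD tool automates can be pictured with a small sketch: for each peptide, form the labelled/unlabelled intensity ratio in the direct and in the inverse (18)O-labeling experiments and retain peptides whose losses track each other across the two experiments. The peptide names, intensities and tolerance below are hypothetical, and the actual selection rules in DPD may differ.

```python
# Hypothetical MALDI peak intensities: (unlabelled, 18O-labelled) per peptide
direct  = {"PEPTIDEA": (1.00, 0.52), "PEPTIDEB": (1.00, 0.31)}
inverse = {"PEPTIDEA": (0.50, 1.00), "PEPTIDEB": (0.90, 1.00)}

selected = []
for pep in direct:
    r_direct  = direct[pep][1] / direct[pep][0]    # labelled/unlabelled, direct experiment
    r_inverse = inverse[pep][0] / inverse[pep][1]  # unlabelled/labelled, inverse experiment
    # "Paralleled losses": the two ratios agree within a (hypothetical) 10% tolerance
    if abs(r_direct - r_inverse) <= 0.1 * r_inverse:
        selected.append(pep)

print(selected)   # peptides usable as internal standards for quantification
```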

  20. Designs towards improved coherence times in superconducting qubits

    NASA Astrophysics Data System (ADS)

    Corcoles, Antonio; Chow, Jerry; Gambetta, Jay; Rigetti, Chad; Rozen, Jim; Keefe, George; Rothwell, Mary Beth; Poletto, Stefano; Ketchen, Mark; Steffen, Matthias

    2012-02-01

    Coherence times for superconducting qubits in a planar geometry have increased drastically over the past 10 years with improvements exceeding a factor of 1000. However, recently these appeared to have reached a plateau around 1-2 microseconds, the limits of which were not well understood. Here, we present experimental data showing that one limit is due to infra-red radiation, confirming observations from other groups. We observe increased coherence times after appropriate IR shielding. Further improvements are shown to be possible by increasing the feature size of the interdigitated shunting capacitor, strongly indicating that surface losses at the metal/substrate interface are limiting qubit coherence times. In our experiments we kept the ratio of line width to gap size constant, but increased the overall feature size. We will discuss this and other similar design approaches towards better coherence in superconducting qubits.

  1. Real-time feedback can improve infant manikin cardiopulmonary resuscitation by up to 79%--a randomised controlled trial.

    PubMed

    Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael

    2013-08-01

    European and Advanced Paediatric Life Support training courses. Sixty-nine certified CPR providers. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data were recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the target of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistical improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario, this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improve clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, related to the activities of the CERGOP Study Group Geodynamics of the Balkan Peninsula, presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components xi, eta of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a stream-lined data reduction procedure and the use of up-to-date astrometric data, the values of the components xi, eta of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods are not capable of giving accurate and reliable results.
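
    The quantities involved follow the standard astrogeodetic relations, summarized below for reference; this is the textbook formulation and not necessarily the authors' exact reduction procedure.

```latex
\xi = \Phi - \varphi, \qquad \eta = (\Lambda - \lambda)\cos\varphi
% deflection component along the azimuth \alpha of the line joining two stations:
\varepsilon = \xi\cos\alpha + \eta\sin\alpha
% astrogeodetic levelling over a short baseline of length s between stations 1 and 2:
\Delta N_{12} \approx -\tfrac{1}{2}\,(\varepsilon_1 + \varepsilon_2)\, s
```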

  3. Children’s Processing and Comprehension of Complex Sentences Containing Temporal Connectives: The Influence of Memory on the Time Course of Accurate Responses

    PubMed Central

    2016-01-01

    In a touch-screen paradigm, we recorded 3- to 7-year-olds’ (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing before and after. Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order of events: “She drank the juice, before she walked in the park” (chronological order) versus “Before she walked in the park, she drank the juice” (reverse order). Differences in RTs for correct responses varied by sentence type: accurate responses were made more speedily for sentences that afforded an incremental processing of meaning. An independent measure of memory predicted this pattern of performance. We discuss these findings in relation to children’s knowledge of connective meaning and the processing requirements of sentences containing temporal connectives. PMID:27690492

  4. Accurate detection of blood vessels improves the detection of exudates in color fundus images.

    PubMed

    Youssef, Doaa; Solouma, Nahed H

    2012-12-01

    Exudates are one of the earliest and most prevalent symptoms of diseases leading to blindness such as diabetic retinopathy and macular degeneration. Certain areas of the retina with such conditions are to be photocoagulated by laser to stop the disease progress and prevent blindness. Outlining these areas is dependent on outlining the lesions and the anatomic structures of the retina. In this paper, we provide a new method for the detection of blood vessels that improves the detection of exudates in fundus photographs. The method starts with an edge detection algorithm which results in an over-segmented image. Then the new feature-based algorithm can be used to accurately detect the blood vessels. This algorithm considers the characteristics of a retinal blood vessel such as its width range, intensities and orientations for the purpose of selective segmentation. Because of its bulb shape and its color similarity with exudates, the optic disc can be detected using the common Hough transform technique. The extracted blood vessel tree and optic disc can be subtracted from the over-segmented image to get an initial estimate of exudates. The final estimation of exudates can then be obtained by morphological reconstruction based on the appearance of exudates. This method is shown to be promising since it increases the sensitivity and specificity of exudate detection to 80% and 100%, respectively. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
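
    As a hedged sketch of the optic disc step, the snippet below uses OpenCV's circular Hough transform to locate a bright, roughly circular structure so it can be masked out. The file name and all parameter values are placeholders that would need tuning on real fundus images, and the paper's own pipeline may differ in detail.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")                       # hypothetical fundus photograph
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                       # suppress small bright lesions

# Circular Hough transform: look for one dominant circle in the disc radius range
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
                           param1=100, param2=30, minRadius=30, maxRadius=90)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)    # strongest candidate = optic disc
    disc_mask = np.zeros_like(gray)
    cv2.circle(disc_mask, (x, y), r, 255, -1)        # mask to subtract before exudate detection
```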

  5. Molecular Dynamics in Mixed Solvents Reveals Protein-Ligand Interactions, Improves Docking, and Allows Accurate Binding Free Energy Predictions.

    PubMed

    Arcon, Juan Pablo; Defelipe, Lucas A; Modenutti, Carlos P; López, Elias D; Alvarez-Garcia, Daniel; Barril, Xavier; Turjanski, Adrián G; Martí, Marcelo A

    2017-04-24

    One of the most important biological processes at the molecular level is the formation of protein-ligand complexes. Therefore, determining their structure and underlying key interactions is of paramount relevance and has direct applications in drug development. Because of its low cost relative to its experimental sibling, molecular dynamics (MD) simulations in the presence of different solvent probes mimicking specific types of interactions have been increasingly used to analyze protein binding sites and reveal protein-ligand interaction hot spots. However, a systematic comparison of different probes and their real predictive power from a quantitative and thermodynamic point of view is still missing. In the present work, we have performed MD simulations of 18 different proteins in pure water as well as water mixtures of ethanol, acetamide, acetonitrile and methylammonium acetate, leading to a total of 5.4 μs simulation time. For each system, we determined the corresponding solvent sites, defined as space regions adjacent to the protein surface where the probability of finding a probe atom is higher than that in the bulk solvent. Finally, we compared the identified solvent sites with 121 different protein-ligand complexes and used them to perform molecular docking and ligand binding free energy estimates. Our results show that combining solely water and ethanol sites allows sampling over 70% of all possible protein-ligand interactions, especially those that coincide with ligand-based pharmacophoric points. Most important, we also show how the solvent sites can be used to significantly improve ligand docking in terms of both accuracy and precision, and that accurate predictions of ligand binding free energies, along with relative ranking of ligand affinity, can be performed.

  6. Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery.

    PubMed

    Engelsman, M; Lu, H M; Herrup, D; Bussiere, M; Kooy, H M

    2009-06-01

    Proton radiotherapy centers that currently use passively scattered proton beams do field specific calibrations for a non-negligible fraction of treatment fields, which is time and resource consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that only require one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and prediction of dose output.

  7. Commissioning a passive-scattering proton therapy nozzle for accurate SOBP delivery

    PubMed Central

    Engelsman, M.; Lu, H.-M.; Herrup, D.; Bussiere, M.; Kooy, H. M.

    2009-01-01

    Proton radiotherapy centers that currently use passively scattered proton beams do field specific calibrations for a non-negligible fraction of treatment fields, which is time and resource consuming. Our improved understanding of the passive scattering mode of the IBA universal nozzle, especially of the current modulation function, allowed us to re-commission our treatment control system for accurate delivery of SOBPs of any range and modulation, and to predict the output for each of these fields. We moved away from individual field calibrations to a state where continued quality assurance of SOBP field delivery is ensured by limited system-wide measurements that only require one hour per week. This manuscript reports on a protocol for generation of desired SOBPs and prediction of dose output. PMID:19610306

  8. Standardizing a simpler, more sensitive and accurate tail bleeding assay in mice

    PubMed Central

    Liu, Yang; Jennings, Nicole L; Dart, Anthony M; Du, Xiao-Jun

    2012-01-01

    AIM: To optimize the experimental protocols for a simple, sensitive and accurate bleeding assay. METHODS: Bleeding assay was performed in mice by tail tip amputation, immersing the tail in saline at 37 °C, continuously monitoring bleeding patterns and measuring bleeding volume from changes in the body weight. Sensitivity and extent of variation of bleeding time and bleeding volume were compared in mice treated with the P2Y receptor inhibitor prasugrel at various doses or in mice deficient of FcRγ, a signaling protein of the glycoprotein VI receptor. RESULTS: We described details of the bleeding assay with the aim of standardizing this commonly used assay. The bleeding assay detailed here was simple to operate and permitted continuous monitoring of bleeding pattern and detection of re-bleeding. We also reported a simple and accurate way of quantifying bleeding volume from changes in the body weight, which correlated well with chemical assay of hemoglobin levels (r2 = 0.990, P < 0.0001). We determined by tail bleeding assay the dose-effect relation of the anti-platelet drug prasugrel from 0.015 to 5 mg/kg. Our results showed that the correlation of bleeding time and volume was unsatisfactory and that compared with the bleeding time, bleeding volume was more sensitive in detecting a partial inhibition of platelet’s haemostatic activity (P < 0.01). Similarly, in mice with genetic disruption of FcRγ as a signaling molecule of P-selectin glycoprotein ligand-1 leading to platelet dysfunction, both increased bleeding volume and repeated bleeding pattern defined the phenotype of the knockout mice better than that of a prolonged bleeding time. CONCLUSION: Determination of bleeding pattern and bleeding volume, in addition to bleeding time, improved the sensitivity and accuracy of this assay, particularly when platelet function is partially inhibited. PMID:24520531
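
    The weight-based volume estimate described above amounts to dividing the measured loss of body mass by the density of blood. The helper below is a minimal sketch of that conversion; the density value is an assumption (whole blood is roughly 1.05 g/mL), whereas the study calibrated the weight change against a hemoglobin assay.

```python
def bleeding_volume_ml(weight_before_g, weight_after_g, blood_density_g_per_ml=1.05):
    """Estimate bleeding volume (mL) from the change in body weight (g)."""
    return (weight_before_g - weight_after_g) / blood_density_g_per_ml

# Hypothetical mouse weights before and after tail tip amputation
print(round(bleeding_volume_ml(25.430, 25.395), 3))   # ~0.033 mL
```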

  9. Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery

    PubMed Central

    Sivaraks, Haemwaan

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from real ECG signal, especially in ECG artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts which can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experiment results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% of accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284

  10. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
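
    A minimal sketch of the mean-shift idea underlying RACE is given below: starting from an initial guess, the centre-frequency estimate is repeatedly moved to the magnitude-weighted mean of the spectrum within a window until it converges on the dominant spectral peak. The flat kernel, bandwidth and spectrum are hypothetical, and the full RACE algorithm additionally combines estimates from two tag directions.

```python
import numpy as np

def mean_shift_center_frequency(freqs, magnitude, start, bandwidth, n_iter=100, tol=1e-8):
    """1-D mean-shift on a magnitude spectrum with a flat kernel of half-width `bandwidth`."""
    f = start
    for _ in range(n_iter):
        weights = magnitude * (np.abs(freqs - f) < bandwidth)
        if weights.sum() == 0:
            break
        f_new = np.sum(weights * freqs) / np.sum(weights)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

# Hypothetical tag-line spectrum with its peak near 0.12 cycles/pixel
freqs = np.linspace(0.0, 0.5, 512)
spectrum = np.exp(-0.5 * ((freqs - 0.12) / 0.01) ** 2) + 0.05
print(mean_shift_center_frequency(freqs, spectrum, start=0.10, bandwidth=0.03))
```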

  11. Proposition of novel classification approach and features for improved real-time arrhythmia monitoring.

    PubMed

    Kim, Yoon Jae; Heo, Jeong; Park, Kwang Suk; Kim, Sungwan

    2016-08-01

    Arrhythmia refers to a group of conditions in which the heartbeat is irregular, fast, or slow due to abnormal electrical activity in the heart. Some types of arrhythmia such as ventricular fibrillation may result in cardiac arrest or death. Thus, arrhythmia detection becomes an important issue, and various studies have been conducted. Additionally, an arrhythmia detection algorithm for portable devices such as mobile phones has recently been developed because of increasing interest in e-health care. This paper proposes a novel classification approach and features, which are validated for improved real-time arrhythmia monitoring. The classification approach that was employed for arrhythmia detection is based on the concept of ensemble learning and the Taguchi method and has the advantage of being accurate and computationally efficient. The electrocardiography (ECG) data for arrhythmia detection was obtained from the MIT-BIH Arrhythmia Database (n=48). A novel feature, namely the heart rate variability calculated from 5s segments of ECG, which was not considered previously, was used. The novel classification approach and feature demonstrated arrhythmia detection accuracy of 89.13%. When the same data was classified using the conventional support vector machine (SVM), the obtained accuracy was 91.69%, 88.14%, and 88.74% for Gaussian, linear, and polynomial kernels, respectively. In terms of computation time, the proposed classifier was 5821.7 times faster than conventional SVM. In conclusion, the proposed classifier and feature showed performance comparable to those of previous studies, while the computational complexity and update interval were highly reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.
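
    The 5 s HRV feature can be sketched as follows: take the R-peak times produced by a QRS detector, split them into consecutive 5-second windows, and compute a simple variability statistic of the RR intervals in each window. RMSSD is used here purely as an illustration; the abstract does not specify the paper's exact HRV definition, and the peak times below are hypothetical.

```python
import numpy as np

def hrv_per_segment(r_peak_times_s, segment_s=5.0):
    """RMSSD (ms) of RR intervals within each consecutive window of R-peak times."""
    r = np.asarray(r_peak_times_s, dtype=float)
    features = []
    t0 = r[0]
    while t0 + segment_s <= r[-1]:
        seg = r[(r >= t0) & (r < t0 + segment_s)]
        rr_ms = np.diff(seg) * 1000.0                     # RR intervals in ms
        if rr_ms.size >= 2:
            features.append(float(np.sqrt(np.mean(np.diff(rr_ms) ** 2))))  # RMSSD
        t0 += segment_s
    return features

# Hypothetical R-peak times (s) from a QRS detector over ~10 s of ECG
peaks = [0.00, 0.82, 1.60, 2.41, 3.18, 4.02, 4.80, 5.61, 6.35, 7.20, 8.10, 9.00, 9.92, 10.30]
print(hrv_per_segment(peaks))                             # one HRV value per 5 s segment
```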

  12. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of the network traffic, analyzing the network dynamics, and improving the forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
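
    The building block of such a model can be sketched as a complex-valued neuron with a split activation applied to the real and imaginary parts of the complex weighted sum. The encoding of traffic samples and all parameter values below are assumptions for illustration, with the weights standing in for parameters that the ABC search would tune.

```python
import numpy as np

def cvnn_neuron(z, w, b):
    """One complex-valued neuron: complex weighted sum followed by a split tanh
    activation applied separately to the real and imaginary parts."""
    s = np.dot(w, z) + b
    return np.tanh(s.real) + 1j * np.tanh(s.imag)

# Hypothetical lagged traffic samples encoded as complex inputs
z = np.array([0.30 + 0.10j, 0.50 - 0.20j, 0.40 + 0.00j])
w = np.array([0.70 - 0.30j, -0.20 + 0.50j, 0.10 + 0.10j])   # parameters an ABC search would tune
print(cvnn_neuron(z, w, b=0.05 + 0.02j))
```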

  13. Improved hybrid information filtering based on limited time window

    NASA Astrophysics Data System (ADS)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting users' entire collected information, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was successfully proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective at capturing users' potential interests, we present an improved hybrid information filtering method that adopts only partial, recent information. We expand the time window to generate a series of training sets, each of which is treated as known information for predicting the future links verified by the testing set. The experimental results on the benchmark dataset Netflix indicate that by using only approximately 31% of the recent rating records, the accuracy could be improved by an average of 4.22% and the diversity could be improved by 13.74%. In addition, the performance on the dataset MovieLens could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective in solving the cold-start problem. This work could improve information filtering performance and shorten the computational time.
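
    Restricting the training data to the most recent records can be sketched in a few lines: sort the rating records by timestamp and keep only the newest fraction before running the hybrid recommender. The record layout and the fraction below are illustrative only.

```python
def recent_window(records, fraction=0.31):
    """records: iterable of (user, item, timestamp); return the newest `fraction` of them."""
    ordered = sorted(records, key=lambda rec: rec[2])
    cut = int(round(len(ordered) * (1.0 - fraction)))
    return ordered[cut:]

ratings = [("u1", "i3", 100), ("u2", "i1", 250), ("u1", "i2", 300), ("u3", "i3", 420)]
print(recent_window(ratings, fraction=0.5))   # the two most recent records
```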

  14. Determination of doping peptides via solid-phase microelution and accurate-mass quadrupole time-of-flight LC-MS.

    PubMed

    Cuervo, Darío; Loli, Cynthia; Fernández-Álvarez, María; Muñoz, Gloria; Carreras, Daniel

    2017-10-15

    A complete analytical protocol for the determination of 25 doping-related peptidic drugs and 3 metabolites in urine was developed by means of accurate-mass quadrupole time-of-flight (Q-TOF) LC-MS analysis following solid-phase extraction (SPE) on microplates and conventional SPE pre-treatment for initial testing and confirmation, respectively. These substances included growth hormone releasing factors, gonadotropin releasing factors and anti-diuretic hormones, with molecular weights ranging from 540 to 1320 Da. Optimal experimental conditions were established after investigation of different parameters concerning sample preparation and instrumental analysis. Weak cation exchange SPE followed by C18 HPLC chromatography and accurate mass detection provided the required sensitivity and selectivity for all the target peptides under study. 2 mg SPE on 96-well microplates can be used in combination with full scan MS detection for the initial testing, thus providing a fast, cost-effective and high-throughput protocol for the processing of a large batch of samples simultaneously. On the other hand, extraction on 30 mg SPE cartridges and subsequent target MS/MS determination was the protocol of choice for confirmatory purposes. The methodology was validated in terms of selectivity, recovery, matrix effect, precision, sensitivity (limit of detection, LOD), cross contamination, carryover, robustness and stability. Recoveries ranged from 6 to 70% (microplates) and from 17 to 95% (cartridges), with LODs from 0.1 to 1 ng/mL. The suitability of the method was assessed by analyzing different spiked or excreted urines containing some of the target substances. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data

    PubMed Central

    2017-01-01

    Mapping gene expression as a quantitative trait using whole genome-sequencing and transcriptome analysis allows the discovery of the functional consequences of genetic variation. We developed a novel method and ultra-fast software, Findr, for highly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr. PMID:28821014

  16. Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification.

    PubMed

    Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun

    2016-06-01

    Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
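
    The combination of the two ingredients can be sketched directly: a standard DTW recursion in which the local distance between time-aligned vectors is the Mahalanobis distance under a matrix M. In the sketch below, M is a placeholder identity matrix, whereas the paper learns M with LogDet-divergence metric learning under triplet constraints.

```python
import numpy as np

def mahalanobis_dtw(X, Y, M):
    """DTW distance between multivariate series X (n x d) and Y (m x d) using
    d_M(x, y) = sqrt((x - y)^T M (x - y)) as the local distance."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = X[i - 1] - Y[j - 1]
            local = np.sqrt(diff @ M @ diff)
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy two-variable series; M would normally come from the metric-learning step
X = np.array([[0.0, 1.0], [0.5, 1.2], [1.0, 1.5]])
Y = np.array([[0.1, 1.1], [0.9, 1.4]])
print(mahalanobis_dtw(X, Y, M=np.eye(2)))
```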

  17. Improved Space-Time Forecasting of next Day Ozone Concentrations in the Eastern U.S.

    EPA Science Inventory

    There is an urgent need to provide accurate air quality information and forecasts to the general public and environmental health decision-makers. This paper develops a hierarchical space-time model for daily 8-hour maximum ozone concentration (O3) data covering much of the easter...

  18. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    NASA Astrophysics Data System (ADS)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  19. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  20. Improving Retention and Enrollment Forecasting in Part-Time Programs

    ERIC Educational Resources Information Center

    Shapiro, Joel; Bray, Christopher

    2011-01-01

    This article describes a model that can be used to analyze student enrollment data and can give insights for improving retention of part-time students and refining institutional budgeting and planning efforts. Adult higher-education programs are often challenged in that part-time students take courses less reliably than full-time students. For…

  1. Improved Motor-Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6) in this study. Pre- and post-test investigations of golf shots made by three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing, the golfers in the Control group were merely training their golf-swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-weeks SMT showed evident motor timing improvements. Additionally, significant improvements for golf shot accuracy were found for the SMT group and with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study’s results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4 week SMT intervention showed significant improvements in motor timing, golf shot accuracy, and lead to less variability. We conclude that this study’s results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  2. Improving efficiency and decreasing scanning time of sonographic examination of the shoulder by using a poster illustrating proper shoulder positioning to the patient.

    PubMed

    Shah, Amit; Amin, Maslah; Srinivasan, Sriram; Botchu, Rajesh

    2015-09-01

    Patients often have difficulty performing the various movements required for ideal positioning to enable accurate sonographic (US) assessment of the shoulder; this may result from pain and or unclear oral instructions. We performed this study to ascertain whether the use of a poster depicting the positions required during the examination would decrease scanning time and hence improve the overall efficiency of shoulder US. We retrospectively compared results from 50 consecutive patients who underwent US examination without (group 1) and 50 with (group 2) the use of an illustrative poster produced by the European Society of Musculoskeletal Radiology. The difference in mean scanning time between the two groups was analyzed with Student's two-tailed t test. There was a statistically significant difference in scanning time between the two groups (group 1: 3 minutes and 5 seconds versus group 2: 2 minutes and 9 seconds; p < 0.0001). The patients in group 2, especially those who had hearing difficulty, found the poster useful. The use of a poster illustrating positioning of the shoulder during an US examination is an effective way to improve patient compliance and significantly decreases scanning time. © 2014 Wiley Periodicals, Inc.

  3. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    NASA Astrophysics Data System (ADS)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online and real-time system is presented for detecting partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load condition. This system with minor modifications can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through the Simulink Real-Time. Evaluation of threshold and detectability of faults with different conditions of load and fault severity are carried out with empirical cumulative distribution function.

  4. Improving Operating Room Efficiency: First Case On-Time Start Project.

    PubMed

    Phieffer, Laura; Hefner, Jennifer L; Rahmanian, Armin; Swartz, Jason; Ellison, Christopher E; Harter, Ronald; Lumbley, Joshua; Moffatt-Bruce, Susan D

    Operating rooms (ORs) are costly to run, and multiple factors influence efficiency. The first case on-time start (FCOS) of an OR is viewed as a harbinger of efficiency for the daily schedule. Across 26 ORs of a large, academic medical center, only 49% of cases started on time in October 2011. The Perioperative Services Department engaged an interdisciplinary Operating Room Committee to apply Six Sigma tools to this problem. The steps of this project included (1) problem mapping, (2) process improvements to preoperative readiness, (3) informatics support improvements, and (4) continuous measurement and feedback. By June 2013, there was a peak of 92% first case on-time starts across service lines, decreasing to 78% through 2014, still significantly above the preintervention level of 49% (p = .000). Delay minutes also significantly decreased through the study period (p = .000). Across 2013, the most common delay owners were the patient, the surgeon, the facility, and the anesthesia department. Continuous and sustained improvement of first case on-time starts is attributed to tracking the FCOS metric, establishing embedded process improvement resources and creating transparency of data. This article highlights success factors and barriers to program success and sustainability.

  5. Indexed variation graphs for efficient and accurate resistome profiling.

    PubMed

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
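
    As a rough illustration of the indexing idea described above (similarity-search classification of reads against indexed gene sketches), the snippet below uses MinHash signatures and an LSH Forest from the third-party datasketch Python package; the k-mer size, toy sequences and gene names are invented for the example, and this is not GROOT's Go implementation.

    ```python
    # Sketch of the general indexing idea (not GROOT itself): MinHash signatures
    # of reference gene sequences are indexed in an LSH Forest, and each read is
    # classified by a top-k similarity query. Sequences below are toy examples.
    from datasketch import MinHash, MinHashLSHForest

    K = 7  # k-mer size (assumed)

    def minhash(seq, num_perm=64):
        m = MinHash(num_perm=num_perm)
        for i in range(len(seq) - K + 1):
            m.update(seq[i:i + K].encode("utf8"))
        return m

    references = {
        "geneA": "ATGCGTACGTTAGCCGATCGATCGGCTAGCTAGGCTAACG",
        "geneB": "ATGAAATTTCCCGGGAAATTTCCCGGGTTTAAACCCGGGA",
    }

    forest = MinHashLSHForest(num_perm=64)
    for name, seq in references.items():
        forest.add(name, minhash(seq))
    forest.index()  # must be called before querying

    read = "CGTACGTTAGCCGATCGATCGG"  # a short read drawn from geneA
    print(forest.query(minhash(read), 2))  # candidate genes for downstream alignment
    ```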

  6. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions by requiring additional steps to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.

  7. An improved RST approach for timely alert and Near Real Time monitoring of oil spill disasters by using AVHRR data

    NASA Astrophysics Data System (ADS)

    Grimaldi, C. S. L.; Casciello, D.; Coviello, I.; Lacava, T.; Pergola, N.; Tramutoli, V.

    2011-05-01

    Information acquired and provided in Near Real Time is fundamental in contributing to reduce the impact of different sea pollution sources on the maritime environment. Optical data acquired by sensors aboard meteorological satellites, thanks to their high temporal resolution as well as to their delivery policy, can be profitably used for a Near Real Time sea monitoring, provided that accurate and reliable methodologies for analysis and investigation are designed, implemented and fully assessed. In this paper, the results achieved by the application of an improved version of RST (Robust Satellite Technique) to oil spill detection and monitoring will be shown. In particular, thermal infrared data acquired by the NOAA-AVHRR (National Oceanic and Atmospheric Administration-Advanced Very High Resolution Radiometer) have been analyzed and a new RST-based change detection index applied to the case of the oil spills that occurred off the Kuwait and Saudi Arabian coasts in January 1991 and during the Lebanon War in July 2006. The results obtained, even in comparison with those achieved by other AVHRR-based techniques, confirm the unique performance of the proposed approach in automatically detecting the presence of oil spill with a high level of reliability and sensitivity. Moreover, the potential of the extension of the proposed technique to sensors onboard geostationary satellites will be discussed within the context of oil spill monitoring systems, integrating products generated by high temporal (optical) and high spatial (radar) resolution satellite systems.
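
    RST-style approaches characterize each pixel's expected signal from a multi-temporal record and flag departures from it. The sketch below computes a generic standardized-anomaly index per pixel on synthetic brightness-temperature scenes; it is not the authors' specific change detection index, and the 3-sigma threshold is an arbitrary choice for illustration.

    ```python
    # Generic sketch of an RST-style local variation index (not the authors'
    # exact index): each pixel's current value is compared with its
    # multi-temporal mean and standard deviation. Data here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    history = rng.normal(loc=290.0, scale=1.5, size=(30, 64, 64))  # 30 past scenes, K
    current = rng.normal(loc=290.0, scale=1.5, size=(64, 64))
    current[20:25, 30:38] -= 6.0  # a synthetic anomalous patch (e.g., oil-covered sea)

    mu = history.mean(axis=0)
    sigma = history.std(axis=0, ddof=1)
    index = (current - mu) / sigma          # standardized anomaly per pixel

    anomalies = np.abs(index) > 3.0         # flag pixels deviating by > 3 sigma
    print("flagged pixels:", int(anomalies.sum()))
    ```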

  8. High-accurate optical fiber liquid level sensor

    NASA Astrophysics Data System (ADS)

    Sun, Dexing; Chen, Shouliu; Pan, Chao; Jin, Henghuan

    1991-08-01

    A highly accurate optical fiber liquid level sensor is presented. A single-chip microcomputer is used to process and control the signal. This kind of sensor is intrinsically safe and explosion-proof, so it can be applied in any liquid level detection setting, especially in the oil and chemical industries. The theory and experiments concerning how to improve the measurement accuracy are described. The relative error over the 10 m measurement range is within 0.01%.

  9. Improving Time Management for the Working Student.

    ERIC Educational Resources Information Center

    Anderson, Tim; Lott, Rod; Wieczorek, Linda

    This action research project implemented and evaluated a program for increasing time spent on homework. The project was intended to improve academic achievement among five high school students taking geometry and physical science who were employed more than 15 hours per week. The problem of lower academic achievement due to…

  10. In situ accurate determination of the zero time delay between two independent ultrashort laser pulses by observing the oscillation of an atomic excited wave packet.

    PubMed

    Zhang, Qun; Hepburn, John W

    2008-08-15

    We propose a novel method that uses the oscillation of an atomic excited wave packet observed through a pump-probe technique to accurately determine the zero time delay between a pair of ultrashort laser pulses. This physically based approach provides an easy fix for the intractable problem of synchronizing two different femtosecond laser pulses in a practical experimental environment, especially where an in situ time zero measurement with high accuracy is required.

  11. Improving Reports Turnaround Time: An Essential Healthcare Quality Dimension.

    PubMed

    Khan, Mustafa; Khalid, Parwaiz; Al-Said, Youssef; Cupler, Edward; Almorsy, Lamia; Khalifa, Mohamed

    2016-01-01

    Turnaround time is one of the most important healthcare performance indicators. King Faisal Specialist Hospital and Research Center in Jeddah, Saudi Arabia worked on reducing the reports turnaround time of the neurophysiology lab from more than two weeks to only five working days for 90% of cases. The main quality improvement methodology used was FOCUS PDCA. Using root cause analysis, Pareto analysis and qualitative survey methods, the main factors contributing to the delay in turnaround time and the suggested improvement strategies were identified and implemented: restructuring transcriptionists' daily tasks, rescheduling physicians' time, alerting physicians to new reports, engaging consultants, coordinating consistently, and prioritizing critical reports. After implementation, 92% of reports were verified within 5 days, compared with only 6% before implementation; 7% of reports were verified in 5 days to 2 weeks, and only 1% of reports needed more than 2 weeks, compared with 76% before implementation.

  12. Land cover change mapping using MODIS time series to improve emissions inventories

    NASA Astrophysics Data System (ADS)

    López-Saldaña, Gerardo; Quaife, Tristan; Clifford, Debbie

    2016-04-01

    MELODIES is an FP7-funded project to develop innovative and sustainable services, based upon Open Data, for users in research, government, industry and the general public in a broad range of societal and environmental benefit areas. Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed to meet policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observations (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the sets of input features that most accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model and create a seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing purposes. A land cover product was created for 2003 to 2015, and a Bayesian approach was developed to identify land cover changes. We will present the results of the time series development and the first exercises in creating the land cover and land cover change products.

  13. Highly accurate detection of ovarian cancer using CA125 but limited improvement with serum matrix-assisted laser desorption/ionization time-of-flight mass spectrometry profiling.

    PubMed

    Tiss, Ali; Timms, John F; Smith, Celia; Devetyarov, Dmitry; Gentry-Maharaj, Aleksandra; Camuzeaux, Stephane; Burford, Brian; Nouretdinov, Ilia; Ford, Jeremy; Luo, Zhiyuan; Jacobs, Ian; Menon, Usha; Gammerman, Alex; Cramer, Rainer

    2010-12-01

    Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian cancers and age-matched healthy controls and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values and tested their ability to classify blinded test samples. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasm, and 2236 postmenopausal healthy controls. Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications on the basis of CA125 alone. We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy. This finding is in contrast with

  14. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
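
    The core idea above is that MRF time-series images lie close to a low-dimensional temporal subspace estimated from a signal dictionary, so reconstruction reduces to linear least squares. The toy sketch below illustrates only that subspace-projection step on synthetic, fully sampled voxel time series; the dictionary, rank and noise level are arbitrary assumptions, and no undersampled k-space model is included.

    ```python
    # Toy sketch of the subspace idea (not the full MRF reconstruction): a
    # temporal subspace is estimated from a signal dictionary via SVD, and noisy
    # voxel time series are fit to it by least squares. Dictionary is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    T, n_atoms, n_vox, rank = 200, 500, 100, 5

    # Synthetic "dictionary" of smooth temporal signal evolutions
    t = np.linspace(0, 1, T)
    dictionary = np.array([np.exp(-t / tau) * np.cos(2 * np.pi * f * t)
                           for tau, f in zip(rng.uniform(0.1, 1, n_atoms),
                                             rng.uniform(0.5, 3, n_atoms))])

    # Temporal subspace: leading right singular vectors of the dictionary
    _, _, Vt = np.linalg.svd(dictionary, full_matrices=False)
    Phi = Vt[:rank]                                 # rank x T orthonormal basis

    # Noisy voxel time series built from dictionary atoms
    true = dictionary[rng.integers(0, n_atoms, n_vox)]
    noisy = true + 0.2 * rng.standard_normal(true.shape)

    # Least-squares fit of subspace coefficients, then back-projection
    coeffs = noisy @ Phi.T                          # rows of Phi are orthonormal
    recon = coeffs @ Phi

    print("rms error noisy :", np.sqrt(np.mean((noisy - true) ** 2)))
    print("rms error recon :", np.sqrt(np.mean((recon - true) ** 2)))
    ```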

  15. Investigation into accurate mass capability of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, with respect to radical ion species.

    PubMed

    Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth

    2006-05-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) has been shown to be an effective technique for the characterization of organometallic, coordination, and highly conjugated compounds. The preferred matrix is 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB), with radical ions observed. However, MALDI-TOFMS is generally not favored for accurate mass measurement. A specific method had to be developed for such compounds to assure the quality of our accurate mass results. Therefore, in this preliminary study, two methods of data acquisition, and both even-electron (EE+) ion and odd-electron (OE+.) radical ion mass calibration standards, have been investigated to establish the basic measurement technique. The benefit of this technique is demonstrated for a copper compound for which ions were observed by MALDI, but not by electrospray (ESI) or liquid secondary ion mass spectrometry (LSIMS); a mean mass accuracy error of -1.2 ppm was obtained.
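
    The mass accuracy figure quoted above (-1.2 ppm) is a parts-per-million relative error between a measured and a theoretical m/z; a small helper for that calculation is sketched below with purely illustrative numbers.

    ```python
    # Small illustration of how a mass accuracy error in ppm is computed from a
    # measured and a theoretical m/z value. Numbers are illustrative only.
    def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    # e.g., a hypothetical radical molecular ion observed slightly low in mass
    print(f"{ppm_error(622.9983, 622.9991):.1f} ppm")   # about -1.3 ppm
    ```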

  16. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
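
    VFDT-style learners decide when enough examples have been seen to split a node using the Hoeffding bound ε = sqrt(R² ln(1/δ) / (2n)). The snippet below sketches that split test in isolation; the gain values, δ and tie-breaking threshold are illustrative, and this is not the uVFDTc implementation.

    ```python
    # Sketch of the Hoeffding-bound split test used by VFDT-style learners (not
    # the uVFDTc implementation): split when the observed gain advantage of the
    # best attribute over the runner-up exceeds epsilon.
    import math

    def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
        return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

    def should_split(best_gain, second_gain, n, value_range=1.0, delta=1e-7, tie=0.05):
        eps = hoeffding_bound(value_range, delta, n)
        return (best_gain - second_gain > eps) or (eps < tie)

    print(should_split(0.30, 0.21, n=2000))   # clear winner after 2000 examples
    print(should_split(0.30, 0.29, n=2000))   # still too close to call
    ```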

  17. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
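
    The instrument described above mixes signal and noise in analog hardware at precisely known ratios; the snippet below merely illustrates the same bookkeeping digitally by scaling a noise record so that the mixture reaches a target S/N in decibels. All waveforms and the target value are illustrative.

    ```python
    # Digital illustration of mixing signal and noise at a known S/N ratio: the
    # noise is rescaled so the resulting power ratio matches a target in dB.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 10_000, endpoint=False)
    signal = np.sin(2 * np.pi * 50 * t)
    noise = rng.standard_normal(t.size)

    target_snr_db = 10.0
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (target_snr_db / 10)))
    mixture = signal + scale * noise

    achieved = 10 * np.log10(p_signal / np.mean((scale * noise) ** 2))
    print(f"achieved S/N = {achieved:.2f} dB")  # ~10.00 dB
    ```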

  18. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forests, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
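
    A minimal sketch of the modeling setup described above follows, comparing linear regression and a random forest by root-mean-square error with scikit-learn; the feature columns and synthetic data generator are hypothetical stand-ins for the surface surveillance variables named in the abstract.

    ```python
    # Sketch of the modeling approach described above, using synthetic data in
    # place of the surface surveillance records; feature names are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.integers(0, 5, n),      # terminal concourse (encoded)
        rng.integers(0, 30, n),     # departure spot (encoded)
        rng.integers(0, 4, n),      # runway (encoded)
        rng.integers(0, 10, n),     # number of aircraft already taxiing
    ])
    y = 8 + 0.9 * X[:, 3] + 0.5 * X[:, 2] + rng.normal(0, 2, n)  # taxi-out time, min

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("linear regression", LinearRegression()),
                        ("random forest", RandomForestRegressor(random_state=0))]:
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"{name}: RMSE = {rmse:.2f} min")
    ```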

  20. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

    Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. First, to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Second, for the binary segmentation, an improved Iterative Conditional Model (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results while requiring less processing time than conventional approaches.

  1. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    PubMed Central

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-01-01

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates the allocation, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks’ execution time can be improved, in particular for some regular jobs. PMID:27589753
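
    The paper's two-phase regression is not reproduced here, but the general flavor of a two-segment fit to task progress can be sketched as below: a breakpoint is chosen by grid search, each phase gets its own linear model, and the second phase is extrapolated to estimate the finishing time. The progress data and the 100% completion target are synthetic assumptions.

    ```python
    # Generic two-segment linear fit (a stand-in for the paper's two-phase
    # regression, whose exact formulation is not reproduced here).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 100, 200)                               # elapsed time (s)
    progress = np.where(t < 40, 0.4 * t, 16 + 1.1 * (t - 40))  # slow then fast phase
    progress += rng.normal(0, 1.0, t.size)

    def fit_two_phase(t, y):
        best = None
        for bp in t[10:-10]:                          # candidate breakpoints
            left, right = t <= bp, t > bp
            p1 = np.polyfit(t[left], y[left], 1)
            p2 = np.polyfit(t[right], y[right], 1)
            resid = np.concatenate([y[left] - np.polyval(p1, t[left]),
                                    y[right] - np.polyval(p2, t[right])])
            sse = float(resid @ resid)
            if best is None or sse < best[0]:
                best = (sse, bp, p1, p2)
        return best[1:]

    bp, p1, p2 = fit_two_phase(t, progress)
    finish_time = (100 - p2[1]) / p2[0]               # phase-2 line reaches 100%
    print(f"breakpoint ~ {bp:.1f} s, predicted finish ~ {finish_time:.1f} s")
    ```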

  2. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates the allocation, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.

  3. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  4. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, M.; Schmid, N. A.; Cao, Z.-C.

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses is improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation error of most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.
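
    The sketch below shows generic wavelet shrinkage (decompose, soft-threshold the detail coefficients with a universal threshold, reconstruct) on a synthetic single-pulse time series using the PyWavelets package; it is not the authors' selective-reconstruction scheme, and the wavelet choice, decomposition level and pulse shape are assumptions.

    ```python
    # Generic wavelet shrinkage sketch with PyWavelets (not the authors'
    # selective reconstruction scheme): decompose a noisy pulse time series,
    # soft-threshold the detail coefficients, and reconstruct.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n)
    pulse = 3.0 * np.exp(-0.5 * ((t - 2000) / 15.0) ** 2)      # a single narrow pulse
    noisy = pulse + rng.standard_normal(n)

    coeffs = pywt.wavedec(noisy, "db8", level=6)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise estimate (MAD)
    thresh = sigma * np.sqrt(2 * np.log(n))                     # universal threshold
    denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                     for c in coeffs[1:]]
    denoised = pywt.waverec(denoised_coeffs, "db8")[:n]

    snr_before = pulse.max() / noisy.std()
    snr_after = pulse.max() / (denoised - pulse).std()
    print(f"rough peak S/N before ~ {snr_before:.1f}, after ~ {snr_after:.1f}")
    ```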

  5. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kearny, C.H.

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify

  6. Acute physical exercise under hypoxia improves sleep, mood and reaction time.

    PubMed

    de Aquino-Lemos, Valdir; Santos, Ronaldo Vagner T; Antunes, Hanna Karen Moreira; Lira, Fabio S; Luz Bittar, Irene G; Caris, Aline V; Tufik, Sergio; de Mello, Marco Tulio

    2016-02-01

    This study aimed to assess the effect of two sessions of acute physical exercise at 50% VO2peak performed under hypoxia (equivalent to an altitude of 4500 m for 28 h) on sleep, mood and reaction time. Forty healthy men were randomized into 4 groups: Normoxia (NG) (n = 10); Hypoxia (HG) (n = 10); Exercise under Normoxia (ENG) (n = 10); and Exercise under Hypoxia (EHG) (n = 10). All mood and reaction time assessments were performed 40 min after awakening. Sleep was reassessed on the first day at 14 h after the initiation of hypoxia; mood and reaction time were measured 28 h later. Two sessions of acute physical exercise at 50% VO2peak were performed for 60 min on the first and second days, at 3 and 27 h, respectively, after the start of hypoxia. Improved sleep efficiency, stage N3 and REM sleep and reduced wake after sleep onset were observed under hypoxia after acute physical exercise. Tension, anger, depressed mood, vigor and reaction time scores improved after exercise under hypoxia. We conclude that hypoxia impairs sleep, reaction time and mood. Acute physical exercise at 50% VO2peak under hypoxia improves sleep efficiency, reversing the aspects that had been adversely affected under hypoxia, possibly contributing to improved mood and reaction time.

  7. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly lower computational cost.
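
    The general cluster-then-retrieve idea that CM-CBR builds on can be sketched with scikit-learn as below: the case base is partitioned with k-means and neighbors are searched only within the target case's cluster. The feature vectors, cluster count and neighborhood size are arbitrary choices, and the merging step that distinguishes CM-CBR is not shown.

    ```python
    # Illustration of the general cluster-then-retrieve idea discussed above
    # (not the CM-CBR algorithm itself).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    case_base = rng.standard_normal((5000, 12))     # synthetic feature vectors

    kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(case_base)

    # Pre-build one neighbor index per cluster
    indexes = {}
    for c in range(kmeans.n_clusters):
        members = np.where(kmeans.labels_ == c)[0]
        indexes[c] = (members, NearestNeighbors(n_neighbors=5).fit(case_base[members]))

    target = rng.standard_normal((1, 12))
    cluster = int(kmeans.predict(target)[0])
    members, nn = indexes[cluster]
    _, local_idx = nn.kneighbors(target)
    neighbor_ids = members[local_idx[0]]            # indices into the full case base
    print(f"cluster {cluster}, retrieved cases: {neighbor_ids}")
    ```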

  8. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  9. Are normally sighted, visually impaired, and blind pedestrians accurate and reliable at making street crossing decisions?

    PubMed

    Hassan, Shirin E

    2012-05-04

    The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times.

  10. Productivity improvement through cycle time analysis

    NASA Astrophysics Data System (ADS)

    Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio

    1996-09-01

    A cycle time (CT) reduction methodology has been developed in the Lucent Technology facility (former AT&T) in Madrid, Spain. It is based on a comparison of the contribution of each process step in each technology with a target generated by a cycle time model. These targeted cycle times are obtained using capacity data of the machines processing those steps, queuing theory and theory of constraints (TOC) principles (buffers to protect the bottleneck and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-like analysis is done in the machine groups with major differences between their target cycle times and real values. Comparisons between the current values of the parameters that govern their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are done to detect the causes of excess contribution to the cycle time. Several friendly, graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interrelation problems among machines, procedures and direct labor. Performer is designed for a detailed, daily analysis of an isolated machine. The extensive use of this tool by the whole labor force has demonstrated impressive results in the elimination of multiple small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process/queue for different machines at the same time. ASAP is a powerful tool to analyze the product flow management and the assigned capacity for interdependent operations like cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze and improve the process times and the availability.
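
    Target cycle times of the kind mentioned above are often derived from capacity data with a queueing approximation; the snippet below uses Kingman's G/G/1 formula as one plausible example. The process time, utilization and variability values are made up, and this is not necessarily the model used at the facility described.

    ```python
    # A common way to turn capacity data into a per-step cycle time target is a
    # queueing approximation such as Kingman's G/G/1 formula; generic sketch only.
    def kingman_cycle_time(process_time_h, utilization, ca2=1.0, cs2=1.0):
        """Expected cycle time (queue wait + process) for one tool, in hours."""
        wait = ((ca2 + cs2) / 2.0) * (utilization / (1.0 - utilization)) * process_time_h
        return wait + process_time_h

    # Example: a 2-hour furnace step at 85% utilization with moderate variability
    print(f"target cycle time ~ {kingman_cycle_time(2.0, 0.85, ca2=1.0, cs2=0.5):.1f} h")
    ```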

  11. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
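
    For context, standard nudging (the baseline the paper improves on) couples a model to observations through a relaxation term; the sketch below applies it to the Lorenz-63 system while observing only one of the three state variables. The system, gain, step size and noise level are illustrative choices, and the time-delay generalization studied in the paper is not implemented.

    ```python
    # Minimal sketch of standard nudging on the Lorenz-63 system, observing only
    # the x component; not the time-delay method of the paper.
    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt, steps, gain = 0.005, 20000, 20.0
    rng = np.random.default_rng(0)

    truth = np.array([1.0, 1.0, 1.0])
    model = np.array([5.0, -5.0, 20.0])              # badly initialized model state

    for _ in range(steps):
        truth = truth + dt * lorenz(truth)           # forward Euler, for brevity
        obs_x = truth[0] + 0.01 * rng.standard_normal()
        nudge = np.array([gain * (obs_x - model[0]), 0.0, 0.0])  # only x observed
        model = model + dt * (lorenz(model) + nudge)

    print("final absolute error per component:", np.abs(model - truth))
    ```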

  12. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    PubMed

    Mellas, John

    2016-05-01

    were 96 measurements in six different patients where Ka was compared to Ke. The estimated proportion of Ke within 30% of Ka was 0.907 with 95% exact binomial proportion confidence limits. The predictive accuracy of E/P in the study patients was also reported as a proportion and the associated 95% confidence limits: 0.848 (0.800, 0.896) for E/P<1; 0.939 (0.904, 0.974) for E/P>1 and 0.907 (0.841, 0.973) for 0.95 ml/min accurately predicted the ability to terminate renal replacement therapy in AKI. Limitations include the need to measure urine volume accurately. Furthermore, the precision of the method requires accurate estimates of sGFR, while a reasonable measure of P is crucial to estimating Ke. The present study provides the practitioner with a new tool to estimate real time K in AKI with enough precision to predict the severity of the renal injury, including progression, stabilization, or improvement in azotemia. It is the author's belief that this simple method improves on RIFLE, AKIN, and KDIGO for estimating the degree of renal impairment in AKI and allows a more accurate estimate of K in AKI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Improving Autopsy Report Turnaround Times by Implementing Lean Management Principles.

    PubMed

    Cromwell, Susan; Chiasson, David A; Cassidy, Debra; Somers, Gino R

    2018-01-01

    The autopsy is an integral part of the service of a large academic pathology department. Timely reporting is central to providing good service and is beneficial for many stakeholders, including the families, the clinical team, the hospital, and the wider community. The current study aimed to improve hospital-consented autopsy reporting times (turnaround time, TAT) by using lean principles modified for a healthcare setting, with an aim of signing out 90% of autopsies in 90 days. An audit of current and historical TATs was performed, and a working group incorporating administrative, technical, and professional staff constructed a value stream map documenting the steps involved in constructing an autopsy report. Two areas of delay were noted: examination of the microscopy and time taken to sign-out the report after the weekly autopsy conference. Several measures were implemented to address these delays, including visual tracking using a whiteboard and individualized tracking sheets, weekly whiteboard huddles, and timelier scheduling of clinicopathologic conference rounds. All measures resulted in an improvement of TATs. In the 30 months prior to the institution of lean, 37% of autopsies (53/144) were signed out in 90 days, with a wide variation in reporting times. In the 30 months following the institution of lean, this improved to 74% (136/185) ( P < .0001, Fisher exact test), with a marked reduction in variability. Further, the time from autopsy to presentation at weekly clinicopathological rounds was also reduced (median: 73 days prior to lean; 63 days post-lean). The application of lean principles to autopsy sign-out workflow can significantly improve TATs and reduce variability, without changing staffing levels or significantly altering scheduling structure.

  14. Which Clinician Questions Elicit Accurate Disclosure of Antiretroviral Non-adherence When Talking to Patients?

    PubMed

    Callon, Wynne; Saha, Somnath; Korthuis, P Todd; Wilson, Ira B; Moore, Richard D; Cohn, Jonathan; Beach, Mary Catherine

    2016-05-01

    This study evaluated how clinicians assess antiretroviral (ARV) adherence in clinical encounters, and which questions elicit accurate responses. We conducted conversation analysis of audio-recorded encounters between 34 providers and 58 patients reporting ARV non-adherence in post-encounter interviews. Among 42 visits where adherence status was unknown by providers, 4 providers did not discuss ARVs (10 %), 6 discussed ARVs but did not elicit non-adherence disclosure (14 %), and 32 discussed ARVs which prompted disclosure (76 %). Questions were classified as: (1) clarification of medication ("Are you still taking the Combivir?"); (2) broad ("How's it going with your meds?"); (3) positively-framed ("Are you taking your medications regularly?"); (4) negatively-framed ("Have you missed any doses?"). Clinicians asked 75 ARV-related questions: 23 clarification, 12 broad, 17 positively-framed, and 23 negatively-framed. Negatively-framed questions were 3.8 times more likely to elicit accurate disclosure than all other question types (p < 0.0001). Providers can improve disclosure probability by asking directly about missed doses.

  15. Course Development Cycle Time: A Framework for Continuous Process Improvement.

    ERIC Educational Resources Information Center

    Lake, Erinn

    2003-01-01

    Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)

  16. Improved geomagnetic referencing in the Arctic environment

    USGS Publications Warehouse

    Poedjono, B.; Beck, N.; Buchanan, A. C.; Borri, L.; Maus, S.; Finn, Carol; Worthington, E. William; White, Tim

    2016-01-01

    Geomagnetic referencing uses the Earth’s magnetic field to determine accurate wellbore positioning essential for success in today's complex drilling programs, either as an alternative or a complement to north-seeking gyroscopic referencing. However, fluctuations in the geomagnetic field, especially at high latitudes, make the application of geomagnetic referencing in those areas more challenging. Precise crustal mapping and the monitoring of real-time variations by nearby magnetic observatories is crucial to achieving the required geomagnetic referencing accuracy. The Deadhorse Magnetic Observatory (DED), located at Prudhoe Bay, Alaska, has already played a vital role in the success of several commercial ventures in the area, providing essential, accurate, real-time data to the oilfield drilling industry. Geomagnetic referencing is enhanced with real-time data from DED and other observatories, and has been successfully used for accurate wellbore positioning. The availability of real-time geomagnetic measurements leads to significant cost and time savings in wellbore surveying, improving accuracy and alleviating the need for more expensive surveying techniques. The correct implementation of geomagnetic referencing is particularly critical as we approach the increased activity associated with the upcoming maximum of the 11-year solar cycle. The DED observatory further provides an important service to scientific communities engaged in studies of ionospheric, magnetospheric and space weather phenomena.

  17. Measuring cross-border travel times for freight : Otay Mesa international border crossing.

    DOT National Transportation Integrated Search

    2010-09-01

    Cross border movement of people and goods is a vital part of the North American economy. Accurate real-time data on travel times along the US-Mexico border can help generate a range of tangible benefits covering improved operations and security, lowe...

  18. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
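
    A minimal segmented-regression sketch for an interrupted time series is shown below, using statsmodels: the outcome is modeled with a baseline trend plus a level-change and a slope-change term at the intervention point. The monthly data and effect sizes are synthetic and unrelated to the decision-support example in the article.

    ```python
    # Sketch of a segmented (interrupted time series) regression: baseline trend
    # plus level change and slope change at the intervention. Data are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, intervention = 48, 24                      # 48 months, intervention at month 24
    time = np.arange(n)
    post = (time >= intervention).astype(float)   # level-change indicator
    time_after = np.where(post == 1, time - intervention, 0)  # slope-change term

    y = 50 + 0.2 * time - 8 * post - 0.5 * time_after + rng.normal(0, 2, n)

    X = sm.add_constant(np.column_stack([time, post, time_after]))
    fit = sm.OLS(y, X).fit()
    print(fit.params)        # [intercept, baseline slope, level change, slope change]
    ```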

  19. Fast and accurate mock catalogue generation for low-mass galaxies

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe

    2016-06-01

    We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.

  20. The New Aptima HBV Quant Real-Time TMA Assay Accurately Quantifies Hepatitis B Virus DNA from Genotypes A to F.

    PubMed

    Chevaliez, Stéphane; Dauvillier, Claude; Dubernet, Fabienne; Poveda, Jean-Dominique; Laperche, Syria; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-04-01

    Sensitive and accurate hepatitis B virus (HBV) DNA detection and quantification are essential to diagnose HBV infection, establish the prognosis of HBV-related liver disease, and guide the decision to treat and monitor the virological response to antiviral treatment and the emergence of resistance. Currently available HBV DNA platforms and assays are generally designed for batching multiple specimens within an individual run and require at least one full day of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed, fully automated, one-step Aptima HBV Quant assay to accurately detect and quantify HBV DNA in a large series of patients infected with different HBV genotypes. The limit of detection of the assay was estimated to be 4.5 IU/ml. The specificity of the assay was 100%. Intra-assay and interassay coefficients of variation ranged from 0.29% to 5.07% and 4.90% to 6.85%, respectively. HBV DNA levels from patients infected with HBV genotypes A to F measured with the Aptima HBV Quant assay strongly correlated with those measured by two commercial real-time PCR comparators (Cobas AmpliPrep/Cobas TaqMan HBV test, version 2.0, and Abbott RealTime HBV test). In conclusion, the Aptima HBV Quant assay is sensitive, specific, and reproducible and accurately quantifies HBV DNA in plasma samples from patients with chronic HBV infections of all genotypes, including patients on antiviral treatment with nucleoside or nucleotide analogues. The Aptima HBV Quant assay can thus confidently be used to detect and quantify HBV DNA in both clinical trials with new anti-HBV drugs and clinical practice. Copyright © 2017 American Society for Microbiology.

  1. Measures to Improve Diagnostic Safety in Clinical Practice

    PubMed Central

    Singh, Hardeep; Graber, Mark L; Hofer, Timothy P

    2016-01-01

    Timely and accurate diagnosis is foundational to good clinical practice and an essential first step to achieving optimal patient outcomes. However, a recent Institute of Medicine report concluded that most of us will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through better measurement of diagnostic performance. The diagnostic process is a dynamic team-based activity that involves uncertainty, plays out over time, and requires effective communication and collaboration among multiple clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop measures to improve diagnostic performance could move forward at a time when the scientific foundation needed to inform measurement is still evolving. We highlight challenges and opportunities for developing potential measures of “diagnostic safety” related to clinical diagnostic errors and associated preventable diagnostic harm. In doing so, we propose a starter set of measurement concepts for initial consideration that seem reasonably related to diagnostic safety, and call for these to be studied and further refined. This would enable safe diagnosis to become an organizational priority and facilitate quality improvement. Health care systems should consider measurement and evaluation of diagnostic performance as essential to timely and accurate diagnosis and to the reduction of preventable diagnostic harm. PMID:27768655

  2. Upfront dilution of ferritin samples to reduce hook effect, improve turnaround time and reduce costs.

    PubMed

    Wu, Shu Juan; Hayden, Joshua A

    2018-02-15

    Sandwich immunoassays offer advantages in the clinical laboratory but can yield erroneously low results due to hook (prozone) effect, especially with analytes whose concentrations span several orders of magnitude such as ferritin. This study investigated a new approach to reduce the likelihood of hook effect in ferritin immunoassays by performing upfront, five-fold dilutions of all samples for ferritin analysis. The impact of this change on turnaround time and costs were also investigated. Ferritin concentrations were analysed in routine clinical practice with and without upfront dilutions on Siemens Centaur® XP (Siemens Healthineers, Erlangen, Germany) immunoanalysers. In addition, one month of baseline data (1026 results) were collected prior to implementing upfront dilutions and one month of data (1033 results) were collected after implementation. Without upfront dilutions, hook effect was observed in samples with ferritin concentrations as low as 86,028 µg/L. With upfront dilutions, samples with ferritin concentrations as high as 126,050 µg/L yielded values greater than the measurement interval and would have been diluted until an accurate value was obtained. The implementation of upfront dilution of ferritin samples led to a decrease in turnaround time from a median of 2 hours and 3 minutes to 1 hour and 18 minutes (P = 0.002). Implementation of upfront dilutions of all ferritin samples reduced the possibility of hook effect, improved turnaround time and saved the cost of performing additional dilutions.

  3. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
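
    The snippet below illustrates the broad idea of using machine learning to correct a cheap method toward accurate reference values, here with kernel ridge regression on synthetic descriptors; it learns a correction to predicted energies rather than the OM2 parameters themselves, so it is a simplified stand-in for the ML-SQC approach, not its implementation.

    ```python
    # Generic sketch of learning a correction to a cheap method's predictions
    # from molecular descriptors with kernel ridge regression; descriptors and
    # energies are synthetic stand-ins, not the OM2 parameter model of the paper.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_mol, n_feat = 1000, 30
    descriptors = rng.standard_normal((n_mol, n_feat))            # e.g., Coulomb-matrix-like
    reference = descriptors @ rng.normal(0, 1, n_feat)            # "accurate" energies
    cheap = reference + 5.0 + 2.0 * np.tanh(descriptors[:, 0])    # biased cheap method
    delta = reference - cheap                                      # correction to learn

    idx_tr, idx_te = train_test_split(np.arange(n_mol), random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
    model.fit(descriptors[idx_tr], delta[idx_tr])

    corrected = cheap[idx_te] + model.predict(descriptors[idx_te])
    mae_before = np.mean(np.abs(cheap[idx_te] - reference[idx_te]))
    mae_after = np.mean(np.abs(corrected - reference[idx_te]))
    print(f"MAE before: {mae_before:.2f}, after ML correction: {mae_after:.2f}")
    ```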

  4. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  5. Using quality improvement methods to reduce clear fluid fasting times in children on a preoperative ward.

    PubMed

    Newton, Richard J G; Stuart, Grant M; Willdridge, Daniel J; Thomas, Mark

    2017-08-01

    We applied quality improvement (QI) methodology to identify the different aspects of why children fasted for prolonged periods in our institution. Our aim was for 75% of all children to be fasted for clear fluid for less than 4 hours. Prolonged fasting in children can increase thirst and irritability and have adverse effects on haemodynamic stability on induction. By reducing this, children may be less irritable, more comfortable and more physiologically stable, improving the preoperative experience for both children and carers. We conducted a QI project from January 2014 until August 2016 at a large tertiary pediatric teaching hospital. Baseline data and the magnitude of the problem were obtained from pilot studies. This allowed us to build a key driver diagram and a process map, and to conduct a failure mode and effects analysis. Using a framework of Plan-Do-Study-Act cycles, our key interventions primarily focused on reducing confusion over procedure start times, giving parents accurate information, empowering staff and reducing variation by allowing children to drink on arrival (up to one hour) before surgery. Prior to this project, using the 6,4,2 fasting rule for solids, breast milk, and clear fluids, respectively, 19% of children were fasted for fluid for less than 4 hours, and mean fluid fasting time was 6.3 hours (SD 4.48). At the conclusion, 72% of patients received a drink within 4 hours, and mean fluid fasting time reduced to 3.1 hours (SD 2.33). The secondary measures of aspiration (4.14:10 000) and cancellations have not increased since starting this project. By using established QI methodology we reduced the mean fluid fasting time for day admissions at our hospital to 3.1 hours and increased the proportion of children fasting for less than 4 hours from 19% to 72%. © 2017 John Wiley & Sons Ltd.

  6. Enhancing diabetes management while teaching quality improvement methods.

    PubMed

    Sievers, Beth A; Negley, Kristin D F; Carlson, Marny L; Nelson, Joyce L; Pearson, Kristina K

    2014-01-01

    Six medical units realized that they were having issues with accurate timing of bedtime blood glucose measurement for their patients with diabetes. They decided to investigate the issues by using their current staff nurse committee structure. The clinical nurse specialists and nurse education specialists decided to address the issue by educating and engaging the staff in the define, measure, analyze, improve, control (DMAIC) framework process. They found that two issues needed improvement: the timing of bedtime blood glucose measurement, and snack administration and documentation. Several educational interventions were completed and resulted in improved timing of bedtime glucose measurement and bedtime snack documentation. The nurses understood the DMAIC process, and collaboration and cohesion among the medical units were enhanced. Copyright 2014, SLACK Incorporated.

  7. A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and non-uniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.

  8. A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2002-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start up, and nonuniform inflows, and will eventually impact on system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbo-pump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.

  9. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple views' calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method by corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on human brain. PMID:24803954

  10. An All-Fragments Grammar for Simple and Accurate Parsing

    DTIC Science & Technology

    2012-03-21

    Tsujii. Probabilistic CFG with latent annotations. In Proceedings of ACL, 2005. Slav Petrov and Dan Klein. Improved Inference for Unlexicalized Parsing. In...Proceedings of NAACL-HLT, 2007. Slav Petrov and Dan Klein. Sparse Multi-Scale Grammars for Discriminative Latent Variable Parsing. In Proceedings of...EMNLP, 2008. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings

  11. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts have been tested and compared, to determine which was the most accurate to monitor plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any other activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
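
    A minimal sketch of the underlying computation, assuming hypothetical sensor positions and pressures: the barycentre of pressures is the pressure-weighted mean of the sensor coordinates, which can then be compared to a force-platform CoP with the Spearman correlation (SCC) reported above:

        # Sketch: barycentre of pressures (BoP) from discrete insole sensors and its
        # correlation with a force-platform centre of pressure (CoP). All signals here
        # are synthetic placeholders.
        import numpy as np
        from scipy.stats import spearmanr

        n_sensors, n_frames = 10, 200
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 0.25, size=(n_sensors, 2))             # sensor positions (m)
        pressure = rng.uniform(0, 50, size=(n_frames, n_sensors))  # kPa per frame

        # Pressure-weighted average of sensor positions, frame by frame.
        weights = pressure / pressure.sum(axis=1, keepdims=True)
        bop = weights @ xy                                         # (n_frames, 2): ML and AP

        cop = bop + rng.normal(scale=0.002, size=bop.shape)        # stand-in for platform CoP

        scc_ml, _ = spearmanr(bop[:, 0], cop[:, 0])
        scc_ap, _ = spearmanr(bop[:, 1], cop[:, 1])
        print(f"SCC (ML) = {scc_ml:.2f}, SCC (AP) = {scc_ap:.2f}")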

  12. Process improvement by cycle time reduction through Lean Methodology

    NASA Astrophysics Data System (ADS)

    Siva, R.; patan, Mahamed naveed khan; lakshmi pavan kumar, Mane; Purusothaman, M.; pitchai, S. Antony; Jegathish, Y.

    2017-05-01

    In today's world, every customer expects products to be delivered on time and with good quality, and every industry is striving to satisfy these customer requirements. An aviation concern is trying to accomplish continuous improvement in all its projects. In this project, the maintenance service for the customer is analyzed. The maintenance part service is split into four levels. Three of these levels are done in service shops, and the fourth level falls under the customer's privilege to change the parts in their aircraft engines at their own location. An enhancement for electronics initial provisioning (eIP) is done for the fourth level. Customers request service shops to meet their requirements through a Recommended Spare Parts List (RSPL) submitted via eIP. Completing this RSPL for one customer takes 61.5 hours of cycle time, which is very high. By mapping the current-state value stream map (VSM) and takt time, a future-state improvement can be made to reduce the cycle time using Lean tools such as Poka-Yoke, Jidoka, 5S and Muda elimination.

  13. Genetically improved BarraCUDA.

    PubMed

    Langdon, W B; Lam, Brian Yee Hong

    2017-01-01

    BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.

  14. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    PubMed

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  15. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification and prediction are major fields of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most famous machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  16. Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation

    PubMed Central

    Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David

    2017-01-01

    The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751
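
    As a rough illustration of how direct-from-fraction qRT-PCR readouts could be turned into a distribution of an mRNA across gradient fractions (the Ct values, the spiked-in control, and the fraction-to-ribosome assignment below are all hypothetical, and an amplification efficiency of 2 per cycle is assumed):

        # Sketch: estimating an mRNA's distribution across gradient fractions from
        # direct-from-fraction qRT-PCR Ct values, normalised to a spiked-in control.
        import numpy as np

        ct_target = np.array([30.1, 29.0, 27.5, 26.8, 27.9, 29.5])  # fractions 1..6
        ct_spike  = np.array([24.0, 24.1, 23.9, 24.0, 24.2, 24.0])  # in-fraction control

        rel = 2.0 ** -(ct_target - ct_spike)        # spike-normalised relative abundance
        dist = rel / rel.sum()                      # fraction of the mRNA in each fraction

        ribosomes_per_fraction = np.array([0, 1, 2, 3, 4, 5])   # assumed fraction identity
        mean_ribosomes = (dist * ribosomes_per_fraction).sum()
        print("Distribution:", dist.round(2), " mean ribosomes/mRNA:", round(mean_ribosomes, 2))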

  17. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo) resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approx. 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground based observations outlined in this proposal, will reduce the uncertainty in the age estimates to approximately 5%.
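
    A small worked example of the arithmetic linking a parallax to the distance modulus, the quantity whose uncertainty dominates turnoff-age errors; the parallax and its uncertainty below are illustrative, not measurements of any particular cluster:

        # Sketch: converting a parallax measurement into a distance modulus and
        # propagating its uncertainty. Numbers are illustrative placeholders.
        import math

        parallax_mas = 0.130          # hypothetical parallax in milliarcseconds
        sigma_parallax_mas = 0.005    # hypothetical parallax uncertainty

        distance_pc = 1000.0 / parallax_mas
        mu = 5.0 * math.log10(distance_pc) - 5.0          # distance modulus m - M

        # sigma_mu = (5 / ln 10) * (sigma_parallax / parallax) for small fractional errors
        sigma_mu = (5.0 / math.log(10.0)) * (sigma_parallax_mas / parallax_mas)

        print(f"d = {distance_pc:.0f} pc, m - M = {mu:.3f} +/- {sigma_mu:.3f} mag")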

  18. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
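
    The following is only a simplified sketch of the final step, principal components analysis of a positional correlation matrix estimated from an ensemble of aligned structures; the synthetic ensemble and the plain sample correlation used here stand in for the paper's maximum-likelihood superposition and covariance estimation:

        # Sketch: principal components of a positional correlation matrix estimated
        # from an ensemble of aligned structures. The ensemble here is synthetic.
        import numpy as np

        rng = np.random.default_rng(2)
        n_models, n_atoms = 40, 150
        ensemble = rng.normal(size=(n_models, n_atoms, 3))     # aligned coordinates (A)

        flat = ensemble.reshape(n_models, -1)                  # (models, 3*atoms)
        flat -= flat.mean(axis=0)                              # remove the mean structure

        corr = np.corrcoef(flat, rowvar=False)                 # positional correlation matrix
        eigvals, eigvecs = np.linalg.eigh(corr)                # ascending eigenvalues

        # Dominant modes of correlated motion = eigenvectors with largest eigenvalues.
        top_modes = eigvecs[:, ::-1][:, :3]
        explained = eigvals[::-1][:3] / eigvals.sum()
        print("Fraction of correlation captured by top 3 modes:", np.round(explained, 3))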

  19. A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Stamer, Torsten; Inutsuka, Shu-ichiro

    2018-06-01

    We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.

  20. Electronic Timekeeping: North Dakota State University Improves Payroll Processing.

    ERIC Educational Resources Information Center

    Vetter, Ronald J.; And Others

    1993-01-01

    North Dakota State University has adopted automated timekeeping to improve the efficiency and effectiveness of payroll processing. The microcomputer-based system accurately records and computes employee time, tracks labor distribution, accommodates complex labor policies and company pay practices, provides automatic data processing and reporting,…

  1. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
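
    A generic sketch of the pseudo-time subiteration idea on a scalar model problem (not the flow solvers compared in the paper): an implicit physical time step of du/dt = f(u) is converged by relaxing the unsteady residual in pseudo-time:

        # Sketch: pseudo-time subiterations driving an implicit (backward Euler)
        # physical time step of du/dt = f(u) to convergence on a toy model problem.
        import numpy as np

        def f(u):
            return -u**3 + np.sin(u)        # arbitrary nonlinear right-hand side

        def implicit_step(u_n, dt, dtau=0.05, tol=1e-10, max_subiter=500):
            """Advance one physical step by iterating u in pseudo-time until the
            unsteady residual R(u) = (u - u_n)/dt - f(u) is driven to zero."""
            u = u_n.copy()
            for _ in range(max_subiter):
                residual = (u - u_n) / dt - f(u)
                u = u - dtau * residual      # explicit pseudo-time relaxation
                if np.max(np.abs(residual)) < tol:
                    break
            return u

        u = np.array([1.0, 0.5])
        for step in range(10):               # ten physical time steps of size dt
            u = implicit_step(u, dt=0.5)
        print("Solution after 10 implicit steps:", u)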

  2. Accurate pan-specific prediction of peptide-MHC class II binding affinity with improved binding core identification.

    PubMed

    Andreatta, Massimo; Karosiene, Edita; Rasmussen, Michael; Stryhn, Anette; Buus, Søren; Nielsen, Morten

    2015-11-01

    A key event in the generation of a cellular response against malicious organisms through the endocytic pathway is binding of peptidic antigens by major histocompatibility complex class II (MHC class II) molecules. The bound peptide is then presented on the cell surface where it can be recognized by T helper lymphocytes. NetMHCIIpan is a state-of-the-art method for the quantitative prediction of peptide binding to any human or mouse MHC class II molecule of known sequence. In this paper, we describe an updated version of the method with improved peptide binding register identification. Binding register prediction is concerned with determining the minimal core region of nine residues directly in contact with the MHC binding cleft, a crucial piece of information both for the identification and design of CD4(+) T cell antigens. When applied to a set of 51 crystal structures of peptide-MHC complexes with known binding registers, the new method NetMHCIIpan-3.1 significantly outperformed the earlier 3.0 version. We illustrate the impact of accurate binding core identification for the interpretation of T cell cross-reactivity using tetramer double staining with a CMV epitope and its variants mapped to the epitope binding core. NetMHCIIpan is publicly available at http://www.cbs.dtu.dk/services/NetMHCIIpan-3.1 .
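
    As a toy illustration of binding-core identification (the peptide and the scoring rule are hypothetical, and NetMHCIIpan's neural-network scoring is not reproduced), one enumerates every 9-residue window of the peptide and keeps the best-scoring one:

        # Sketch: enumerating candidate 9-residue binding cores of a class II peptide
        # and selecting the best-scoring one under a placeholder scoring function.
        def score_core(core: str) -> float:
            # Hypothetical placeholder score: reward hydrophobic residues at anchor P1.
            return 1.0 if core[0] in "FILMVWY" else 0.0

        def best_binding_core(peptide: str, core_len: int = 9) -> str:
            cores = [peptide[i:i + core_len] for i in range(len(peptide) - core_len + 1)]
            return max(cores, key=score_core)

        print(best_binding_core("GELIGILNAAKVPAD"))   # hypothetical 15-mer CD4 epitope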

  3. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    PubMed

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by 60%-80% reliability of people's preferences for future medical decisions--a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment than by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, it may be more efficient, accurate, and authoritative than is commonly assumed.

  4. Second-order accurate nonoscillatory schemes for scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
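
    A standard construction in the same family, sketched below under the assumption of linear advection with positive speed on a periodic grid (not the specific schemes of the report): a flux-limited, minmod-based second-order upwind scheme that creates no new extrema:

        # Sketch: a minmod-limited second-order upwind scheme for u_t + a u_x = 0.
        import numpy as np

        def limiter(r):
            return np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter phi(r)

        nx, a = 200, 1.0
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        nu = 0.4                                          # Courant number a*dt/dx
        dt = nu * dx / a
        u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)     # square pulse, periodic domain

        for _ in range(250):
            du = np.roll(u, -1) - u                       # u_{i+1} - u_i
            du_up = u - np.roll(u, 1)                     # u_i - u_{i-1}
            denom = np.where(np.abs(du) < 1e-12, 1e-12, du)
            r = du_up / denom                             # smoothness ratio
            flux = a * u + 0.5 * a * (1.0 - nu) * limiter(r) * du   # flux at face i+1/2
            u = u - dt / dx * (flux - np.roll(flux, 1))

        print("min/max after transport:", round(u.min(), 3), round(u.max(), 3))  # no new extrema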

  5. User-initialized active contour segmentation and golden-angle real-time cardiovascular magnetic resonance enable accurate assessment of LV function in patients with sinus rhythm and arrhythmias.

    PubMed

    Contijoch, Francisco; Witschey, Walter R T; Rogers, Kelly; Rears, Hannah; Hansen, Michael; Yushkevich, Paul; Gorman, Joseph; Gorman, Robert C; Han, Yuchi

    2015-05-21

    Data obtained during arrhythmia is retained in real-time cardiovascular magnetic resonance (rt-CMR), but there is limited and inconsistent evidence to show that rt-CMR can accurately assess beat-to-beat variation in left ventricular (LV) function or during an arrhythmia. Multi-slice, short axis cine and real-time golden-angle radial CMR data was collected in 22 clinical patients (18 in sinus rhythm and 4 patients with arrhythmia). A user-initialized active contour segmentation (ACS) software was validated via comparison to manual segmentation on clinically accepted software. For each image in the 2D acquisitions, slice volume was calculated and global LV volumes were estimated via summation across the LV using multiple slices. Real-time imaging data was reconstructed using different image exposure times and frame rates to evaluate the effect of temporal resolution on measured function in each slice via ACS. Finally, global volumetric function of ectopic and non-ectopic beats was measured using ACS in patients with arrhythmias. ACS provides global LV volume measurements that are not significantly different from manual quantification of retrospectively gated cine images in sinus rhythm patients. With an exposure time of 95.2 ms and a frame rate of > 89 frames per second, golden-angle real-time imaging accurately captures hemodynamic function over a range of patient heart rates. In four patients with frequent ectopic contractions, initial quantification of the impact of ectopic beats on hemodynamic function was demonstrated. User-initialized active contours and golden-angle real-time radial CMR can be used to determine time-varying LV function in patients. These methods will be very useful for the assessment of LV function in patients with frequent arrhythmias.
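
    The slice-summation step can be sketched as follows; the segmented areas, slice thickness, and gap are placeholder values, and repeating the sum for each real-time frame yields the beat-to-beat volume curve described above:

        # Sketch: global LV volume by summation of segmented short-axis slice areas
        # (Simpson's method). Areas and slice spacing below are placeholder values.
        import numpy as np

        slice_areas_mm2 = np.array([820.0, 910.0, 960.0, 900.0, 760.0, 540.0, 280.0])
        slice_thickness_mm = 8.0
        slice_gap_mm = 2.0

        spacing_mm = slice_thickness_mm + slice_gap_mm
        lv_volume_ml = slice_areas_mm2.sum() * spacing_mm / 1000.0   # mm^3 -> mL
        print(f"Estimated LV volume: {lv_volume_ml:.1f} mL")

        # Repeating this per frame of the real-time series gives a time-varying volume
        # curve from which beat-to-beat EDV, ESV and ejection fraction follow.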

  6. Annual land cover change mapping using MODIS time series to improve emissions inventories.

    NASA Astrophysics Data System (ADS)

    López Saldaña, G.; Quaife, T. L.; Clifford, D.

    2014-12-01

    Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed due to policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observations (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the best sets of input features that accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing purposes. A prototype land cover product was created for 2006 to 2008. Several machine learning classifiers were tested, as well as different sets of input features ranging from the BRDF parameters to spectral albedo. We will present the results of the time series development and the first exercises in creating the prototype land cover product.
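
    A simplified sketch of the classification step only; the features, labels, and the random-forest choice are placeholders for the BRDF/albedo time-series features and LC2007 training labels used in the study:

        # Sketch: training a classifier on per-pixel time-series features (e.g. monthly
        # BRDF kernel weights or albedo) against land-cover labels. All data are placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(3)
        n_pixels, n_features = 5000, 36        # e.g. 12 months x 3 BRDF kernel weights
        X = rng.normal(size=(n_pixels, n_features))
        y = rng.integers(0, 6, size=n_pixels)  # placeholder land-cover class labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("Overall accuracy on held-out pixels:", accuracy_score(y_te, clf.predict(X_te)))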

  7. Stable and Spectrally Accurate Schemes for the Navier-Stokes Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jia, Jun; Liu, Jie

    2011-01-01

    In this paper, we present an accurate, efficient and stable numerical method for the incompressible Navier-Stokes equations (NSEs). The method is based on (1) an equivalent pressure Poisson equation formulation of the NSE with proper pressure boundary conditions, which facilitates the design of high-order and stable numerical methods, and (2) the Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T), which is very stable, efficient, and of arbitrary order in time. Numerical tests with known exact solutions in three dimensions show that the new method is spectrally accurate in time, and a numerical order of convergence 9 was observed. Two-dimensional computational results of flow past a cylinder and flow in a bifurcated tube are also reported.

  8. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20 times. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  9. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    PubMed Central

    Hassan, Shirin E.

    2012-01-01

    Purpose. The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Methods. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. Results. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Conclusions. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times. PMID:22427593

  10. Stroboscopic Training Enhances Anticipatory Timing.

    PubMed

    Smith, Trevor Q; Mitroff, Stephen R

    The dynamic aspects of sports often place heavy demands on visual processing. As such, an important goal for sports training should be to enhance visual abilities. Recent research has suggested that training in a stroboscopic environment, where visual experiences alternate between visible and obscured, may provide a means of improving attentional and visual abilities. The current study explored whether stroboscopic training could impact anticipatory timing - the ability to predict where a moving stimulus will be at a specific point in time. Anticipatory timing is a critical skill for both sports and non-sports activities, and thus finding training improvements could have broad impacts. Participants completed a pre-training assessment that used a Bassin Anticipation Timer to measure their abilities to accurately predict the timing of a moving visual stimulus. Immediately after this initial assessment, the participants completed training trials, but in one of two conditions. Those in the Control condition proceeded as before with no change. Those in the Strobe condition completed the training trials while wearing specialized eyewear that had lenses that alternated between transparent and opaque (rate of 100ms visible to 150ms opaque). Post-training assessments were administered immediately after training, 10-minutes after training, and 10-days after training. Compared to the Control group, the Strobe group was significantly more accurate immediately after training, was more likely to respond early than to respond late immediately after training and 10 minutes later, and was more consistent in their timing estimates immediately after training and 10 minutes later.

  11. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  12. Determining the best phenological state for accurate mapping of Phragmites australis in wetlands using time series multispectral satellite data

    NASA Astrophysics Data System (ADS)

    Rupasinghe, P. A.; Markle, C. E.; Marcaccio, J. V.; Chow-Fraser, P.

    2017-12-01

    Phragmites australis (European common reed) is a relatively recent invader of wetlands and beaches in Ontario. It can establish large homogenous stands within wetlands and disperse widely throughout the landscape by wind and vehicular traffic. A first step in managing this invasive species includes accurate mapping and quantification of its distribution. This is challenging because Phragmites is distributed over a large spatial extent, which makes the mapping more costly and time-consuming. Here, we used freely available multispectral satellite images taken monthly (cloud-free images as available) for the calendar year to determine the optimum phenological state of Phragmites that would allow it to be accurately identified using remote sensing data. We analyzed time series of Landsat-8 OLI and Sentinel-2 images for Big Creek Wildlife Area, ON, using image classification (Support Vector Machines), Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI). We used field sampling data and a high-resolution image collected using an Unmanned Aerial Vehicle (UAV; 8 cm spatial resolution) as training data and for the validation of the classified images. The accuracy for all land cover classes and for Phragmites alone was low at both the start and end of the calendar year, but reached overall accuracy >85% by mid to late summer. The highest classification accuracies for Landsat-8 OLI were associated with late July and early August imagery. We observed similar trends using the Sentinel-2 images, with higher overall accuracy for all land cover classes and for Phragmites alone from late July to late September. During this period, we found the greatest difference between Phragmites and Typha, commonly confused classes, with respect to near-infrared and shortwave infrared reflectance. Therefore, the unique spectral signature of Phragmites can be attributed to both the level of greenness and factors related to water content in the leaves during late
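
    For reference, the two indices mentioned above are simple band ratios; the sketch below computes them from placeholder surface-reflectance values (NDWI here is the green/NIR formulation):

        # Sketch: the two spectral indices used alongside the SVM classification.
        # Band arrays are placeholders for Sentinel-2/Landsat-8 surface reflectance.
        import numpy as np

        green = np.array([0.06, 0.07])
        red   = np.array([0.05, 0.04])
        nir   = np.array([0.35, 0.42])

        ndvi = (nir - red) / (nir + red)       # greenness: high for dense live canopy
        ndwi = (green - nir) / (green + nir)   # water content / open water indicator
        print("NDVI:", ndvi.round(2), "NDWI:", ndwi.round(2))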

  13. Caffeinated nitric oxide-releasing lozenge improves cycling time trial performance.

    PubMed

    Lee, J; Kim, H T; Solares, G J; Kim, K; Ding, Z; Ivy, J L

    2015-02-01

    Boosting nitric oxide production during exercise by various means has been found to improve exercise performance. We investigated the effects of a nitric oxide releasing lozenge with added caffeine (70 mg) on oxygen consumption during steady-state exercise and cycling time trial performance using a double-blinded randomized, crossover experimental design. 15 moderately trained cyclists (7 females and 8 males) were randomly assigned to ingest the caffeinated nitric oxide lozenge or placebo 5 min before exercise. Oxygen consumption and blood lactate were assessed at rest and at 50%, 65% and 75% maximal oxygen consumption. Exercise performance was assessed by time to complete a simulated 20.15 km cycling time-trial course. No significant treatment effects for oxygen consumption or blood lactate at rest or during steady-state exercise were observed. However, time-trial performance was improved by 2.1% (p<0.01) when participants consumed the nitric oxide lozenge (2,424±69 s) compared to placebo (2,476±78 s) and without a significant difference in rating of perceived exertion. These results suggest that acute supplementation with a caffeinated nitric oxide releasing lozenge may be a practical and effective means of improving aerobic exercise performance. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Improving prehospital trauma care in Rwanda through continuous quality improvement: an interrupted time series analysis.

    PubMed

    Scott, John W; Nyinawankusi, Jeanne D'Arc; Enumah, Samuel; Maine, Rebecca; Uwitonze, Eric; Hu, Yihan; Kabagema, Ignace; Byiringiro, Jean Claude; Riviello, Robert; Jayaraman, Sudha

    2017-07-01

    Injury is a major cause of premature death and disability in East Africa, and high-quality pre-hospital care is essential for optimal trauma outcomes. The Rwandan pre-hospital emergency care service (SAMU) uses an electronic database to evaluate and optimize pre-hospital care through a continuous quality improvement programme (CQIP), beginning March 2014. The SAMU database was used to assess pre-hospital quality metrics including supplementary oxygen for hypoxia (O2), intravenous fluids for hypotension (IVF), cervical collar placement for head injuries (c-collar), and either splinting (splint) or administration of pain medications (pain) for long bone fractures. Targets of >90% were set for each metric and daily team meetings and monthly feedback sessions were implemented to address opportunities for improvement. These five pre-hospital quality metrics were assessed monthly before and after implementation of the CQIP. Met and unmet needs for O2, IVF, and c-collar were combined into a summative monthly SAMU Trauma Quality Scores (STQ score). An interrupted time series linear regression model compared the STQ score during 14 months before the CQIP implementation to the first 14 months after. During the 29-month study period 3,822 patients met study criteria. 1,028 patients needed one or more of the five studied interventions during the study period. All five endpoints had a significant increase between the pre-CQI and post-CQI periods (p<0.05 for all), and all five achieved a post-CQI average of at least 90% completion. The monthly composite STQ scores ranged from 76.5 to 97.9 pre-CQI, but tightened to 86.1-98.7 during the post-CQI period. Interrupted time series analysis of the STQ score showed that CQI programme led to both an immediate improvement of +6.1% (p=0.017) and sustained monthly improvements in care delivery-improving at a rate of 0.7% per month (p=0.028). The SAMU experience demonstrates the utility of a responsive, data-driven quality improvement
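
    The interrupted time series model can be sketched as a segmented regression with an immediate level-change term and a slope-change term at the intervention month; the score series below is simulated, not the SAMU data:

        # Sketch: segmented (interrupted time series) regression of a monthly quality
        # score with a level-change and slope-change term at the intervention month.
        import numpy as np
        import statsmodels.api as sm

        months = np.arange(28)                     # 14 pre- and 14 post-intervention months
        post = (months >= 14).astype(float)        # 1 after the CQI programme starts
        time_since = np.where(post == 1, months - 13, 0.0)

        rng = np.random.default_rng(4)
        score = 85 + 0.1 * months + 6.0 * post + 0.7 * time_since + rng.normal(0, 1.5, 28)

        X = sm.add_constant(np.column_stack([months, post, time_since]))
        fit = sm.OLS(score, X).fit()
        print(fit.params)   # [baseline, pre-trend, immediate jump, change in monthly trend]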

  15. Turning education into action: Impact of a collective social education approach to improve nurses' ability to recognize and accurately assess delirium in hospitalized older patients.

    PubMed

    Travers, Catherine; Henderson, Amanda; Graham, Fred; Beattie, Elizabeth

    2018-03-01

    Although cognitive impairment including dementia and delirium is common in older hospital patients, it is not well recognized or managed by hospital staff, potentially resulting in adverse events. This paper describes, and reports on the impact of, a collective social education approach to improving both nurses' knowledge of, and screening for, delirium. Thirty-four experienced nurses from six hospital wards became Cognition Champions (CogChamps) to lead their wards in a collective social education process about cognitive impairment and the assessment of delirium. At the outset, the CogChamps were provided with comprehensive education about dementia and delirium from a multidisciplinary team of clinicians. Their knowledge was assessed to ascertain they had the requisite understanding to engage in education as a collective social process, namely, with each other and their local teams. Following this, they developed ward-specific Action Plans in collaboration with their teams aimed at educating and evaluating ward nurses' ability to accurately assess and care for patients with delirium. The plans were implemented over five months. The broader nursing teams' knowledge was assessed, together with their ability to accurately assess patients for delirium. Each ward implemented their Action Plan to varying degrees and key achievements included the education of a majority of ward nurses about delirium and the certification of the majority as competent to assess patients for delirium using the Confusion Assessment Method. Two wards collected pre- and post-audit data that demonstrated a substantial improvement in delirium screening rates. The education process led by CogChamps and supported by educators and clinical experts provides an example of successfully educating nurses about delirium and improving screening rates of patients for delirium. ACTRN 12617000563369. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    NASA Astrophysics Data System (ADS)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  17. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
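
    A schematic comparison of the two estimation routes (all rates, the activation energy, and the specification limit are illustrative placeholders): fitting the early, linear degradant growth directly at the storage temperature versus extrapolating an accelerated-condition rate with the Arrhenius equation:

        # Sketch: two ways to estimate shelf life (time for a degradant to reach a 0.2%
        # specification). Rates and activation energy are illustrative placeholders.
        import numpy as np

        SPEC_LIMIT = 0.20                      # % degradant allowed at end of shelf life

        # (a) Direct initial-rate method at the storage temperature: fit the early,
        # linear growth of the degradant measured by LC/MS.
        days = np.array([0, 7, 14, 28, 56])
        degradant_pct = np.array([0.000, 0.004, 0.009, 0.017, 0.035])
        rate_per_day = np.polyfit(days, degradant_pct, 1)[0]
        shelf_life_direct = SPEC_LIMIT / rate_per_day

        # (b) Arrhenius extrapolation from an accelerated condition (e.g. 40 C) to 25 C.
        R = 8.314                              # J / (mol K)
        Ea = 95e3                              # assumed activation energy, J/mol
        rate_40C = 0.0060                      # %/day measured at 313.15 K
        rate_25C = rate_40C * np.exp(-Ea / R * (1 / 298.15 - 1 / 313.15))
        shelf_life_arrhenius = SPEC_LIMIT / rate_25C

        print(f"Direct initial-rate estimate:    {shelf_life_direct:.0f} days")
        print(f"Arrhenius-extrapolated estimate: {shelf_life_arrhenius:.0f} days")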

  18. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumptions on the airport surface. Currently NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers to make gate pushback decisions and improve the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.

  19. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. For, in order to understand how a cell distinguishes between input patterns we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
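
    A generic sketch of the balanced-truncation step on a small, randomly generated stable linear system; the quasi-active cable models and the frequency-domain H2 method of the paper are not reproduced here:

        # Sketch: square-root balanced truncation of a generic stable linear system
        # (dx/dt = A x + B u, y = C x), keeping the states with the largest Hankel
        # singular values.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        rng = np.random.default_rng(5)
        n, r = 8, 3                                          # full and reduced dimensions
        K = rng.normal(size=(n, n))
        A = (K - K.T) - np.diag(rng.uniform(0.5, 2.0, n))    # Hurwitz by construction
        B = rng.normal(size=(n, 1))
        C = rng.normal(size=(1, n))

        Wc = solve_continuous_lyapunov(A, -B @ B.T)          # controllability Gramian
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)        # observability Gramian

        Lc = cholesky(Wc, lower=True)
        Lo = cholesky(Wo, lower=True)
        U, s, Vt = svd(Lo.T @ Lc)                            # Hankel singular values in s

        T = Lc @ Vt.T @ np.diag(s ** -0.5)                   # balancing transformation
        Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
        Ar, Br, Cr = (Tinv @ A @ T)[:r, :r], (Tinv @ B)[:r], (C @ T)[:, :r]

        # Compare full and reduced transfer functions at a test frequency.
        w = 1.0j * 2.0
        H_full = (C @ np.linalg.solve(w * np.eye(n) - A, B)).item()
        H_red = (Cr @ np.linalg.solve(w * np.eye(r) - Ar, Br)).item()
        print("Hankel singular values:", s.round(4))
        print(f"|H_full - H_reduced| at w=2: {abs(H_full - H_red):.2e}")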

  20. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
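
    The finite-buffer truncation idea can be illustrated on the simplest possible case, a single birth-death species whose copy number is capped by one buffer; this is only a toy version of the state-space truncation, not the multi-finite-buffer ACME algorithm:

        # Sketch: a finite-buffer truncation of the chemical master equation for a
        # single birth-death species (0 -> X at rate k_birth, X -> 0 at rate k_death * n),
        # solved for its steady-state probability distribution.
        import numpy as np

        k_birth, k_death, buffer_size = 10.0, 1.0, 60      # buffer caps the copy number
        n_states = buffer_size + 1

        A = np.zeros((n_states, n_states))                 # generator (rate) matrix, dp/dt = A p
        for n in range(n_states):
            if n < buffer_size:
                A[n + 1, n] += k_birth                     # n -> n+1
                A[n, n] -= k_birth
            if n > 0:
                A[n - 1, n] += k_death * n                 # n -> n-1
                A[n, n] -= k_death * n

        # Steady state: A p = 0 with probabilities summing to one.
        M = np.vstack([A, np.ones(n_states)])
        b = np.zeros(n_states + 1); b[-1] = 1.0
        p, *_ = np.linalg.lstsq(M, b, rcond=None)
        print("Mean copy number:", (np.arange(n_states) * p).sum())   # ~ k_birth / k_death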

  1. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  2. Accurate chemical master equation solution using multi-finite buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  3. High-precision two-way optic-fiber time transfer using an improved time code.

    PubMed

    Wu, Guiling; Hu, Liang; Zhang, Hao; Chen, Jianping

    2014-11-01

    We present a novel high-precision two-way optic-fiber time transfer scheme. The Inter-Range Instrumentation Group (IRIG-B) time code is modified by increasing the bit rate and defining new fields. The modified time code can be transmitted directly using commercial optical transceivers and is able to efficiently suppress the effect of Rayleigh backscattering in the optical fiber. A dedicated codec (encoder and decoder) with low delay fluctuation is developed. The synchronization issue is addressed by adopting a mask technique and a combinational logic circuit. Its delay fluctuation is less than 27 ps in terms of the standard deviation. The two-way optic-fiber time transfer using the improved codec scheme is verified experimentally over 2 m to 100 km fiber links. The results show that the stability over the 100 km fiber link is always less than 35 ps, with a minimum value of about 2 ps at an averaging time of around 1000 s. The uncertainty of the time difference induced by chromatic dispersion over 100 km is less than 22 ps.
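
    The scheme above is a two-way method, so the generic two-way time transfer arithmetic applies: with a reciprocal (symmetric) path, the half-difference of the two stations' interval measurements gives the clock offset and the half-sum gives the path delay. A minimal sketch with made-up numbers, not values from the paper:

        # Generic two-way time transfer arithmetic (a sketch; the numbers are
        # invented). Each station measures the interval between its local 1 PPS and
        # the time code received from the other station; with a symmetric fiber path
        # the propagation delay cancels out of the offset estimate.
        def two_way_offset(meas_a_ns, meas_b_ns):
            """Return (clock offset A-B, one-way path delay), both in ns,
            assuming a symmetric path and identical equipment delays."""
            offset = (meas_a_ns - meas_b_ns) / 2.0
            delay = (meas_a_ns + meas_b_ns) / 2.0
            return offset, delay

        # Example: ~100 km of fiber (about 490 us one way) and a 3.2 ns clock offset.
        offset, delay = two_way_offset(490_003.2, 489_996.8)
        print(f"clock offset A-B: {offset:.1f} ns, path delay: {delay:.1f} ns")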

  4. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    NASA Astrophysics Data System (ADS)

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. In this way, precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, they aim to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP models with enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of cloud characteristics in the model, but also special events such as Saharan dust over Germany and the solar eclipse in 2015 are treated, and their effect on solar power is accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  5. Robust High-Resolution Cloth Using Parallelism, History-Based Collisions and Accurate Friction

    PubMed Central

    Selle, Andrew; Su, Jonathan; Irving, Geoffrey; Fedkiw, Ronald

    2015-01-01

    In this paper we simulate high resolution cloth consisting of up to 2 million triangles which allows us to achieve highly detailed folds and wrinkles. Since the level of detail is also influenced by object collision and self collision, we propose a more accurate model for cloth-object friction. We also propose a robust history-based repulsion/collision framework where repulsions are treated accurately and efficiently on a per time step basis. Distributed memory parallelism is used for both time evolution and collisions and we specifically address Gauss-Seidel ordering of repulsion/collision response. This algorithm is demonstrated by several high-resolution and high-fidelity simulations. PMID:19147895

  6. Accurate lithography simulation model based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is formulated for fast calculation. To obtain an accurate compact resist model, it is necessary to choose a complicated non-linear model function, but it is difficult to do so manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), a deep learning technique. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
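
    A minimal sketch of the general idea of replacing a hand-tuned compact-model function with a learned one (the architecture, patch size, and framework here are assumptions, not the authors' model):

        # A small CNN regressor mapping an aerial-image patch to a predicted resist
        # CD value; an illustrative stand-in for a learned compact resist model.
        import torch
        import torch.nn as nn

        class CompactResistCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 1)   # regression output: predicted CD

            def forward(self, x):              # x: (batch, 1, H, W) aerial-image patches
                return self.head(self.features(x).flatten(1))

        model = CompactResistCNN()
        patch = torch.randn(4, 1, 64, 64)      # dummy input patches
        print(model(patch).shape)              # torch.Size([4, 1])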

  7. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD): improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R² values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.
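
    The quantity underlying any free energy perturbation calculation is the Zwanzig exponential average, ΔG = -kT ln⟨exp(-ΔU/kT)⟩. The sketch below evaluates it on synthetic energy-difference samples; this is the textbook estimator, not the FEP+ protocol, and all numbers are illustrative:

        # Textbook free energy perturbation (Zwanzig) estimator on synthetic data.
        #   dG = -kT * ln < exp(-(U_B - U_A)/kT) >_A
        import numpy as np

        kT = 0.593                                          # kcal/mol at ~298 K
        rng = np.random.default_rng(0)
        dU = rng.normal(loc=1.0, scale=0.5, size=100_000)   # synthetic U_B - U_A samples (kcal/mol)

        dG = -kT * np.log(np.mean(np.exp(-dU / kT)))
        print(f"estimated dG = {dG:.2f} kcal/mol")
        # For Gaussian dU the exact result is mean - var/(2*kT) = 1.0 - 0.25/(2*0.593) ~ 0.79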

  8. Improved test of time dilation in special relativity.

    PubMed

    Saathoff, G; Karpuk, S; Eisenbarth, U; Huber, G; Krohn, S; Muñoz Horta, R; Reinhardt, S; Schwalm, D; Wolf, A; Gwinner, G

    2003-11-07

    An improved test of time dilation in special relativity has been performed using laser spectroscopy on fast ions at the heavy-ion storage ring TSR in Heidelberg. The Doppler-shifted frequencies of a two-level transition in ⁷Li⁺ ions at v = 0.064c have been measured in the forward and backward directions to an accuracy of Δν/ν = 1 × 10⁻⁹ using collinear saturation spectroscopy. The result confirms the relativistic Doppler formula and sets a new limit of 2.2 × 10⁻⁷ for deviations from the time dilation factor γ_SR = (1 − v²/c²)^(−1/2).
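
    A quick numerical restatement of the consistency relation probed by such Ives-Stilwell-type experiments (a sketch, not the experiment's analysis code): with the relativistic Doppler formula, the product of the resonant laser frequencies parallel and antiparallel to the ion beam equals the squared rest-frame transition frequency.

        # Relativistic Doppler consistency check at the beam speed quoted above.
        import math

        beta = 0.064                       # ion speed v/c from the abstract
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        nu0 = 1.0                          # rest-frame transition frequency (arbitrary units)

        nu_parallel = nu0 / (gamma * (1.0 - beta))       # resonant laser copropagating with the beam
        nu_antiparallel = nu0 / (gamma * (1.0 + beta))   # resonant laser counterpropagating

        print(gamma)                                     # ~1.00205
        print(nu_parallel * nu_antiparallel / nu0**2)    # = 1 exactly if time dilation holds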

  9. Improving the depth sensitivity of time-resolved measurements by extracting the distribution of times-of-flight

    PubMed Central

    Diop, Mamadou; St. Lawrence, Keith

    2013-01-01

    Time-resolved (TR) techniques provide a means of discriminating photons based on their time-of-flight. Since early arriving photons have a lower probability of probing deeper tissue than photons with long time-of-flight, time-windowing has been suggested as a method for improving depth sensitivity. However, TR measurements also contain instrument contributions (instrument-response-function, IRF), which cause temporal broadening of the measured temporal point-spread function (TPSF) compared to the true distribution of times-of-flight (DTOF). The purpose of this study was to investigate the influence of the IRF on the depth sensitivity of TR measurements. TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved using a stable algorithm to recover the DTOFs. The microscopic Beer-Lambert law was applied to the TPSFs and DTOFs to obtain depth-resolved absorption changes. In contrast to the DTOF, the latest part of the TPSF was not the most sensitive to absorption changes in the lower layer, which was confirmed by computer simulations. The improved depth sensitivity of the DTOF was illustrated in a pig model of the adult human head. Specifically, it was shown that dynamic absorption changes obtained from the late part of the DTOFs recovered from TPSFs acquired by probes positioned on the scalp were similar to absorption changes measured directly on the brain. These results collectively demonstrate that this method improves the depth sensitivity of TR measurements by removing the effects of the IRF. PMID:23504445
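
    The depth-resolved step relies on the microscopic Beer-Lambert law applied per time-of-flight bin. A small sketch on a synthetic DTOF (not the authors' deconvolution code; the refractive index and absorption values are assumed):

        # Microscopic Beer-Lambert law per time-of-flight bin: photons detected with
        # time-of-flight t have traveled roughly v*t, so an absorption change is
        # recovered per bin as dmu_a(t) = -ln(N(t)/N0(t)) / (v*t).
        import numpy as np

        v = 3e10 / 1.4                          # speed of light in tissue, cm/s (n ~ 1.4 assumed)
        t = np.linspace(0.5e-9, 4e-9, 50)       # times of flight, s
        N0 = np.exp(-t / 1.2e-9)                # baseline DTOF (synthetic shape)
        true_dmua = 0.01                        # homogeneous absorption increase, 1/cm
        N = N0 * np.exp(-true_dmua * v * t)     # DTOF after the absorption change

        dmua = -np.log(N / N0) / (v * t)        # recovered change per time bin
        print(dmua.round(4))                    # every bin recovers 0.01 /cm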

  10. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    PubMed

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems by bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.

  11. Combining Instruction Prefetching with Partial Cache Locking to Improve WCET in Real-Time Systems

    PubMed Central

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems by bridging the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis because of their unpredictable behavior. Modern embedded processors often provide a locking mechanism to improve the timing predictability of the instruction cache, but locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have previously proposed to improve worst-case cache performance and, in turn, worst-case execution time. Evaluations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking. PMID:24386133

  12. Real-time, two-way interaction during ST-segment elevation myocardial infarction management improves door-to-balloon times.

    PubMed

    Sardi, Gabriel L; Loh, Joshua P; Torguson, Rebecca; Satler, Lowell F; Waksman, Ron

    2014-01-01

    The study aimed to determine if utilization of the CodeHeart application (CHap) reduces door-to-balloon (DTB) times of ST-segment elevation myocardial infarction (STEMI) patients. A pre-hospital electrocardiogram improves the management of patients with STEMI. Current telecommunication systems do not permit real-time interaction with the initial care providers. Our institution developed a novel telecommunications system based on a software application that permits real-time, two-way video and voice interaction over a secured network. All STEMI system activations after implementation of the CHap were prospectively entered into a database. Consecutive CHap activations were compared to routine activations as controls, during the same time period. A total of 470 STEMI system activations occurred; CHap was used in 83 cases (17.7%). DTB time was reduced by the use of CHap when compared to controls (CHap 103 minutes, 95% CI [87.0-118.3] vs. standard 149 minutes, 95% CI [134.0-164.8], p<0.0001), as was first call-to-balloon time (CHap 70 minutes, 95% CI [60.8-79.5] vs. standard 92 minutes, 95% CI [85.8-98.9], p=0.0002). The percentage of 'true positive' catheterization laboratory activations was nominally higher with the use of CHap, although this did not reach statistical significance [CHap 47/83 (56.6%) vs. routine 178/387 (45.9%), p=0.103]. The implementation of a two-way telecommunications system allowing real-time interactions between interventional cardiologists and referring practitioners improves overall DTB time. In addition, it has the potential to decrease the frequency of false activations, thereby improving the cost efficiency of a network's STEMI system. Copyright © 2014. Published by Elsevier Inc.

  13. Predictors of cancer-related pain improvement over time.

    PubMed

    Wang, Hsiao-Lan; Kroenke, Kurt; Wu, Jingwei; Tu, Wanzhu; Theobald, Dale; Rawl, Susan M

    2012-01-01

    To determine the predictors of pain improvement among patients being treated for cancer-related pain over 12 months. A secondary analysis of the telephone care Indiana Cancer Pain and Depression trial was performed. Patients (n = 274) were interviewed at baseline and after 1, 3, 6, and 12 months. Pain improvement outcomes included both a continuous measure (Brief Pain Inventory score) and a categorical measure (pain improved versus pain not improved). Predictor variables included change in depression, age, sex, race, marital status, socioeconomic disadvantage, medical comorbidity, type of cancer, and phase of cancer. Multivariable repeated measures were conducted, adjusting for intervention group assignment, baseline pain severity, and time in months since baseline assessment. Factors significantly predicting both continuous and categorical pain improvement included participating in the intervention group (β = -0.92, p < .001, odds ratio [OR] = 2.53, 95% confidence interval [CI] = 1.65-3.89), greater improvement in depression (β = -0.31, p = .003, OR = 1.84, 95% CI = 1.35-2.51), higher socioeconomic status (Socioeconomic Disadvantage index; β = 0.25, p = .034; OR = 0.73, 95% CI = 0.56-0.94), and fewer comorbid conditions (β = 0.20, p = .002; OR = 0.84, 95% CI = 0.73-0.96). Patients with more severe pain at baseline or with recurrent or progressive cancer were less likely to experience continuous or categorical pain improvement, respectively. Effective management of depression and comorbid conditions along with improvement of social services could be critical components of a comprehensive pain management. Patients with more severe pain or with recurrent or progressive cancers may require closer monitoring and adequate treatment of pain. Trial Registration clinicaltrials.gov Identifier: NCT00313573.

  14. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  15. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  16. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints, or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of these algorithms can partly tolerate the irregularity. Therefore, accurate interpolation to provide the necessary complete data is a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT)-based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize the original signal with high accuracy while being at least halved in size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and the curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
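
    The POCS iteration alternates between thresholding in a sparsifying transform domain and re-inserting the observed samples. The 1-D sketch below uses a plain Fourier transform in place of the curvelet transform and a fixed threshold, so it only illustrates the iteration, not the authors' PFK-domain implementation:

        # POCS-style interpolation sketch in 1D with Fourier-domain hard thresholding.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        x = np.arange(n)
        true = np.sin(2 * np.pi * 5 * x / n) + 0.5 * np.sin(2 * np.pi * 17 * x / n)

        mask = rng.random(n) > 0.4                 # keep ~60% of the "traces"
        obs = true * mask

        rec = obs.copy()
        for it in range(100):
            coeff = np.fft.fft(rec)
            thresh = np.quantile(np.abs(coeff), 0.90)   # keep the strongest 10% of coefficients
            coeff[np.abs(coeff) < thresh] = 0.0
            rec = np.real(np.fft.ifft(coeff))
            rec[mask] = obs[mask]                        # re-insert observed samples (POCS step)

        print("RMS reconstruction error:", np.sqrt(np.mean((rec - true) ** 2)))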

  17. Improved Surface Parameter Retrievals using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John

    2008-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology, longwave CO2 channel observations in the spectral region 700 cm⁻¹ to 750 cm⁻¹ are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm⁻¹ to 2395 cm⁻¹ are used for temperature sounding purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface skin temperature is also derived using only shortwave window channels. This produces improved surface parameters, both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer temperatures and retrieved total O3 burden.

  18. Accurate age estimation in small-scale societies

    PubMed Central

    Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.

    2017-01-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
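
    A stripped-down sketch of Gibbs sampling ages under a youngest-to-oldest ranking with per-individual age ranges (illustrative only; the published model is richer and the ranges below are invented):

        # Each Gibbs sweep resamples one age uniformly between its neighbours'
        # current ages, intersected with that individual's own age range, so every
        # sample respects both the rank order and the fieldwork age bounds.
        import numpy as np

        rng = np.random.default_rng(0)
        ranges = np.array([[5, 15], [10, 25], [20, 35], [30, 50]], float)  # ranked youngest -> oldest
        ages = ranges.mean(axis=1)                                          # feasible starting point

        samples = []
        for sweep in range(5000):
            for i in range(len(ages)):
                lo = ranges[i, 0] if i == 0 else max(ranges[i, 0], ages[i - 1])
                hi = ranges[i, 1] if i == len(ages) - 1 else min(ranges[i, 1], ages[i + 1])
                ages[i] = rng.uniform(lo, hi)
            if sweep >= 1000:                      # discard burn-in
                samples.append(ages.copy())

        post = np.array(samples)
        print("posterior mean ages:", post.mean(axis=0).round(1))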

  19. Accurate age estimation in small-scale societies.

    PubMed

    Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G

    2017-08-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.

  20. Time-Accurate Computational Fluid Dynamics Simulation of a Pair of Moving Solid Rocket Boosters

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Williams, Brandon R.

    2011-01-01

    Since the Columbia accident, the threat to the Shuttle launch vehicle from debris during the liftoff timeframe has been assessed by the Liftoff Debris Team at NASA/MSFC. In addition to engineering methods of analysis, CFD-generated flow fields during the liftoff timeframe have been used in conjunction with 3-DOF debris transport methods to predict the motion of liftoff debris. Early models made use of a quasi-steady flow field approximation with the vehicle positioned at a fixed location relative to the ground; however, a moving overset mesh capability has recently been developed for the Loci/CHEM CFD software which enables higher-fidelity simulation of the Shuttle transient plume startup and liftoff environment. The present work details the simulation of the launch pad and mobile launch platform (MLP) with truncated solid rocket boosters (SRBs) moving in a prescribed liftoff trajectory derived from Shuttle flight measurements. Using Loci/CHEM, time-accurate RANS and hybrid RANS/LES simulations were performed for the timeframe T0+0 to T0+3.5 seconds, which consists of SRB startup to a vehicle altitude of approximately 90 feet above the MLP. Analysis of the transient flowfield focuses on the evolution of the SRB plumes in the MLP plume holes and the flame trench, impingement on the flame deflector, and especially impingement on the MLP deck, which results in upward flow that acts as a transport mechanism for debris. The results show excellent qualitative agreement with the visual record from past Shuttle flights, and comparisons to pressure measurements in the flame trench and on the MLP provide confidence in these simulation capabilities.

  1. Review of current GPS methodologies for producing accurate time series and their error sources

    NASA Astrophysics Data System (ADS)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand on detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automatize all phases from analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
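
    The functional model mentioned above is typically an offset, a linear rate, and annual/semiannual harmonics fitted by least squares; the stochastic (noise) model then determines how trustworthy the estimated rate is. A minimal sketch on synthetic daily positions, using white noise only, which is precisely the simplification the review warns against:

        # Least-squares fit of a trend plus annual and semiannual terms to a
        # synthetic daily GPS position series (illustrative values throughout).
        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(0, 10, 1 / 365.25)                 # 10 years of daily solutions, in years
        truth = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + 1.0 * np.cos(4 * np.pi * t)
        obs = truth + rng.normal(0, 2.0, t.size)         # white noise only, for simplicity

        # Design matrix: offset, rate, annual sin/cos, semiannual sin/cos
        A = np.column_stack([np.ones_like(t), t,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                             np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
        print(f"estimated rate: {coef[1]:.2f} mm/yr (true 3.00)")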

  2. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    New computational software enables users to generate a large quantity of highly accurate EP (edge placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy at a faster model turnaround time.

  3. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  4. Improved real-time dynamics from imaginary frequency lattice simulations

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data needs to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular the range between ω0 and ω1 = 2πT, which is most relevant for the inverse problem may be more highly resolved. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  5. Judging the Probability of Hypotheses Versus the Impact of Evidence: Which Form of Inductive Inference Is More Accurate and Time-Consistent?

    PubMed

    Tentori, Katya; Chater, Nick; Crupi, Vincenzo

    2016-04-01

    Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.

  6. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    U.S. Army Research Laboratory report ARL-TR-6761, January 2014: Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences Directorate. Approved for public release.

  7. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
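
    The simple theory referred to above is the channel spectrum of an unbalanced interferometer, I(λ) ∝ 1 + cos(2π·OPD/λ), whose fringe maxima fall at λ = OPD/m for integer m. A short sketch with an assumed optical path difference:

        # Predicted channel spectrum of white light through an unbalanced Michelson
        # interferometer (the OPD value is assumed, not from the article).
        import numpy as np

        opd_nm = 50_000.0                                  # optical path difference, nm (assumed)
        wavelengths = np.linspace(400.0, 800.0, 2000)      # nm
        spectrum = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_nm / wavelengths))

        # Fringe maxima occur where opd = m * lambda for integer m, i.e. lambda = opd / m.
        m = np.arange(int(opd_nm / 800) + 1, int(opd_nm / 400) + 1)
        peaks = opd_nm / m
        print(f"{peaks.size} predicted fringe peaks between 400 and 800 nm")
        print("first few peak wavelengths (nm):", peaks[:3].round(2))
        print("predicted modulation range:", spectrum.min().round(3), "to", spectrum.max().round(3))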

  8. Simple tunnel diode circuit for accurate zero crossing timing

    NASA Technical Reports Server (NTRS)

    Metz, A. J.

    1969-01-01

    Tunnel diode circuit, capable of timing the zero crossing point of bipolar pulses, provides effective design for a fast crossing detector. It combines a nonlinear load line with the diode to detect the zero crossing of a wide range of input waveshapes.

  9. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  10. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Zhe; Rey, Daniel; Ye, Jingxin

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. Here, we show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  11. Improving Interstellar Medium Mitigation in Millisecond Pulsar Timing Models for Gravitational Wave Detection Sensitivity

    NASA Astrophysics Data System (ADS)

    Wilson, Robert C.

    2018-01-01

    This study aims to increase the sensitivity of pulsar timing arrays (PTAs) used by astronomers of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) to detect gravitational waves (GWs). Millisecond pulsars with many epochs of observations will be used to determine if dispersive, frequency-dependent pulse time-of-arrival (TOA) delays caused by the interstellar medium (ISM) can be more accurately predicted over numerous frequency channels. This project will contribute to the ongoing work to detect low-frequency GWs using PTAs. Data used for this study will be from both the 110m telescope at the Green Bank Observatory in West Virginia and the 305m telescope at the Arecibo Observatory in Puerto Rico.
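
    The dominant dispersive ISM delay follows the standard cold-plasma formula Δt = K·DM/f², so TOAs at low frequencies lag those at high frequencies by a DM-dependent amount. A small sketch with an illustrative dispersion measure (not NANOGrav's timing pipeline):

        # Cold-plasma dispersion delay across observing frequencies.
        #   dt [ms] = K * DM / f^2, with DM in pc cm^-3 and f in GHz.
        import numpy as np

        K_ms = 4.148808                      # dispersion constant, ms GHz^2 cm^3 / pc
        dm = 15.0                            # dispersion measure, pc cm^-3 (illustrative)
        freqs_ghz = np.array([0.8, 1.0, 1.4, 2.0])

        delays_ms = K_ms * dm / freqs_ghz**2
        print(dict(zip(freqs_ghz.tolist(), delays_ms.round(3).tolist())))
        # roughly 97 ms at 800 MHz versus about 16 ms at 2 GHz for this DM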

  12. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Analytical standards can be expensive to purchase or unobtainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time-efficient and cost-saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may affect the decision to trigger a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.

  14. Monitoring universal protocol compliance through real-time clandestine observation by medical students results in performance improvement.

    PubMed

    Logan, Catherine A; Cressey, Brienne D; Wu, Roger Y; Janicki, Adam J; Chen, Cyril X; Bolourchi, Meena L; Hodnett, Jessica L; Stratigis, John D; Mackey, William C; Fairchild, David G

    2012-01-01

    To measure universal protocol compliance through real-time, clandestine observation by medical students compared with chart audit reviews, and to give medical students the opportunity to become aware of the importance of medical errors and safety initiatives. With endorsement from Tufts Medical Center's (TMC's) Chief Medical Officer and Surgeon-in-Chief, 8 medical students performed clandestine observation audits of 98 cases from April to August 2009. A compliance checklist was based on TMC's presurgical checklist. Our initial results led to interventions to improve our universal protocol procedures, including modifications to the operating room white board and presurgical checklist, and specific feedback to surgical departments. One year later, 6 medical students performed observations of 100 cases from June to August 2010. The setting was Tufts Medical Center, Boston, Massachusetts, an academic medical center and the principal teaching hospital for Tufts University School of Medicine. An operating room coordinator placed the medical students into 1 of our 25 operating rooms with students entering under the premise of observing the anesthesiologist for clinical education. The observations were performed Monday to Friday between 7 am and 4 pm. Although observations were not randomized, no single service or type of surgery was targeted for observation. A broad range of departments was observed. In 8.2% of cases, the surgical site was unmarked. A Time Out occurred in 89.7% of cases. The entire surgical team was attentive during the time out in 82% of cases. The presurgical checklist was incomplete before incision in 13 cases. Images were displayed in 82% of cases. The operating room "white board" was filled out completely in 49% of cases. Team introductions occurred in 13 cases. One year later, compliance increased in all Universal Protocol dimensions. Direct, real-time observation by medical students provides an accurate and granular assessment of compliance with the Universal Protocol.

  15. Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold

    2017-03-01

    Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, virotherapy, or immunotherapy. This therapy needs faster, cheaper, and easier-to-handle quality assurance tools for therapy devices, as well as means to verify treatment plans and to perform dosimetry; the current lack of such tools limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, together with a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by improving filtering algorithms and the interpolation of the sampled diagnostic ultrasound data. The echo-time-shift method might thus be a comparatively accurate, fast, and affordable method for laboratory and clinical quality control.
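
    The frame-to-frame analysis mentioned above amounts to estimating, window by window, the time shift between successive RF echo frames; the temperature change is then proportional to the axial gradient of that shift through a tissue-dependent calibration constant. A sketch on synthetic RF lines (not the authors' algorithm; all parameters are assumed):

        # Frame-to-frame echo time-shift estimation by windowed cross-correlation.
        import numpy as np

        fs = 40e6                                   # RF sampling rate, Hz (assumed)
        t = np.arange(0, 50e-6, 1 / fs)
        rng = np.random.default_rng(3)
        rf0 = rng.normal(size=t.size)               # baseline RF line (speckle-like)
        shift_samples = 3
        rf1 = np.roll(rf0, shift_samples)           # "heated" frame: echoes arrive later

        win = 256
        lags = np.arange(-10, 11)
        est = []
        for start in range(20, t.size - win - 20, win):   # stay away from the array edges
            seg0 = rf0[start:start + win]
            corr = [np.dot(seg0, rf1[start + lag:start + lag + win]) for lag in lags]
            est.append(lags[int(np.argmax(corr))])

        print("estimated shift (samples) per window:", est[:5], "... true:", shift_samples)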

  16. Real-time feedback to improve gait in children with cerebral palsy.

    PubMed

    van Gelder, Linda; Booth, Adam T C; van de Port, Ingrid; Buizer, Annemieke I; Harlaar, Jaap; van der Krogt, Marjolein M

    2017-02-01

    Real-time feedback may be useful for enhancing information gained from clinical gait analysis of children with cerebral palsy (CP). It may also be effective in functional gait training, however, it is not known if children with CP can adapt gait in response to real-time feedback of kinematic parameters. Sixteen children with cerebral palsy (age 6-16; GMFCS I-III), walking with a flexed-knee gait pattern, walked on an instrumented treadmill with virtual reality in three conditions: regular walking without feedback (NF), feedback on hip angle (FH) and feedback on knee angle (FK). Clinically relevant gait parameters were calculated and the gait profile score (GPS) was used as a measure of overall gait changes between conditions. All children, except one, were able to improve hip and/or knee extension during gait in response to feedback, with nine achieving a clinically relevant improvement. Peak hip extension improved significantly by 5.1±5.9° (NF: 8.9±12.8°, FH: 3.8±10.4°, p=0.01). Peak knee extension improved significantly by 7.7±7.1° (NF: 22.2±12.0°, FK: 14.5±12.7°, p<0.01). GPS did not change between conditions due to increased deviations in other gait parameters. Responders to feedback were shown to have worse initial gait as measured by GPS (p=0.005) and functional selectivity score (p=0.049). In conclusion, ambulatory children with CP show adaptability in gait and are able to respond to real-time feedback, resulting in significant and clinically relevant improvements in peak hip and knee extension. These findings show the potential of real-time feedback as a tool for functional gait training and advanced gait analysis in CP. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Engaging Frontline Leaders and Staff in Real-Time Improvement.

    PubMed

    Phillips, Jennifer; Hebish, Linda J; Mann, Sharon; Ching, Joan M; Blackmore, C Craig

    2016-04-01

    The relationship of staff satisfaction and engagement to organizational success, along with the integral influence of frontline managers on this dimension, is well established in health care and other industries. To specifically address staff engagement, Virginia Mason Medical Center, an integrated, single-hospital health system, developed an approach that involved leaders, through the daily use of standard work for leaders, as well as staff, through a Lean-inspired staff idea system. Kaizen Promotion Office (KPO) staff members established three guiding principles: (1) Staff engagement begins with leader engagement; (2) Integrate daily improvement (kaizen) as a habitual way of life, not as an add-on; and (3) Create an environment in which staff feel psychologically safe and valued. Two design elements, Standard Work for Leaders (SWL) and Everyday Lean Ideas (ELIs), were implemented. For the emergency department (ED), an early adopter of the staff engagement work, the challenge was to apply the guiding principles to improve staff engagement while improving quality and patient and staff satisfaction, even as patient volumes were increasing. Daily huddles for the KPO staff members and weekly leader rounds are used to elicit staff ideas and foster ELIs in real time. Overall progress to date has been tracked in terms of staff satisfaction surveys, voluntary staff turnover, adoption of SWL, and testing and implementation of staff ideas. For example, voluntary turnover of ED staff decreased from 14.6% in 2011 to 7.5% in 2012, and 2.0% in 2013. Organizationwide, at least 800 staff ideas are in motion at any given time, with finished ones posted in an idea supermarket website. A leadership and staff engagement approach that focuses on SWL and on capturing staff ideas for daily problem solving and improvement can contribute to organization success and improve the quality of health care delivery.

  18. Comparing Resident Self-Report to Chart Audits for Quality Improvement Projects: Accurate Reflection or Cherry-Picking?

    PubMed Central

    Kuperman, Ethan F.; Tobin, Kristen; Kraschnewski, Jennifer L.

    2014-01-01

    Background Resident engagement in quality improvement is a requirement for graduate medical education, but the optimal means of instruction and evaluation of resident progress remain unknown. Objective To determine the accuracy of self-reported chart audits in measuring resident adherence to primary care clinical practice guidelines. Methods During the 2010–2011 academic year, second- and third-year internal medicine residents at a single, university hospital–based program performed chart audits on 10 patients from their primary care clinic to determine adherence to 16 US Preventive Services Task Force primary care guidelines. We compared residents' responses to independent audits of randomly selected patient charts by a single external reviewer. Results Self-reported data were collected by 18 second-year and 15 third-year residents for 330 patients. Independently, 70 patient charts were randomly selected for review by an external auditor. Overall guideline compliance was significantly higher on self-reported audits compared to external audits (82% versus 68%, P < .001). Of 16 guidelines, external audits found significantly lower rates of adherence for 5 (tetanus vaccination, osteoporosis screening, colon cancer screening, cholesterol screening, and obesity screening). Chlamydia screening was more common in audited charts than in self-reported data. Although third-year residents self-reported higher guideline adherence than second-year residents (86% versus 78%, P < .001), external audits for third-year residents found lower overall adherence (64% versus 72%, P  =  .040). Conclusions Residents' self-reported chart audits may significantly overestimate guideline adherence. Increased supervision and independent review appear necessary to accurately evaluate resident performance. PMID:26140117

  19. Comparing Resident Self-Report to Chart Audits for Quality Improvement Projects: Accurate Reflection or Cherry-Picking?

    PubMed

    Kuperman, Ethan F; Tobin, Kristen; Kraschnewski, Jennifer L

    2014-12-01

    Resident engagement in quality improvement is a requirement for graduate medical education, but the optimal means of instruction and evaluation of resident progress remain unknown. To determine the accuracy of self-reported chart audits in measuring resident adherence to primary care clinical practice guidelines. During the 2010-2011 academic year, second- and third-year internal medicine residents at a single, university hospital-based program performed chart audits on 10 patients from their primary care clinic to determine adherence to 16 US Preventive Services Task Force primary care guidelines. We compared residents' responses to independent audits of randomly selected patient charts by a single external reviewer. Self-reported data were collected by 18 second-year and 15 third-year residents for 330 patients. Independently, 70 patient charts were randomly selected for review by an external auditor. Overall guideline compliance was significantly higher on self-reported audits compared to external audits (82% versus 68%, P < .001). Of 16 guidelines, external audits found significantly lower rates of adherence for 5 (tetanus vaccination, osteoporosis screening, colon cancer screening, cholesterol screening, and obesity screening). Chlamydia screening was more common in audited charts than in self-reported data. Although third-year residents self-reported higher guideline adherence than second-year residents (86% versus 78%, P < .001), external audits for third-year residents found lower overall adherence (64% versus 72%, P  =  .040). Residents' self-reported chart audits may significantly overestimate guideline adherence. Increased supervision and independent review appear necessary to accurately evaluate resident performance.

  20. Improvement of sustainability of irrigation in olive by the accurate management of regulated deficit irrigation

    NASA Astrophysics Data System (ADS)

    Memmi, Houssem; Moreno, Marta M.; Gijón, M. Carmen; Pérez-López, David

    2015-04-01

    Regulated Deficit Irrigation (RDI) is a useful tool for balancing productivity gains against water savings. The methodology is based on maintaining maximum yield under deficit irrigation, and the key is to impose the water deficit during a phenological period that is not sensitive to stress. In olive, this period is pit hardening, although the accurate delimitation of its end is still under investigation. Another open question in this methodology is how severe the water stress can be during the non-sensitive period. In this trial, three treatments were applied in 2012 and 2013: a control treatment (T0), irrigated following the FAO methodology without water stress during the whole season, and two RDI treatments in which water stress was avoided only during stages I and III of fruit growth. During stage II, widely considered to coincide with pit hardening, irrigation was withheld until the trees reached the stated water stress threshold. Water status was monitored by means of stem water potential (ψs) measurements. When ψs reached -2 MPa in the T1 treatment, trees were irrigated with a small amount of water so as to keep this water status throughout stage II. The same methodology was used for the T2 treatment, but with a threshold of -3 MPa. Water status was also monitored with leaf conductance measurements. Fruit size and yield were determined at the end of each season. The statistical design was a randomized complete block design with four replicates. By the end of the study, the irrigation amounts in T1 and T2 were 50% and 65% lower than in T0. There were no significant differences among treatments in terms of yield in either 2012 (off year) or 2013 (on year).

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  6. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  7. Coupled optical and thermal detailed simulations for the accurate evaluation and performance improvement of molten salts solar towers

    NASA Astrophysics Data System (ADS)

    García-Barberena, Javier; Mutuberria, Amaia; Palacin, Luis G.; Sanz, Javier L.; Pereira, Daniel; Bernardos, Ana; Sanchez, Marcelino; Rocha, Alberto R.

    2017-06-01

    The National Renewable Energy Centre of Spain (CENER) and the Technology & Innovation area of ACS Cobra, drawing on their long-term expertise in the CSP field, have developed a detailed, high-quality optical and thermal simulation software package for the accurate evaluation of molten salts solar towers. The main purpose of this software is to advance the state of the art of solar tower simulation programs. Such programs generally treat the most critical systems of these plants, i.e. the solar field and the receiver, independently, and therefore typically neglect relevant aspects of plant operation such as heliostat aiming strategies, solar flux shapes onto the receiver, material physical and operational limitations, and transient processes such as preheating and safe cloud-passing operating modes. The modelling approach implemented in the program consists of effectively coupling detailed optical simulations of the heliostat field with equally detailed, fully transient thermal simulations of the tube-based external molten salts receiver. The optical model is based on an accurate Monte Carlo ray-tracing method which solves the complete solar field by simulating each heliostat individually according to its specific layout in the field. On the thermal side, the tube-based cylindrical external receiver of a molten salts solar tower is modelled assuming one representative tube per panel, implementing the specific connection layout of the panels as well as the internal receiver pipes. Each tube is longitudinally discretized, and the transient energy and mass balances of the temperature-dependent molten salts and steel tube models are solved using a one-dimensional radial heat transfer model. The thermal model is completed with a detailed control and operation strategy module able to represent the appropriate operation of the plant. An integration framework has been

  8. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. I. Measurement of positive radical ions using porphyrin standard reference materials.

    PubMed

    Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth

    2010-06-15

    A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention is required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.
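    The measurement described relies on two calibrant peaks chosen to bracket the analyte. As an illustration only (not the authors' calibration software, and with hypothetical peak values), a two-point internal calibration reduces to fitting a linear m/z correction from the two calibrants and reporting the residual error in ppm:

    ```python
    # Minimal sketch of two-point internal mass calibration; all m/z values
    # below are hypothetical and chosen only to illustrate the arithmetic.

    def two_point_calibration(meas_lo, true_lo, meas_hi, true_hi):
        """Return a linear m/z correction fitted to two calibrant peaks."""
        slope = (true_hi - true_lo) / (meas_hi - meas_lo)
        intercept = true_lo - slope * meas_lo
        return lambda mz: slope * mz + intercept

    def ppm_error(measured, theoretical):
        """Signed mass error in parts per million."""
        return (measured - theoretical) / theoretical * 1e6

    # Raw measurements assumed ~30 ppm high; calibration removes the offset.
    calibrate = two_point_calibration(614.2713, 614.2529, 742.3378, 742.3155)
    corrected = calibrate(670.3611)                # raw analyte m/z
    print(round(corrected, 4), round(ppm_error(corrected, 670.3410), 1))
    ```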

  9. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
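    The reported sensitivity, specificity, PPV, and NPV follow from the standard 2x2 table of test result versus histology. A minimal sketch of that arithmetic is below; the counts are placeholders, not the counts from this study.

    ```python
    # Standard 2x2 diagnostic-accuracy arithmetic (placeholder counts).

    def diagnostic_metrics(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),   # true positives / all diseased
            "specificity": tn / (tn + fp),   # true negatives / all non-diseased
            "ppv": tp / (tp + fp),           # positives that are truly diseased
            "npv": tn / (tn + fn),           # negatives that are truly disease-free
        }

    print(diagnostic_metrics(tp=30, fp=5, fn=26, tn=22))
    ```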

  10. Improved bacterial identification directly from urine samples with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Kitagawa, Koichi; Shigemura, Katsumi; Onuma, Ken-Ichiro; Nishida, Masako; Fujiwara, Mayu; Kobayashi, Saori; Yamasaki, Mika; Nakamura, Tatsuya; Yamamichi, Fukashi; Shirakawa, Toshiro; Tokimatsu, Issei; Fujisawa, Masato

    2018-03-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) contributes to rapid identification of pathogens in the clinic but has not yet performed especially well for Gram-positive cocci (GPC) causing complicated urinary tract infection (UTI). The goal of this study was to investigate the possible clinical use of MALDI-TOF MS as a rapid method for bacterial identification directly from urine in complicated UTI. MALDI-TOF MS was applied to urine samples gathered from 142 suspected complicated UTI patients in 2015-2017. We modified the standard procedure (Method 1) for sample preparation by adding an initial 10 minutes of ultrasonication followed by centrifugation at 500 g for 1 minute to remove debris such as epithelial cells and leukocytes from the urine (Method 2). Among the 133 urine culture-positive samples, the rate at which MALDI-TOF MS identification directly from urine corresponded with urine culture for GPC was 16.7% with the standard sample preparation (Method 1), but the modified sample preparation (Method 2) significantly improved that rate to 52.2% (P=.045). Method 2 also improved the identification accuracy for Gram-negative rods (GNR) from 77.1% to 94.2% (P=.022). The modified Method 2 significantly improved the average MALDI score from 1.408±0.153 to 2.166±0.045 (P<.001) for GPC and slightly improved the score from 2.107±0.061 to 2.164±0.037 for GNR. The modified sample preparation for MALDI-TOF MS can improve identification accuracy for complicated UTI causative bacteria. This simple modification offers a rapid and accurate routine diagnosis for UTI, and may possibly be a substitute for urine cultures. © 2017 Wiley Periodicals, Inc.

  11. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  12. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, and that the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
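    The selection idea can be illustrated as follows; the error models and values are hypothetical stand-ins, not the sensor model established in the paper.

    ```python
    # Sketch of "keep the most credible measurement" fusion: each sensor has an
    # empirical error model (here a made-up function of reported range), and the
    # measurement with the smallest predicted error is selected at each epoch.

    def expected_error(sensor_id, reported_range_m, error_models):
        """Look up the sensor's modelled ranging error at this range."""
        return error_models[sensor_id](reported_range_m)

    def select_measurement(measurements, error_models):
        """measurements: list of (sensor_id, reported_range_m) tuples."""
        return min(measurements,
                   key=lambda m: expected_error(m[0], m[1], error_models))

    # Hypothetical error models: error grows with range at different rates.
    error_models = {
        "s1": lambda r: 0.05 + 0.02 * r,
        "s2": lambda r: 0.30 + 0.005 * r,
    }
    print(select_measurement([("s1", 4.2), ("s2", 4.5)], error_models))
    ```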

  13. Physical Oceanographic Real-Time System (PORTS) (Invited)

    NASA Astrophysics Data System (ADS)

    Wright, D.

    2013-12-01

    The 1999 Assessment of the U.S. Marine Transportation System report to Congress noted that the greatest safety concern voiced by the maritime community was the availability of timely, accurate, and reliable navigation information, including real-time environmental data. Real-time oceanographic and meteorological data, along with other navigation tools, give mariners a good situational understanding of their often challenging operational environment so that they can make the best decisions for the safety of life and property. The National Oceanic and Atmospheric Administration's (NOAA) Physical Oceanographic Real Time System (PORTS) was developed in response to accidents like the Sunshine Skyway Bridge collision in Tampa, FL, in 1980, where the lack of accurate, reliable, and timely information on environmental conditions directly contributed to a high loss of life and property. Since that time, PORTS has expanded to over 20 locations around the country, and its capabilities have been continually expanded and improved as well. PORTS' primary mission is to prevent maritime accidents. Preventing an accident from occurring is the most cost-effective approach and the best way to avoid damage to the environment. When accidents do occur, PORTS data are used to improve the effectiveness of response efforts by providing input for trajectory models and real-time conditions for responders. However, benefits derived from PORTS go well beyond navigation safety. Another large benefit to the local maritime community is the potential efficiency gained by optimizing use of the existing water column. PORTS provides information that can be used to make economic decisions to add or offload cargo and/or to maintain or adjust transit schedules based upon the availability of water depth, the strength and timing of tidal currents, and other conditions. PORTS data also help improve and validate local National Weather Service marine weather forecasts. There are many benefits beyond the local maritime

  14. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data is not large, our results show that significant…

  15. Do physiotherapy staff record treatment time accurately? An observational study.

    PubMed

    Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John

    2009-09-01

    To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Comparison of physiotherapy staff's recording of treatment sessions and video recording. Rehabilitation stroke unit in a general hospital. Thirty-nine stroke patients without trunk control or who were unable to stand with an erect trunk without the support of two therapists recruited to a randomized trial evaluating the Oswestry Standing Frame. Twenty-six physiotherapy staff who were involved in patient treatment. Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording. Intraclass correlation with 95% confidence interval and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. Intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Physiotherapy staff's recording of duration of treatment time was not reliable and was systematically greater than the video recording.
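    The agreement statistics quoted (mean difference, SD, and 95% limits of agreement) come from the Bland and Altman method named in the abstract. A minimal sketch of that calculation on paired timings is below; the lists are placeholders, not the study data.

    ```python
    # Bland & Altman agreement calculation: mean difference, SD of the
    # differences, and 95% limits of agreement (placeholder paired timings).
    import statistics

    def bland_altman(method_a, method_b):
        diffs = [a - b for a, b in zip(method_a, method_b)]
        mean_diff = statistics.mean(diffs)
        sd_diff = statistics.stdev(diffs)
        return {
            "mean_difference": mean_diff,
            "limits_of_agreement": (mean_diff - 1.96 * sd_diff,
                                    mean_diff + 1.96 * sd_diff),
        }

    video_minutes    = [24, 28, 22, 30, 19]
    recorded_minutes = [30, 33, 29, 35, 27]
    print(bland_altman(video_minutes, recorded_minutes))
    ```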

  16. Patient Satisfaction Is Associated With Time With Provider But Not Clinic Wait Time Among Orthopedic Patients.

    PubMed

    Patterson, Brendan M; Eskildsen, Scott M; Clement, R Carter; Lin, Feng-Chang; Olcott, Christopher W; Del Gaizo, Daniel J; Tennant, Joshua N

    2017-01-01

    Clinic wait time is considered an important predictor of patient satisfaction. The goal of this study was to determine whether patient satisfaction among orthopedic patients is associated with clinic wait time and time with the provider. The authors prospectively enrolled 182 patients at their outpatient orthopedic clinic. Clinic wait time was defined as the time between patient check-in and being seen by the surgeon. Time spent with the provider was defined as the total time the patient spent in the examination room with the surgeon. The Consumer Assessment of Healthcare Providers and Systems survey was used to measure patient satisfaction. Factors associated with increased patient satisfaction included patient age and increased time with the surgeon (P=.024 and P=.037, respectively), but not clinic wait time (P=.625). Perceived wait time was subject to a high level of error, and most patients did not accurately report whether they had been waiting longer than 15 minutes to see a provider until they had waited at least 60 minutes (P=.007). If the results of the current study are generalizable, time with the surgeon is associated with patient satisfaction in orthopedic clinics, but wait time is not. Further, the study findings showed that patients in this setting did not have an accurate perception of actual wait time, with many patients underestimating the time they waited to see a provider. Thus, a potential strategy for improving patient satisfaction is to spend more time with each patient, even at the expense of increased wait time. [Orthopedics. 2017; 40(1):43-48.]. Copyright 2016, SLACK Incorporated.

  17. Improving Procedure Start Times and Decreasing Delays in Interventional Radiology: A Department's Quality Improvement Initiative.

    PubMed

    Villarreal, Monica C; Rostad, Bradley S; Wright, Richard; Applegate, Kimberly E

    2015-12-01

    To identify and reduce reasons for delays in procedure start times, particularly for the first cases of the day, within the interventional radiology (IR) divisions of the Department of Radiology, using principles of continuous quality improvement. An interdisciplinary team representing IR and preprocedure/postprocedure care area (PPCA) health care personnel, managers, and data analysts was formed. A standardized form was used to document both inpatient and outpatient progress through the PPCA and IR workflow in six rooms and to record reasons for delays. The data generated were used to identify key problem areas, implement improvement interventions, and monitor their effects. Project duration was 6 months. The average number of on-time starts for the first case of the day increased from 23% to 56% (P value < .01). The average number of on-time, scheduled outpatients increased from 30% to 45% (P value < .01). The time patients waited to arrive at the treatment room once they were ready for their procedure was reduced by an average of 10 minutes (P value < .01). Patient care delay duration per 100 patients was reduced from 30.3 to 21.6 hours (29% reduction). The number of patient care delays per 100 patients was reduced from 46.6 to 40.1 (17% reduction). Top reasons for delay included waiting for consent (26% of delay duration) and laboratory tests (12%). Many complex factors contribute to procedure start time delays within an IR practice. A data-driven, patient-centered, interdisciplinary team approach was effective in reducing delays in IR. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  18. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    PubMed

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  19. The right care, every time: improving adherence to evidence-based guidelines.

    PubMed

    Runnacles, Jane; Roueché, Alice; Lachman, Peter

    2018-02-01

    Guidelines are integral to reducing variation in paediatric care by ensuring that children receive the right care, every time. However, for reasons discussed in this paper, clinicians do not always follow evidence-based guidelines. Strategies to improve guideline usage tend to focus on dissemination and education. These approaches, however, do not address some of the more complex factors that influence whether a guideline is used in clinical practice. In this article, part of the Equipped Quality Improvement series, we outline the literature on barriers to guideline adherence and present practical solutions to address these barriers. Examples outlined include the use of care bundles, integrated care pathways and quality improvement collaboratives. A sophisticated information technology system can improve the use of evidence-based guidelines and provide organisations with valuable data for learning and improvement. Key to success is the support of an organisation that places reliability of service delivery as the way business is done. To do this requires leadership from clinicians in multidisciplinary teams and a system of continual improvement. By learning from successful approaches, we believe that all healthcare organisations can ensure the right care for each patient, every time. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Time delay measurement in the frequency domain

    DOE PAGES

    Durbin, Stephen M.; Liu, Shih -Chieh; Dufresne, Eric M.; ...

    2015-08-06

    Pump–probe studies at synchrotrons using X-ray and laser pulses require accurate determination of the time delay between pulses. This becomes especially important when observing ultrafast responses with lifetimes approaching or even less than the X-ray pulse duration (~100 ps). The standard approach of inspecting the time response of a detector sensitive to both types of pulses can have limitations due to dissimilar pulse profiles and other experimental factors. Here, a simple alternative is presented, where the frequency response of the detector is monitored versus time delay. Measurements readily demonstrate a time resolution of ~1 ps. Improved precision is possible by simply extending the data acquisition time.

  1. Implementation of a Transfer Intervention Procedure (TIP) to improve handovers from hospital to home: interrupted time series analysis.

    PubMed

    van Seben, Rosanne; Geerlings, Suzanne E; Verhaegh, Kim J M; Hilders, Carina G J M; Buurman, Bianca M

    2016-09-07

    Accurate and timely patient handovers from hospital to other health care settings are essential in order to provide high quality of care and to ensure patient safety. We aim to investigate the effect of a comprehensive discharge bundle, the Transfer Intervention Procedure (TIP), on the time between discharge and the time when the medical, medication and nursing handovers are sent to the next health care provider. Our goal is to reduce this time to 24 h after hospital discharge. Secondary outcomes are length of hospital stay and unplanned readmission within 30 days rates. The current study is set to implement the TIP, a structured discharge process for all patients admitted to the hospital, with the purpose to provide a safe, reliable and accurate discharge process. Eight hospitals in the Netherlands will implement the TIP on one internal medicine and one surgical ward. An interrupted time series (ITS) analysis, with pre-defined pre and post intervention periods, will be conducted. Patients over the age of 18 admitted for more than 48 h to the participating wards are eligible for inclusion. At least 1000 patients will be included in both the pre-implementation and post-implementation group. The primary outcome is the number of medical, medication and nursing handovers being sent within 24 h after discharge. Secondary outcomes are length of hospital stay and unplanned readmission within 30 days. With regard to potential confounders, data will be collected on patient's characteristics and information regarding the hospitalization. We will use segmented regression methods for analyzing the data, which allows assessing how much TIP changed the outcomes of interest immediately and over time. This study protocol describes the implementation of TIP, which provides the foundation for a safe, reliable and accurate discharge process. If effective, nationwide implementation of the discharge bundle may result from this study protocol. Dutch Trial Registry: NTR5951.
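    The protocol states that the interrupted time series will be analysed with segmented regression, i.e. a pre-intervention level and trend plus an immediate level change and a trend change after implementation. A minimal sketch of fitting that model by ordinary least squares is below; the series is simulated for illustration only.

    ```python
    # Segmented regression for an interrupted time series:
    # y ~ b0 + b1*time + b2*after + b3*time_since_intervention.
    # The data below are simulated, not trial data.
    import numpy as np

    def segmented_regression(y, intervention_index):
        n = len(y)
        time = np.arange(n)
        after = (time >= intervention_index).astype(float)
        time_since = np.where(after == 1, time - intervention_index, 0)
        X = np.column_stack([np.ones(n), time, after, time_since])
        coefs, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
        return dict(zip(["baseline", "pre_trend", "level_change", "trend_change"], coefs))

    y = [60, 58, 59, 57, 56, 40, 38, 36, 35, 33]   # e.g. % of handovers sent later than 24 h
    print(segmented_regression(y, intervention_index=5))
    ```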

  2. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. Real-time data acquisition and alerts may reduce reaction time and improve perfusionist performance during cardiopulmonary bypass.

    PubMed

    Beck, J R; Fung, K; Lopez, H; Mongero, L B; Argenziano, M

    2015-01-01

    Delayed perfusionist identification and reaction to abnormal clinical situations has been reported to contribute to increased mortality and morbidity. The use of automated data acquisition and compliance safety alerts has been widely accepted in many industries and its use may improve operator performance. A study was conducted to evaluate the reaction time of perfusionists with and without the use of compliance alert. A compliance alert is a computer-generated pop-up banner on a pump-mounted computer screen to notify the user of clinical parameters outside of a predetermined range. A proctor monitored and recorded the time from an alert until the perfusionist recognized the parameter was outside the desired range. Group one included 10 cases utilizing compliance alerts. Group 2 included 10 cases with the primary perfusionist blinded to the compliance alerts. In Group 1, 97 compliance alerts were identified and, in group two, 86 alerts were identified. The average reaction time in the group using compliance alerts was 3.6 seconds. The average reaction time in the group not using the alerts was nearly ten times longer than the group using computer-assisted, real-time data feedback. Some believe that real-time computer data acquisition and feedback improves perfusionist performance and may allow clinicians to identify and rectify potentially dangerous situations. © The Author(s) 2014.

  4. Improved Time to Publication in Journal of Geophysical Research-Atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Gouw, Joost A.; Ghan, Steven J.; Pryor, Sara

    Timely publication of manuscripts is important to authors and readers. AGU has significantly accelerated both the review and production processes for the Journal of Geophysical Research-Atmospheres (JGR-Atmospheres). Via a number of mechanisms (e.g., shortening the time allotted for reviewer selection, manuscript reviews, and revisions), the mean time to first decision has been decreased from 98 days in 2007 to 50 days in 2011, and the mean time to final decision has been decreased from 132 days in 2007 to 71 days in 2011. By implementing a new content management system, adjusting the workflow for improved efficiency, requesting authors to proofread their manuscripts more quickly, and improving monitoring and follow-up to author and vendor queries, the mean production time from manuscript acceptance to publication has been decreased from 128 days in 2010 to only 56 days in 2012. Thus, in the past few years the mean time to publication of JGR-Atmospheres has been cut in half. These milestones have been achieved with no loss of quality of presentation or content. In addition, online posting of "papers in press" on JGR-Atmospheres' home page typically occurs within a few days after acceptance. JGR-Atmospheres editors thank manuscript reviewers, authors, and AGU staff who have greatly contributed to the more timely review and publication processes. This information will be updated periodically on the JGR-Atmospheres home page. A chart showing the average time from acceptance to publication for all of AGU's journals is available at http://www.agu.org/pubs/pdf/31May2012_Timeliness_Chart.pdf.

  5. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    NASA Astrophysics Data System (ADS)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool because the electrical and mechanical systems are closely related. Electrical problems, such as phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, such as bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor, a rotor bar crack was detected by current signature analysis and confirmed by vibration monitoring. In later months, in a similar motor, vibration monitoring predicted a bearing failure and current signature analysis confirmed it. In both cases, after dismantling the motors, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, illustrated with two case studies.

  6. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    USDA-ARS?s Scientific Manuscript database

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  7. An instrument for rapid, accurate, determination of fuel moisture content

    Treesearch

    Stephen S. Sackett

    1980-01-01

    Moisture contents of dead and living fuels are key variables in fire behavior. Accurate, real-time fuel moisture data are required for prescribed burning and wildfire behavior predictions. The convection oven method has become the standard for direct fuel moisture content determination. Efforts to quantify fuel moisture through indirect methods have not been...
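    The convection oven reference method mentioned reduces to a simple gravimetric formula, with moisture content expressed as a percentage of the oven-dry weight. A one-function sketch with hypothetical sample weights:

    ```python
    # Gravimetric fuel moisture content on an oven-dry-weight basis, as used by
    # the convection oven reference method.  Sample weights are hypothetical.

    def fuel_moisture_percent(wet_weight_g, oven_dry_weight_g):
        return (wet_weight_g - oven_dry_weight_g) / oven_dry_weight_g * 100.0

    print(round(fuel_moisture_percent(wet_weight_g=12.40, oven_dry_weight_g=10.15), 1))  # ~22.2 %
    ```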

  8. Gctf: Real-time CTF determination and correction

    PubMed Central

    Zhang, Kai

    2016-01-01

    Accurate estimation of the contrast transfer function (CTF) is critical for a near-atomic resolution cryo electron microscopy (cryoEM) reconstruction. Here, a GPU-accelerated computer program, Gctf, for accurate, robust, real-time CTF determination is presented. The main target of Gctf is to maximize the cross-correlation of a simulated CTF with the logarithmic amplitude spectra (LAS) of observed micrographs after background subtraction. Novel approaches in Gctf improve both speed and accuracy. In addition to GPU acceleration (e.g. 10–50×), a fast ‘1-dimensional search plus 2-dimensional refinement (1S2R)’ procedure further speeds up Gctf. Based on the global CTF determination, the local defocus for each particle and for single frames of movies is accurately refined, which improves the CTF parameters of all particles for subsequent image processing. A novel diagnosis method using equiphase averaging (EPA) and self-consistency verification procedures has also been implemented in the program for practical use, especially for the aims of near-atomic reconstruction. Gctf is an independent program and the outputs can be easily imported into other cryoEM software such as Relion (Scheres, 2012) and Frealign (Grigorieff, 2007). The results from several representative datasets are shown and discussed in this paper. PMID:26592709

  9. Comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on accurate mass databases combined with MS/MS spectra.

    PubMed

    Liu, Yan-Chun; Xiao, Sa; Yang, Kun; Ling, Li; Sun, Zhi-Liang; Liu, Zhao-Ying

    2017-06-01

    This study reports an applicable analytical strategy for the comprehensive identification and structural characterization of target components from Gelsemium elegans using high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QqTOF MS), based on the use of accurate mass databases combined with MS/MS spectra. The databases created included accurate masses and elemental compositions of 204 components from Gelsemium and their structural data. Accurate MS and MS/MS spectra were acquired in data-dependent auto MS/MS mode, followed by extraction of the potential compounds from the LC-QqTOF MS raw data of the sample; these were then matched against the databases to search for the targeted components in the sample. The structures of the detected components were tentatively characterized by manually interpreting the accurate MS/MS spectra for the first time. A total of 57 components were successfully detected and structurally characterized from the crude extracts of G. elegans, although some isomers could not be differentiated. This analytical strategy is generic and efficient, avoids isolation and purification procedures, enables a comprehensive structural characterization of target components of Gelsemium, and should be widely applicable to complicated mixtures derived from Gelsemium preparations. Copyright © 2017 John Wiley & Sons, Ltd.
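    The database matching step amounts to comparing each extracted accurate mass against theoretical masses within a mass tolerance. A minimal sketch of ppm-window matching is below; the tolerance and the two example entries are illustrative, not the authors' database.

    ```python
    # Sketch of matching a measured accurate mass against a compound database
    # within a ppm tolerance (entries and tolerance are illustrative only).

    def match_mass(measured_mz, database, tol_ppm=10.0):
        """Return database entries whose theoretical m/z lies within tol_ppm."""
        hits = []
        for name, theoretical in database.items():
            ppm = abs(measured_mz - theoretical) / theoretical * 1e6
            if ppm <= tol_ppm:
                hits.append((name, theoretical, round(ppm, 2)))
        return hits

    database = {
        "gelsemine [M+H]+": 323.1754,   # illustrative theoretical value
        "koumine [M+H]+": 307.1805,     # illustrative theoretical value
    }
    print(match_mass(323.1761, database))
    ```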

  10. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask worn over the mouth, used to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were measured, with and without the pneumotach in place, and differences were noted. The authors acknowledge support of NIH Grant 2R01DC005642-10A1.
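    The flow estimate itself is just the measured pressure drop divided by the known mask resistance. A minimal sketch of that conversion, with hypothetical values and an assumed linear resistance:

    ```python
    # How a pneumotach estimates volume flow: the time-varying pressure drop
    # across a known (here assumed linear) resistance is divided by that
    # resistance.  All numbers are hypothetical.

    def flow_from_pneumotach(delta_p_pa, resistance_pa_s_per_l):
        """Volume flow (L/s) from pressure drop samples (Pa) across a known resistance."""
        return [dp / resistance_pa_s_per_l for dp in delta_p_pa]

    delta_p = [0.0, 12.0, 30.0, 22.0, 6.0]      # Pa, one sample per time step
    print(flow_from_pneumotach(delta_p, resistance_pa_s_per_l=60.0))
    ```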

  11. Real-time video communication improves provider performance in a simulated neonatal resuscitation.

    PubMed

    Fang, Jennifer L; Carey, William A; Lang, Tara R; Lohse, Christine M; Colby, Christopher E

    2014-11-01

    To determine if a real-time audiovisual link with a neonatologist, termed video-assisted resuscitation or VAR, improves provider performance during a simulated neonatal resuscitation scenario. Using high-fidelity simulation, 46 study participants were presented with a neonatal resuscitation scenario. The control group performed independently, while the intervention group utilized VAR. Time to effective ventilation was compared using Wilcoxon rank sum tests. Providers' use of the corrective steps for ineffective ventilation per the NRP algorithm was compared using Cochran-Armitage trend tests. The time needed to establish effective ventilation was significantly reduced in the intervention group when compared to the control group (mean time 2 min 42 s versus 4 min 11 s, p<0.001). In the setting of ineffective ventilation, only 35% of control subjects used three or more of the first five corrective steps and none of them used all five steps. Providers in the control group most frequently neglected to open the mouth and increase positive pressure. In contrast, all of those in the intervention group used all of the first five corrective steps, p<0.001. All participants in the control group decided to intubate the infant to establish effective ventilation, compared to none in the intervention group, p<0.001. Using VAR during a simulated neonatal resuscitation scenario significantly reduces the time to establish effective ventilation and improves provider adherence to NRP guidelines. This technology may be a means for regional centers to support local providers during a neonatal emergency to improve patient safety and improve neonatal outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    PubMed

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach

  13. Improving time-delay cosmography with spatially resolved kinematics

    NASA Astrophysics Data System (ADS)

    Shajib, Anowar J.; Treu, Tommaso; Agnello, Adriano

    2018-01-01

    Strongly gravitationally lensed quasars can be used to measure the so-called time-delay distance DΔt, and thus the Hubble constant H0 and other cosmological parameters. Stellar kinematics of the deflector galaxy play an essential role in this measurement by: (i) helping break the mass-sheet degeneracy; (ii) determining in principle the angular diameter distance Dd to the deflector and thus further improving the cosmological constraints. In this paper we simulate observations of lensed quasars with integral field spectrographs and show that spatially resolved kinematics of the deflector enables further progress by helping break the mass-anisotropy degeneracy. Furthermore, we use our simulations to obtain realistic error estimates with current/upcoming instruments like OSIRIS on Keck and NIRSPEC on the James Webb Space Telescope for both distances (typically ∼6 per cent on DΔt and ∼10 per cent on Dd). We use the error estimates to compute cosmological forecasts for the sample of nine lenses that currently have well-measured time delays and deep Hubble Space Telescope images and for a sample of 40 lenses that is projected to be available in a few years through follow-up of candidates found in ongoing wide field surveys. We find that H0 can be measured with 2 per cent (1 per cent) precision from nine (40) lenses in a flat Λ cold dark matter cosmology. We study several other cosmological models beyond the flat Λ cold dark matter model and find that time-delay lenses with spatially resolved kinematics can greatly improve the precision of the cosmological parameters measured by cosmic microwave background data.
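    For reference, the quantities in the abstract are connected by the standard strong-lensing time-delay relations (standard textbook relations, not results of this paper):

    ```latex
    % \Delta t: delay between lensed images; \Delta\phi: Fermat potential
    % difference; z_d: deflector redshift; D_d, D_s, D_ds: angular diameter
    % distances to deflector, source, and between them.
    \begin{align}
      \Delta t &= \frac{D_{\Delta t}}{c}\,\Delta\phi, \\
      D_{\Delta t} &\equiv (1+z_d)\,\frac{D_d\,D_s}{D_{ds}} \;\propto\; H_0^{-1},
    \end{align}
    % so a measured delay plus a lens model for \Delta\phi yields D_{\Delta t}
    % and hence H_0.
    ```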

  14. Design Alternatives to Improve Access Time Performance of Disk Drives Under DOS and UNIX

    NASA Astrophysics Data System (ADS)

    Hospodor, Andy

    For the past 25 years, improvements in CPU performance have overshadowed improvements in the access time performance of disk drives. CPU performance has been slanted towards greater instruction execution rates, measured in millions of instructions per second (MIPS). However, the slant for performance of disk storage has been towards capacity and corresponding increased storage densities. The IBM PC, introduced in 1982, processed only a fraction of a MIP. Follow-on CPUs, such as the 80486 and 80586, sported 5-10 MIPS by 1992. Single user PCs and workstations, with one CPU and one disk drive, became the dominant application, as implied by their production volumes. However, disk drives did not enjoy a corresponding improvement in access time performance, although the potential still exists. The time to access a disk drive improves (decreases) in two ways: by altering the mechanical properties of the drive or by adding cache to the drive. This paper explores the improvement to access time performance of disk drives using cache, prefetch, faster rotation rates, and faster seek acceleration.
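    The two improvement paths named here (faster mechanics versus on-drive cache) combine in the usual effective-access-time expression. A sketch of that arithmetic with hypothetical timings and hit ratio:

    ```python
    # Effective access time of a cached disk drive.  Mechanical access is
    # average seek plus average rotational latency (half a revolution); cache
    # hits bypass the mechanics.  All numbers are hypothetical.

    def mechanical_access_ms(avg_seek_ms, rpm):
        half_rev_ms = 0.5 * 60_000.0 / rpm
        return avg_seek_ms + half_rev_ms

    def effective_access_ms(hit_ratio, cache_hit_ms, miss_ms):
        return hit_ratio * cache_hit_ms + (1.0 - hit_ratio) * miss_ms

    miss = mechanical_access_ms(avg_seek_ms=12.0, rpm=3600)   # ~20.3 ms
    print(round(effective_access_ms(hit_ratio=0.6, cache_hit_ms=0.5, miss_ms=miss), 1))
    ```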

  15. Measures to Improve Diagnostic Safety in Clinical Practice.

    PubMed

    Singh, Hardeep; Graber, Mark L; Hofer, Timothy P

    2016-10-20

    Timely and accurate diagnosis is foundational to good clinical practice and an essential first step to achieving optimal patient outcomes. However, a recent Institute of Medicine report concluded that most of us will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through better measurement of diagnostic performance. The diagnostic process is a dynamic team-based activity that involves uncertainty, plays out over time, and requires effective communication and collaboration among multiple clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop measures to improve diagnostic performance could move forward at a time when the scientific foundation needed to inform measurement is still evolving. We highlight challenges and opportunities for developing potential measures of "diagnostic safety" related to clinical diagnostic errors and associated preventable diagnostic harm. In doing so, we propose a starter set of measurement concepts for initial consideration that seem reasonably related to diagnostic safety and call for these to be studied and further refined. This would enable safe diagnosis to become an organizational priority and facilitate quality improvement. Health-care systems should consider measurement and evaluation of diagnostic performance as essential to timely and accurate diagnosis and to the reduction of preventable diagnostic harm.This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

  16. Quantitative analysis of naphthenic acids in water by liquid chromatography-accurate mass time-of-flight mass spectrometry.

    PubMed

    Hindle, Ralph; Noestheden, Matthew; Peru, Kerry; Headley, John

    2013-04-19

    This study details the development of a routine method for quantitative analysis of oil sands naphthenic acids, which are a complex class of compounds found naturally and as contaminants in oil sands process waters from Alberta's Athabasca region. Expanding beyond classical naphthenic acids (CnH2n-zO2), those compounds conforming to the formula CnH2n-zOx (where 2 ≤ x ≤ 4) were examined in commercial naphthenic acid and environmental water samples. HPLC facilitated a five-fold reduction in ion suppression when compared to the more commonly used flow injection analysis. A comparison of 39 model naphthenic acids revealed significant variability in response factors, demonstrating the necessity of using naphthenic acid mixtures for quantitation, rather than model compounds. It was also demonstrated that naphthenic acid heterogeneity (commercial and environmental) necessitates establishing a single NA mix as the standard against which all quantitation is performed. The authors present the first ISO17025 accredited method for the analysis of naphthenic acids in water using HPLC high resolution accurate mass time-of-flight mass spectrometry. The method detection limit was 1 mg/L total oxy-naphthenic acids (Sigma technical mix). Copyright © 2013 Elsevier B.V. All rights reserved.
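    Because the compound class is defined entirely by the formula CnH2n-zOx, theoretical monoisotopic (neutral) masses can be enumerated directly for screening. A small sketch with standard monoisotopic atomic masses; the n, z, x ranges shown are illustrative only.

    ```python
    # Enumerate theoretical monoisotopic neutral masses for the naphthenic acid
    # class CnH(2n-z)Ox.  Atomic masses are standard monoisotopic values; the
    # ranges of n, z and x below are illustrative.
    MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}

    def naphthenic_mass(n, z, x):
        h = 2 * n - z
        return n * MASS["C"] + h * MASS["H"] + x * MASS["O"]

    for n in (12, 14):
        for z in (0, 2, 4):
            for x in (2, 3, 4):
                print(f"C{n}H{2*n - z}O{x}: {naphthenic_mass(n, z, x):.4f}")
    ```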

  17. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

    For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous piecewise linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time-stepping method has recently been proposed in the design of a fourth-order time-accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function but also improves the accuracy of the scheme. In terms of computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the computational cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations because of the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity. This problem is particularly pronounced for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function
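    For orientation, the two-stage fourth-order (L-W type) update referred to here has the following generic form in the literature, with the spatial operator and its time derivative supplied by the GRP/GKS flux solver; this is the generic template, not the full scheme of the paper.

    ```latex
    % Generic two-stage fourth-order Lax-Wendroff-type update, with
    % \mathcal{L}(u) the spatial discretization and \partial_t\mathcal{L}
    % its time derivative provided by the time-accurate flux solver.
    \begin{align}
      u^{*}   &= u^{n} + \tfrac{1}{2}\Delta t\,\mathcal{L}(u^{n})
                 + \tfrac{1}{8}\Delta t^{2}\,\partial_t \mathcal{L}(u^{n}), \\
      u^{n+1} &= u^{n} + \Delta t\,\mathcal{L}(u^{n})
                 + \tfrac{1}{6}\Delta t^{2}\,
                   \bigl[\partial_t \mathcal{L}(u^{n}) + 2\,\partial_t \mathcal{L}(u^{*})\bigr].
    \end{align}
    ```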

  18. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    PubMed

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4mm on average with maximum values of over 10mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
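    The physical-measurement idea can be illustrated in one dimension: the inserted length is the total needle length minus the residual length left outside the template, and a pre-established offset maps template depth into image coordinates. Everything in the sketch below (names, numbers, the single-axis simplification) is hypothetical and is not the authors' implementation.

    ```python
    # Illustrative 1D sketch (not the authors' method): needle-tip depth from a
    # residual-length measurement plus a pre-established template-to-image offset.

    def needle_tip_depth_mm(total_length_mm, residual_length_mm,
                            template_to_image_offset_mm):
        inserted = total_length_mm - residual_length_mm
        return inserted - template_to_image_offset_mm   # depth along the needle axis

    # Hypothetical numbers: 200 mm needle, 62 mm left outside, 55 mm fixed offset.
    print(needle_tip_depth_mm(200.0, 62.0, 55.0))   # 83.0 mm past the image origin
    ```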

  19. Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring

    NASA Astrophysics Data System (ADS)

    Yin, J.; Ng, R.; Nakata, N.

    2017-12-01

    Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Accurately locating these events, and determining their mechanisms, provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of wavefield back projection for accurate microseismic location. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared to HypoDD or P/S travel-time-based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
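    In its simplest form, the geometric-mean imaging condition multiplies the back-propagated receiver wavefields before integrating over time, rather than summing them; the normalization used in practice may differ, so the expression below is only a schematic of the idea described in the GmRTM literature.

    ```latex
    % Schematic geometric-mean imaging condition: u_i is the wavefield
    % back-propagated from receiver i, and the product over receivers
    % sharpens the source image relative to a summed (arithmetic) condition.
    \begin{equation}
      I_{\mathrm{gm}}(\mathbf{x}) \;=\; \int \prod_{i=1}^{N} u_i(\mathbf{x}, t)\,\mathrm{d}t .
    \end{equation}
    ```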

  20. Accurate, Sensitive, and Precise Multiplexed Proteomics Using the Complement Reporter Ion Cluster

    DOE PAGES

    Sonnett, Matthew; Yeung, Eyan; Wuhr, Martin

    2018-03-09

    Quantitative analysis of proteomes across multiple time points, organelles, and perturbations is essential for understanding both fundamental biology and disease states. The development of isobaric tags (e.g., TMT) has enabled the simultaneous measurement of peptide abundances across several different conditions. These multiplexed approaches are promising in principle because of advantages in throughput and measurement quality. However, in practice existing multiplexing approaches suffer from key limitations. In its simple implementation (TMT-MS2), measurements are distorted by chemical noise, leading to poor measurement accuracy. The current state of the art (TMT-MS3) addresses this but requires specialized quadrupole-ion-trap-Orbitrap instrumentation. The complement reporter ion approach (TMTc) produces high-accuracy measurements and is compatible with many more instruments, like quadrupole-Orbitraps. However, the required deconvolution of the TMTc cluster leads to poor measurement precision. Here, we introduce TMTc+, which adds modeling of the MS2-isolation step into the deconvolution algorithm. The resulting measurements are comparable in precision to TMT-MS3/MS2. The improved duty cycle and lower filtering requirements make TMTc+ more sensitive than TMT-MS3 and comparable with TMT-MS2. At the same time, unlike TMT-MS2, TMTc+ is exquisitely able to distinguish signal from chemical noise, even outperforming TMT-MS3. Lastly, we compare TMTc+ to quantitative label-free proteomics of total HeLa lysate and find that TMTc+ quantifies 7.8k versus 3.9k proteins in a 5-plex sample, while the median coefficient of variation improves from 13% to 4%. TMTc+ thus advances quantitative proteomics by enabling accurate, sensitive, and precise multiplexed experiments on more commonly used instruments.
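
    The deconvolution step mentioned above can be illustrated generically: given a mixing matrix describing how each labeled channel contributes to the observed complement-ion cluster, per-channel abundances are recovered by constrained least squares. The matrix and intensities below are invented placeholders, and the TMTc+ method additionally folds the MS2 isolation window into this matrix, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical (cluster_bins x channels) mixing matrix: how each labeled channel
# contributes to the observed isotopic cluster. TMTc+ would additionally model
# the MS2 isolation step inside this matrix (not shown).
A = np.array([[0.90, 0.05, 0.00],
              [0.08, 0.90, 0.06],
              [0.02, 0.05, 0.94]])
true_abundances = np.array([100.0, 40.0, 10.0])
observed = A @ true_abundances + np.array([0.5, -0.3, 0.2])   # noisy cluster intensities

channel_abundances, residual = nnls(A, observed)               # non-negative least squares
print(channel_abundances)                                      # recovered per-channel abundances
```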

  1. Accurate, Sensitive, and Precise Multiplexed Proteomics Using the Complement Reporter Ion Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnett, Matthew; Yeung, Eyan; Wuhr, Martin

    Quantitative analysis of proteomes across multiple time points, organelles, and perturbations is essential for understanding both fundamental biology and disease states. The development of isobaric tags (e.g., TMT) has enabled the simultaneous measurement of peptide abundances across several different conditions. These multiplexed approaches are promising in principle because of advantages in throughput and measurement quality. However, in practice existing multiplexing approaches suffer from key limitations. In its simple implementation (TMT-MS2), measurements are distorted by chemical noise, leading to poor measurement accuracy. The current state of the art (TMT-MS3) addresses this but requires specialized quadrupole-ion-trap-Orbitrap instrumentation. The complement reporter ion approach (TMTc) produces high-accuracy measurements and is compatible with many more instruments, like quadrupole-Orbitraps. However, the required deconvolution of the TMTc cluster leads to poor measurement precision. Here, we introduce TMTc+, which adds modeling of the MS2-isolation step into the deconvolution algorithm. The resulting measurements are comparable in precision to TMT-MS3/MS2. The improved duty cycle and lower filtering requirements make TMTc+ more sensitive than TMT-MS3 and comparable with TMT-MS2. At the same time, unlike TMT-MS2, TMTc+ is exquisitely able to distinguish signal from chemical noise, even outperforming TMT-MS3. Lastly, we compare TMTc+ to quantitative label-free proteomics of total HeLa lysate and find that TMTc+ quantifies 7.8k versus 3.9k proteins in a 5-plex sample, while the median coefficient of variation improves from 13% to 4%. TMTc+ thus advances quantitative proteomics by enabling accurate, sensitive, and precise multiplexed experiments on more commonly used instruments.

  2. Improving real-time inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2015-08-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. A complementary modelling framework offers an approach for improving real-time forecasting without modifying the pre-existing forecasting model, but instead by formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure comprises an error model added on top of an unalterable constant-parameter conceptual model, and is applied in the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in decision-making processes in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95 % confidence interval indicated that the degree of success in containing 95 % of the observations varies across seasons and hydrologic years.
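
    As a rough sketch of the complementary-model idea, the snippet below fits a simple bias-plus-AR(1) error model to the residuals of a frozen base model and uses it to correct future forecasts. It is a simplified stand-in under stated assumptions: the paper's formulation also handles heteroscedasticity and produces probabilistic forecasts, and all variable names and data here are invented.

```python
import numpy as np

def fit_ar1_error_model(observed, base_forecast):
    """Fit a bias plus lag-1 persistence (AR(1)) model to the residuals of an
    unaltered base (conceptual) model."""
    resid = observed - base_forecast
    bias = resid.mean()
    r = resid - bias
    phi = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # lag-1 autoregression coefficient
    return bias, phi

def corrected_forecast(base_forecast_future, last_residual, bias, phi):
    """Propagate the error model forward and add it to the base-model forecasts."""
    out = []
    e = last_residual - bias
    for f in base_forecast_future:
        e = phi * e                      # persistence decays with lead time
        out.append(f + bias + e)
    return np.array(out)

# Toy usage with synthetic hourly inflows (m3/s)
obs = np.array([10.0, 12.0, 11.5, 13.0, 12.2])
base = np.array([9.0, 11.0, 11.0, 12.0, 11.5])
bias, phi = fit_ar1_error_model(obs, base)
print(corrected_forecast(np.array([12.0, 12.5]), obs[-1] - base[-1], bias, phi))
```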

  3. Improving care and efficiency: appointment times in a haemodialysis unit.

    PubMed

    Lunts, P

    2002-01-01

    Shortage of nurses and dialysis spaces and the desire to improve patient care are the two main driving forces in the dialysis field today. This paper suggests that these issues can be addressed by organisational change. We describe a simple, dramatically effective but rarely used example - the effect on a haemodialysis unit of the introduction of patient appointment times. This paper will demonstrate that appointment times can be highly effective in reducing waiting times for patients and in utilizing staff and resources more efficiently, as long as there is commitment from key staff to implement and maintain them effectively.

  4. Weekly Checks Improve Real-Time Prehospital ECG Transmission in Suspected STEMI.

    PubMed

    D'Arcy, Nicole T; Bosson, Nichole; Kaji, Amy H; Bui, Quang T; French, William J; Thomas, Joseph L; Elizarraraz, Yvonne; Gonzalez, Natalia; Garcia, Jose; Niemann, James T

    2018-06-01

    Introduction: Field identification of ST-elevation myocardial infarction (STEMI) and advanced hospital notification decreases first-medical-contact-to-balloon (FMC2B) time. A recent study in this system found that electrocardiogram (ECG) transmission following a STEMI alert was frequently unsuccessful. Hypothesis: Instituting weekly test ECG transmissions from paramedic units to the hospital would increase successful transmission of ECGs and decrease FMC2B and door-to-balloon (D2B) times. This was a natural experiment of consecutive patients with field-identified STEMI transported to a single percutaneous coronary intervention (PCI)-capable hospital in a regional STEMI system before and after implementation of scheduled test ECG transmissions. In November 2014, paramedic units began weekly test transmissions. The mobile intensive care nurse (MICN) confirmed the transmission, or if not received, contacted the paramedic unit and the department's nurse educator to identify and resolve the problem. Per system-wide protocol, paramedics transmit all ECGs with interpretation of STEMI. Receiving hospitals submit patient data to a single registry as part of ongoing system quality improvement. The frequency of successful ECG transmission and time to intervention (FMC2B and D2B times) in the 18 months following implementation was compared to the 10 months prior. Post-implementation, the time the ECG transmission was received was also collected to determine the transmission gap time (time from ECG acquisition to ECG transmission received) and the advanced notification time (time from ECG transmission received to patient arrival). There were 388 patients with field ECG interpretations of STEMI, 131 pre-intervention and 257 post-intervention. The frequency of successful transmission post-intervention was 73% compared to 64% prior; risk difference (RD)=9%; 95% CI, 1-18%. In the post-intervention period, the median FMC2B time was 79 minutes (inter-quartile range [IQR]=68-102) versus 86

  5. Time-dependent resilience assessment and improvement of urban infrastructure systems

    NASA Astrophysics Data System (ADS)

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.
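
    One common way to turn the verbal definition above into a number is to integrate delivered system performance relative to its target over the period T; the small sketch below uses that formalization for illustration only, and the paper's exact metric and simulation machinery may differ.

```python
import numpy as np

def resilience(performance, target):
    """Illustrative resilience metric over a period T: delivered performance
    relative to targeted performance, accumulated over time (uniform time step
    assumed). This is one common formalization, not necessarily the paper's."""
    return float(np.sum(performance) / np.sum(target))

# Toy example: a disruption at t=3 followed by gradual restoration
target = np.full(10, 100.0)
perf = np.array([100, 100, 100, 40, 55, 70, 85, 95, 100, 100], dtype=float)
print(round(resilience(perf, target), 3))
```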

  6. Time-dependent resilience assessment and improvement of urban infrastructure systems.

    PubMed

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  7. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
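
    A minimal sketch of ratio-based intervals is given below: each interval's length is a fixed fraction of its lower bound, so the boundaries form a geometric progression over the universe of discourse. The ratio and the data range are placeholders; the paper selects the ratio (and percentiles) from the data rather than fixing it in advance.

```python
def ratio_based_intervals(lower, upper, ratio=0.05):
    """Build fuzzy time series interval boundaries whose lengths grow with the
    data value: each interval's length equals `ratio` times its lower bound.
    Illustrative only; assumes lower > 0."""
    bounds = [lower]
    while bounds[-1] < upper:
        bounds.append(bounds[-1] * (1.0 + ratio))
    return bounds

# Example: enrollment-like data spanning 13000-20000 with a 5% ratio
print([round(b) for b in ratio_based_intervals(13000, 20000, ratio=0.05)])
```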

  8. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

  9. Accurate physical laws can permit new standard units: The two laws F→=ma→ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to F→=ma→ are: (1) as an exactly true definition of force F→ in terms of measured inertial mass m and measured acceleration a→; (2) as an exactly true axiom relating measured values of a→, F→ and m; and (3) as an imperfect but accurately true physical law relating measured a→ to measured F→, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a→ and F→, where a→ is normally specified using distance and time as standard units, and F→ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight W is proportional to m—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.

  10. How to achieve more accurate comparisons in organ donation activity: time to effectiveness indicators.

    PubMed

    Deulofeu, R; Bodí, M A; Twose, J; López, P

    2010-06-01

    We are used to comparisons of activity using donation or transplantation rates per million population (pmp) between regions or countries, without a further evaluation of the process. But crude pmp rates do not clearly reflect real transplantation capacity, because organ procurement does not finish with the donation step; it is also necessary to know the utilization of the obtained organs. The objective of this study was to present methods and indicators deemed necessary to evaluate the effectiveness of the process. We have proposed the use of simple definitions and indicators to more accurately measure and compare the effectiveness of the total organ procurement process. To illustrate the use and performance of these indicators, we have presented the donation and transplantation activity in Catalonia from 2002 to 2007.

  11. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. A highly accurate wireless digital sun sensor based on profile detecting and detector multiplexing technologies

    NASA Astrophysics Data System (ADS)

    Wei, Minsong; Xing, Fei; You, Zheng

    2017-01-01

    The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed, which transforms a two-dimensional image into two linear profile outputs so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method and improve the accuracy of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of FOV regions was also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, was utilized to achieve wireless and self-powered functionality. The prototype of the sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.

  13. Improvements and Additions to NASA Near Real-Time Earth Imagery

    NASA Technical Reports Server (NTRS)

    Cechini, Matthew; Boller, Ryan; Baynes, Kathleen; Schmaltz, Jeffrey; DeLuca, Alexandar; King, Jerome; Thompson, Charles; Roberts, Joe; Rodriguez, Joshua; Gunnoe, Taylor

    2016-01-01

    For many years, the NASA Global Imagery Browse Services (GIBS) has worked closely with the Land, Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) system to provide near real-time imagery visualizations of AIRS (Atmospheric Infrared Sounder), MLS (Microwave Limb Sounder), MODIS (Moderate Resolution Imaging Spectroradiometer), OMI (Ozone Monitoring Instrument), and recently VIIRS (Visible Infrared Imaging Radiometer Suite) science parameters. These visualizations are readily available through standard web services and the NASA Worldview client. Access to near real-time imagery provides a critical capability to GIBS and Worldview users. GIBS continues to focus on improving its commitment to providing near real-time imagery for end-user applications. The focus of this presentation will be the following completed or planned GIBS system and imagery enhancements relating to near real-time imagery visualization.

  14. Improved Timing Scheme for Spaceborne Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Berkun, Andrew; Fischman, Mark

    2004-01-01

    An improved timing scheme has been conceived for operation of a scanning satellite-borne rain-measuring radar system. The scheme allows a real-time-generated solution, which is required for auto targeting. The current timing scheme used in radar satellites involves pre-computing a solution that allows the instrument to catch all transmitted pulses without transmitting and receiving at the same time. Satellite altitude requires many pulses in flight at any time, and the timing solution to prevent transmit and receive operations from colliding is usually found iteratively. The proposed satellite has a large number of scanning beams each with a different range to target and few pulses per beam. Furthermore, the satellite will be self-targeting, so the selection of which beams are used will change from sweep to sweep. The proposed timing solution guarantees no echo collisions, can be generated using simple FPGA-based hardware in real time, and can be mathematically shown to deliver the maximum number of pulses per second, given the timing constraints. The timing solution is computed every sweep, and consists of three phases: (1) a build-up phase, (2) a feedback phase, and (3) a build-down phase. Before the build-up phase can begin, the beams to be transmitted are sorted in numerical order. The numerical order of the beams is also the order from shortest range to longest range. Sorting the list guarantees no pulse collisions. The build-up phase begins by transmitting the first pulse from the first beam on the list. Transmission of this pulse starts a delay counter, which stores the beam number and the time delay to the beginning of the receive window for that beam. The timing generator waits just long enough to complete the transmit pulse plus one receive window, then sends out the second pulse. The second pulse starts a second delay counter, which stores its beam number and time delay. This process continues until an output from the first timer indicates there is less
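
    The build-up phase described above lends itself to a small scheduling sketch: beams are sorted by range (shortest first) and each pulse is transmitted one pulse width plus one receive window after the previous one, with a record of when each echo's receive window opens. This is a simplified illustration under stated assumptions; the feedback and build-down phases, the hardware delay counters, and the collision checks against already-open receive windows are omitted, and all numbers are invented.

```python
def build_up_schedule(beam_ranges_us, pulse_width_us, rx_window_us):
    """Simplified sketch of the build-up phase: beams sorted by range, each pulse
    transmitted after the previous transmit plus one receive window, and each
    pulse's receive-window open/close times recorded (round-trip delay is taken
    directly as the per-beam range delay)."""
    schedule = []          # (beam_index, tx_time, rx_open, rx_close), all in microseconds
    t = 0.0
    for beam, rng in sorted(enumerate(beam_ranges_us), key=lambda b: b[1]):
        rx_open = t + rng                          # round-trip delay to this beam's target
        schedule.append((beam, t, rx_open, rx_open + rx_window_us))
        t += pulse_width_us + rx_window_us         # wait one transmit plus one receive window
    return schedule

# Toy example: three beams with different range delays
for row in build_up_schedule([430.0, 410.0, 450.0], pulse_width_us=10.0, rx_window_us=20.0):
    print(row)
```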

  15. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  16. Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology

    NASA Astrophysics Data System (ADS)

    Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan

    2016-05-01

    This paper concerns the problem of platform-vibration-induced band-to-band misregistration in acousto-optic imaging spectrometers for spaceborne applications. Registering images of different bands formed at different times or different positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented using the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system has been proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectral restoration experiment using an additional motion detection channel is presented for the first time, which shows the accurate spectral image registration capability of this technique.

  17. Improved real-time imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)

    1993-01-01

    An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.

  18. Time-diagnostics for improved dynamics experiments at XUV FELs

    NASA Astrophysics Data System (ADS)

    Drescher, Markus; Frühling, Ulrike; Krikunova, Maria; Maltezopoulos, Theophilos; Wieland, Marek

    2010-10-01

    Significantly structured and fluctuating temporal profiles of pulses from self-amplified spontaneous emission free electron lasers as well as their unstable timing require time diagnostics on a single-shot basis. The duration and structure of extreme-ultraviolet (XUV) pulses from the Free Electron Laser (FEL) in Hamburg (FLASH) are becoming accessible using a variation of the streak camera principle, where photoemitted electrons are energetically streaked in the electric field component of a terahertz electromagnetic wave. The timing with respect to an independently generated laser pulse can be measured in an XUV/laser cross-correlator, based on a non-collinear superposition of both pulses on a solid state surface and detection of XUV-induced modulations of its reflectivity for visible light. Sorting of data according to the measured timing dramatically improves the temporal resolution of an experiment sampling the relaxation of transient electronic states in xenon after linear- as well as nonlinear excitation with intense XUV pulses from FLASH.

  19. Species Distribution 2.0: An Accurate Time- and Cost-Effective Method of Prospection Using Street View Imagery

    PubMed Central

    Schwoertzig, Eugénie; Millon, Alexandre

    2016-01-01

    Species occurrence data provide crucial information for biodiversity studies in the current context of global environmental changes. Such studies often rely on a limited number of occurrence data collected in the field and on pseudo-absences arbitrarily chosen within the study area, which reduces the value of these studies. To overcome this issue, we propose an alternative method of prospection using geo-located street view imagery (SVI). Following a standardised protocol of virtual prospection using both vertical (aerial photographs) and horizontal (SVI) perceptions, we have surveyed 1097 randomly selected cells across Spain (0.1x0.1 degree, i.e. 20% of Spain) for the presence of Arundo donax L. (Poaceae). In total we have detected A. donax in 345 cells, thus substantially expanding beyond the now two-centuries-old field-derived record, which described A. donax in only 216 cells. Among the field occurrence cells, 81.1% were confirmed by SVI prospection to be consistent with species presence. In addition, we recorded, by SVI prospection, 752 absences, i.e. cells where A. donax was considered absent. We have also compared the outcomes of climatic niche modeling based on SVI data against those based on field data. Using generalized linear models fitted with bioclimatic predictors, we have found SVI data to provide far more compelling results in terms of niche modeling than does field data as classically used in SDM. This original, cost- and time-effective method provides the means to accurately locate highly visible taxa, reinforce absence data, and predict species distribution without long and expensive in situ prospection. At this time, the majority of available SVI data is restricted to human-disturbed environments that have road networks. However, SVI is becoming increasingly available in natural areas, which means the technique has considerable potential to become an important factor in future biodiversity studies. PMID:26751565
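
    The niche-modeling step mentioned above is, at its core, a binomial GLM relating presence/absence in each surveyed cell to bioclimatic predictors. The sketch below shows that step on synthetic data; the predictor names, coefficients, and data are placeholders, not the study's variables.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch of a binomial GLM (logistic regression) for presence/absence
# cells with bioclimatic predictors. All data below are synthetic placeholders.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([rng.normal(15, 5, n),      # e.g. mean annual temperature (degC)
                     rng.normal(600, 200, n)])  # e.g. annual precipitation (mm)
logit_p = -8.0 + 0.5 * X[:, 0] + 0.002 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # synthetic presence/absence labels

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(model.params)   # intercept and fitted coefficients
```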

  20. Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior

    PubMed Central

    Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini

    2009-01-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants’ ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale. PMID:17516803

  1. Accurate identification of fear facial expressions predicts prosocial behavior.

    PubMed

    Marsh, Abigail A; Kozak, Megan N; Ambady, Nalini

    2007-05-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants' ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale.

  2. 'Scalp coordinate system': a new tool to accurately describe cutaneous lesions on the scalp: a pilot study.

    PubMed

    Alexander, William; Miller, George; Alexander, Preeya; Henderson, Michael A; Webb, Angela

    2018-06-12

    Skin cancers are extremely common and the incidence increases with age. Care for patients with multiple or complicated skin cancers often requires multidisciplinary input involving a general practitioner, dermatologist, plastic surgeon and/or radiation oncologist. Timely, efficient care of these patients relies on precise and effective communication between all parties. Until now, descriptions of the location of lesions on the scalp have been inaccurate, which can lead to errors such as the incorrect lesion being excised or biopsied. A novel technique for accurately and efficiently describing the location of lesions on the scalp, using a coordinate system, is described (the 'scalp coordinate system' (SCS)). This method was tested in a pilot study by clinicians typically involved in the care of patients with cutaneous malignancies. A mannequin scalp was used in the study. The SCS significantly improved the accuracy in the ability to both describe and locate lesions on the scalp. This improved accuracy comes at a minor time cost. The direct and indirect costs arising from poor communication between medical subspecialties (particularly relevant in surgical procedures) are immense. An effective tool used by all involved clinicians is long overdue, particularly in patients with scalps with extensive actinic damage, scarring or innocuous biopsy sites. The SCS provides the opportunity to improve outcomes for both the patient and the healthcare system. © 2018 Royal Australasian College of Surgeons.

  3. Precise terrestrial time: A means for improved ballistic missile guidance analysis

    NASA Technical Reports Server (NTRS)

    Ehrsam, E. E.; Cresswell, S. A.; Mckelvey, G. R.; Matthews, F. L.

    1978-01-01

    An approach developed to improve ground instrumentation time-tagging accuracy and adapted to support the Minuteman ICBM program is described. The Timing Insertion Unit (TIU) technique produces a telemetry data time-tagging resolution of one tenth of a microsecond with good relative intersite accuracy after corrections. Position and velocity data (range, azimuth, elevation, and range rate), also used in missile guidance system analysis, can be correlated to within ten microseconds of the telemetry guidance data. This requires precise timing synchronization between the metric and telemetry instrumentation sites. The timing synchronization can be achieved by using the radar automatic phasing system time correlation methods. Other time correlation techniques, such as Television (TV) Line-10 and the Geostationary Operational Environmental Satellite (GOES) terrestrial timing receivers, are also considered.

  4. Homomorphic Filtering for Improving Time Synchronization in Wireless Networks.

    PubMed

    Castillo-Secilla, José María; Palomares, José Manuel; León, Fernando; Olivares, Joaquín

    2017-04-20

    Wireless sensor networks are used to sample the environment in a distributed way. Therefore, it is mandatory for all of the measurements to be tightly synchronized in order to guarantee that every sensor is sampling the environment at the exact same instant of time. The synchronization drift gets bigger in environments suffering from temperature variations. Thus, this work is focused on improving time synchronization under deployments with temperature variations. The working hypothesis demonstrated in this work is that the clock skew of two nodes (the ratio of the real frequencies of the oscillators) is composed of a multiplicative combination of two main components: the clock skew due to the variations between the cut of the crystal of each oscillator and the clock skew due to the different temperatures affecting the nodes. By applying a nonlinear filtering, the homomorphic filtering, both components are separated in an effective way. A correction factor based on temperature, which can be applied to any synchronization protocol, is proposed. For testing it, an improvement of the FTSP synchronization protocol has been developed and physically tested under temperature variation scenarios using TelosB motes flashed with the IEEE 802.15.4 implementation supplied by TinyOS.
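
    The multiplicative skew model described above becomes additive after taking a logarithm, so a low-pass filter in the log domain can separate the slowly varying crystal component from the temperature-driven component. The sketch below uses a simple moving average as a stand-in for whatever filter the protocol actually applies, with invented synthetic data.

```python
import numpy as np

def homomorphic_split(skew, window=201):
    """Sketch of homomorphic filtering of a clock-skew series: measured skew is
    modeled as (crystal component) x (temperature component). The log turns the
    product into a sum; a moving average (a stand-in low-pass filter) estimates
    the slowly varying crystal part, and the remainder is attributed to temperature."""
    log_skew = np.log(skew)
    kernel = np.ones(window) / window
    log_crystal = np.convolve(log_skew, kernel, mode='same')   # slow component
    log_temperature = log_skew - log_crystal                   # fast component
    return np.exp(log_crystal), np.exp(log_temperature)

# Synthetic example: near-constant crystal skew modulated by a temperature cycle
t = np.arange(1000)
skew = 1.00002 * (1 + 3e-6 * np.sin(2 * np.pi * t / 200))
crystal, temperature = homomorphic_split(skew)
print(crystal.mean(), temperature.max())
```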

  5. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance
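
    A rough sketch of the prediction idea is given below: fit and remove a linear trend, locate the dominant periodic terms in the residuals with an FFT, fit those sinusoids by least squares, and extrapolate trend plus periodics. The sliding-window scheme, data lengths, and parameter choices of the paper are not reproduced; all values here are synthetic.

```python
import numpy as np

def predict_clock(t, clock, t_future, n_harmonics=2):
    """Linear trend + FFT-selected periodic terms, extrapolated to t_future.
    Illustrative only; not the paper's exact sliding-window implementation."""
    coef = np.polyfit(t, clock, 1)
    resid = clock - np.polyval(coef, t)
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    spectrum = np.abs(np.fft.rfft(resid))
    top = freqs[np.argsort(spectrum[1:])[-n_harmonics:] + 1]   # strongest non-DC bins
    # Least-squares fit of the selected sinusoids to the residuals
    cols = [np.ones_like(t)]
    for f in top:
        cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    x, *_ = np.linalg.lstsq(np.column_stack(cols), resid, rcond=None)
    cols_f = [np.ones_like(t_future)]
    for f in top:
        cols_f += [np.sin(2 * np.pi * f * t_future), np.cos(2 * np.pi * f * t_future)]
    return np.polyval(coef, t_future) + np.column_stack(cols_f) @ x

# Toy example: linear drift plus a 6-hour periodic term, sampled every 30 s
t = np.arange(0, 24 * 3600, 30.0)
clock = 1e-9 * t + 2e-7 * np.sin(2 * np.pi * t / (6 * 3600))
print(predict_clock(t, clock, t[-1] + np.arange(1, 7) * 600.0))
```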

  6. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750??0.0049 amu and 270.0786??0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098??0.0061 amu and 314.1153??0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  7. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Ferrer, Imma; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750+/-0.0049 amu and 270.0786+/-0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098+/-0.0061 amu and 314.1153+/-0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.

  8. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    PubMed

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit in an academic Comprehensive Cancer Center known as the Comprehensive Treatment Unit (CTU) was completed. Laboratory turnaround time for patients who needed same-day lab and CTU services and wait time for all CTU patients was tracked for 9 weeks. During the pilot, the wait time from arrival to CTU to sitting in treatment area decreased by 17% for all patients treated in the CTU during the pilot. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside of the CTU decreased laboratory turnaround times by 53% for patients requiring same day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.

  9. Improving laboratory results turnaround time by reducing pre analytical phase.

    PubMed

    Khalifa, Mohamed; Khalid, Parwaiz

    2014-01-01

    Laboratory turnaround time is considered one of the most important indicators of work efficiency in hospitals. Physicians always need timely results to make effective clinical decisions, especially in the emergency department, where these results guide whether to admit patients to the hospital, discharge them home or do further investigations. A retrospective data analysis study was performed to identify the effects of training ER and lab staff on new routines for sample collection and transportation on the pre-analytical phase of turnaround time. Renal profile tests requested by the ER and performed in 2013 were selected as a sample, and data about 7,519 tests were retrieved and analyzed to compare turnaround time intervals before and after implementing the new routines. Results showed significant time reduction in the "Request to Sample Collection" and "Collection to In Lab Delivery" intervals, with less significant improvement in the analytical phase of the turnaround time.

  10. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
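
    One common post-processing calibrator of the kind referred to above is isotonic regression, which learns a monotone mapping from a classifier's raw scores to calibrated probabilities on held-out data. The sketch below shows that approach with scikit-learn on synthetic scores; it is one possible calibrator, not necessarily the method studied in this work.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Synthetic stand-ins: raw scores from an already-trained classifier whose scores
# overstate the true probability, plus the corresponding binary outcomes.
rng = np.random.default_rng(0)
raw_scores = rng.uniform(0, 1, 2000)
true_prob = raw_scores ** 2
labels = rng.binomial(1, true_prob)

iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(raw_scores, labels)                        # learn monotone score -> probability map
print(iso.predict(np.array([0.2, 0.5, 0.8])))      # calibrated probabilities
```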

  11. Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

    A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. For linear advection in one dimension, these schemes are shown to be monotonicity preserving and uniformly high-order accurate. Numerical experiments for advection as well as the Euler equations also confirm their high accuracy, good shock resolution, and computational efficiency.
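
    As a greatly simplified illustration of the limiting step, the sketch below computes a fifth-order upwind interface value and clips it into a simple monotone interval only when a cheap test shows limiting is needed. The actual monotonicity-preserving limiter uses more elaborate accuracy-preserving bounds near extrema; the stencil values below are invented.

```python
import numpy as np

def limited_interface_value(v_mm, v_m, v0, v_p, v_pp):
    """Fifth-order upwind interface value from a five-cell stencil, clipped into a
    simple monotone interval built from the neighboring cell averages only when it
    falls outside that interval. A simplification of the MP limiter, for illustration."""
    v_unlimited = (2*v_mm - 13*v_m + 47*v0 + 27*v_p - 3*v_pp) / 60.0
    v_lo, v_hi = min(v0, v_p), max(v0, v_p)
    if v_lo <= v_unlimited <= v_hi:        # cheap test: no limiting needed
        return v_unlimited
    return float(np.clip(v_unlimited, v_lo, v_hi))

print(limited_interface_value(1.0, 1.2, 1.5, 1.9, 2.4))  # smooth data: unlimited value kept
print(limited_interface_value(0.0, 0.0, 1.0, 1.0, 1.0))  # flat data past a jump: overshoot clipped
```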

  12. Timing performance comparison of digital methods in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Aykac, Mehmet; Hong, Inki; Cho, Sanghee

    2010-11-01

    Accurate timing information is essential in positron emission tomography (PET). Recent improvements in high-speed electronics have made digital methods more attractive as alternative ways to create a time mark for an event. Two new digital methods (the mean PMT pulse model, MPPM, and the median filtered zero crossing method, MFZCM) were introduced in this work and compared to traditional methods such as digital leading edge (LE) and digital constant fraction discrimination (CFD). In addition, the performance of all four digital methods was compared to analog-based LE and CFD. The time resolution values for MPPM and MFZCM were below 300 ps at sampling rates of 1.6 GS/s and above, similar to the analog-based coincidence timing results. In addition, the two digital methods were insensitive to changes in the threshold setting, which might give some improvement in system dead time.
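
    For orientation, the sketch below shows the simplest of the digital methods mentioned above, leading-edge discrimination: find the first sample that crosses a threshold and interpolate between samples for a sub-sample time mark. The pulse shape, sampling rate, and threshold are invented, and the MPPM and MFZCM methods of the paper are not reproduced.

```python
import numpy as np

def leading_edge_time(samples, dt_ns, threshold):
    """Digital leading-edge discrimination on a sampled PMT pulse: locate the first
    threshold crossing and linearly interpolate between the two straddling samples
    to obtain a sub-sample time mark (in ns)."""
    idx = int(np.argmax(samples >= threshold))        # first crossing index
    if idx == 0 or samples[idx] < threshold:
        return None                                   # never crossed (or crossed at sample 0)
    frac = (threshold - samples[idx - 1]) / (samples[idx] - samples[idx - 1])
    return (idx - 1 + frac) * dt_ns

# Synthetic pulse sampled at 1.6 GS/s (dt = 0.625 ns)
t = np.arange(64) * 0.625
pulse = np.exp(-((t - 15.0) / 4.0) ** 2)              # stand-in for a scintillation pulse
print(leading_edge_time(pulse, 0.625, threshold=0.2))
```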

  13. Measure accurately, Act rapidly, and Partner with patients: An intuitive and practical three-part framework to guide efforts to improve hypertension control.

    PubMed

    Boonyasai, Romsai T; Rakotz, Michael K; Lubomski, Lisa H; Daniel, Donna M; Marsteller, Jill A; Taylor, Kathryn S; Cooper, Lisa A; Hasan, Omar; Wynia, Matthew K

    2017-07-01

    Hypertension is the leading cause of cardiovascular disease in the United States and worldwide. It also provides a useful model for team-based chronic disease management. This article describes the M.A.P. checklists: a framework to help practice teams summarize best practices for providing coordinated, evidence-based care to patients with hypertension. Consisting of three domains-Measure Accurately; Act Rapidly; and Partner With Patients, Families, and Communities-the checklists were developed by a team of clinicians, hypertension experts, and quality improvement experts through a multistep process that combined literature review, iterative feedback from a panel of internationally recognized experts, and pilot testing among a convenience sample of primary care practices in two states. In contrast to many guidelines, the M.A.P. checklists specifically target practice teams, instead of individual clinicians, and are designed to be brief, cognitively easy to consume and recall, and accessible to healthcare workers from a range of professional backgrounds. ©2017 Wiley Periodicals, Inc.

  14. Using lean principles to improve outpatient adult infusion clinic chemotherapy preparation turnaround times.

    PubMed

    Lamm, Matthew H; Eckel, Stephen; Daniels, Rowell; Amerine, Lindsey B

    2015-07-01

    The workflow and chemotherapy preparation turnaround times at an adult infusion clinic were evaluated to identify opportunities to optimize workflow and efficiency. A three-phase study using Lean Six Sigma methodology was conducted. In phase 1, chemotherapy turnaround times in the adult infusion clinic were examined one year after the interim goal of a 45-minute turnaround time was established. Phase 2 implemented various experiments including a five-day Kaizen event, using lean principles in an effort to decrease chemotherapy preparation turnaround times in a controlled setting. Phase 3 included the implementation of process-improvement strategies identified during the Kaizen event, coupled with a final refinement of operational processes. In phase 1, the mean turnaround time for all chemotherapy preparations decreased from 60 to 44 minutes, and a mean of 52 orders for adult outpatient chemotherapy infusions was received each day. After installing new processes, the mean turnaround time had improved to 37 minutes for each chemotherapy preparation in phase 2. In phase 3, the mean turnaround time decreased from 37 to 26 minutes. The overall mean turnaround time was reduced by 26 minutes, representing a 57% decrease in turnaround times in 19 months through the elimination of waste and the implementation of lean principles. This reduction was accomplished through increased efficiencies in the workplace, with no addition of human resources. Implementation of Lean Six Sigma principles improved workflow and efficiency at an adult infusion clinic and reduced the overall chemotherapy turnaround times from 60 to 26 minutes. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  15. Real-time structured light intraoral 3D measurement pipeline

    NASA Astrophysics Data System (ADS)

    Gheorghe, Radu; Tchouprakov, Andrei; Sokolov, Roman

    2013-02-01

    Computer aided design and manufacturing (CAD/CAM) is increasingly becoming a standard feature and service provided to patients in dentist offices and denture manufacturing laboratories. Although the quality of the tools and data has slowly improved in recent years, due to various surface measurement challenges, practical, accurate, in vivo, real-time, high-quality 3D data acquisition and processing still needs improving. Advances in GPU computational power have allowed for achieving near real-time 3D intraoral in vivo scanning of a patient's teeth. We explore in this paper, from a real-time perspective, a hardware-software-GPU solution that addresses all the requirements mentioned before. Moreover, we exemplify and quantify the hard and soft deadlines required by such a system and illustrate how they are supported in our implementation.

  16. Continuous quality improvement intervention for adolescent and young adult HIV testing services in Kenya improves HIV knowledge

    PubMed Central

    Wagner, Anjuli D.; Mugo, Cyrus; Bluemer-Miroite, Shay; Mutiti, Peter M.; Wamalwa, Dalton C.; Bukusi, David; Neary, Jillian; Njuguna, Irene N.; O’Malley, Gabrielle; John-Stewart, Grace C.; Slyker, Jennifer A.; Kohler, Pamela K.

    2017-01-01

    Objectives: To determine whether continuous quality improvement (CQI) improves quality of HIV testing services for adolescents and young adults (AYA). Design: CQI was introduced at two HIV testing settings: Youth Centre and Voluntary Counseling and Testing (VCT) Center, at a national referral hospital in Nairobi, Kenya. Methods: Primary outcomes were AYA satisfaction with HIV testing services, intent to return, and accurate HIV prevention and transmission knowledge. Healthcare worker (HCW) satisfaction assessed staff morale. T tests and interrupted time series analysis using Prais–Winsten regression and generalized estimating equations accounting for temporal trends and autocorrelation were conducted. Results: There were 172 AYA (Youth Centre = 109, VCT = 63) during 6 baseline weeks and 702 (Youth Centre = 454, VCT = 248) during 24 intervention weeks. CQI was associated with an immediate increase in the proportion of AYA with accurate knowledge of HIV transmission at Youth Centre: 18 vs. 63% [adjusted risk difference (aRD) 0.42,95% confidence interval (CI) 0.21 to 0.63], and a trend at VCT: 38 vs. 72% (aRD 0.30, 95% CI −0.04 to 0.63). CQI was associated with an increase in the proportion of AYA with accurate HIV prevention knowledge in VCT: 46 vs. 61% (aRD 0.39, 95% CI 0.02–0.76), but not Youth Centre (P = 0.759). In VCT, CQI showed a trend towards increased intent to retest (4.0 vs. 4.3; aRD 0.78, 95% CI −0.11 to 1.67), but not at Youth Centre (P = 0.19). CQI was not associated with changes in AYA satisfaction, which was high during baseline and intervention at both clinics (P = 0.384, P = 0.755). HCW satisfaction remained high during intervention and baseline (P = 0.746). Conclusion: CQI improved AYA knowledge and did not negatively impact HCW satisfaction. Quality improvement interventions may be useful to improve adolescent-friendly service delivery. PMID:28665882

  17. Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry Vladimirovich; Yee, Helen M C.

    2013-01-01

    The performance of the filter scheme with improved dissipation control has been demonstrated for different flow types. The scheme with a locally determined dissipation-control parameter is shown to obtain more accurate results than its counterparts with a global or constant parameter. At the same time, no additional tuning is needed to achieve high accuracy of the method when the local-parameter technique is used. However, further improvement of the method might be needed for even more complex and/or extreme flows.

  18. Filtering Raw Terrestrial Laser Scanning Data for Efficient and Accurate Use in Geomorphologic Modeling

    NASA Astrophysics Data System (ADS)

    Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.

    2011-12-01

    Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
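
    As a rough illustration of local aggregation of a point cloud, the sketch below bins points into cubic cells sized to the expected measurement error and replaces each occupied cell with the centroid of its points. The proposed method's 3-D search based on the geometry of expected random errors, and its rejection of erroneous measurements, are more elaborate than this uniform-grid version; cell size and data are placeholders.

```python
import numpy as np

def aggregate_points(points, cell_size):
    """Bin an (N, 3) point cloud into cubic cells of edge `cell_size` and return
    the centroid of each occupied cell. A simplified stand-in for error-geometry
    based local aggregation."""
    keys = np.floor(points / cell_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n_cells)
    sums = np.zeros((n_cells, 3))
    for dim in range(3):
        sums[:, dim] = np.bincount(inverse, weights=points[:, dim], minlength=n_cells)
    return sums / counts[:, None]

# Toy example: 100k random points thinned on a 5 cm grid
pts = np.random.default_rng(0).uniform(0, 10, size=(100_000, 3))
thinned = aggregate_points(pts, cell_size=0.05)
print(len(pts), "->", len(thinned))
```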

  19. Improved artificial bee colony algorithm for vehicle routing problem with time windows

    PubMed Central

    Yan, Qianqian; Zhang, Mengjie; Yang, Yunong

    2017-01-01

    This paper investigates a well-known complex combinatorial problem known as the vehicle routing problem with time windows (VRPTW). Unlike the standard vehicle routing problem, each customer in the VRPTW is served within a given time constraint. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of IABC algorithm in solving the VRPTW. PMID:28961252
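    As a rough illustration of the approach described above, the following skeleton shows an artificial-bee-colony loop over customer permutations with an order-crossover local step. It is not the paper's IABC: the onlooker-bee probabilistic selection is omitted, and `route_cost` / `toy_cost` are hypothetical stand-ins for a decoder that builds feasible routes under capacity and time-window constraints.

    ```python
    import random

    def abc_vrptw(route_cost, n_customers, n_food_sources=20, limit=30, iters=500):
        """Schematic artificial-bee-colony search over customer permutations."""
        def neighbour(perm):                       # employed-bee move: swap two customers
            a, b = random.sample(range(len(perm)), 2)
            q = perm[:]
            q[a], q[b] = q[b], q[a]
            return q

        def order_crossover(p1, p2):               # crossover-based local optimisation step
            i, j = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[i:j] = p1[i:j]
            rest = [c for c in p2 if c not in child[i:j]]
            k = 0
            for pos in list(range(0, i)) + list(range(j, len(p1))):
                child[pos] = rest[k]; k += 1
            return child

        sources = [random.sample(range(n_customers), n_customers) for _ in range(n_food_sources)]
        costs = [route_cost(s) for s in sources]
        trials = [0] * n_food_sources
        best = min(zip(costs, sources))

        for _ in range(iters):
            for i in range(n_food_sources):        # employed bees + crossover with the best source
                cand = order_crossover(neighbour(sources[i]), best[1])
                c = route_cost(cand)
                if c < costs[i]:
                    sources[i], costs[i], trials[i] = cand, c, 0
                else:
                    trials[i] += 1
            for i in range(n_food_sources):        # scout bees: abandon exhausted sources
                if trials[i] > limit:
                    sources[i] = random.sample(range(n_customers), n_customers)
                    costs[i], trials[i] = route_cost(sources[i]), 0
            best = min(best, min(zip(costs, sources)))
        return best

    # Toy usage: a single vehicle, Euclidean distances, and a crude lateness penalty
    rng = random.Random(0)
    xy = [(rng.random(), rng.random()) for _ in range(20)]
    due = [rng.uniform(0.5, 3.0) for _ in range(20)]

    def toy_cost(perm):
        t, cost, prev = 0.0, 0.0, (0.0, 0.0)          # vehicle leaves a depot at (0, 0)
        for c in perm:
            d = ((xy[c][0] - prev[0]) ** 2 + (xy[c][1] - prev[1]) ** 2) ** 0.5
            t += d
            cost += d + 10.0 * max(0.0, t - due[c])   # penalise arriving after the due time
            prev = xy[c]
        return cost

    print(abc_vrptw(toy_cost, n_customers=20, iters=200)[0])
    ```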

  20. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  1. 3D surface voxel tracing corrector for accurate bone segmentation.

    PubMed

    Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi

    2018-06-18

    For extremely close bones, their boundaries are weak and diffused due to strong interaction between adjacent surfaces. These factors prevent the accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on a consideration of the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we consider a surface tracing corrector combined with the Gaussian standard deviation σ to improve the estimation of the normal direction. Secondly, we determine an optimal value of σ for each surface point during this normal direction correction. Thirdly, we construct the 1D signal and refine the rough boundary along the corrected normal direction. The value of σ is used in the first directional derivative of the Gaussian to refine the location of the edge point along the accurate normal direction. Because the normal direction is corrected and the value of σ is optimized, our method is robust to noisy images and narrow joint spaces caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods: 40 hip joints were used for training in the two state-of-the-art methods, and 10 hip joints were used for testing and performing comparisons. DOCs of [Formula: see text]%, [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head, and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieved a superior accuracy over the two state-of-the-art methods.
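    The third stage above refines the boundary along the corrected normal using the first directional derivative of a Gaussian. The snippet below sketches only that 1-D edge-localisation step with a fixed σ, using SciPy's `gaussian_filter1d`; the per-point optimisation of σ and the normal correction from the paper are not reproduced, and the profile and σ value are invented for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def locate_edge(profile, sigma):
        """Locate a boundary along a 1-D intensity profile sampled along a surface normal.

        The edge is taken at the extremum of the first derivative of a Gaussian
        applied to the profile; sigma sets the scale of the derivative.
        """
        d = gaussian_filter1d(profile.astype(float), sigma=sigma, order=1)  # 1st derivative of Gaussian
        return int(np.argmax(np.abs(d)))                                    # index of strongest transition

    # Toy profile: soft tissue (low) to bone (high) with noise; true transition near index 120
    x = np.linspace(0, 1, 200)
    profile = 1.0 / (1.0 + np.exp(-(x - 0.6) * 40)) + 0.05 * np.random.randn(200)
    print("edge index:", locate_edge(profile, sigma=3.0))
    ```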

  2. Hospitalist time usage and cyclicality: opportunities to improve efficiency.

    PubMed

    Kim, Christopher S; Lovejoy, William; Paulsen, Michael; Chang, Robert; Flanders, Scott A

    2010-01-01

    Academic medical centers (AMCs) have a constrained resident work force. Many AMCs have increased the use of nonresident service hospitalists to manage continued growth in clinical volume. To optimize their time in the hospital, it is important to understand hospitalists' work flow. We performed a time-motion study of hospitalists carrying the admission pager throughout the 3 types of shifts we have at our hospital (day shift, swing shift, and night shift). Tertiary academic medical center in the Midwest. Hospitalists spend about 15% of their time on direct patient care, and two-thirds of their time on indirect patient care. Of the indirect activities, communication and documentation dominate. Travel demands make up over 7% of a hospitalist's time. There are spikes in indirect patient care, followed closely by spikes in direct patient care, at shift changes. At our AMC, indirect patient care activities accounted for the majority of the admitting hospitalists' time spent in the hospital, with documentation and communication dominating this time. Travel takes a significant fraction of hospitalists' time. There is also a cyclical nature to activities performed throughout the day, which can cause patient delays and impose variability on support services. There is a need for both service-specific and systemic improvements for AMCs to efficiently manage further growth in their inpatient volume. (c) 2010 Society of Hospital Medicine.

  3. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  4. Motion-adapted catheter navigation with real-time instantiation and improved visualisation

    PubMed Central

    Kwok, Ka-Wai; Wang, Lichao; Riga, Celia; Bicknell, Colin; Cheshire, Nicholas; Yang, Guang-Zhong

    2014-01-01

    The improvements to catheter manipulation by the use of robot-assisted catheter navigation for endovascular procedures include increased precision, stability of motion and operator comfort. However, navigation through the vasculature under fluoroscopic guidance is still challenging, mostly due to physiological motion and when tortuous vessels are involved. In this paper, we propose a motion-adaptive catheter navigation scheme based on shape modelling to compensate for these dynamic effects, permitting predictive and dynamic navigations. This allows for timed manipulations synchronised with the vascular motion. The technical contribution of the paper includes the following two aspects. Firstly, a dynamic shape modelling and real-time instantiation scheme based on sparse data obtained intra-operatively is proposed for improved visualisation of the 3D vasculature during endovascular intervention. Secondly, a reconstructed frontal view from the catheter tip using the derived dynamic model is used as an interventional aid to user guidance. To demonstrate the practical value of the proposed framework, a simulated aortic branch cannulation procedure is used with detailed user validation to demonstrate the improvement in navigation quality and efficiency. PMID:24744817

  5. Integrating real-time subsurface hydrologic monitoring with empirical rainfall thresholds to improve landslide early warning

    USGS Publications Warehouse

    Mirus, Benjamin B.; Becker, Rachel E.; Baum, Rex L.; Smith, Joel B.

    2018-01-01

    Early warning for rainfall-induced shallow landsliding can help reduce fatalities and economic losses. Although these commonly occurring landslides are typically triggered by subsurface hydrological processes, most early warning criteria rely exclusively on empirical rainfall thresholds and other indirect proxies for subsurface wetness. We explore the utility of explicitly accounting for antecedent wetness by integrating real-time subsurface hydrologic measurements into landslide early warning criteria. Our efforts build on previous progress with rainfall thresholds, monitoring, and numerical modeling along the landslide-prone railway corridor between Everett and Seattle, Washington, USA. We propose a modification to the previously established recent versus antecedent (RA) cumulative rainfall thresholds by replacing the antecedent 15-day rainfall component with an average saturation observed over the same timeframe. We calculate this antecedent saturation with real-time telemetered measurements from five volumetric water content probes installed in the shallow subsurface within a steep vegetated hillslope. Our hybrid rainfall versus saturation (RS) threshold still relies on the same recent 3-day rainfall component as the existing RA thresholds, to facilitate ready integration with quantitative precipitation forecasts. During the 2015–2017 monitoring period, this RS hybrid approach yielded more true positives and fewer false positives and false negatives than the previous RA rainfall-only thresholds. We also demonstrate that alternative hybrid threshold formats could be even more accurate, which suggests that further development and testing during future landslide seasons is needed. The positive results confirm that accounting for antecedent wetness conditions with direct subsurface hydrologic measurements can improve thresholds for alert systems and early warning of rainfall-induced shallow landsliding.
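    To make the hybrid threshold concrete, the sketch below checks recent 3-day rainfall against a line in (antecedent saturation, rainfall) space. The linear form, the slope and intercept values, and the function name are placeholders for illustration only; operational coefficients would be fitted to local landslide inventories, as in the study.

    ```python
    def rs_threshold_exceeded(rain_3day_mm, sat_15day_avg, slope=-60.0, intercept=70.0):
        """Illustrative recent-rainfall vs. antecedent-saturation (RS) threshold check.

        rain_3day_mm     : cumulative rainfall over the most recent 3 days (mm)
        sat_15day_avg    : average volumetric saturation (0-1) over the preceding 15 days
        slope, intercept : placeholder coefficients of a linear threshold; wetter ground
                           requires less recent rain to trigger an alert.
        """
        threshold_rain = slope * sat_15day_avg + intercept
        return rain_3day_mm >= threshold_rain

    # Example: moderately wet antecedent conditions
    print(rs_threshold_exceeded(rain_3day_mm=45.0, sat_15day_avg=0.55))  # True (45 >= 37)
    ```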

  6. Real-time improvement of continuous glucose monitoring accuracy: the smart sensor concept.

    PubMed

    Facchinetti, Andrea; Sparacino, Giovanni; Guerra, Stefania; Luijf, Yoeri M; DeVries, J Hans; Mader, Julia K; Ellmerer, Martin; Benesch, Carsten; Heinemann, Lutz; Bruttomesso, Daniela; Avogaro, Angelo; Cobelli, Claudio

    2013-04-01

    Reliability of continuous glucose monitoring (CGM) sensors is key in several applications. In this work we demonstrate that real-time algorithms can render CGM sensors smarter by reducing their uncertainty and inaccuracy and improving their ability to alert for hypo- and hyperglycemic events. The smart CGM (sCGM) sensor concept consists of a commercial CGM sensor whose output enters three software modules, able to work in real time, for denoising, enhancement, and prediction. These three software modules were recently presented in the CGM literature, and here we apply them to the Dexcom SEVEN Plus continuous glucose monitor. We assessed the performance of the sCGM on data collected in two trials, each containing 12 patients with type 1 diabetes. The denoising module improves the smoothness of the CGM time series by an average of ∼57%, the enhancement module reduces the mean absolute relative difference from 15.1 to 10.3%, increases by 12.6% the pairs of values falling in the A-zone of the Clarke error grid, and finally, the prediction module forecasts hypo- and hyperglycemic events an average of 14 min ahead of time. We have introduced and implemented the sCGM sensor concept. Analysis of data from 24 patients demonstrates that incorporation of suitable real-time signal processing algorithms for denoising, enhancement, and prediction can significantly improve the performance of CGM applications. This can be of great clinical impact for hypo- and hyperglycemic alert generation as well in artificial pancreas devices.

  7. Using Copula Distributions to Support More Accurate Imaging-Based Diagnostic Classifiers for Neuropsychiatric Disorders

    PubMed Central

    Bansal, Ravi; Hao, Xuejun; Liu, Jun; Peterson, Bradley S.

    2014-01-01

    Many investigators have tried to apply machine learning techniques to magnetic resonance images (MRIs) of the brain in order to diagnose neuropsychiatric disorders. Usually the number of brain imaging measures (such as measures of cortical thickness and measures of local surface morphology) derived from the MRIs (i.e., their dimensionality) has been large (e.g. >10) relative to the number of participants who provide the MRI data (<100). Sparse data in a high dimensional space increases the variability of the classification rules that machine learning algorithms generate, thereby limiting the validity, reproducibility, and generalizability of those classifiers. The accuracy and stability of the classifiers can improve significantly if the multivariate distributions of the imaging measures can be estimated accurately. To accurately estimate the multivariate distributions using sparse data, we propose to estimate first the univariate distributions of imaging data and then combine them using a Copula to generate more accurate estimates of their multivariate distributions. We then sample the estimated Copula distributions to generate dense sets of imaging measures and use those measures to train classifiers. We hypothesize that the dense sets of brain imaging measures will generate classifiers that are stable to variations in brain imaging measures, thereby improving the reproducibility, validity, and generalizability of diagnostic classification algorithms in imaging datasets from clinical populations. In our experiments, we used both computer-generated and real-world brain imaging datasets to assess the accuracy of multivariate Copula distributions in estimating the corresponding multivariate distributions of real-world imaging data. Our experiments showed that diagnostic classifiers generated using imaging measures sampled from the Copula were significantly more accurate and more reproducible than were the classifiers generated using either the real-world imaging
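    The recipe above (fit marginals, couple them with a copula, sample densely, then train) can be illustrated with a Gaussian copula, as in the minimal sketch below. The Gaussian family, the empirical-quantile marginals, and the function name are assumptions made for the example; the study does not necessarily use these exact choices.

    ```python
    import numpy as np
    from scipy import stats

    def sample_gaussian_copula(data, n_samples, rng=None):
        """Densify a small measurement matrix with a Gaussian copula (illustrative).

        data : (n_subjects, n_measures) array of imaging measures.
        Marginals are modelled empirically (ranks -> normal scores); dependence is
        captured by the correlation of the normal scores; new samples are drawn from
        the fitted copula and mapped back through the empirical quantiles.
        """
        rng = np.random.default_rng(rng)
        n, d = data.shape
        ranks = stats.rankdata(data, axis=0)               # 1. empirical CDF per measure
        z = stats.norm.ppf(ranks / (n + 1.0))              #    -> normal scores
        corr = np.corrcoef(z, rowvar=False)                # 2. copula correlation matrix
        z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
        u_new = stats.norm.cdf(z_new)                      # 3. back to uniform scores
        synthetic = np.empty((n_samples, d))
        for j in range(d):
            synthetic[:, j] = np.quantile(data[:, j], u_new[:, j])
        return synthetic

    # Example: 60 subjects, 12 imaging measures -> 5,000 synthetic training samples
    toy = np.random.default_rng(0).normal(size=(60, 12)) ** 2   # skewed marginals
    dense = sample_gaussian_copula(toy, 5000, rng=1)
    print(dense.shape)
    ```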

  8. Accurate Mobile Urban Mapping via Digital Map-Based SLAM †

    PubMed Central

    Roh, Hyunchul; Jeong, Jinyong; Cho, Younggun; Kim, Ayoung

    2016-01-01

    This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Throughout this work, our main objective is generating a 3D and lane map aiming for sub-meter accuracy. In conventional mapping approaches, achieving extremely high accuracy was performed by either (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system in a stationary platform. Mobile scanning systems recently have gathered popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm generates an efficient graph SLAM and achieves a framework running in real-time and targeting sub-meter accuracy with a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for the Inverse Perspective Mapping (IPM). Using motion estimation derived from SLAM, the experimental results show that the proposed approaches provide stable bird’s-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS. PMID:27548175

  9. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, seeing that a low-resolution facial image size (64 × 64 pixels) has been successfully implemented. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.

  10. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface substantially detracts from the accuracy and thus benefit of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  11. Time-accurate unsteady flow simulations supporting the SRM T+68-second pressure spike anomaly investigation (STS-54B)

    NASA Astrophysics Data System (ADS)

    Dougherty, N. S.; Burnette, D. W.; Holt, J. B.; Matienzo, Jose

    1993-07-01

    Time-accurate unsteady flow simulations are being performed supporting the SRM T+68-sec pressure 'spike' anomaly investigation. The anomaly occurred in the RH SRM during the STS-54 flight (STS-54B) but not in the LH SRM (STS-54A), causing a momentary thrust mismatch approaching the allowable limit at that time into the flight. Full-motor internal flow simulations using the USA-2D axisymmetric code are in progress for the nominal propellant burn-back geometry and flow conditions at T+68 sec: Pc = 630 psi, gamma = 1.1381, Tc = 6200 R, perfect gas without aluminum particulate. In a cooperative effort with other investigation team members, CFD-derived pressure loading on the NBR and castable inhibitors was used iteratively to obtain the nominal deformed geometry of each inhibitor, and the deformed (bent back) inhibitor geometry was entered into this model. Deformed geometry was computed using structural finite-element models. A solution for the unsteady flow has been obtained for the nominal flow conditions (existing prior to the occurrence of the anomaly) showing sustained standing pressure oscillations at nominally 14.5 Hz in the motor 1L acoustic mode that flight and static test data confirm to be normally present at this time. Average mass flow discharged from the nozzle was confirmed to be the nominal expected value (9550 lbm/sec). The local inlet boundary condition is being perturbed at the location of the presumed reconstructed anomaly as identified by interior ballistics performance specialist team members. A time variation in local mass flow is used to simulate a sudden increase in burning area due to localized propellant grain cracks. The solution will proceed to develop a pressure rise (proportional to total mass flow rate change squared). The volume-filling time constant (equivalent to 0.5 Hz) comes into play in shaping the rise rate of the developing pressure 'spike' as it propagates at the speed of sound in both directions to the motor head end and nozzle.

  12. Improving Emergency Department Door to Doctor Time and Process Reliability

    PubMed Central

    El Sayed, Mazen J.; El-Eid, Ghada R.; Saliba, Miriam; Jabbour, Rima; Hitti, Eveline A.

    2015-01-01

    Abstract The aim of this study is to determine the effectiveness of using lean management methods on improving emergency department door to doctor times at a tertiary care hospital. We performed a before and after study at an academic urban emergency department with 49,000 annual visits after implementing a series of lean driven interventions over a 20 month period. The primary outcome was mean door to doctor time and the secondary outcome was length of stay of both admitted and discharged patients. A convenience sample from the preintervention phase (February 2012) was compared to another from the postintervention phase (mid-October to mid-November 2013). Individual control charts were used to assess process stability. Postintervention there was a statistically significant decrease in the mean door to doctor time measure (40.0 minutes ± 53.44 vs 25.3 minutes ± 15.93 P < 0.001). The postintervention process was more statistically in control with a drop in the upper control limits from 148.8 to 72.9 minutes. Length of stay of both admitted and discharged patients dropped from 2.6 to 2.0 hours and 9.0 to 5.5 hours, respectively. All other variables including emergency department visit daily volumes, hospital occupancy, and left without being seen rates were comparable. Using lean change management techniques can be effective in reducing door to doctor time in the Emergency Department and improving process reliability. PMID:26496278

  13. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  14. Epoch length to accurately estimate the amplitude of interference EMG is likely the result of unavoidable amplitude cancellation

    PubMed Central

    Keenan, Kevin G.; Valero-Cuevas, Francisco J.

    2008-01-01

    Researchers and clinicians routinely rely on interference electromyograms (EMGs) to estimate muscle forces and command signals in the neuromuscular system (e.g., amplitude, timing, and frequency content). The amplitude cancellation intrinsic to interference EMG, however, raises important questions about how to optimize these estimates. For example, what should the length of the epoch (time window) be to average an EMG signal to reliably estimate muscle forces and command signals? Shorter epochs are most practical, and significant reductions in epoch have been reported with high-pass filtering and whitening. Given that this processing attenuates power at frequencies of interest (< 250 Hz), however, it is unclear how it improves the extraction of physiologically-relevant information. We examined the influence of amplitude cancellation and high-pass filtering on the epoch necessary to accurately estimate the “true” average EMG amplitude calculated from a 28 s EMG trace (EMGref) during simulated constant isometric conditions. Monte Carlo iterations of a motor-unit model simulating 28 s of surface EMG produced 245 simulations under 2 conditions: with and without amplitude cancellation. For each simulation, we calculated the epoch necessary to generate average full-wave rectified EMG amplitudes that settled within 5% of EMGref. For the no-cancellation EMG, the necessary epochs were short (e.g., < 100 ms). For the more realistic interference EMG (i.e., cancellation condition), epochs shortened dramatically after using high-pass filter cutoffs above 250 Hz, producing epochs short enough to be practical (i.e., < 500 ms). We conclude that the need to use long epochs to accurately estimate EMG amplitude is likely the result of unavoidable amplitude cancellation, which helps to clarify why high-pass filtering (> 250 Hz) improves EMG estimates. PMID:19081815
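    The central quantity in the study above is the epoch length after which the averaged rectified EMG settles within 5% of the long-record reference. The sketch below computes that quantity for a single trace under a simplified reading of the criterion (the running mean must stay inside the tolerance band for the remainder of the record); the sampling rate, tolerance handling, and toy signal are assumptions, not the authors' Monte Carlo procedure.

    ```python
    import numpy as np

    def settling_epoch(rectified_emg, fs, emg_ref, tol=0.05):
        """Shortest averaging epoch whose running mean stays within tol of emg_ref.

        rectified_emg : full-wave rectified EMG samples
        fs            : sampling rate (Hz)
        emg_ref       : reference amplitude (e.g., the 28-s average)
        Returns the epoch length in milliseconds, or None if the mean never settles.
        """
        cummean = np.cumsum(rectified_emg) / np.arange(1, len(rectified_emg) + 1)
        within = np.abs(cummean - emg_ref) <= tol * emg_ref
        outside = np.flatnonzero(~within)
        first_settled = 0 if outside.size == 0 else outside[-1] + 1
        if first_settled >= len(rectified_emg):
            return None
        return 1000.0 * (first_settled + 1) / fs

    # Toy example with white-noise "EMG" sampled at 2048 Hz
    rng = np.random.default_rng(0)
    emg = np.abs(rng.normal(size=2 * 2048))
    print(settling_epoch(emg, fs=2048, emg_ref=emg.mean()), "ms")
    ```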

  15. Lean-driven improvements slash wait times, drive up patient satisfaction scores.

    PubMed

    2012-07-01

    Administrators at LifePoint Hospitals, based in Brentwood, TN, used lean manufacturing techniques to slash wait times by as much as 30 minutes and achieve double-digit increases in patient satisfaction scores in the EDs at three hospitals. In each case, front-line workers took the lead on identifying opportunities for improvement and redesigning the patient-flow process. As a result of the new efficiencies, patient volume is up by about 25% at all three hospitals. At each hospital, the improvement process began with Kaizen, a lean process that involves bringing personnel together to flow-chart the current system, identify problem areas, and redesign the process. Improvement teams found big opportunities for improvement at the front end of the flow process. Key to the approach was having a plan up front to deal with non-compliance. To sustain improvements, administrators gather and disseminate key metrics on a daily basis.

  16. Improvements in Interval Time Tracking and Effects on Reading Achievement

    ERIC Educational Resources Information Center

    Taub, Gordon E.; McGrew, Kevin S.; Keith, Timothy Z.

    2007-01-01

    This study examined the effect of improvements in timing/rhythmicity on students' reading achievement. 86 participants completed pre- and post-test measures of reading achievement (i.e., Woodcock-Johnson III, Comprehensive Test of Phonological Processing, Test of Word Reading Efficiency, and Test of Silent Word Reading Fluency). Students in the…

  17. Measuring Quality Improvement in Acute Ischemic Stroke Care: Interrupted Time Series Analysis of Door-to-Needle Time.

    PubMed

    van Dishoeck, Anne Margreet; Dippel, Diederik W J; Dirks, Maaike; Looman, Caspar W N; Mackenbach, Johan P; Steyerberg, Ewout W

    2014-01-01

    In patients with acute ischemic stroke, early treatment with recombinant tissue plasminogen activator (rtPA) improves functional outcome by effectively reducing disability and dependency. Timely thrombolysis, within 1 h, is a vital aspect of acute stroke treatment, and is reflected in the widely used performance indicator 'door-to-needle time' (DNT). DNT measures the time from the moment the patient enters the emergency department until he/she receives intravenous rtPA. The purpose of the study was to measure quality improvement from the first implementation of thrombolysis in stroke patients in a university hospital in the Netherlands. We further aimed to identify specific interventions that affect DNT. We included all patients with acute ischemic stroke consecutively admitted to a large university hospital in the Netherlands between January 2006 and December 2012, and focused on those treated with thrombolytic therapy on admission. Data were collected routinely for research purposes and internal quality measurement (the Erasmus Stroke Study). We used a retrospective interrupted time series design to study the trend in DNT, analyzed by means of segmented regression. Between January 2006 and December 2012, 1,703 patients with ischemic stroke were admitted and 262 (17%) were treated with rtPA. Patients treated with thrombolysis were on average 63 years old at the time of the stroke and 52% were male. Mean age (p = 0.58) and sex distribution (p = 0.98) did not change over the years. The proportion treated with thrombolysis increased from 5% in 2006 to 22% in 2012. In 2006, none of the patients were treated within 1 h. In 2012, this had increased to 81%. In a logistic regression analysis, this trend was significant (OR 1.6 per year, CI 1.4-1.8). The median DNT was reduced from 75 min in 2006 to 45 min in 2012 (p < 0.001 in a linear regression model). In this period, a 12% annual decrease in DNT was achieved (CI from 16 to 8%). We could not find a significant

  18. An accurate laser radiometer for determining visible exposure times.

    PubMed

    Royston, D D

    1985-01-01

    A laser light radiometer has been developed for the Electro-Optics Branch of the Center for Devices and Radiological Health (CDRH). The radiometer measures direct laser radiation emitted in the visible spectrum. Based upon this measurement, the instrument's microprocessor automatically determines at what time duration the exposure to the measured laser radiation would exceed either the class I accessible emission limits of the Federal Performance Standard for laser products or the maximum permissible exposure limits of laser user safety standards. The instrument also features automatic background level compensation, pulse measurement capability, and self-diagnosis. Measurement of forward surface illumination levels preceding HpD photoradiation therapy is possible.

  19. Improving Mid-Course Flight Through an Application of Real-Time Optimal Control

    DTIC Science & Technology

    2017-12-01

    Master's thesis by Mark R. Roncoroni, December 2017; thesis advisor: Ronald Proulx. Title: Improving Mid-Course Flight Through an Application of Real-Time Optimal Control. (Only report documentation page text is available for this record.)

  20. Diurnal patterns of salivary cortisol and DHEA using a novel collection device: Electronic monitoring confirms accurate recording of collection time using this device

    PubMed Central

    Laudenslager, Mark L.; Calderone, Jacqueline; Philips, Sam; Natvig, Crystal; Carlson, Nichole E.

    2013-01-01

    The accurate indication of saliva collection time is important for defining the diurnal decline in salivary cortisol as well as characterizing the cortisol awakening response. We tested a convenient and novel collection device for collecting saliva on strips of filter paper in a specially constructed booklet for determination of both cortisol and DHEA. In the present study, 31 healthy adults (mean age 43.5 yrs.) collected saliva samples four times a day on three consecutive days using filter paper collection devices (Saliva Procurement and Integrated Testing (SPIT) booklet) which were maintained during the collection period in a large plastic bottle with an electronic monitoring cap. Subjects were asked to collect saliva samples at awakening, 30 min. after awakening, before lunch and 600 min. after awakening. The time of awakening and the time of collection before lunch were allowed to vary by each subject’s schedule. A reliable relationship was observed between the time recorded by the subject directly on the booklet and the time recorded by the electronic collection device (n = 286 observations; r2 = 0.98). However, subjects did not consistently collect the saliva samples at the two specific times requested, 30 and 600 min. after awakening. Both cortisol and DHEA revealed diurnal declines. In spite of variance in collection times at 30 min. and 600 min. after awakening, the slope of the diurnal decline in both salivary cortisol and DHEA was similar when we compared collection tolerances of ± 7.5 and ± 15 min. for each steroid. These unique collection booklets proved to be a reliable method for recording collection times by subjects as well as for estimating diurnal salivary cortisol and DHEA patterns. PMID:23490073

  1. Eldecalcitol improves chair-rising time in postmenopausal osteoporotic women treated with bisphosphonates

    PubMed Central

    Iwamoto, Jun; Sato, Yoshihiro

    2014-01-01

    An open-label randomized controlled trial was conducted to clarify the effect of eldecalcitol (ED) on body balance and muscle power in postmenopausal osteoporotic women treated with bisphosphonates. A total of 106 postmenopausal women with osteoporosis (mean age 70.8 years) were randomly divided into two groups (n=53 in each group): a bisphosphonate group (control group) and a bisphosphonate plus ED group (ED group). Biochemical markers, unipedal standing time (body balance), and five-repetition chair-rising time (muscle power) were evaluated. The duration of the study was 6 months. Ninety-six women who completed the trial were included in the subsequent analyses. At baseline, the age, body mass index, bone mass indices, bone turnover markers, unipedal standing time, and chair-rising time did not differ significantly between the two groups. During the 6-month treatment period, bone turnover markers decreased significantly from the baseline values similarly in the two groups. Although no significant improvement in the unipedal standing time was seen in the ED group, compared with the control group, the chair-rising time decreased significantly in the ED group compared with the control group. The present study showed that ED improved the chair-rising time in terms of muscle power in postmenopausal osteoporotic women treated with bisphosphonates. PMID:24476669

  2. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ill-suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times, and the fluorescence emission rate must be kept low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
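    The temporal binning discussed above amounts to summing groups of adjacent time bins of the recorded decay histogram. A minimal sketch follows; the native bin count, the time axis, and the decay parameters in the example are illustrative, not the study's settings.

    ```python
    import numpy as np

    def rebin_decay(counts, n_bins_out=42):
        """Rebin a TCSPC decay histogram to a coarser temporal resolution.

        counts     : photon counts per original time bin (e.g., 256 bins)
        n_bins_out : target number of time bins (e.g., 36-42, per the simulations)
        Trailing bins that do not fill a complete group are folded into the last bin.
        """
        counts = np.asarray(counts)
        group = len(counts) // n_bins_out
        rebinned = counts[: group * n_bins_out].reshape(n_bins_out, group).sum(axis=1)
        rebinned[-1] += counts[group * n_bins_out:].sum()
        return rebinned

    # Example: noisy bi-exponential decay in 256 native bins, rebinned to 42
    t = np.arange(256) * 0.039   # ~10 ns window with 39 ps native bins (illustrative numbers)
    decay = np.random.poisson(200 * (0.6 * np.exp(-t / 0.5) + 0.4 * np.exp(-t / 2.5)))
    print(rebin_decay(decay, 42).shape)
    ```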

  3. Real-Time Captioning for Improving Informed Consent: Patient and Physician Benefits.

    PubMed

    Spehar, Brent; Tye-Murray, Nancy; Myerson, Joel; Murray, David J

    2016-01-01

    New methods are needed to improve physicians' skill in communicating information and to enhance patients' ability to recall that information. We evaluated a real-time speech-to-text captioning system that simultaneously provided a speech-to-text record for both patient and anesthesiologist. The goals of the study were to assess hearing-impaired patients' recall of an informed consent discussion about regional anesthesia using real-time captioning and to determine whether the physicians found the system useful for monitoring their own performance. We recorded 2 simulated informed consent encounters with hearing-impaired older adults, in which physicians described regional anesthetic procedures. The conversations were conducted with and without real-time captioning. Subsequently, the patient participants, who wore their hearing aids throughout, were tested on the material presented, and video recordings of the encounters were analyzed to determine how effectively physicians communicated with and without the captioning system. The anesthesiology residents provided similar information to the patient participants regardless of whether the real-time captioning system was used. Although the patients retained relatively few details regardless of the informed consent discussion, they could recall significantly more of the key points when provided with real-time captioning. Real-time speech-to-text captioning improved recall in hearing-impaired patients and proved useful for determining the information provided during an informed consent encounter. Real-time speech-to-text captioning could provide a method for assessing physicians' communication that could be used both for self-assessment and as an evaluative approach to training communication skills in practice settings.

  4. An improvement of the measurement of time series irreversibility with visibility graph approach

    NASA Astrophysics Data System (ADS)

    Wu, Zhenyu; Shang, Pengjian; Xiong, Hui

    2018-07-01

    We propose a method to improve the measurement of real-valued time series irreversibility which combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with the different embedded dimensions used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect time series irreversibility from multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps as well as global stock markets over the period 2005-2015. The result shows that the amount of time irreversibility reaches its peak with embedded dimension d = 3 for both the experimental series and the financial markets.
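    For orientation, the baseline measure that this entry builds on (directed horizontal visibility graph plus Kullback-Leibler divergence between degree distributions) can be sketched as follows. This is a naive O(n²) construction and omits the paper's embedding-dimension encoding; the KL estimate is restricted to degrees with mass in both distributions, a simplification made for the example.

    ```python
    import numpy as np

    def hvg_degree_distributions(x):
        """In- and out-degree distributions of the directed horizontal visibility graph."""
        n = len(x)
        out_deg = np.zeros(n, dtype=int)
        in_deg = np.zeros(n, dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                # Horizontal visibility: all intermediate values lie below both endpoints
                if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                    out_deg[i] += 1
                    in_deg[j] += 1
                if x[j] >= x[i]:      # no later point can be horizontally visible past this one
                    break
        kmax = max(out_deg.max(), in_deg.max()) + 1
        p_out = np.bincount(out_deg, minlength=kmax) / n
        p_in = np.bincount(in_deg, minlength=kmax) / n
        return p_in, p_out

    def kl_irreversibility(x):
        """KL divergence between out- and in-degree distributions (near 0 for reversible series)."""
        p_in, p_out = hvg_degree_distributions(np.asarray(x, dtype=float))
        mask = (p_out > 0) & (p_in > 0)
        return float(np.sum(p_out[mask] * np.log(p_out[mask] / p_in[mask])))

    # A strongly irreversible series (logistic map) vs. reversible white noise
    rng = np.random.default_rng(0)
    logistic = [0.4]
    for _ in range(2000):
        logistic.append(4.0 * logistic[-1] * (1.0 - logistic[-1]))
    print("logistic map:", round(kl_irreversibility(logistic), 3),
          "| white noise:", round(kl_irreversibility(rng.normal(size=2000)), 3))
    ```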

  5. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.

  6. Homomorphic Filtering for Improving Time Synchronization in Wireless Networks

    PubMed Central

    Castillo-Secilla, José María; Palomares, José Manuel; León, Fernando; Olivares, Joaquín

    2017-01-01

    Wireless sensor networks are used to sample the environment in a distributed way. Therefore, it is mandatory for all of the measurements to be tightly synchronized in order to guarantee that every sensor is sampling the environment at the exact same instant of time. The synchronization drift gets bigger in environments suffering from temperature variations. Thus, this work is focused on improving time synchronization under deployments with temperature variations. The working hypothesis demonstrated in this work is that the clock skew of two nodes (the ratio of the real frequencies of the oscillators) is composed of a multiplicative combination of two main components: the clock skew due to the variations between the cut of the crystal of each oscillator and the clock skew due to the different temperatures affecting the nodes. By applying a nonlinear filtering, the homomorphic filtering, both components are separated in an effective way. A correction factor based on temperature, which can be applied to any synchronization protocol, is proposed. For testing it, an improvement of the FTSP synchronization protocol has been developed and physically tested under temperature variation scenarios using TelosB motes flashed with the IEEE 802.15.4 implementation supplied by TinyOS. PMID:28425955
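    The core idea above, that the measured clock skew is a product of a crystal-cut component and a temperature component, and that taking logarithms (homomorphic processing) turns the product into a sum that can be separated, can be sketched very simply. The sketch below uses a plain mean in the log domain in place of the paper's filtering and temperature-based correction factor; the synthetic numbers are invented for illustration.

    ```python
    import numpy as np

    def separate_skew_components(skew_series):
        """Homomorphic separation of a measured clock-skew series (illustrative).

        Working model: skew[t] = skew_crystal * skew_temp[t] (multiplicative).
        In the log domain the product becomes a sum; the constant part is attributed
        to the crystal-cut mismatch and the zero-mean remainder to temperature.
        """
        log_skew = np.log(np.asarray(skew_series, dtype=float))
        log_crystal = log_skew.mean()              # quasi-constant crystal component
        log_temp = log_skew - log_crystal          # temperature-driven component
        return np.exp(log_crystal), np.exp(log_temp)

    # Synthetic example: constant crystal skew modulated by a slow temperature cycle
    t = np.arange(2000)
    true_crystal = 1.000012
    temp_effect = 1.0 + 3e-6 * np.sin(2 * np.pi * t / 600)
    measured = true_crystal * temp_effect * (1.0 + 1e-7 * np.random.randn(t.size))
    crystal_est, temp_est = separate_skew_components(measured)
    print("estimated crystal skew ~", float(crystal_est))
    ```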

  7. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    NASA Astrophysics Data System (ADS)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.

  8. Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease.

    PubMed

    Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A

    2014-01-01

    It is well established that auditory cueing improves gait in patients with idiopathic Parkinson's disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients' performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.

  9. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.

  10. Improvement of attenuation correction in time-of-flight PET/MR imaging with a positron-emitting source.

    PubMed

    Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan

    2014-02-01

    Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. In conclusion, a transmission-based technique with an annulus

  11. The freckle plot (daily turnaround time chart): a technique for timely and effective quality improvement of test turnaround times.

    PubMed

    Pellar, T G; Ward, P J; Tuckerman, J F; Henderson, A R

    1993-06-01

    Test turnaround times are often monitored on a monthly basis. However, such an interval usually means that not all causes for delay in test reporting can be unequivocally identified for institution of remedial action. We have devised a daily chart--the freckle plot--that graphically displays the test turnaround times by laboratory receipt time. Different symbols are used to designate specimens reported within the test's turnaround time limit, those within 10 min beyond that limit, and those well outside the limit. These categories are adjustable to suit different limits of stringency. Freckle plots are produced on a daily basis and can be used to track down causes for test delays. Using the 1-h turnaround time "stat" potassium test as a model, we found 16 causes for test delay, of which 9 were potentially remediable. By applying these remedies, we were able to increase test compliance, in the day shift, from 91.5% (95% confidence interval 88.8%-93.7%) to 97.6% (95% confidence interval 96.4%-98.55%), which is significant at P < 10⁻⁷. This daily plot is a useful quality assurance tool, supplementing the more conventional tests used to ensure laboratory quality improvement.
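    A chart of this kind is straightforward to produce; the sketch below scatters turnaround time against receipt time with one marker per compliance category. The marker choices, the 60-minute limit with a 10-minute grace band, and the fabricated specimens are illustrative assumptions echoing the stat-potassium example, not the original chart format.

    ```python
    import matplotlib.pyplot as plt

    def freckle_plot(receipt_hours, tat_minutes, limit=60, grace=10):
        """Daily turnaround-time ('freckle') chart, one point per specimen."""
        on_time = [t <= limit for t in tat_minutes]
        slightly = [limit < t <= limit + grace for t in tat_minutes]
        late = [t > limit + grace for t in tat_minutes]
        for mask, marker, label in [(on_time, ".", "within limit"),
                                    (slightly, "o", "within 10 min of limit"),
                                    (late, "x", "outside limit")]:
            xs = [h for h, m in zip(receipt_hours, mask) if m]
            ys = [t for t, m in zip(tat_minutes, mask) if m]
            plt.scatter(xs, ys, marker=marker, label=label)
        plt.axhline(limit, linestyle="--")
        plt.xlabel("laboratory receipt time (h)")
        plt.ylabel("turnaround time (min)")
        plt.legend()
        plt.show()

    # Example with a handful of fabricated specimens
    freckle_plot([7.5, 8.1, 9.0, 10.2, 13.4, 15.0, 16.7],
                 [35, 52, 61, 68, 44, 90, 58])
    ```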

  12. "It's about Improving My Practice": The Learner Experience of Real-Time Coaching

    ERIC Educational Resources Information Center

    Sharplin, Erica J.; Stahl, Garth; Kehrwald, Ben

    2016-01-01

    This article reports on pre-service teachers' experience of the Real-Time Coaching model, an innovative technology-based approach to teacher training. The Real-Time Coaching model uses multiple feedback cycles via wireless technology to develop within pre-service teachers the specific skills and mindset toward continual improvement. Results of…

  13. Facilitating the Timely Discharge of Well Newborns by Using Quality Improvement Methods.

    PubMed

    Rochester, Nicole T; Banach, Laurie P; Hoffner, Wendy; Zeltser, Deena; Lewis, Phyllis; Seelbach, Elizabeth; Cuzzi, Sandra

    2018-05-01

    Discharges are a key driver of hospital throughput. Our pediatric hospitalist team sought to improve newborn nursery throughput by increasing the percentage of newborns on our service with a discharge order by 11 am. We hypothesized that implementing a discharge checklist would result in earlier discharge times for newborns who met discharge criteria. We identified barriers to timely discharge through focus groups with key stakeholders, chart reviews, and brainstorming sessions. We subsequently created and implemented a discharge checklist to identify and address barriers before daily rounds. We tracked mean monthly discharge order times. Finally, we performed chart reviews to determine causes for significantly delayed discharge orders and used this information to modify rounding practices during a second plan-do-study-act cycle. During the 2-year period before the intervention, 24% of 3224 newborns had a discharge order entered by 11 am. In the 20 months after the intervention, 39% of 2739 newborns had a discharge order by 11 am, a 63% increase compared with the baseline. Observation for group B Streptococcus exposure was the most frequent reason for a late discharge order. There are many factors that affect the timely discharge of well newborns. The development and implementation of a discharge checklist improved our ability to discharge newborns on our pediatric hospitalist service by 11 am. Future studies to identify nonphysician barriers to timely newborn discharges may lead to further improvements in throughput between the labor and delivery and maternity suites units. Copyright © 2018 by the American Academy of Pediatrics.

  14. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    USDA-ARS?s Scientific Manuscript database

    Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes, which is more costly and time-consuming, or with a single fluorescent dye, which is less accurate. Methods and Results: A red primary ...

  15. Accurate computation of survival statistics in genome-wide studies.

    PubMed

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli

    2015-05-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because: the two populations determined by a genetic variant may have very different sizes; and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for any size populations. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens-hundreds of likely false positive associations as more significant than these known associations.

  16. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival time following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because: the two populations determined by a genetic variant may have very different sizes; and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for any size populations. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens-hundreds of likely false positive associations as more significant than these known associations. PMID:25950620
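    The problem motivating the two preceding entries, that the asymptotic chi-square approximation to the log-rank test can be unreliable when the two groups are very unbalanced and p-values are small, can be illustrated with a plain permutation comparison. This is not the ExaLT exact-distribution algorithm; the statistic implementation, the resampling approach, and the synthetic survival data are assumptions made for the example.

    ```python
    import numpy as np
    from scipy import stats

    def logrank_stat(time, event, group):
        """Two-sample log-rank statistic (asymptotically chi-square with 1 df)."""
        time, event, group = map(np.asarray, (time, event, group))
        obs_minus_exp, var = 0.0, 0.0
        for t in np.unique(time[event == 1]):
            at_risk = time >= t
            n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
            d = ((time == t) & (event == 1)).sum()
            d1 = ((time == t) & (event == 1) & (group == 1)).sum()
            obs_minus_exp += d1 - d * n1 / n
            if n > 1:
                var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return obs_minus_exp ** 2 / var

    def logrank_pvalues(time, event, group, n_perm=1000, rng=0):
        """Asymptotic vs. permutation p-value for the same log-rank statistic."""
        rng = np.random.default_rng(rng)
        stat = logrank_stat(time, event, group)
        p_asym = stats.chi2.sf(stat, df=1)
        perm = np.array([logrank_stat(time, event, rng.permutation(group))
                         for _ in range(n_perm)])
        p_perm = (1 + (perm >= stat).sum()) / (n_perm + 1)
        return p_asym, p_perm

    # Highly unbalanced groups (e.g., a rare somatic mutation): 5 carriers vs. 195 others
    rng = np.random.default_rng(1)
    time = np.concatenate([rng.exponential(5, 5), rng.exponential(20, 195)])
    event = np.ones(200, dtype=int)
    group = np.concatenate([np.ones(5, dtype=int), np.zeros(195, dtype=int)])
    print(logrank_pvalues(time, event, group, n_perm=1000))
    ```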

  17. Learning fast accurate movements requires intact frontostriatal circuits

    PubMed Central

    Shabbott, Britne; Ravindran, Roshni; Schumacher, Joseph W.; Wasserman, Paula B.; Marder, Karen S.; Mazzoni, Pietro

    2013-01-01

    The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements. We addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning. PMID:24312037

  18. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  19. Radio interferometric measurements for accurate planetary orbiter navigation

    NASA Technical Reports Server (NTRS)

    Poole, S. R.; Ananda, M.; Hildebrand, C. E.

    1979-01-01

    The use of narrowband delta-VLBI, in which a spacecraft is viewed from widely separated stations and a nearby quasar is then viewed from the same stations, to achieve accurate orbit determination is presented. Current analysis establishing the orbit determination accuracy achieved with data arcs spanning up to 3.5 d is examined. Strategies for improving prediction accuracy are given, and the performance of delta-VLBI is compared with conventional radiometric tracking data. It is found that accuracy 'within the fit' is on the order of 0.5 km for data arcs having delta-VLBI on the ends of the arcs and for arc lengths varying from one baseline to 3.5 d. The technique is discussed with reference to the proposed Venus Orbiting Imaging Radar mission.

  20. An improved real time superresolution FPGA system

    NASA Astrophysics Data System (ADS)

    Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj

    2009-05-01

    In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc. Postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient based technique is used to register the frames at the sub-pixel level. Once we get the high resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and migrated to hardware, to achieve 320x240 -> 1280x960, about 30 fps, a stunning superresolution by 16X in total pixels. Various input parameters, such as size of input image, enlarging factor and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We have shown the performance improvement of the hardware superresolution over the software version (C code).
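
    The paper's registration and regularization details are specific to its ASIC/FPGA pipeline; the following is only a minimal software sketch of the general iterative back-projection idea it builds on (Irani-Peleg style), assuming a single frame, a Gaussian blur plus decimation forward model, and a 4x upscaling factor. All names and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def downsample(hr, factor):
    """Forward imaging model: blur the high-resolution estimate and decimate."""
    blurred = ndimage.gaussian_filter(hr, sigma=factor / 2.0)
    return blurred[::factor, ::factor]

def iterative_back_projection(lr, factor=4, n_iter=10, step=1.0):
    """Iteratively correct the HR estimate so that its simulated low-resolution
    version matches the observed frame (back-projection of the LR residual)."""
    hr = ndimage.zoom(lr, factor, order=3)                 # initial guess: bicubic upscale
    for _ in range(n_iter):
        simulated = downsample(hr, factor)                 # project current estimate down
        error = lr - simulated                             # residual in LR space
        hr += step * ndimage.zoom(error, factor, order=3)  # back-project the error
    return np.clip(hr, 0.0, 1.0)

# usage on a synthetic 320x240 frame upscaled 4x per dimension (16x total pixels)
lr_frame = np.random.rand(240, 320)
hr_frame = iterative_back_projection(lr_frame, factor=4)
print(hr_frame.shape)   # (960, 1280)
```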

  1. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  2. How Changes in White Matter Might Underlie Improved Reaction Time Due to Practice1

    PubMed Central

    Voelker, Pascale; Piscopo, Denise; Weible, Aldis; Lynch, Gary; Rothbart, Mary K.; Posner, Michael I.; Niell, Cristopher M.

    2017-01-01

    Why does training on a task reduce the reaction time for performing it? New research points to changes in white matter pathways as one likely mechanism. These pathways connect remote brain areas involved in performing the task. Genetic variations may be involved in individual differences in the extent of this improvement. If white matter change is involved in improved reaction time with training, it may point the way toward understanding where and how generalization occurs. We examine the hypothesis that brain pathways shared by different tasks may result in improved performance of cognitive tasks remote from the training. PMID:27064751

  3. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo

    2016-06-13

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  4. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    PubMed

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  5. Mass-improvement of the vector current in three-flavor QCD

    NASA Astrophysics Data System (ADS)

    Fritzsch, P.

    2018-06-01

    We determine two improvement coefficients which are relevant to cancel mass-dependent cutoff effects in correlation functions with operator insertions of the non-singlet local QCD vector current. This determination is based on degenerate three-flavor QCD simulations of non-perturbatively O( a) improved Wilson fermions with tree-level improved gauge action. Employing a very robust strategy that has been pioneered in the quenched approximation leads to an accurate estimate of a counterterm cancelling dynamical quark cutoff effects linear in the trace of the quark mass matrix. To our knowledge this is the first time that such an effect has been determined systematically with large significance.

  6. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

    Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and whether age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found that FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos, but accurately assigned only 41% of samples. Conclusion Despite statistically significant associations, FeNO, IgE, blood Eos and Neu, FEV1% predicted, and age are poor surrogates, separately and combined, for accurately predicting sputum eosinophils and neutrophils. PMID:23706399

  7. Rapid and accurate estimation of release conditions in the javelin throw.

    PubMed

    Hubbard, M; Alaways, L W

    1989-01-01

    We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high speed (200 Hz) video x, y position data for the first 0.1-0.2 s of the javelin flight after release; (B) estimation of five javelin release conditions from the x, y position data; and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time ever in any throwing event, the ability to critique nearly instantly in a precise, quantitative manner the crucial factors in the throw which determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system and could even lead to an evolution of new throwing techniques.
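
    The authors' estimator relies on a full flight-simulation model; as a much simpler, hedged sketch of the general idea, release conditions can be recovered from the first 0.1-0.2 s of digitized x, y positions by least-squares fitting a drag-free ballistic model (gravity only). The function names and the synthetic 200 Hz data below are illustrative, not the paper's method.

```python
import numpy as np

def estimate_release(t, x, y, g=9.81):
    """Least-squares fit of x(t) = x0 + vx*t and y(t) = y0 + vy*t - g*t^2/2
    to the early-flight position samples; returns release speed and angle."""
    A = np.column_stack([np.ones_like(t), t])
    (x0, vx), _, _, _ = np.linalg.lstsq(A, x, rcond=None)
    (y0, vy), _, _, _ = np.linalg.lstsq(A, y + 0.5 * g * t**2, rcond=None)
    return dict(x0=x0, y0=y0, vx=vx, vy=vy,
                speed=np.hypot(vx, vy),
                angle_deg=np.degrees(np.arctan2(vy, vx)))

# synthetic 200 Hz samples over the first 0.15 s after release
t = np.arange(0.0, 0.15, 1 / 200.0)
true_v, true_angle = 28.0, np.radians(34.0)
x = true_v * np.cos(true_angle) * t + np.random.normal(0, 0.002, t.size)
y = 1.8 + true_v * np.sin(true_angle) * t - 0.5 * 9.81 * t**2 + np.random.normal(0, 0.002, t.size)
print(estimate_release(t, x, y))
```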

  8. [Improving the CMP appointment waiting time for children and adolescents].

    PubMed

    Cani, Pascale

    2014-01-01

    The increasing activity of mental health centres for children and adolescents and longer waiting times for a first appointment have led an area of child psychiatry to question the organisation of new consultation applications. Two CMP in the sector had a waiting period of over 40 days for half of the patients. Two improvement actions were implemented: the introduction of organisation and reception nurses and the development of a new applications management process. The evaluation after one year showed that the appointment waiting time had been halved without changing the no-show rate.

  9. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which takes a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  10. Diurnal patterns of salivary cortisol and DHEA using a novel collection device: electronic monitoring confirms accurate recording of collection time using this device.

    PubMed

    Laudenslager, Mark L; Calderone, Jacqueline; Philips, Sam; Natvig, Crystal; Carlson, Nichole E

    2013-09-01

    The accurate indication of saliva collection time is important for defining the diurnal decline in salivary cortisol as well as characterizing the cortisol awakening response. We tested a convenient and novel collection device for collecting saliva on strips of filter paper in a specially constructed booklet for determination of both cortisol and DHEA. In the present study, 31 healthy adults (mean age 43.5 years) collected saliva samples four times a day on three consecutive days using filter paper collection devices (Saliva Procurement and Integrated Testing (SPIT) booklet) which were maintained during the collection period in a large plastic bottle with an electronic monitoring cap. Subjects were asked to collect saliva samples at awakening, 30 min after awakening, before lunch and 600 min after awakening. The time of awakening and the time of collection before lunch were allowed to vary by each subject's schedule. A reliable relationship was observed between the time recorded by the subject directly on the booklet and the time recorded by the electronic collection device (n=286 observations; r² = 0.98). However, subjects did not consistently collect the saliva samples at the two specific times requested, 30 and 600 min after awakening. Both cortisol and DHEA revealed diurnal declines. In spite of variance in collection times at 30 min and 600 min after awakening, the slope of the diurnal decline in both salivary cortisol and DHEA was similar when we compared collection tolerances of ±7.5 and ±15 min for each steroid. These unique collection booklets proved to be a reliable method for recording collection times by subjects as well as for estimating diurnal salivary cortisol and DHEA patterns. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Point of Care Ultrasound Accurately Distinguishes Inflammatory from Noninflammatory Disease in Patients Presenting with Abdominal Pain and Diarrhea

    PubMed Central

    Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo

    2016-01-01

    Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, accurate, and readily available are desirable. Such approaches may decrease time to diagnosis and better utilize limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy for gastroenterologist performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold standard ileocolonoscopy. Methods. A prospective, single-center study was conducted on a convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. Results. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can accurately be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve allocation of endoscopic resources, and facilitate initiation of appropriate medical therapy. PMID:27446838
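
    The reported sensitivity, specificity, predictive values and likelihood ratios all follow from a 2x2 table against the ileocolonoscopy gold standard. A minimal sketch of those computations is shown below; the counts and the Wald-style confidence intervals are illustrative assumptions, not the study's raw data or its exact statistical method.

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures with simple 95% Wald confidence intervals."""
    def prop_ci(x, n):
        p = x / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se))

    sens, sens_ci = prop_ci(tp, tp + fn)
    spec, spec_ci = prop_ci(tn, tn + fp)
    ppv, ppv_ci = prop_ci(tp, tp + fp)
    npv, npv_ci = prop_ci(tn, tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    return dict(sensitivity=(sens, sens_ci), specificity=(spec, spec_ci),
                ppv=(ppv, ppv_ci), npv=(npv, npv_ci),
                lr_positive=lr_pos, lr_negative=lr_neg)

# hypothetical counts for illustration (POCUS result vs ileocolonoscopy finding)
print(diagnostic_metrics(tp=12, fp=1, fn=3, tn=42))
```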

  12. Improving medical imaging report turnaround times: the role of technology.

    PubMed

    Marquez, Luis O; Stewart, Howard

    2005-01-01

    At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. The improved workflow was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in a real-time fashion and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out both the new RIS product and speech recognition to maximize the resources to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional electronic-order-entry to paper request manner. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials for a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. Speech recognition enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department.

  13. Improving first case start times using Lean in an academic medical center.

    PubMed

    Deldar, Romina; Soleimani, Tahereh; Harmon, Carol; Stevens, Larry H; Sood, Rajiv; Tholpady, Sunil S; Chu, Michael W

    2017-06-01

    Lean is a process improvement strategy that can improve efficiency of the perioperative process. The purpose of this study was to identify etiologies of late surgery start times, implement Lean interventions, and analyze their effects. A retrospective review of all first-start surgery cases was performed. Lean was implemented in May 2015, and cases 7 months before and after implementation were analyzed. A total of 4,492 first-start cases were included; 2,181 were pre-Lean and 2,311 were post-Lean. The post-Lean group had significantly higher on-time starts than the pre-Lean group (69.0% vs 57.0%, P < .01). The most common delay etiology was surgeon-related for both groups. Delayed post-Lean cases were significantly less likely to be due to preoperative assessment (14.9% vs 9.9%, P < .01) and more likely due to patient-related (16.5% vs 22.3%, P < .01) or chaplain (1.8% vs 4.0%, P < .01) factors. Delayed starts occurred more often on snowy and cold days, and less often on didactic days (P < .01). Modifying preoperative tasks using Lean methods can improve operating room efficiency and increase on-time starts. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Device to improve detection in electro-chromatography

    DOEpatents

    Garguilo, Michael G.; Paul, Phillip H.; Rakestraw, David J.

    2000-01-01

    Apparatus and method for improving the resolution of non-pressure driven capillary chromatographic systems, and particularly for capillary electrochromatography (CEC) systems. By reducing the cross-sectional area of a packed capillary column by means of a second open capillary contiguous with the outlet end of a packed capillary column, where the packed capillary column has a cross sectional area of between about 2 and 5 times that of the open capillary column, the phenomenon of band broadening in the transition region between the open capillary and the packed capillary column, where the individual components of the mixture are analyzed, can be eliminated, thereby providing for a significant improvement in resolution and more accurate detection and analysis.

  15. Device to improve detection in electro-chromatography

    DOEpatents

    Garguilo, Michael G.; Paul, Phillip H.; Rakestraw, David J.

    2002-01-01

    Apparatus and method for improving the resolution of non-pressure driven capillary chromatographic systems, and particularly for capillary electrochromatography (CEC) systems. By reducing the cross-sectional area of a packed capillary column by means of a second open capillary contiguous with the outlet end of a packed capillary column, where the packed capillary column has a cross sectional area of between about 2 and 5 times that of the open capillary column, the phenomenon of band broadening in the transition region between the open capillary and the packed capillary column, where the individual components of the mixture are analyzed, can be eliminated, thereby providing for a significant improvement in resolution and more accurate detection and analysis.

  16. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10^-3 seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10^-5 seconds or 10^-9 seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
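
    A minimal single-neuron sketch of the recalibrated second-order idea described above: an RK2 step, linear interpolation between the pre- and post-step voltages to locate the spike time, and re-integration from the reset potential starting at that interpolated time. The leaky integrate-and-fire parameters and drive current are illustrative, and the full network scheme in the paper is more elaborate.

```python
import numpy as np

# Leaky integrate-and-fire parameters (illustrative values)
TAU, V_REST, V_THRESH, V_RESET = 20e-3, 0.0, 1.0, 0.0

def dvdt(v, t, current):
    return (-(v - V_REST) + current(t)) / TAU

def rk2_step(v, t, dt, current):
    """One explicit-midpoint (second-order Runge-Kutta) step."""
    k1 = dvdt(v, t, current)
    k2 = dvdt(v + 0.5 * dt * k1, t + 0.5 * dt, current)
    return v + dt * k2

def integrate_lif(current, t_end, dt):
    """RK2 time-stepping with linear interpolation of the spike time and
    recalibration of the post-spike potential from that interpolated time."""
    t, v, spikes = 0.0, V_REST, []
    while t < t_end:
        v_new = rk2_step(v, t, dt, current)
        if v_new >= V_THRESH:
            # interpolate between (t, v) and (t + dt, v_new) for the spike time
            t_spike = t + dt * (V_THRESH - v) / (v_new - v)
            spikes.append(t_spike)
            # restart from the reset potential at the interpolated spike time
            v = rk2_step(V_RESET, t_spike, t + dt - t_spike, current)
        else:
            v = v_new
        t += dt
    return np.array(spikes)

spikes = integrate_lif(current=lambda t: 1.5, t_end=0.5, dt=0.5e-3)
print(len(spikes), spikes[:3])
```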

  17. Improvement of laboratory turnaround time using lean methodology.

    PubMed

    Gupta, Shradha; Kapil, Sahil; Sharma, Monica

    2018-05-14

    Purpose The purpose of this paper is to discuss the implementation of lean methodology to reduce the turnaround time (TAT) of a clinical laboratory in a super speciality hospital. Delays in report delivery lead to delayed diagnosis, increased waiting time and decreased customer satisfaction. The reduction in TAT will lead to increased patient satisfaction, quality of care, employee satisfaction and ultimately the hospital's revenue. Design/methodology/approach The generic causes resulting in increasing TAT of clinical laboratories were identified using lean tools and techniques such as value stream mapping (VSM), Gemba, Pareto Analysis and Root Cause Analysis. VSM was used as a tool to analyze the current state of the process, and VSM was further used to design the future state with suggestions for process improvements. Findings This study identified 12 major non-value added factors for the hematology laboratory and 5 major non-value added factors for the biochemistry lab, which were acting as bottlenecks and limiting throughput. A four-month research study by the authors together with the hospital quality department and laboratory staff members led to a reduction of the average TAT from 180 to 95 minutes in the hematology lab and from 268 to 208 minutes in the biochemistry lab. Practical implications Very few improvement initiatives in Indian healthcare are based on industrial engineering tools and techniques, which might be due to a lack of interaction between healthcare and engineering. The study provides a positive outcome in terms of improving the efficiency of services in hospitals and identifies a scope for lean in the Indian healthcare sector. Social implications Applying lean in the Indian healthcare sector gives its own potential solution to the problem caused, due to a wide gap between lean accessibility and lean implementation. Lean helped in changing the mindset of an organization toward providing the highest quality of services with faster delivery at

  18. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  19. Improving BeiDou real-time precise point positioning with numerical weather models

    NASA Astrophysics Data System (ADS)

    Lu, Cuixian; Li, Xingxing; Zus, Florian; Heinkelmann, Robert; Dick, Galina; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2017-09-01

    Precise positioning with the current Chinese BeiDou Navigation Satellite System is proven to be of comparable accuracy to the Global Positioning System, which is at centimeter level for the horizontal components and sub-decimeter level for the vertical component. But the BeiDou precise point positioning (PPP) shows its limitation in requiring a relatively long convergence time. In this study, we develop a numerical weather model (NWM) augmented PPP processing algorithm to improve BeiDou precise positioning. Tropospheric delay parameters, i.e., zenith delays, mapping functions, and horizontal delay gradients, derived from short-range forecasts from the Global Forecast System of the National Centers for Environmental Prediction (NCEP) are applied into BeiDou real-time PPP. Observational data from stations that are capable of tracking the BeiDou constellation from the International GNSS Service (IGS) Multi-GNSS Experiments network are processed, with the introduced NWM-augmented PPP and the standard PPP processing. The accuracy of tropospheric delays derived from NCEP is assessed against with the IGS final tropospheric delay products. The positioning results show that an improvement in convergence time up to 60.0 and 66.7% for the east and vertical components, respectively, can be achieved with the NWM-augmented PPP solution compared to the standard PPP solutions, while only slight improvement in the solution convergence can be found for the north component. A positioning accuracy of 5.7 and 5.9 cm for the east component is achieved with the standard PPP that estimates gradients and the one that estimates no gradients, respectively, in comparison to 3.5 cm of the NWM-augmented PPP, showing an improvement of 38.6 and 40.1%. Compared to the accuracy of 3.7 and 4.1 cm for the north component derived from the two standard PPP solutions, the one of the NWM-augmented PPP solution is improved to 2.0 cm, by about 45.9 and 51.2%. The positioning accuracy for the up component

  20. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages.
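
    As a toy illustration of why the two costing approaches can diverge, the sketch below compares a ratio-of-cost-to-treatment (aggregate) calculation with a simple activity-based allocation in which nursing salaries are assigned by minutes of nursing time. All figures are invented for illustration and are not drawn from the article.

```python
# Toy cost data for a clinic (all figures hypothetical)
total_overhead = 120_000.0            # general overhead for the period
total_nursing = 200_000.0             # nursing salaries for the period
treatments = {
    "routine": {"count": 900, "nurse_minutes": 45,  "supplies": 30.0},
    "complex": {"count": 100, "nurse_minutes": 150, "supplies": 95.0},
}

# Aggregate (ratio-of-cost-to-treatment) costing: every treatment gets the average
n_total = sum(v["count"] for v in treatments.values())
avg_cost = (total_overhead + total_nursing
            + sum(v["count"] * v["supplies"] for v in treatments.values())) / n_total
print("aggregate cost per treatment:", round(avg_cost, 2))

# Activity-based costing: allocate nursing by minutes used, overhead per treatment
total_minutes = sum(v["count"] * v["nurse_minutes"] for v in treatments.values())
nursing_rate = total_nursing / total_minutes
overhead_per_treatment = total_overhead / n_total
for name, v in treatments.items():
    abc_cost = v["supplies"] + v["nurse_minutes"] * nursing_rate + overhead_per_treatment
    print(f"ABC cost per {name} treatment:", round(abc_cost, 2))
```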

  1. Improving GNSS time series for volcano monitoring: application to Canary Islands (Spain)

    NASA Astrophysics Data System (ADS)

    García-Cañada, Laura; Sevilla, Miguel J.; Pereda de Pablo, Jorge; Domínguez Cerdeña, Itahiza

    2017-04-01

    The number of permanent GNSS stations has increased significantly in recent years for different geodetic applications such as volcano monitoring, which require high precision. Coordinate time series are now long enough to allow different analyses and filters that improve the GNSS coordinate results. Following this idea, we have processed data from the GNSS permanent stations used by the Spanish Instituto Geográfico Nacional (IGN) for volcano monitoring in the Canary Islands, obtaining time series by the double-difference processing method with Bernese v5.0 for the period 2007-2014. We have characterized these time series and obtained models to estimate velocities with greater accuracy and more realistic uncertainties. To improve the results, we applied two kinds of filters to the time series. The first, a spatial filter, was computed using the series of residuals of all stations in the Canary Islands without anomalous behaviour, after removing a linear trend. This filter can then be applied to all sets of coordinates of the permanent stations, reducing their dispersion. The second filter accounts for the temporal correlation in the coordinate time series of each station individually. An analysis of the evolution of the velocity with series length was carried out and demonstrated the need for time series of at least four years. Therefore, in those stations with more than four years of data, we calculated the velocity and the characteristic parameters in order to obtain time series of residuals. This methodology has been applied to the GNSS network in El Hierro (Canary Islands) during the 2011-2012 eruption and the subsequent magmatic intrusions (2012-2014). The results show that anomalous behaviour in the coordinates is easier to detect in the new series, making them more useful for detecting crustal deformation in volcano monitoring.
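
    The exact filter implementation is not given in the abstract; a minimal sketch of the common-mode (spatial) filtering idea it describes, stacking the detrended residuals of well-behaved reference stations and subtracting that common mode from every station's series, might look as follows. The array shapes, the linear detrending and the synthetic data are illustrative assumptions.

```python
import numpy as np

def detrend(series, t):
    """Remove a least-squares linear trend from one coordinate series."""
    coeffs = np.polyfit(t, series, 1)
    return series - np.polyval(coeffs, t)

def spatial_filter(coords, t, reference_idx):
    """Common-mode filtering: stack the detrended residuals of stations without
    anomalous behaviour and subtract that common mode from every station.

    coords: array (n_stations, n_epochs) of one coordinate component (e.g. Up).
    reference_idx: indices of the well-behaved reference stations.
    """
    residuals = np.array([detrend(c, t) for c in coords])
    common_mode = residuals[reference_idx].mean(axis=0)   # stacked daily residual
    return coords - common_mode                           # filtered series

# illustrative daily series for 5 stations over 4 years
t = np.arange(4 * 365)
coords = 0.002 * np.random.randn(5, t.size) + 0.001 * np.sin(2 * np.pi * t / 365)
filtered = spatial_filter(coords, t, reference_idx=[0, 1, 2, 3])
print(filtered.shape)
```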

  2. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  3. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other various fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components. Temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used approaches are standards such as EMVA Standard 1288, which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of the temporal noise of photo- and video cameras. It is based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time. The modified ASNT method is used. Estimation was performed for the following cameras: the consumer photo camera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured. Using a standard computer, frames were registered and processed in a fraction of a second to a few seconds. Also the
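
    The ASNT segmentation itself is not reproduced here; the sketch below only illustrates the simpler two-frame idea behind such measurements, estimating temporal noise versus signal level from the half-difference of two frames of a non-uniform scene, under the Poisson shot-noise plus Gaussian dark-noise model stated above. The sensor parameters and data are synthetic and illustrative.

```python
import numpy as np

def temporal_noise_from_two_frames(frame_a, frame_b, n_bins=50):
    """Estimate temporal noise versus signal level from two frames of the same
    (non-uniform) scene: the variance of (a - b)/sqrt(2) within a bin of mean
    signal approximates the temporal noise variance at that signal level."""
    mean_signal = 0.5 * (frame_a + frame_b).ravel()
    half_diff = ((frame_a - frame_b) / np.sqrt(2.0)).ravel()
    bins = np.linspace(mean_signal.min(), mean_signal.max(), n_bins + 1)
    idx = np.digitize(mean_signal, bins)
    levels, variances = [], []
    for b in range(1, n_bins + 1):
        sel = idx == b
        if sel.sum() > 100:                     # require enough pixels per bin
            levels.append(mean_signal[sel].mean())
            variances.append(half_diff[sel].var())
    return np.array(levels), np.array(variances)

# synthetic sensor: dark noise sigma_d = 2 DN, conversion gain K = 0.5 DN/e-
rng = np.random.default_rng(1)
scene = rng.uniform(50, 3000, size=(480, 640))              # electrons per pixel
frames = [0.5 * rng.poisson(scene) + rng.normal(0, 2, scene.shape) for _ in range(2)]
levels, variances = temporal_noise_from_two_frames(*frames)
# variance grows roughly linearly with signal (shot noise) plus a dark-noise offset
print(np.polyfit(levels, variances, 1))
```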

  4. Time Course of Improvements in Power Characteristics in Elite Development Netball Players Entering a Full-Time Training Program.

    PubMed

    McKeown, Ian; Chapman, Dale W; Taylor, Kristie Lee; Ball, Nick B

    2016-05-01

    We describe the time course of adaptation to structured resistance training on entering a full-time high-performance sport program. Twelve international caliber female netballers (aged 19.9 ± 0.4 years) were monitored for 18 weeks with countermovement jumps (CMJ: performed with body weight and 15 kg) and drop jumps (0.35-m box at body weight) at the start of each training week. Performance did not improve linearly or concurrently: loaded CMJ power improved 11% by Week 5 (effect size [ES] 0.93 ± 0.72), whereas substantial positive changes were observed for unloaded CMJ power (12%; ES 0.78 ± 0.39) and CMJ velocity (unloaded: 7.1%; ES 0.66 ± 0.34; loaded: 7.5%; ES 0.90 ± 0.41) by Week 7. Over the investigation duration, large improvements were observed in unloaded CMJ power (24%; ES 1.45 ± 1.11) and velocity (12%; ES 1.13 ± 0.76). Loaded CMJ power also showed a large improvement (19%; ES 1.49 ± 0.97) but only moderate changes were observed for loaded CMJ velocity (8.4%; ES 1.01 ± 0.67). Jump height changes in either unloaded or loaded CMJ were unclear over the 18-week period. Drop jump performance improved throughout the investigation period with moderate positive changes in reactive strength index observed (35%; ES 0.97 ± 0.69). The adaptation response to a structured resistance training program does not occur linearly in young female athletes. Caution should be taken if assessing jump height only, as this will provide a biased view of the training response. Frequently assessing CMJ performance can aid program design coaching decisions to ensure improvements are seen past the initial neuromuscular learning phase in performance training.

  5. Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.

    2012-01-01

    The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
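
    Leaving aside the compression hardware, the two power computations themselves are straightforward. A hedged sketch on synthetic smooth waveforms (real actuator currents carry the micro-discharge spikes discussed above, which is exactly what makes the shunt method difficult): a time-averaged v(t)·i(t) for the shunt-resistor method, and the cycle-averaged area of the charge-voltage Lissajous loop for the monitor-capacitor method. Component values and signal parameters are illustrative.

```python
import numpy as np

def power_shunt(v_applied, i_actuator):
    """Shunt-resistor method: time-average of the instantaneous power v(t)*i(t)."""
    return np.mean(v_applied * i_actuator)

def power_monitor_capacitor(v_applied, q_actuator, f_ac):
    """Monitor-capacitor method: the energy dissipated per cycle equals the area
    enclosed by the charge-voltage (Lissajous) loop; power is that area times
    the AC drive frequency. In practice q = C_monitor * v_capacitor."""
    # shoelace formula for the area of the closed Q-V loop
    area = 0.5 * np.abs(np.dot(v_applied, np.roll(q_actuator, -1)) -
                        np.dot(q_actuator, np.roll(v_applied, -1)))
    return area * f_ac

# synthetic waveforms: 10 kHz, 10 kV peak drive with a slightly lagging current
f_ac, fs = 10e3, 100e6
t = np.arange(0.0, 1.0 / f_ac, 1.0 / fs)             # exactly one AC cycle
v = 10e3 * np.sin(2 * np.pi * f_ac * t)
i = 2e-3 * np.sin(2 * np.pi * f_ac * t + 0.1)         # no micro-discharge spikes here
q = np.cumsum(i) / fs                                 # charge = integral of current
print("shunt-resistor power    :", power_shunt(v, i), "W")
print("monitor-capacitor power :", power_monitor_capacitor(v, q, f_ac), "W")
```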

  6. Improving program documentation quality through the application of continuous improvement processes.

    PubMed

    Lovlien, Cheryl A; Johansen, Martha; Timm, Sandra; Eversman, Shari; Gusa, Dorothy; Twedell, Diane

    2007-01-01

    Maintaining the integrity of record keeping and retrievable information related to the provision of continuing education credit creates challenges for a large organization. Accurate educational program documentation is vital to support the knowledge and professional development of nursing staff. Quality review and accurate documentation of programs for nursing staff development occurred at one institution through the use of continuous improvement principles. Integration of the new process into the current system maintains the process of providing quality record keeping.

  7. Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses

    PubMed Central

    Myers, Risa B.; Herskovic, Jorge R.

    2011-01-01

    Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review” where a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our
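
    A minimal sketch of the “Bayesian Chain” idea described above, treating each visit's billing code as a diagnostic test with an assumed sensitivity and specificity and updating the posterior after every visit, might look as follows. The prevalence, test characteristics and visit histories are illustrative assumptions, not the paper's values.

```python
def bayes_update(prior, billed, sensitivity, specificity):
    """One step of the Bayesian chain: update P(condition) after a single visit,
    treating 'billed for the condition' as a test result."""
    if billed:
        likelihood_pos = sensitivity          # P(billed | condition)
        likelihood_neg = 1 - specificity      # P(billed | no condition)
    else:
        likelihood_pos = 1 - sensitivity
        likelihood_neg = specificity
    numerator = likelihood_pos * prior
    return numerator / (numerator + likelihood_neg * (1 - prior))

def patient_condition_probability(visit_billings, prevalence, sensitivity, specificity):
    """Chain the update over all visits, starting from the population prevalence."""
    p = prevalence
    for billed in visit_billings:
        p = bayes_update(p, billed, sensitivity, specificity)
    return p

# illustrative values: prevalence 10%, billing sensitivity 70%, specificity 95%
visits = [False, True, True, False]
print(patient_condition_probability(visits, 0.10, 0.70, 0.95))

# an expected patient count is the sum of per-patient posterior probabilities
cohort = [[True, True], [False, False], [True, False]]
print(sum(patient_condition_probability(v, 0.10, 0.70, 0.95) for v in cohort))
```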

  8. Frequency accurate coherent electro-optic dual-comb spectroscopy in real-time.

    PubMed

    Martín-Mateos, Pedro; Jerez, Borja; Largo-Izquierdo, Pedro; Acedo, Pablo

    2018-04-16

    Electro-optic dual-comb spectrometers have proved to be a promising technology for sensitive, high-resolution and rapid spectral measurements. Electro-optic combs possess very attractive features like simplicity, reliability, bright optical teeth, and typically moderate but quickly tunable optical spans. Furthermore, in a dual-comb arrangement, narrowband electro-optic combs are generated with a level of mutual coherence that is sufficiently high to enable optical multiheterodyning without inter-comb stabilization or signal processing systems. However, this valuable tool still presents several limitations; for instance, on most systems, absolute frequency accuracy and long-term stability cannot be guaranteed; likewise, interferometer-induced phase noise restricts coherence time and limits the attainable signal-to-noise ratio. In this paper, we address these drawbacks and demonstrate a cost-efficient absolute electro-optic dual-comb instrument based on a frequency stabilization mechanism and a novel adaptive interferogram acquisition approach devised for electro-optic dual-combs capable of operating in real-time. The spectrometer, completely built from commercial components, provides sub-ppm frequency uncertainties and enables a signal-to-noise ratio of 10000 (intensity noise) in 30 seconds of integration time.

  9. Improvements in brain activation detection using time-resolved diffuse optical means

    NASA Astrophysics Data System (ADS)

    Montcel, Bruno; Chabrier, Renee; Poulet, Patrick

    2005-08-01

    An experimental method based on time-resolved absorbance difference is described. The absorbance difference is calculated over each temporal step of the optical signal with the time-resolved Beer-Lambert law. Finite element simulations show that each step corresponds to a different scanned zone and that the cerebral contribution increases with the photon arrival time. Experiments are conducted at 690 and 830 nm with a time-resolved system consisting of picosecond laser diodes, a micro-channel plate photomultiplier tube and photon counting modules. The hemodynamic response to a short finger tapping stimulus is measured over the motor cortex. Time-resolved absorbance difference maps show that variations in the optical signals are not localized in superficial regions of the head, which testifies to their cerebral origin. Furthermore, improved detection of cerebral activation is achieved, with absorbance variations increased by a factor of almost 5 for time-resolved compared with non-time-resolved measurements.
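
    The per-gate computation implied by the time-resolved Beer-Lambert law can be sketched as follows: for each temporal step of the photon time-of-flight histogram, the absorbance difference is dA(t) = -ln(I_activation(t)/I_rest(t)), with later gates weighting deeper (cerebral) absorption changes more heavily. The conversion to hemoglobin concentration changes via extinction coefficients is omitted, and the histograms below are synthetic.

```python
import numpy as np

def absorbance_difference(counts_rest, counts_active):
    """Time-resolved absorbance difference per temporal gate of the photon
    counting histogram: dA(t) = -ln(I_active(t) / I_rest(t)). Later gates carry
    photons that travelled deeper, so they weight cerebral changes more."""
    counts_rest = np.asarray(counts_rest, float)
    counts_active = np.asarray(counts_active, float)
    valid = (counts_rest > 0) & (counts_active > 0)
    d_a = np.full(counts_rest.shape, np.nan)
    d_a[valid] = -np.log(counts_active[valid] / counts_rest[valid])
    return d_a

# synthetic photon time-of-flight histograms (counts per gate)
gates = np.arange(256)
rest = 1e5 * np.exp(-gates / 60.0) + 10
active = rest * np.exp(-0.002 * gates / 60.0)    # slightly larger late-gate absorption
print(absorbance_difference(rest, active)[::64])
```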

  10. Late Miocene climate and time scale reconciliation: Accurate orbital calibration from a deep-sea perspective

    NASA Astrophysics Data System (ADS)

    Drury, Anna Joy; Westerhold, Thomas; Frederichs, Thomas; Tian, Jun; Wilkens, Roy; Channell, James E. T.; Evans, Helen; John, Cédric M.; Lyle, Mitch; Röhl, Ursula

    2017-10-01

    Accurate age control of the late Tortonian to early Messinian (8.3-6.0 Ma) is essential to ascertain the origin of benthic foraminiferal δ18O trends and the late Miocene carbon isotope shift (LMCIS), and to examine temporal relationships between the deep-sea, terrasphere and cryosphere. The current Tortonian-Messinian Geological Time Scale (GTS2012) is based on astronomically calibrated Mediterranean sections; however, no comparable non-Mediterranean stratigraphies exist for 8-6 Ma suitable for testing the GTS2012. Here, we present the first high-resolution, astronomically tuned benthic stable isotope stratigraphy (1.5 kyr resolution) and magnetostratigraphy from a single deep-sea location (IODP Site U1337, equatorial Pacific Ocean), which provides unprecedented insight into climate evolution from 8.3-6.0 Ma. The astronomically calibrated magnetostratigraphy provides robust ages, which differ by 2-50 kyr relative to the GTS2012 for polarity Chrons C3An.1n to C4r.1r, and eliminates the exceptionally high South Atlantic spreading rates based on the GTS2012 during Chron C3Bn. We show that the LMCIS was globally synchronous within 2 kyr, and provide astronomically calibrated ages anchored to the GPTS for its onset (7.537 Ma; 50% from base Chron C4n.1n) and termination (6.727 Ma; 11% from base Chron C3An.2n), confirming that the terrestrial C3:C4 shift could not have driven the LMCIS. The benthic records show that the transition into the 41-kyr world, when obliquity strongly influenced climate variability, already occurred at 7.7 Ma and further strengthened at 6.4 Ma. Previously unseen, distinctive, asymmetric saw-tooth patterns in benthic δ18O imply that high-latitude forcing played an important role in late Miocene climate dynamics from 7.7-6.9 Ma. This new integrated deep-sea stratigraphy from Site U1337 can act as a new stable isotope and magnetic polarity reference section for the 8.3-6.0 Ma interval.

  11. Parturition prediction and timing of canine pregnancy

    PubMed Central

    Kim, YeunHee; Travis, Alexander J.; Meyers-Wallen, Vicki N.

    2007-01-01

    An accurate method of predicting the date of parturition in the bitch is clinically useful to minimize or prevent reproductive losses by timely intervention. Similarly, an accurate method of timing canine ovulation and gestation is critical for development of assisted reproductive technologies, e.g. estrous synchronization and embryo transfer. This review discusses present methods for accurately timing canine gestational age and outlines their use in clinical management of high-risk pregnancies and embryo transfer research. PMID:17904630

  12. Clinical audit and quality improvement - time for a rethink?

    PubMed

    Bowie, Paul; Bradley, Nicholas A; Rushmer, Rosemary

    2012-02-01

    Evidence of the benefits of clinical audit to patient care is limited, despite its longevity. Additionally, numerous attitudinal, professional and organizational barriers impede its effectiveness. Yet, audit remains a favoured quality improvement (QI) policy lever. Growing interest in QI techniques suggests it is timely to re-examine audit. Clinical audit advisors assist health care teams, so hold unique cross-cutting perspectives on the strategic and practical application of audit in NHS organizations. We aimed to explore their views and experiences of their role in supporting health care teams in the audit process. Qualitative study using semi-structured and focus group interviews. Participants were purposively sampled (n = 21) across health sectors in two large Scottish NHS Boards. Interviews were audio-taped, transcribed and a thematic analysis performed. Work pressure and lack of protected time were cited as audit barriers, but these hide other reasons for non-engagement. Different professions experience varying opportunities to participate. Doctors have more opportunities and may dominate or frustrate the process. Audit is perceived as a time-consuming, additional chore and a managerially driven exercise with no associated professional rewards. Management failure to support and resource changes fuels low motivation and disillusionment. Audit is regarded as a 'political' tool stifled by inter-professional differences and contextual constraints. The findings echo previous studies. We found limited evidence that audit as presently defined and used is meeting policy makers' aspirations. The quality and safety improvement focus is shifting towards 'alternative' systems-based QI methods, but research to suggest that these will be any more impactful is also lacking. Additionally, identified professional, educational and organizational barriers still need to be overcome. A debate on how best to overcome the limitations of audit and its place alongside other approaches

  13. Accurate documentation in cultural heritage by merging TLS and high-resolution photogrammetric data

    NASA Astrophysics Data System (ADS)

    Grussenmeyer, Pierre; Alby, Emmanuel; Assali, Pierre; Poitevin, Valentin; Hullo, Jean-François; Smigiel, Eddie

    2011-07-01

    Several recording techniques are used together in Cultural Heritage Documentation projects. The main purpose of the documentation and conservation works is usually to generate geometric and photorealistic 3D models for both accurate reconstruction and visualization purposes. The recording approach discussed in this paper is based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and criteria such as geometry, texture, accuracy, resolution, recording and processing time are often compared. TLS techniques (time of flight or phase shift systems) are often used for the recording of large and complex objects or sites. Point cloud generation from images by dense stereo or multi-image matching can be used as an alternative or a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one, as the acquisition system is limited to a digital camera and a few accessories only. Indeed, the stereo matching process offers a cheap, flexible and accurate solution to get 3D point clouds and textured models. The calibration of the camera allows the processing of distortion free images, accurate orientation of the images, and matching at the subpixel level. The main advantage of this photogrammetric methodology is to get at the same time a point cloud (the resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, we can use the resulting data in much the same way as a TLS point cloud, but with considerably better raster information for textures. The paper will address the automation of recording and processing steps, the assessment of the results, and the deliverables (e.g. PDF-3D files). Visualization aspects of the final 3D models are presented. Two case studies with merged photogrammetric and TLS data are finally presented: - The Gallo-Roman Theatre of Mandeure (France); - The

  14. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
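
    As a rough illustration of the fitting procedure described above, the sketch below fits a hydrolysis time course to the common Weibull-type form Y(t) = Y_inf * (1 - exp(-(t/λ)^n)), where λ plays the role of the characteristic time and n is the shape parameter. The exact parameterisation used by the authors may differ, and the data points are hypothetical.

    ```python
    # Minimal sketch: fit a saccharification time course to a Weibull-type curve,
    # assuming Y(t) = Y_inf * (1 - exp(-(t/lam)**n)). Not the authors' exact model;
    # the data below are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_yield(t, y_inf, lam, n):
        """Hydrolysis yield at time t; lam ~ characteristic time, n ~ shape."""
        return y_inf * (1.0 - np.exp(-(t / lam) ** n))

    # Hypothetical glucose-yield data (time in h, yield as fraction of theoretical)
    t = np.array([2.0, 6.0, 12.0, 24.0, 48.0, 72.0])
    y = np.array([0.12, 0.31, 0.48, 0.66, 0.78, 0.82])

    (y_inf, lam, n), _ = curve_fit(weibull_yield, t, y, p0=[0.9, 20.0, 1.0])
    print(f"Y_inf = {y_inf:.2f}, lambda = {lam:.1f} h, n = {n:.2f}")
    ```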

  15. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
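
    Running the physics on a separate finite-volume grid implies remapping field values between grids. The fragment below is a deliberately simplified, first-order area-weighted remap between two unstructured grids, assuming a precomputed overlap-area matrix; CAM-SE's actual physics-grid remapping is considerably more sophisticated, so treat this only as a conceptual sketch.

    ```python
    # Simplified first-order conservative remap between unstructured grids,
    # assuming the overlap areas between source and target cells are known.
    # Illustrative only; not the CAM-SE algorithm.
    import numpy as np

    def area_weighted_remap(src_values, overlap_areas):
        """src_values: (n_src,) cell-mean values.
        overlap_areas: (n_tgt, n_src) intersection areas between target and source cells."""
        tgt_area = overlap_areas.sum(axis=1)            # total area of each target cell
        return (overlap_areas @ src_values) / tgt_area  # preserves the area integral
    ```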

  16. Utilizing a Human Factors Nursing Worksystem Improvement Framework to Increase Nurses' Time at the Bedside and Enhance Safety.

    PubMed

    Probst, C Adam; Carter, Megan; Cadigan, Caton; Dalcour, Cortney; Cassity, Cindy; Quinn, Penny; Williams, Tiana; Montgomery, Donna Cook; Wilder, Claudia; Xiao, Yan

    2017-02-01

    The aim of this study is to increase nurses' time for direct patient care and improve safety via a novel human factors framework for nursing worksystem improvement. Time available for direct patient care influences outcomes, yet worksystem barriers prevent nurses from spending adequate time at the bedside. A novel human factors framework was developed for worksystem improvement in 3 units at 2 facilities. Objectives included improving nurse efficiency as measured by time-and-motion studies, reducing missing medications and subsequent trips to medication rooms, and improving medication safety. Worksystem improvement resulted in time savings of 16% to 32% per nurse per 12-hour shift. Requests for missing medications dropped from 3.2 to 1.3 per day. Nurse medication room trips were reduced by 30%, and nurse-reported medication errors fell from 1.2 to 0.8 and from 6.3 to 4.0 per month. An innovative human factors framework for nursing worksystem improvement provided practical and high priority targets for interventions that significantly improved the nursing worksystem.

  17. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
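
    To illustrate why quaternions are attractive for spin tracking, the sketch below composes per-element spin rotations as unit quaternions and applies the result to a spin vector, which keeps |S| = 1 to machine precision over long runs. This is not the GPUSPINTRACK implementation; the rotation axes and angles are arbitrary placeholders.

    ```python
    # Illustrative quaternion composition of per-element spin rotations.
    # Not the GPUSPINTRACK code; axes/angles below are hypothetical.
    import numpy as np

    def quat_from_axis_angle(axis, angle):
        """Unit quaternion (w, x, y, z) for a rotation of 'angle' about 'axis'."""
        axis = np.asarray(axis, float)
        axis /= np.linalg.norm(axis)
        return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

    def quat_multiply(q1, q2):
        """Hamilton product q1 * q2 (apply q2 first, then q1)."""
        w1, v1 = q1[0], q1[1:]
        w2, v2 = q2[0], q2[1:]
        return np.concatenate(([w1 * w2 - v1 @ v2],
                               w1 * v2 + w2 * v1 + np.cross(v1, v2)))

    def rotate_spin(q, spin):
        """Rotate a spin 3-vector by unit quaternion q."""
        w, v = q[0], q[1:]
        return spin + 2.0 * np.cross(v, np.cross(v, spin) + w * spin)

    # Compose hypothetical per-element rotations and apply to an initial spin.
    q_total = np.array([1.0, 0.0, 0.0, 0.0])       # identity rotation
    for axis, angle in [((0, 1, 0), 0.01), ((1, 0, 0), 0.002)]:
        q_total = quat_multiply(quat_from_axis_angle(axis, angle), q_total)
    spin = rotate_spin(q_total, np.array([0.0, 0.0, 1.0]))
    print(spin, np.linalg.norm(spin))              # norm stays 1 to machine precision
    ```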

  18. Calculating accurate aboveground dry weight biomass of herbaceous vegetation in the Great Plains: A comparison of three calculations to determine the least resource intensive and most accurate method

    Treesearch

    Ben Butler

    2007-01-01

    Obtaining accurate biomass measurements is often a resource-intensive task. Data collection crews often spend large amounts of time in the field clipping, drying, and weighing grasses to calculate the biomass of a given vegetation type. Such a problem is currently occurring in the Great Plains region of the Bureau of Indian Affairs. A study looked at six reservations...

  19. Just-in-Time Training: A Novel Approach to Quality Improvement Education.

    PubMed

    Knutson, Allison; Park, Nesha D; Smith, Denise; Tracy, Kelly; Reed, Danielle J W; Olsen, Steven L

    2015-01-01

    Just-in-time training (JITT) is accepted in medical education as a training method for newer concepts or seldom-performed procedures. Providing JITT to a large nursing staff may be an effective method to teach quality improvement (QI) initiatives. We sought to determine if JITT could increase knowledge of a specific nutrition QI initiative. Members of the nutrition QI team interviewed staff using the Frontline Contextual Inquiry to assess knowledge regarding the specific QI project. The inquiry was completed pre- and post-JITT. A JITT educational cart was created, which allowed trainers to bring the educational information to the bedside for a short, small group educational session. The results demonstrated a marked improvement in the knowledge of the frontline staff regarding our Vermont Oxford Network involvement and the specifics of the nutrition QI project. Just-in-time training can be a valuable and effective method to disseminate QI principles to a large audience of staff members.

  20. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http
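
    For the astrophysical use of such rest frequencies, the observed sky frequency of a line is obtained from the laboratory rest frequency and the source velocity. The snippet below shows the standard radio-convention Doppler relation; the numerical rest frequency and velocity are placeholders, not values from this work.

    ```python
    # Radio-convention Doppler shift applied to a catalogued rest frequency.
    # The example numbers are illustrative placeholders only.
    C_KMS = 299792.458  # speed of light in km/s

    def sky_frequency(rest_freq_mhz, v_lsr_kms):
        """Observed frequency (MHz) for a source at LSR velocity v_lsr_kms."""
        return rest_freq_mhz * (1.0 - v_lsr_kms / C_KMS)

    print(sky_frequency(85531.5, 7.0))  # hypothetical rest frequency and velocity
    ```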