Sample records for accuracy precision robustness

  1. Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System

    NASA Astrophysics Data System (ADS)

    Hu, Qing; Hu, Yuwei

    The mathematical model of the linear elevator maglev guiding system is analyzed in this paper. Because the linear elevator requires strong stability and robustness to run, the integer-order PID controller was extended to fractional order. To improve the steady-state precision, response speed and robustness of the system, and to enhance the accuracy of the fractional-order PIλDμ controller parameters, fuzzy control is combined with the fractional-order PIλDμ control, with fuzzy logic used to adjust the parameters online. The simulations reveal that the system has a faster response, higher tracking precision, and stronger robustness to disturbances.
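
The fractional-order PIλDμ term described above can be sketched with the Grünwald-Letnikov discrete approximation of a fractional derivative. This is a minimal illustration, not the paper's controller: the gains and orders below are assumed values, and the fuzzy online tuning is omitted.

```python
# Sketch of a fractional-order PI^lambda D^mu control term via the
# Gruenwald-Letnikov (GL) approximation. Gains/orders are illustrative.

def gl_coeffs(alpha, n):
    """GL binomial coefficients (-1)^k * C(alpha, k) via the standard recurrence."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c

def frac_diff(errors, alpha, h):
    """GL fractional derivative of order alpha; negative alpha integrates."""
    c = gl_coeffs(alpha, len(errors))
    return sum(ck * e for ck, e in zip(c, reversed(errors))) / h ** alpha

def fopid(errors, h, kp=1.0, ki=0.5, kd=0.1, lam=0.9, mu=0.7):
    """PI^lambda D^mu law: proportional + fractional integral + fractional derivative."""
    e = errors[-1]
    return kp * e + ki * frac_diff(errors, -lam, h) + kd * frac_diff(errors, mu, h)
```

With alpha = 1 the GL sum reduces to an ordinary first difference, which is a quick sanity check on the recurrence.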

  2. Multi-wavelength approach towards on-product overlay accuracy and robustness

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John

    2018-03-01

    Success of the diffraction-based overlay (DBO) technique1,4,5 in the industry stems not just from its good precision and low tool-induced shift, but also from the measurement accuracy2 and robustness that DBO can provide. Significant effort has been invested to capitalize on DBO's potential to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) that determine how to use these wavelengths are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All of these aspects are discussed. Moreover, another means of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method that makes overlay measurements immune to process stack variations and also reports health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported by both measurement data and simulation from many product stacks.

  3. Analysis of polonium-210 in food products and bioassay samples by isotope-dilution alpha spectrometry.

    PubMed

    Lin, Zhichao; Wu, Zhongyu

    2009-05-01

    A rapid and reliable radiochemical method coupled with a simple and compact plating apparatus was developed, validated, and applied for the analysis of (210)Po in a variety of food products and bioassay samples. The method performance characteristics, including accuracy, precision, robustness, and specificity, were evaluated along with a detailed measurement uncertainty analysis. With high Po recovery, improved energy resolution, and effective removal of interfering elements by chromatographic extraction, the overall method accuracy was determined to be better than 5%, with a measurement precision of 10% at the 95% confidence level.

  4. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    NASA Astrophysics Data System (ADS)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive an independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology, and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.
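
A much-simplified version of the chronology integration above is the precision-weighted combination of a layer-count age (with counting uncertainty) and an independent radiometric age. The paper uses full Bayesian age modelling; this inverse-variance sketch, with illustrative ages, only conveys the weighting idea.

```python
# Inverse-variance (precision-weighted) combination of two age estimates.
# A crude stand-in for Bayesian age modelling; values are illustrative.

def combine_ages(age1, sigma1, age2, sigma2):
    """Combine two independent age estimates (years) with 1-sigma errors."""
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
    age = (w1 * age1 + w2 * age2) / (w1 + w2)      # weighted mean age
    sigma = (w1 + w2) ** -0.5                      # combined uncertainty
    return age, sigma

# A layer count of 1000 +/- 10 yr against a radiometric date of 1040 +/- 30 yr:
age, sigma = combine_ages(1000.0, 10.0, 1040.0, 30.0)
```

Note the combined uncertainty is always smaller than either input's, which is why integrating independent chronologies tightens the age model.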

  5. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
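
The fusion idea above can be illustrated with a scalar Kalman measurement update that merges readings of one pose coordinate from two cameras of different quality. This is a minimal sketch, not the paper's multicamera filter; the measurement values and noise variances are assumed.

```python
# Scalar Kalman-style fusion of two noisy camera measurements of one
# pose coordinate. Variances are assumed known per sensor.

def kalman_update(x, P, z, R):
    """Measurement update: state x, variance P, measurement z, noise variance R."""
    K = P / (P + R)            # Kalman gain
    x_new = x + K * (z - x)    # fused estimate pulls toward the measurement
    P_new = (1.0 - K) * P      # uncertainty shrinks after each update
    return x_new, P_new

# Fuse a vague prior with two cameras of different accuracy.
x, P = 0.0, 1e6                           # vague prior
x, P = kalman_update(x, P, 1.02, 0.01)    # camera 1 (accurate)
x, P = kalman_update(x, P, 1.30, 0.10)    # camera 2 (noisier)
```

The fused estimate sits near the accurate camera's reading, and its variance is below either sensor's noise variance, which is the advantage of fusion the abstract points to.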

  6. Instrument Quality Control.

    PubMed

    Jayakody, Chatura; Hull-Ryde, Emily A

    2016-01-01

    Well-defined quality control (QC) processes are used to determine whether a certain procedure or action conforms to a widely accepted standard and/or set of guidelines, and are important components of any laboratory quality assurance program (Popa-Burke et al., J Biomol Screen 14: 1017-1030, 2009). In this chapter, we describe QC procedures useful for monitoring the accuracy and precision of laboratory instrumentation, most notably automated liquid dispensers. Two techniques, gravimetric QC and photometric QC, are highlighted in this chapter. When used together, these simple techniques provide a robust process for evaluating liquid handler accuracy and precision, and critically underpin high-quality research programs.
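
Gravimetric QC of a liquid dispenser reduces to two statistics: relative inaccuracy of the mean dispensed volume and precision as a coefficient of variation. A minimal sketch, assuming water (1 mg ~ 1 µL) and illustrative target/weights:

```python
# Gravimetric QC statistics for a liquid dispenser: % inaccuracy and % CV.
from statistics import mean, stdev

def gravimetric_qc(weights_mg, target_ul, density=1.0):
    """Return (% inaccuracy, % CV) from replicate dispense weights in mg."""
    vols = [w / density for w in weights_mg]              # mg water -> uL
    acc = 100.0 * (mean(vols) - target_ul) / target_ul    # accuracy (% error)
    cv = 100.0 * stdev(vols) / mean(vols)                 # precision (% CV)
    return acc, cv
```

Photometric QC would follow the same pattern with absorbance-derived volumes in place of weights.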

  7. High precision tracking of a piezoelectric nano-manipulator with parameterized hysteresis compensation

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Zhang, Yangming

    2018-06-01

    High performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscopy), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics are eliminated by online estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
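
The classical Prandtl-Ishlinskii model underlying the adaptive rate-dependent version above is a weighted sum of play (backlash) operators. A minimal rate-independent sketch, with illustrative thresholds and weights rather than the paper's identified parameters:

```python
# Classical Prandtl-Ishlinskii hysteresis model: weighted play operators.

def play(signal, r, y0=0.0):
    """Play (backlash) operator with threshold r applied sample-by-sample."""
    y, out = y0, []
    for u in signal:
        y = max(u - r, min(u + r, y))   # output lags input by up to r
        out.append(y)
    return out

def pi_model(signal, thresholds, weights):
    """Superpose play branches with given weights."""
    branches = [play(signal, r) for r in thresholds]
    return [sum(w * b[i] for w, b in zip(weights, branches))
            for i in range(len(signal))]
```

With threshold 0 the play operator is the identity; with a positive threshold the output lags direction reversals, which is the hysteresis loop the compensator must invert.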

  8. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    PubMed

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage were assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and a precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is low and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Determination of dabigatran, rivaroxaban and apixaban by ultra-performance liquid chromatography - tandem mass spectrometry (UPLC-MS/MS) and coagulation assays for therapy monitoring of novel direct oral anticoagulants.

    PubMed

    Schmitz, E M H; Boonen, K; van den Heuvel, D J A; van Dongen, J L J; Schellings, M W M; Emmen, J M A; van der Graaf, F; Brunsveld, L; van de Kerkhof, D

    2014-10-01

    Three novel direct oral anticoagulants (DOACs) have recently been registered by the Food and Drug Administration and European Medicines Agency Commission: dabigatran, rivaroxaban, and apixaban. To quantify DOACs in plasma, various dedicated coagulation assays have been developed. To develop and validate a reference ultra-performance liquid chromatography - tandem mass spectrometry (UPLC-MS/MS) method and to evaluate the analytical performance of several coagulation assays for quantification of dabigatran, rivaroxaban, and apixaban. The developed UPLC-MS/MS method was validated by determination of precision, accuracy, specificity, matrix effects, lower limits of detection, carry-over, recovery, stability, and robustness. The following coagulation assays were evaluated for accuracy and precision: laboratory-developed (LD) diluted thrombin time (dTT), Hemoclot dTT, Pefakit PiCT, ECA, Liquid anti-Xa, Biophen Heparin (LRT), and Biophen DiXal anti-Xa. Agreement between the various coagulation assays and UPLC-MS/MS was determined with random samples from patients using dabigatran or rivaroxaban. The UPLC-MS/MS method was shown to be accurate, precise, sensitive, stable, and robust. The dabigatran coagulation assay showing the best precision, accuracy and agreement with the UPLC-MS/MS method was the LD dTT test. For rivaroxaban, the anti-factor Xa assays were superior to the PiCT-Xa assay with regard to precision, accuracy, and agreement with the reference method. For apixaban, the Liquid anti-Xa assay was superior to the PiCT-Xa assay. Statistically significant differences were observed between the various coagulation assays as compared with the UPLC-MS/MS reference method. It is currently unknown whether these differences are clinically relevant. When DOACs are quantified with coagulation assays, comparison with a reference method as part of proficiency testing is therefore pivotal. © 2014 International Society on Thrombosis and Haemostasis.

  10. Prediction of beef carcass and meat traits from rearing factors in young bulls and cull cows.

    PubMed

    Soulat, J; Picard, B; Léger, S; Monteils, V

    2016-04-01

    The aim of this study was to predict the beef carcass and LM (thoracis part) characteristics and the sensory properties of the LM from rearing factors applied during the fattening period. Individual data from 995 animals (688 young bulls and 307 cull cows) in 15 experiments were used to establish prediction models. The data concerned rearing factors (13 variables), carcass characteristics (5 variables), LM characteristics (2 variables), and LM sensory properties (3 variables). In this study, 8 prediction models were established: dressing percentage and the proportions of fat tissue and muscle in the carcass to characterize the beef carcass; cross-sectional area of fibers (mean fiber area) and isocitrate dehydrogenase activity to characterize the LM; and, finally, overall tenderness, juiciness, and flavor intensity scores to characterize the LM sensory properties. A random effect was considered in each model: the breed for the prediction models for the carcass and LM characteristics, and the trained taste panel for the prediction of the meat sensory properties. To evaluate the quality of the prediction models, 3 criteria were measured: robustness, accuracy, and precision. A model was considered robust when the root mean square errors of prediction of the calibration and validation sub-data sets were close to one another. Except for the mean fiber area model, the obtained prediction models were robust. The prediction models were considered to have high accuracy when the mean prediction error (MPE) was ≤0.10 and high precision when the fit statistic was closest to 1. The prediction of the characteristics of the carcass from the rearing factors had high precision ( > 0.70) and high prediction accuracy (MPE < 0.10), except for the fat percentage model ( = 0.67, MPE = 0.16). However, the predictions of the LM characteristics and LM sensory properties from the rearing factors were not sufficiently precise ( < 0.50) or accurate (MPE > 0.10). 
Only the flavor intensity score of the beef could be satisfactorily predicted from the rearing factors, with high precision ( = 0.72) and accuracy (MPE = 0.10). All the prediction models displayed different effects of the rearing factors according to animal category (young bulls or cull cows). Consequently, these prediction models indicate that rearing factors during the fattening period must be adapted to the animal category to optimize carcass traits.
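
The three model-quality criteria above can be computed directly. A minimal sketch with illustrative function names: robustness compares RMSEP on calibration vs. validation subsets, and accuracy uses the mean relative prediction error (MPE).

```python
# Model-quality criteria: RMSEP, mean prediction error, robustness check.
from math import sqrt

def rmsep(observed, predicted):
    """Root mean square error of prediction."""
    return sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                / len(observed))

def mpe(observed, predicted):
    """Mean relative prediction error; <= 0.10 read as high accuracy."""
    return sum(abs(o - p) / o for o, p in zip(observed, predicted)) / len(observed)

def is_robust(rmsep_cal, rmsep_val, tol=0.1):
    """Robust when calibration and validation RMSEP are close (tol is assumed)."""
    return abs(rmsep_cal - rmsep_val) / rmsep_cal <= tol
```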

  11. Enumeration of residual white blood cells in leukoreduced blood products: Comparing flow cytometry with a portable microscopic cell counter.

    PubMed

    Castegnaro, Silvia; Dragone, Patrizia; Chieregato, Katia; Alghisi, Alberta; Rodeghiero, Francesco; Astori, Giuseppe

    2016-04-01

    Transfusion of blood components is potentially associated with the risk of cell-mediated adverse events, and current guidelines require a reduction of residual white blood cells (rWBC) to below 1 × 10(6) WBC/unit. The reference method for enumerating such rare events is flow cytometry (FCM). The ADAM-rWBC microscopic cell counter has been proposed as an alternative: it measures leukocytes after staining with propidium iodide. We tested the ADAM-rWBC for its ability to enumerate rWBC in red blood cells and concentrates. We validated flow cytometry for linearity, precision, accuracy and robustness, and then compared the ADAM-rWBC results with FCM. Our data confirm the linearity, accuracy, precision and robustness of FCM. The ADAM-rWBC showed adequate precision and accuracy. Even though Bland-Altman analysis of the paired data indicated that the two systems are comparable, the rWBC values obtained by the ADAM-rWBC were significantly higher than those from FCM. In conclusion, the ADAM-rWBC cell counter could represent an alternative where FCM expertise is not available, even if there is a risk that borderline products could be misclassified. Copyright © 2015 Elsevier Ltd. All rights reserved.
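
The Bland-Altman agreement analysis mentioned above computes the mean bias between paired measurements and 95% limits of agreement. A minimal sketch with illustrative paired counts:

```python
# Bland-Altman analysis: mean bias and 95% limits of agreement on pairs.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (bias, (lower LoA, upper LoA)) for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)
```

A systematic offset between two counters (as reported here between ADAM-rWBC and FCM) shows up as a nonzero bias even when the limits of agreement are narrow.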

  12. Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.

    PubMed

    Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter

    2017-09-01

    Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, mass spectrometers making use of time-of-flight mass analysers are a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software package, which allows fast and precise quantitative analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
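
The peak-finding-plus-integration pipeline described above can be sketched in a few lines: locate local maxima above a noise threshold, then integrate each peak numerically and form isotope ratios from the areas. This is an illustration, not the authors' software; the spectrum and threshold are made up.

```python
# Minimal TOF-MS analysis sketch: threshold-based peak finding followed
# by trapezoid-rule integration of each peak to form an isotope ratio.

def find_peaks(y, threshold):
    """Indices of local maxima above the noise threshold."""
    return [i for i in range(1, len(y) - 1)
            if y[i] > threshold and y[i] >= y[i - 1] and y[i] > y[i + 1]]

def peak_area(y, center, half_width):
    """Trapezoid-rule area over a window around the peak (dx = 1 bin)."""
    lo, hi = max(0, center - half_width), min(len(y) - 1, center + half_width)
    return sum((y[i] + y[i + 1]) / 2.0 for i in range(lo, hi))

spectrum = [0, 0, 1, 8, 1, 0, 0, 1, 4, 1, 0, 0]   # toy two-isotope spectrum
peaks = find_peaks(spectrum, threshold=2)
areas = [peak_area(spectrum, p, 2) for p in peaks]
ratio = areas[1] / areas[0]                        # isotope ratio from areas
```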

  13. A robust rotation-invariance displacement measurement method for a micro-/nano-positioning system

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Zhang, Xianmin; Wu, Heng; Li, Hai; Gan, Jinqiang

    2018-05-01

    A robust and high-precision displacement measurement method for a compliant mechanism-based micro-/nano-positioning system is proposed. The method is composed of an integer-pixel and a sub-pixel matching procedure. In the proposed algorithm (Pro-A), an improved ring projection transform (IRPT) and gradient information are used as features for approximating the coarse candidates and fine locations, respectively. Simulations are conducted and the results show that the Pro-A has the ability of rotation-invariance and strong robustness, with a theoretical accuracy of 0.01 pixel. To validate the practical performance, a series of experiments are carried out using a computer micro-vision and laser interferometer system (LIMS). The results demonstrate that both the LIMS and Pro-A can achieve high precision, while the Pro-A has better stability and adaptability.

  14. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  15. Simultaneous Determination of Potassium Clavulanate and Amoxicillin Trihydrate in Bulk, Pharmaceutical Formulations and in Human Urine Samples by UV Spectrophotometry

    PubMed Central

    Gujral, Rajinder Singh; Haque, Sk Manirul

    2010-01-01

    A simple and sensitive UV spectrophotometric method was developed and validated for the simultaneous determination of Potassium Clavulanate (PC) and Amoxicillin Trihydrate (AT) in bulk, pharmaceutical formulations and in human urine samples. The method was linear in the range of 0.2–8.5 μg/ml for PC and 6.4–33.6 μg/ml for AT. The absorbance was measured at 205 and 271 nm for PC and AT, respectively. The method was validated with respect to accuracy, precision, specificity, ruggedness, robustness, limit of detection and limit of quantitation. It was used successfully for the quality assessment of four PC and AT drug products and of human urine samples with good precision and accuracy. The method is simple, specific, precise, accurate, reproducible and low cost. PMID:23675211
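
Simultaneous two-component UV determination rests on Beer's law at two wavelengths: the two absorbances give a 2x2 linear system in the two concentrations. A minimal sketch; the absorptivity coefficients and absorbances below are assumed for illustration, not taken from the paper.

```python
# Two-component Beer's-law determination: solve a 2x2 linear system
# relating absorbances at two wavelengths to two concentrations.

def solve2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a 2x2 system."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Rows: assumed absorptivities of PC and AT at 205 nm and 271 nm.
e = [[0.12, 0.05],
     [0.02, 0.09]]
A205, A271 = 0.74, 0.94          # measured absorbances (illustrative)
c_pc, c_at = solve2x2(e[0][0], e[0][1], e[1][0], e[1][1], A205, A271)
```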

  16. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature-matching methods achieve high operating efficiency but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation model is applied to the two matching images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two images. Then the SURF algorithm is applied to detect and describe feature points in the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function and a projective invariant are employed to eliminate mismatches and thereby improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.

  17. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with the ability to estimate model errors online, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; therefore, it has the advantage of handling various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors than the traditional unscented Kalman filter (UKF).
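
The UT sampling step underlying the UPVSF can be sketched for a scalar state: propagate deterministically chosen sigma points through a nonlinearity and recover the transformed mean and variance from weighted sums. This is the generic unscented transform, not the paper's full filter; the scaling parameter is a common default.

```python
# Scalar unscented transform: sigma points -> transformed mean/variance.
from math import sqrt

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through f via sigma points."""
    n = 1                                      # scalar state for illustration
    spread = sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear map the UT reproduces the exact transformed mean and variance, which is a standard correctness check.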

  18. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    Generating of a highly precise map grows up with development of autonomous driving vehicles. The highly precise map includes a precision of centimetres level unlike an existing commercial map with the precision of meters level. It is important to understand road environments and make a decision for autonomous driving since a robust localization is one of the critical challenges for the autonomous driving car. The one of source data is from a Lidar because it provides highly dense point cloud data with three dimensional position, intensities and ranges from the sensor to target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination with a feature descriptor and a classification algorithm in machine learning. Objects can be distinguish by geometrical features based on a surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm in machine learning are used. Final step is to evaluate accuracies of obtained results by comparing them to reference data The results show sufficient accuracy and it will be utilized to generate a highly precise road map.

  19. Ariadne's Thread: A Robust Software Solution Leading to Automated Absolute and Relative Quantification of SRM Data.

    PubMed

    Nasso, Sara; Goetze, Sandra; Martens, Lennart

    2015-09-04

    Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique to quantify protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method to fully automate absolute quantification and substitute for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software tool. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. The results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and the presence of complex background, that Ariadne outperforms mProphet on the noisier data set, and that it improved Skyline's accuracy and precision 2-fold for the lowest abundant dilution with complex background. Remarkably, Ariadne could statistically distinguish all the different abundances from each other, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
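
The external calibration curve method mentioned above amounts to fitting a least-squares line to known standards and inverting it for unknowns. A minimal sketch; the standard concentrations and responses below are illustrative, not Ariadne's data.

```python
# External calibration curve: OLS fit to standards, then inversion.

def fit_line(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def quantify(response, slope, intercept):
    """Invert the calibration line to get an amount from a response."""
    return (response - intercept) / slope

conc = [0.1, 0.2, 0.5, 1.0, 2.0]       # fmol standards (illustrative)
resp = [1.1, 2.0, 5.1, 10.0, 20.1]     # measured peak areas (illustrative)
slope, intercept = fit_line(conc, resp)
amount = quantify(7.6, slope, intercept)
```

Checking linearity over the measured dynamic range (e.g. via residuals of this fit) is what justifies using the curve for absolute quantification.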

  20. A second-order cell-centered Lagrangian ADER-MOOD finite volume scheme on multidimensional unstructured meshes for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri

    2018-04-01

    In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, maintaining an essentially non-oscillatory behavior on discontinuous profiles, general robustness ensuring physical admissibility of the numerical solution, and precision where appropriate.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edmunds, D; Donovan, E

    Purpose: To determine whether the Microsoft Kinect Version 2 (Kinect v2), a commercial off-the-shelf (COTS) depth sensors designed for entertainment purposes, were robust to the radiotherapy treatment environment and could be suitable for monitoring of voluntary breath-hold compliance. This could complement current visual monitoring techniques, and be useful for heart sparing left breast radiotherapy. Methods: In-house software to control Kinect v2 sensors, and capture output information, was developed using the free Microsoft software development kit, and the Cinder creative coding C++ library. Each sensor was used with a 12m USB 3.0 active cable. A solid water block was used asmore » the object. The depth accuracy and precision of the sensors was evaluated by comparing Kinect reported distance to the object with a precision laser measurement across a distance range of 0.6m to 2.0 m. The object was positioned on a high-precision programmable motion platform and moved in two programmed motion patterns and Kinect reported distance logged. Robustness to the radiation environment was tested by repeating all measurements with a linear accelerator operating over a range of pulse repetition frequencies (6Hz to 400Hz) and dose rates 50 to 1500 monitor units (MU) per minute. Results: The complex, consistent relationship between true and measured distance was unaffected by the radiation environment, as was the ability to detect motion. Sensor precision was < 1 mm and the accuracy between 1.3 mm and 1.8 mm when a distance correction was applied. Both motion patterns were tracked successfully with a root mean squared error (RMSE) of 1.4 and 1.1 mm respectively. Conclusion: Kinect v2 sensors are capable of tracking pre-programmed motion patterns with an accuracy <2 mm and appear robust to the radiotherapy treatment environment. A clinical trial using Kinect v2 sensor for monitoring voluntary breath hold has ethical approval and is open to recruitment. 
The authors are supported by a National Institute of Health Research (NIHR) Career Development Fellowship (CDF-2013-06-005). Microsoft Corporation donated three sensors. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.
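    The tracking figure of merit quoted above is a root mean squared error over the motion trace. As a minimal sketch (the breathing-like motion pattern and the constant 1 mm sensor bias below are hypothetical, not the study's data), the RMSE between a programmed trajectory and the logged sensor distances can be computed as:

```python
import math

def rmse(reference, measured):
    """Root-mean-squared error between a programmed motion pattern and
    the distances reported by the depth sensor (both in mm)."""
    assert len(reference) == len(measured)
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / len(reference))

# Hypothetical breathing-like motion: 5 mm amplitude sinusoid sampled at 30 Hz,
# with a fixed 1 mm sensor bias standing in for measurement error.
t = [i / 30.0 for i in range(300)]
programmed = [5.0 * math.sin(2 * math.pi * 0.25 * ti) for ti in t]
tracked = [p + 1.0 for p in programmed]

print(round(rmse(programmed, tracked), 2))  # constant 1 mm offset gives RMSE of 1.0
```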

  2. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods, including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured from 130 males and females using the four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effects on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. Participant sex and measurement method were found to exert significant effects (p < 0.05) on the six measured foot dimensions. The 3D scanning method, with mean absolute difference values between 0.73 and 1.50 mm, showed the best precision among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference of 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because of its relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
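    The precision and accuracy comparison above rests on the mean absolute difference between paired measurements, judged against the ISO 20685 criteria. A minimal sketch with hypothetical paired foot-length readings (the 2 mm tolerance shown is illustrative; the per-dimension criteria must be taken from the standard itself):

```python
def mean_absolute_difference(method_a, method_b):
    """Mean absolute difference (mm) between paired measurements of the
    same foot dimension taken with two different methods."""
    return sum(abs(a - b) for a, b in zip(method_a, method_b)) / len(method_a)

# Hypothetical paired foot-length readings (mm): 3D scanner vs digital caliper.
scanner = [251.2, 243.8, 260.1, 248.5]
caliper = [250.4, 244.9, 259.0, 249.6]

mad = mean_absolute_difference(scanner, caliper)
# Illustrative 2 mm inter-method tolerance; the exact criterion per
# dimension should be taken from ISO 20685.
print(f"MAD = {mad:.2f} mm, within tolerance: {mad <= 2.0}")
```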

  3. Approach to method development and validation in capillary electrophoresis for enantiomeric purity testing of active basic pharmaceutical ingredients.

    PubMed

    Sokoliess, Torsten; Köller, Gerhard

    2005-06-01

    A chiral capillary electrophoresis system allowing the determination of the enantiomeric purity of an investigational new drug was developed using a generic method development approach for basic analytes. The method was optimized in terms of type and concentration of both cyclodextrin (CD) and electrolyte, buffer pH, temperature, voltage, and rinsing procedure. Optimal chiral separation of the analyte was obtained using an electrolyte with 2.5% carboxymethyl-beta-CD in 25 mM NaH2PO4 (pH 4.0). Interchanging the inlet and outlet vials after each run improved the method's precision. To assure the method's suitability for the control of enantiomeric impurities in pharmaceutical quality control, its specificity, linearity, precision, accuracy, and robustness were validated according to the requirements of the International Conference on Harmonization. The usefulness of our generic method development approach for the validation of robustness was demonstrated.

  4. Comparative study of quantitative phase imaging techniques for refractometry of optical fibers

    NASA Astrophysics Data System (ADS)

    de Dorlodot, Bertrand; Bélanger, Erik; Bérubé, Jean-Philippe; Vallée, Réal; Marquet, Pierre

    2018-02-01

    The refractive index difference profile of optical fibers is the key design parameter because it determines, among other properties, the insertion losses and propagating modes. Therefore, an accurate refractive index profiling method is of paramount importance to their development and optimization. Quantitative phase imaging (QPI) is one of the available tools for retrieving structural characteristics of optical fibers, including the refractive index difference profile. Because QPI has the advantage of being non-destructive, several different QPI methods have been developed over the last decades. Here, we present a comparative study of three available QPI techniques, namely the transport-of-intensity equation, quadriwave lateral shearing interferometry and digital holographic microscopy. To assess the accuracy and precision of these QPI techniques, quantitative phase images of the core of a well-characterized optical fiber were retrieved for each of them, and a robust image-processing procedure was applied to retrieve the refractive index difference profiles. Although the raw images from all three QPI methods suffered from different shortcomings, our robust automated image-processing pipeline successfully corrected them. After this treatment, all three QPI techniques yielded accurate, reliable and mutually consistent refractive index difference profiles, in agreement with the accuracy and precision of the refracted near-field benchmark measurement.

  5. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m-diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.
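    Deflectometry tests such as SCOTS measure surface slopes rather than heights, so the surface is recovered by integrating the slope map. A minimal 1-D sketch with hypothetical slope data (the actual test performs 2-D integration with calibrated geometry; this is not the authors' reconstruction code):

```python
def integrate_slopes(x, slopes, z0=0.0):
    """Reconstruct a 1-D surface height profile from measured slopes by
    trapezoidal integration (deflectometry yields slopes, not heights)."""
    z = [z0]
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        z.append(z[-1] + 0.5 * (slopes[i] + slopes[i - 1]) * dx)
    return z

# Hypothetical check: slopes of a parabolic mirror z = x^2 / (2R), R = 10 m,
# so dz/dx = x / R.  Trapezoidal integration recovers the height exactly here.
R = 10.0
xs = [i * 0.01 for i in range(101)]       # 1 m aperture, 10 mm sampling
measured_slopes = [x / R for x in xs]     # what a deflectometry test measures
heights = integrate_slopes(xs, measured_slopes)
print(abs(heights[-1] - xs[-1] ** 2 / (2 * R)) < 1e-9)
```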

  6. Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.

    PubMed

    Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng

    2015-12-01

    This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.

  7. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of their greater specificity than DTI in relating the dMRI signal to the underlying cellular microstructure. A large range of these diffusion microstructure models has been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy for estimating its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would perform well in terms of both run time and fit. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade, in which parameter values in a later optimization step are initialized or fixed from simpler models fitted in an earlier step, further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes, and makes available, standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and for combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
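    Two of the note's findings, gradient-free optimization and cascaded initialization, can be illustrated with a toy fit. The sketch below uses a simple shrinking-step coordinate search as a stand-in for Powell's conjugate-direction method (it is not the authors' GPU implementation) to fit a mono-exponential signal model; in a cascade, this simple fit would then seed the initialization of a more complex multi-compartment model:

```python
import math

def sse(params, bvals, signal, model):
    """Sum of squared errors between model predictions and the signal."""
    return sum((model(params, b) - s) ** 2 for b, s in zip(bvals, signal))

def coordinate_search(model, bvals, signal, x0, step=0.5, tol=1e-8):
    """Gradient-free shrinking-step coordinate search: try +/- step along
    each parameter axis, keep improvements, halve the step when none help.
    A toy stand-in for Powell's conjugate-direction method."""
    x = list(x0)
    best = sse(x, bvals, signal, model)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[i] += delta
                val = sse(cand, bvals, signal, model)
                if val < best:
                    x, best, improved = cand, val, True
        if not improved:
            step *= 0.5
    return x

# Mono-exponential signal model S(b) = S0 * exp(-b * D)
mono = lambda p, b: p[0] * math.exp(-b * p[1])

# Synthetic noiseless decay with S0 = 1.0 and D = 1.2 (arbitrary units)
bvals = [0.0, 0.2, 0.5, 1.0, 2.0, 3.0]
signal = [1.0 * math.exp(-b * 1.2) for b in bvals]

fit = coordinate_search(mono, bvals, signal, x0=[0.5, 0.5])
print(round(fit[0], 3), round(fit[1], 3))
```

In a cascade, `fit` would fix or initialize the corresponding parameters of a richer model in the next optimization step.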

  8. Rapid Development and Validation of Improved Reversed-Phase High-performance Liquid Chromatography Method for the Quantification of Mangiferin, a Polyphenol Xanthone Glycoside in Mangifera indica

    PubMed Central

    Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam

    2017-01-01

    Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid:acetonitrile (87:13) as the mobile phase at a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as the stationary phase, with detection at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness according to the International Conference on Harmonisation guidelines. A correlation coefficient greater than 0.999 indicated good curve fitting and linearity. Intra- and inter-day precision showed < 1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. The recovery values at three different spiking levels (50%, 100%, and 150%) were 100.47%, 100.89%, and 100.99%, respectively, and a standard deviation < 1% shows the high accuracy of the method. The results remained unaffected by small variations in the analytical parameters, demonstrating the robustness of the method. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with an M/Z value of 421. The developed HPLC assay is simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. 
The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness according to the International Conference on Harmonization guidelines. This study proved that the developed HPLC assay is simple, rapid and reliable for the quantification of mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International conference on harmonization, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748

  10. Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds

    PubMed Central

    Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.

    2013-01-01

    Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measures for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper- and lower-bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of an ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid’s humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measures of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
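    The proxies above are ordinary least squares regressions on log-transformed data, ranked by coefficient of determination and percent prediction error. A minimal sketch with hypothetical glenoid-diameter and body-mass values (not the paper's 863-specimen dataset):

```python
import math

def ols(x, y):
    """Ordinary least squares fit y = a + b*x, returning (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

def percent_prediction_error(actual, predicted):
    """Mean of |observed - predicted| / observed, in percent."""
    return sum(abs(o - p) / o for o, p in zip(actual, predicted)) / len(actual) * 100

# Hypothetical data: glenoid maximum diameter (mm) vs body mass (g).
# As is standard for allometry, the regression is done on log-transformed values.
diam = [4.1, 5.0, 6.3, 8.0, 10.1]
mass = [120.0, 210.0, 400.0, 800.0, 1600.0]
lx = [math.log(d) for d in diam]
ly = [math.log(m) for m in mass]
a, b, r2 = ols(lx, ly)
pred = [math.exp(a + b * xi) for xi in lx]
print(f"slope={b:.2f}, R^2={r2:.3f}, PPE={percent_prediction_error(mass, pred):.1f}%")
```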

  11. Moving Liquids with Sound: The Physics of Acoustic Droplet Ejection for Robust Laboratory Automation in Life Sciences.

    PubMed

    Hadimioglu, Babur; Stearns, Richard; Ellson, Richard

    2016-02-01

    Liquid handling instruments for life science applications based on droplet formation with focused acoustic energy or acoustic droplet ejection (ADE) were introduced commercially more than a decade ago. While the idea of "moving liquids with sound" was known in the 20th century, the development of precise methods for acoustic dispensing to aliquot life science materials in the laboratory began in earnest in the 21st century with the adaptation of the controlled "drop on demand" acoustic transfer of droplets from high-density microplates for high-throughput screening (HTS) applications. Robust ADE implementations for life science applications achieve excellent accuracy and precision by using acoustics first to sense the liquid characteristics relevant for its transfer, and then to actuate transfer of the liquid with customized application of sound energy to the given well and well fluid in the microplate. This article provides an overview of the physics behind ADE and its central role in both acoustical and rheological aspects of robust implementation of ADE in the life science laboratory and its broad range of ejectable materials. © 2015 Society for Laboratory Automation and Screening.

  12. Simultaneous quantification of withanolides in Withania somnifera by a validated high-performance thin-layer chromatographic method.

    PubMed

    Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S

    2008-01-01

    This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.

  13. Performance Evaluation of a UWB-RFID System for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Phan, Chan T.; Arndt, D.; Ngo, P.; Gross, J.; Ni, Jianjun; Rafford, Melinda

    2006-01-01

    This talk presents a brief overview of ultra-wideband (UWB) RFID systems, with emphasis on the performance evaluation of a commercially available UWB-RFID system. Many RFID systems are available today, but most provide only basic identification for auditing and inventory tracking. For applications that require high-precision real-time tracking, UWB technology has been shown to be a viable solution. The use of extremely short bursts of RF pulses offers high immunity to interference from other RF systems, precise tracking due to sub-nanosecond time resolution, and robust performance in multipath environments. The UWB-RFID system Sapphire DART (Digital Active RFID & Tracking) will be introduced in this talk. Laboratory testing using Sapphire DART is performed to evaluate its capabilities, such as coverage area, accuracy, ease of operation, and robustness. Performance evaluation of this system in an operational environment (a receiving warehouse) for inventory tracking is also conducted. Concepts of using UWB-RFID technology to track astronauts and assets are being proposed for space exploration.

  14. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  15. Composite adaptive control of belt polishing force for aero-engine blade

    NASA Astrophysics Data System (ADS)

    Zhao, Pengbing; Shi, Yaoyao

    2013-09-01

    The existing methods for blade polishing mainly focus on robot polishing and manual grinding. Due to the difficulty of high-precision control of the polishing force, blade surface precision is very low in robot polishing; in particular, the quality of the inlet and exhaust edges cannot satisfy the processing requirements. Manual grinding has low efficiency, high labor intensity and unstable processing quality; moreover, the polished surface is vulnerable to burn, and surface precision and integrity are difficult to ensure. In order to further improve profile accuracy and surface quality, a pneumatic flexible polishing force-exerting mechanism is designed and a dual-mode switching composite adaptive control (DSCAC) strategy is proposed, which combines Bang-Bang control and model reference adaptive control based on a fuzzy neural network (MRACFNN). Through the mode decision-making mechanism, Bang-Bang control is used to track the command signal quickly when the actual polishing force is far from the target value, and MRACFNN is utilized in smaller error ranges to improve system robustness and control precision. Based on the mathematical model of the force-exerting mechanism, simulation analysis of DSCAC is carried out. Simulation results show that the output polishing force tracks the given signal well. Finally, blade polishing experiments are carried out on the designed polishing equipment. Experimental results show that DSCAC effectively mitigates the influence of gas compressibility, valve dead-time effect, nonlinear valve flow, cylinder friction, measurement noise and other disturbances on the control precision of the polishing force, offering higher control precision, stronger robustness and better anti-interference capability than MRACFNN alone. 
The proposed research achieves high-precision control of the polishing force, effectively improves blade machining precision and surface consistency, and significantly reduces surface roughness.

  16. A Novel MEMS Gyro North Finder Design Based on the Rotation Modulation Technique

    PubMed Central

    Zhang, Yongjian; Zhou, Bin; Song, Mingliang; Hou, Bo; Xing, Haifeng; Zhang, Rong

    2017-01-01

    Gyro north finders have been widely used in maneuvering weapon orientation, oil drilling and other areas. This paper proposes a novel Micro-Electro-Mechanical System (MEMS) gyroscope north finder based on the rotation modulation (RM) technique. Two rotation modulation modes (static and dynamic modulation) are applied. Compared to traditional gyro north finders, only a single MEMS gyroscope and a single MEMS accelerometer are needed, reducing the total cost, since high-precision gyroscopes and accelerometers are the most expensive components in gyro north finders. To reduce the volume and enhance the reliability, wireless power and wireless data transmission techniques are introduced into the rotation modulation system for the first time. To enhance system robustness, the robust least squares method (RLSM) and robust Kalman filter (RKF) are applied in the static and dynamic north finding methods, respectively. Experimental characterization resulted in a static accuracy of 0.66° and a dynamic repeatability accuracy of 1°, confirming the excellent potential of the novel north finding system. The proposed single-gyro, single-accelerometer north finding scheme is universal and can be an important reference for both scientific research and industrial applications. PMID:28452936
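    Under rotation modulation, the horizontal earth-rate component sensed by the gyro varies sinusoidally with the table heading, and north is the heading at which the fitted cosine peaks. A minimal sketch with synthetic readings (the rate amplitude and bias values are hypothetical; the paper's static method additionally applies robust least squares to down-weight outliers, which is omitted here):

```python
import math

def north_azimuth(thetas, rates):
    """Estimate the azimuth of true north from gyro readings taken at
    uniformly spaced headings (rotation modulation).  The sensed rate is
    modelled as r(theta) = A*cos(theta) + B*sin(theta) + bias; with
    uniform headings the Fourier projections give the least-squares fit."""
    n = len(thetas)
    A = 2.0 / n * sum(r * math.cos(t) for t, r in zip(thetas, rates))
    B = 2.0 / n * sum(r * math.sin(t) for t, r in zip(thetas, rates))
    return math.atan2(B, A)  # heading at which the cosine fit peaks

# Synthetic example: horizontal earth rate 10 deg/h at this latitude,
# true north 30 deg from the table zero, constant gyro bias 50 deg/h.
truth = math.radians(30.0)
thetas = [2 * math.pi * k / 36 for k in range(36)]
rates = [10.0 * math.cos(t - truth) + 50.0 for t in thetas]
est = north_azimuth(thetas, rates)
print(round(math.degrees(est), 3))  # recovers the 30 deg true-north azimuth
```

Note that the constant bias term drops out of the Fourier projections, which is exactly why rotation modulation cancels the gyro bias.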

  17. Supercritical fluid chromatography for GMP analysis in support of pharmaceutical development and manufacturing activities.

    PubMed

    Hicks, Michael B; Regalado, Erik L; Tan, Feng; Gong, Xiaoyi; Welch, Christopher J

    2016-01-05

    Supercritical fluid chromatography (SFC) has long been a preferred method for enantiopurity analysis in support of pharmaceutical discovery and development, but implementation of the technique in regulated GMP laboratories has been somewhat slow, owing to limitations in instrument sensitivity, reproducibility, accuracy and robustness. In recent years, commercialization of next-generation analytical SFC instrumentation has addressed previous shortcomings, making the technique better suited for GMP analysis. In this study we investigate the use of modern SFC for enantiopurity analysis of several pharmaceutical intermediates and compare the results with the conventional HPLC approaches historically used for analysis in a GMP setting. The findings clearly illustrate that modern SFC now exhibits improved precision, reproducibility, accuracy and robustness, while also providing superior resolution and peak capacity compared to HPLC. Based on these findings, the use of modern chiral SFC is recommended for GMP studies of stereochemistry in pharmaceutical development and manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor-of-two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models, such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
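    As context for the per-event Bayesian analysis above: for an ideal mono-exponential decay, the classical maximum-likelihood estimate of the lifetime from individual photon arrival times is simply their sample mean; the Bayesian treatment refines this with instrument-response and background modeling. A minimal sketch with synthetic photon data (deliberately ignoring the IRF and the finite excitation period, both of which the paper models):

```python
import math, random

def mle_lifetime(arrival_times):
    """Maximum-likelihood estimate of a mono-exponential fluorescence
    lifetime from individual photon arrival times: for the density
    (1/tau) * exp(-t/tau), the MLE of tau is the sample mean."""
    return sum(arrival_times) / len(arrival_times)

random.seed(42)
tau_true = 2.5  # ns, hypothetical lifetime
photons = [random.expovariate(1.0 / tau_true) for _ in range(200000)]
tau_hat = mle_lifetime(photons)
print(round(tau_hat, 2))
```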

  19. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency, high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot deliver a low-latency, high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome long DLTTDs, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed from the undifferenced carrier-phase observation equations of the two receivers at different epochs over a short baseline. Ephemeris error and atmospheric delay are the likely main error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which grow linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle synchronous observations and achieve a precise positioning solution with them, since the synchronous observation model (SOM) is simply a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for the reference receiver data. It avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time vehicle navigation. 
The static and kinematic experiment results show that this method achieves a 20 Hz or even higher output rate in real time. ARTK positioning accuracy is better and more robust at a high rate than the combination of the phase difference over time (PDOT) and SRTK methods. ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.

  20. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

For almost two decades, mobile mapping systems have performed their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. To achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single reference station should be no more than 20 km, and when using a network of reference stations the distance to the nearest station should be no more than about 70 km. This need to set up local reference stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning (PPP) method. In this case, instead of differencing the rover observables with the reference station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping.
Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE Horizontal and 5.5 cm RMSE Vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
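For readers reproducing such comparisons, the RMSE agreement figure quoted above is simply the root mean square of the per-flight position differences between the two solutions. A minimal sketch, using hypothetical difference values rather than the study's data:

```python
import numpy as np

def rmse(errors):
    """Root-mean-square of per-flight position differences (same units as input)."""
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical horizontal differences (m) between RTX and DGNSS solutions
diffs = [0.021, -0.015, 0.030, -0.028, 0.012]
print(round(rmse(diffs), 4))
```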

  1. Development and validation of a UV-spectrophotometric method for the determination of pheniramine maleate and its stability studies

    NASA Astrophysics Data System (ADS)

    Raghu, M. S.; Basavaiah, K.; Ramesh, P. J.; Abdulrahman, Sameer A. M.; Vinay, K. B.

    2012-03-01

A sensitive, precise, and cost-effective UV-spectrophotometric method is described for the determination of pheniramine maleate (PAM) in bulk drug and tablets. The method is based on the measurement of the absorbance of a PAM solution in 0.1 N HCl at 264 nm. As per the International Conference on Harmonization (ICH) guidelines, the method was validated for linearity, accuracy, precision, limits of detection (LOD) and quantification (LOQ), and robustness and ruggedness. A linear relationship between absorbance and concentration of PAM in the range of 2-40 μg/ml with a correlation coefficient (r) of 0.9998 was obtained. The LOD and LOQ values were found to be 0.18 and 0.39 μg/ml PAM, respectively. The precision of the method was satisfactory: the relative standard deviation (RSD) did not exceed 3.47%. The proposed method was applied successfully to the determination of PAM in tablets with good accuracy and precision. Percentages of the label claims ranged from 101.8 to 102.01%, with standard deviations (SD) from 0.64 to 0.72%. The accuracy of the method was further ascertained by recovery studies via a standard addition procedure. In addition, the forced degradation of PAM was conducted in accordance with the ICH guidelines. Acidic and basic hydrolysis, thermal stress, peroxide, and photolytic degradation were used to assess the stability-indicating power of the method. Substantial degradation was observed under oxidative and alkaline stress conditions. No degradation was observed under the other stress conditions.
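The linearity and sensitivity figures quoted above follow from an ordinary least-squares calibration line, with the ICH Q2 formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the regression and S its slope. A sketch with hypothetical calibration data, not the study's measurements:

```python
import numpy as np

# Illustrative calibration data (concentration in ug/mL vs absorbance);
# the values are hypothetical, not those of the study.
conc = np.array([2, 5, 10, 20, 30, 40], dtype=float)
absorb = np.array([0.041, 0.100, 0.199, 0.402, 0.601, 0.799])

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]

# ICH Q2: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
# with sigma = residual standard deviation of the regression (n - 2 dof).
residuals = absorb - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))
lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"r = {r:.4f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```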

  2. Validation of the Filovirus Plaque Assay for Use in Preclinical Studies

    PubMed Central

    Shurtleff, Amy C.; Bloomfield, Holly A.; Mort, Shannon; Orr, Steven A.; Audet, Brian; Whitaker, Thomas; Richards, Michelle J.; Bavari, Sina

    2016-01-01

    A plaque assay for quantitating filoviruses in virus stocks, prepared viral challenge inocula and samples from research animals has recently been fully characterized and standardized for use across multiple institutions performing Biosafety Level 4 (BSL-4) studies. After standardization studies were completed, Good Laboratory Practices (GLP)-compliant plaque assay method validation studies to demonstrate suitability for reliable and reproducible measurement of the Marburg Virus Angola (MARV) variant and Ebola Virus Kikwit (EBOV) variant commenced at the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). The validation parameters tested included accuracy, precision, linearity, robustness, stability of the virus stocks and system suitability. The MARV and EBOV assays were confirmed to be accurate to ±0.5 log10 PFU/mL. Repeatability precision, intermediate precision and reproducibility precision were sufficient to return viral titers with a coefficient of variation (%CV) of ≤30%, deemed acceptable variation for a cell-based bioassay. Intraclass correlation statistical techniques for the evaluation of the assay’s precision when the same plaques were quantitated by two analysts returned values passing the acceptance criteria, indicating high agreement between analysts. The assay was shown to be accurate and specific when run on Nonhuman Primates (NHP) serum and plasma samples diluted in plaque assay medium, with negligible matrix effects. Virus stocks demonstrated stability for freeze-thaw cycles typical of normal usage during assay retests. The results demonstrated that the EBOV and MARV plaque assays are accurate, precise and robust for filovirus titration in samples associated with the performance of GLP animal model studies. PMID:27110807
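The ≤30% CV acceptance criterion above is the ordinary coefficient of variation of replicate titers. A minimal illustration with hypothetical replicate values, not the validation data:

```python
import statistics

def percent_cv(values):
    """Coefficient of variation (%) of replicate titer measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate titers (PFU/mL) for one sample
replicates = [2.1e6, 1.8e6, 2.4e6, 2.0e6]
cv = percent_cv(replicates)
print(round(cv, 1), "passes" if cv <= 30.0 else "fails")
```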

  3. Validation of a Thin-Layer Chromatography for the Determination of Hydrocortisone Acetate and Lidocaine in a Pharmaceutical Preparation

    PubMed Central

    Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina

    2014-01-01

A new specific, precise, accurate, and robust TLC-densitometric method has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in a combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform + acetone + ammonia (25%) in the volume ratio 8:2:0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths of 200 nm and 250 nm for lidocaine hydrochloride and hydrocortisone acetate, respectively. The validation of the proposed method was performed in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The applied TLC procedure is linear over the concentration range of 3.75-12.50 μg·spot−1 for hydrocortisone acetate and 1.00-2.50 μg·spot−1 for lidocaine hydrochloride. The developed method was found to be accurate (the coefficient of variation CV [%] is less than 3%), precise (CV [%] is less than 2%), specific, and robust. The LOQ of hydrocortisone acetate is 0.198 μg·spot−1 and the LOD is 0.066 μg·spot−1. The LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090 μg·spot−1, respectively. The assay value of both bioactive substances is consistent with the limits recommended by the Pharmacopoeia. PMID:24526880

  4. Validation of a thin-layer chromatography for the determination of hydrocortisone acetate and lidocaine in a pharmaceutical preparation.

    PubMed

    Dołowy, Małgorzata; Kulpińska-Kucia, Katarzyna; Pyka, Alina

    2014-01-01

A new specific, precise, accurate, and robust TLC-densitometric method has been developed for the simultaneous determination of hydrocortisone acetate and lidocaine hydrochloride in a combined pharmaceutical formulation. The chromatographic analysis was carried out using a mobile phase consisting of chloroform + acetone + ammonia (25%) in the volume ratio 8:2:0.1 and silica gel 60F254 plates. Densitometric detection was performed in UV at wavelengths of 200 nm and 250 nm for lidocaine hydrochloride and hydrocortisone acetate, respectively. The validation of the proposed method was performed in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, and robustness. The applied TLC procedure is linear over the concentration range of 3.75-12.50 μg·spot(-1) for hydrocortisone acetate and 1.00-2.50 μg·spot(-1) for lidocaine hydrochloride. The developed method was found to be accurate (the coefficient of variation CV [%] is less than 3%), precise (CV [%] is less than 2%), specific, and robust. The LOQ of hydrocortisone acetate is 0.198 μg·spot(-1) and the LOD is 0.066 μg·spot(-1). The LOQ and LOD values for lidocaine hydrochloride are 0.270 and 0.090 μg·spot(-1), respectively. The assay value of both bioactive substances is consistent with the limits recommended by the Pharmacopoeia.

  5. Validation of a new UNIX-based quantitative coronary angiographic system for the measurement of coronary artery lesions.

    PubMed

    Bell, M R; Britson, P J; Chu, A; Holmes, D R; Bresnahan, J F; Schwartz, R S

    1997-01-01

We describe a method of validation of computerized quantitative coronary arteriography and report the results of a new UNIX-based quantitative coronary arteriography software program developed for rapid on-line (digital) and off-line (digital or cinefilm) analysis. The UNIX operating system is widely available in computer systems using very fast processors and has excellent graphics capabilities. The system is potentially compatible with any cardiac digital x-ray system for on-line analysis and has been designed to incorporate an integrated database, have on-line and immediate recall capabilities, and provide digital access to all data. The accuracy (mean signed differences of the observed minus the true dimensions) and precision (pooled standard deviations of the measurements) of the program were determined using x-ray vessel phantoms. Intra- and interobserver variabilities were assessed from in vivo studies during routine clinical coronary arteriography. Precision from the x-ray phantom studies (6-in. field of view) was 0.066 mm for digital images and 0.060 mm for digitized cine images. Accuracy was 0.076 mm (overestimation) for digital images compared to 0.008 mm for digitized cine images. Diagnostic coronary catheters were also used for calibration; accuracy varied according to the size of the catheter and whether or not it was filled with iodinated contrast. Intra- and interobserver variabilities were excellent and indicated that coronary lesion measurements were relatively user-independent. Thus, this easy-to-use and very fast UNIX-based program appears to be robust, with optimal accuracy and precision for clinical and research applications.
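The paper's definitions lend themselves to a direct sketch: accuracy as the mean signed difference (observed minus true) and precision as the pooled standard deviation over repeated-measurement groups. The phantom values below are hypothetical, not the study's:

```python
import numpy as np

def accuracy(measured, true):
    """Accuracy: mean signed difference (measured - true)."""
    return float(np.mean(np.asarray(measured) - np.asarray(true)))

def pooled_sd(groups):
    """Precision: pooled standard deviation across measurement groups."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return float(np.sqrt(num / den))

# Hypothetical repeated phantom measurements (mm) of two true diameters
true_vals = [1.0, 1.0, 1.0, 3.0, 3.0, 3.0]
measured = [1.05, 1.10, 1.08, 3.02, 3.09, 3.06]
print(round(accuracy(measured, true_vals), 4))
print(round(pooled_sd([[1.05, 1.10, 1.08], [3.02, 3.09, 3.06]]), 4))
```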

  6. High throughput single cell counting in droplet-based microfluidics.

    PubMed

    Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie

    2017-05-02

Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count a large number of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at high throughput, which we used to characterize cell encapsulation and cell viability during incubation in droplets.
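Cell encapsulation in droplets is commonly characterized with Poisson loading statistics, where counting the fraction of empty droplets yields the mean occupancy. The sketch below is a generic illustration of that standard model, not the authors' specific analysis pipeline:

```python
import math

# Droplet encapsulation follows Poisson statistics: with mean occupancy
# lam, P(k cells) = lam**k * exp(-lam) / k!.  The fraction of empty
# droplets P(0) = exp(-lam) therefore estimates the cell concentration.
def mean_occupancy(fraction_empty):
    return -math.log(fraction_empty)

def p_single(lam):
    """Probability that a droplet contains exactly one cell."""
    return lam * math.exp(-lam)

lam = mean_occupancy(0.74)  # e.g. 74% of droplets observed empty (hypothetical)
print(round(lam, 3), round(p_single(lam), 3))
```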

  7. The Timing Activities of the National Time and Frequency Standard Laboratory of the Telecommunication Laboratories, CHT Co. Ltd., Taiwan

    DTIC Science & Technology

    2009-11-01

Way Satellite Time and Frequency Transfer (TWSTFT). To meet future needs of precision, accuracy, and robustness for UTC (TL), TL rebuilt the air...TWO-WAY SATELLITE TIME TRANSFER TL maintains three earth stations for TWSTFT experiments, as listed in Table 1. The TL01 station is for the...Asia-Pacific TWSTFT links. All TWSTFT measurements in this area are performed simultaneously by using the eight-receive-channel NICT modems. Hourly

  8. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  9. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers.

  10. Static headspace gas chromatographic method for quantitative determination of residual solvents in pharmaceutical drug substances according to european pharmacopoeia requirements.

    PubMed

    Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan

    2004-11-19

A static headspace (HS) gas chromatographic method for the quantitative determination of residual solvents in a drug substance has been developed according to the European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as the sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for ethanol, tetrahydrofuran and toluene determination. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and the system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.
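The standard addition technique mentioned above quantifies the analyte by spiking the sample with known amounts of analyte, fitting response versus added amount, and extrapolating the line back to zero response: the magnitude of the x-intercept is the original amount. A sketch with hypothetical peak-area ratios, not data from the paper:

```python
import numpy as np

# Standard addition: spike the sample with known amounts of analyte and
# regress the detector response (here, peak-area ratio to the internal
# standard) against the amount added.  The values are hypothetical.
added = np.array([0.0, 1.0, 2.0, 3.0])          # ug of analyte added
response = np.array([0.50, 0.75, 1.00, 1.25])   # peak-area ratio to IS

slope, intercept = np.polyfit(added, response, 1)
original_amount = intercept / slope  # magnitude of the x-intercept
print(round(original_amount, 3))
```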

  11. A robust and high precision optimal explicit guidance scheme for solid motor propelled launch vehicles with thrust and drag uncertainty

    NASA Astrophysics Data System (ADS)

    Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.

    2016-10-01

A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. The guidance law has been successfully validated in nonlinear six-degree-of-freedom simulation studies, with an inner-loop autopilot designed as well, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to have good robustness for perturbed cases.

  12. A new fitting method for measurement of the curvature radius of a short arc with high precision

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhong, Hong; Chen, Xiao; Selami, Yassine; Zhao, Hui

    2018-07-01

The measurement of an object with a short arc is widely encountered in scientific research and industrial production. As the most classic method of arc fitting, the least squares fitting method suffers from low precision when it is used to measure arcs with smaller central angles and fewer sampling points: the shorter the arc, the lower the measurement accuracy. In order to improve the measurement precision for short arcs, a parameter-constrained fitting method based on a four-parameter circle equation is proposed in this paper. The generalized Lagrange function is introduced, together with optimization by the gradient descent method, to reduce the influence of noise. The simulation and experimental results showed that the proposed method has high precision even when the central angle drops below 4° and good robustness even when the noise standard deviation rises to 0.4 mm. This new fitting method is suitable for the high-precision measurement of short arcs with smaller central angles without any prior information.
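For comparison, the classic least-squares (Kasa) circle fit that the paper improves upon can be sketched as follows. It recovers the radius exactly on a noiseless 4° arc, but its precision degrades quickly once noise is added to such short arcs, which is the weakness the constrained method targets:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares (Kasa) circle fit.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    then recovers the centre (a, b) and radius r.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, rhs, rcond=None)[0]
    a, b = -D / 2.0, -E / 2.0
    r = np.sqrt(a * a + b * b - F)
    return a, b, r

# Sample a noiseless 4-degree arc of a circle of radius 100 mm
theta = np.radians(np.linspace(0.0, 4.0, 30))
xs, ys = 100.0 * np.cos(theta), 100.0 * np.sin(theta)
a, b, r = fit_circle(xs, ys)
print(round(r, 3))
```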

  13. A Contamination-Free Ultrahigh Precision Formation Flying Method for Micro-, Nano-, and Pico-Satellites with Nanometer Accuracy

    NASA Astrophysics Data System (ADS)

    Bae, Young K.

    2006-01-01

    Formation flying of clusters of micro-, nano- and pico-satellites has been recognized to be more affordable, robust and versatile than building a large monolithic satellite in implementing next generation space missions requiring large apertures or large sample collection areas and sophisticated earth imaging/monitoring. We propose a propellant free, thus contamination free, method that enables ultrahigh precision satellite formation flying with intersatellite distance accuracy of nm (10-9 m) at maximum estimated distances in the order of tens of km. The method is based on ultrahigh precision CW intracavity photon thrusters and tethers. The pushing-out force of the intracavity photon thruster and the pulling-in force of the tether tension between satellites form the basic force structure to stabilize crystalline-like structures of satellites and/or spacecrafts with a relative distance accuracy better than nm. The thrust of the photons can be amplified by up to tens of thousand times by bouncing them between two mirrors located separately on pairing satellites. For example, a 10 W photon thruster, suitable for micro-satellite applications, is theoretically capable of providing thrusts up to mN, and its weight and power consumption are estimated to be several kgs and tens of W, respectively. The dual usage of photon thruster as a precision laser source for the interferometric ranging system further simplifies the system architecture and minimizes the weight and power consumption. The present method does not require propellant, thus provides significant propulsion system mass savings, and is free from propellant exhaust contamination, ideal for missions that require large apertures composed of highly sensitive sensors. The system can be readily scaled down for the nano- and pico-satellite applications.
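The quoted thrust levels follow from the radiation-pressure formula F = 2PM/c for a beam of power P reflected M times between the cavity mirrors (an idealized, lossless model); the bounce count below is an illustrative assumption, not a figure from the paper:

```python
# Idealized photon thrust: one reflection of a beam of power P (W)
# imparts F = 2P/c, and M bounces between cavity mirrors multiply
# the momentum transfer by roughly M (losses neglected).
C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w, bounces=1):
    return 2.0 * power_w * bounces / C

single = photon_thrust(10.0)             # one reflection: tens of nN
amplified = photon_thrust(10.0, 15000)   # ~15,000 bounces: roughly mN level
print(f"{single:.2e} N, {amplified:.2e} N")
```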

  14. A robust LC-MS/MS method for the determination of pidotimod in different biological matrixes and its application to in vivo and in vitro pharmacokinetic studies.

    PubMed

    Wang, Guangji; Wang, Qian; Rao, Tai; Shen, Boyu; Kang, Dian; Shao, Yuhao; Xiao, Jingcheng; Chen, Huimin; Liang, Yan

    2016-06-15

    Pidotimod, (R)-3-[(S)-(5-oxo-2-pyrrolidinyl) carbonyl]-thiazolidine-4-carboxylic acid, was frequently used to treat children with recurrent respiratory infections. Preclinical pharmacokinetics of pidotimod was still rarely reported to date. Herein, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to determine pidotimod in rat plasma, tissue homogenate and Caco-2 cells. In this process, phenacetin was chosen as the internal standard due to its similarity in chromatographic and mass spectrographic characteristics with pidotimod. The plasma calibration curves were established within the concentration range of 0.01-10.00μg/mL, and similar linear curves were built using tissue homogenate and Caco-2 cells. The calibration curves for all biological samples showed good linearity (r>0.99) over the concentration ranges tested. The intra- and inter-day precision (RSD, %) values were below 15% and accuracy (RE, %) was ranged from -15% to 15% at all quality control levels. For plasma, tissue homogenate and Caco-2 cells, no obvious matrix effect was found, and the average recoveries were all above 75%. Thus, the method demonstrated excellent accuracy, precision and robustness for high throughput applications, and was then successfully applied to the studies of absorption in rat plasma, distribution in rat tissues and intracellular uptake characteristics in Caco-2 cells for pidotimod. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
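The cross-correlation step can be sketched with a basic phase-correlation estimator. This simplified version recovers only integer-pixel shifts and omits the sub-pixel plane fit to the transform phase described above:

```python
import numpy as np

def measure_shift(ref, img):
    """Estimate the integer (row, col) translation of ref relative to img
    by phase correlation (normalized cross-power spectrum peak)."""
    R = np.fft.fft2(ref)
    I = np.fft.fft2(img)
    cross = R * np.conj(I)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map peak indices to signed shifts (wrap-around aware)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
jittered = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(measure_shift(jittered, frame))  # → (3, -5)
```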

  16. Stability indicating validated HPLC method for quantification of levothyroxine with eight degradation peaks in the presence of excipients.

    PubMed

    Shah, R B; Bryant, A; Collier, J; Habib, M J; Khan, M A

    2008-08-06

A simple, sensitive, accurate, and robust stability-indicating analytical method is presented for the identification, separation, and quantitation of l-thyroxine and eight degradation impurities with an internal standard. The method was used in the presence of commonly used formulation excipients such as butylated hydroxyanisole, povidone, crospovidone, croscarmellose sodium, mannitol, sucrose, acacia, lactose monohydrate, confectionary sugar, microcrystalline cellulose, sodium lauryl sulfate, magnesium stearate, talc, and silicon dioxide. The two active thyroid hormones, 3,3',5,5'-tetra-iodo-l-thyronine (l-thyroxine-T4) and 3,3',5-tri-iodo-l-thyronine (T3), and degradation products including di-iodothyronine (T2), thyronine (T0), tyrosine (Tyr), di-iodotyrosine (DIT), mono-iodotyrosine (MIT), 3,3',5,5'-tetra-iodothyroacetic acid (T4AA) and 3,3',5-tri-iodothyroacetic acid (T3AA) were assayed by the current method. The separation of l-thyroxine and the eight metabolites along with theophylline (internal standard) was achieved using a C18 column (25 degrees C) with a mobile phase of trifluoroacetic acid (0.1%, v/v, pH 3)-acetonitrile in gradient elution at 0.8 ml/min at 223 nm. The sample diluent was 0.01 M methanolic NaOH. The method was validated according to FDA, USP, and ICH guidelines for inter-day accuracy, precision, and robustness after checking performance with system suitability. Tyr (4.97 min), theophylline (9.09 min), MIT (9.55 min), DIT (11.37 min), T0 (11.63 min), T2 (14.47 min), T3 (16.29 min), T4 (17.60 min), T3AA (22.71 min), and T4AA (24.83 min) separated in a single chromatographic run. A linear relationship (r2>0.99) was observed between the peak area ratio and the concentrations for all of the compounds within the range of 2-20 microg/ml. The total time for analysis, equilibration and recovery was 40 min. The method was shown to separate well from commonly employed formulation excipients.
Accuracy ranged from 95 to 105% for T4 and 90 to 110% for all other compounds. Precision was <2% for all the compounds. The method was found to be robust with minor changes in injection volume, flow rate, column temperature, and gradient ratio. Validation results indicated that the method shows satisfactory linearity, precision, accuracy, and ruggedness and also stress degradation studies indicated that the method can be used as stability indicating method for l-thyroxine in the presence of excipients.

  17. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of this well-established research software, the first of its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis with version 6.1.1, we argued that eliminating outliers with the traditional method was cumbersome, since repeated trials were needed, and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability; at the same time, it might be considered the most efficient, with its highest breakdown point. In our analysis we used a slightly modified version of the median method, as introduced in Tut et al. (2013). Hence, we were able to remove suspected outliers in one run, which would have been more problematic to remove with traditional methods from the solutions produced using the latest version of the software. References: Hayal, A.G., Sanli, D.U., Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42 (2013); Tut, İ., Sanli, D.U., Erdogan, B., Hekimoglu, S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331, pp. 296-304 (2013).
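A common realization of median-based outlier screening uses the median absolute deviation (MAD). The sketch below is generic and differs in detail from the estimator of Tut et al. (2013); the height values are hypothetical:

```python
import numpy as np

def mad_outliers(values, k=3.0):
    """Flag values farther than k robust standard deviations from the
    median, using the median absolute deviation (MAD) as the scale."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    robust_sd = 1.4826 * mad  # consistency factor for normally distributed data
    return np.abs(v - med) > k * robust_sd

# Hypothetical session-wise height estimates (m) with one blunder
heights = np.array([102.031, 102.028, 102.034, 102.030, 102.310, 102.029])
print(mad_outliers(heights))
```

Because the median has the highest possible breakdown point (50%), a single pass flags all suspected outliers without the repeated trials that least-squares screening requires.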

  18. Simultaneous Determination of Crypto-Chlorogenic Acid, Isoquercetin, and Astragalin Contents in Moringa oleifera Leaf Extracts by TLC-Densitometric Method.

    PubMed

    Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee

    2013-01-01

    Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have been previously found to be major active components in the leaves of this plant. In this study, a thin-layer-chromatography (TLC-)densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. The linearity was obtained in the range of 100-500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery. The average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC-densitometric method was simple, accurate, precise, and cost-effective for routine quality controlling of M. oleifera leaf extracts.

  19. The design of high precision temperature control system for InGaAs short-wave infrared detector

    NASA Astrophysics Data System (ADS)

    Wang, Zheng-yun; Hu, Yadong; Ni, Chen; Huang, Lin; Zhang, Aiwen; Sun, Xiao-bing; Hong, Jin

    2018-02-01

    The InGaAs short-wave infrared detector is a temperature-sensitive device. Accurate temperature control can effectively reduce the background signal and improve the detection accuracy, detection sensitivity, and SNR of the detection system. First, the relationship between temperature and the detection background and NEP is analyzed, and the working principle of the TEC and the formula relating cooling power, cooling current, and the temperature difference between the hot and cold interfaces are introduced. Then, a high-precision constant-current drive circuit based on a triode voltage-controlled current source, together with an incremental algorithm model based on deviation-tracking compensation and PID control, is proposed; this effectively suppresses temperature overshoot, overcomes thermal inertia, and provides strong robustness. Finally, the detector and temperature control system are tested. The results show that the lower the detector temperature, the smaller the temperature fluctuation and the higher the detection accuracy and sensitivity. The temperature control system achieves high-precision temperature control with a control rate of 7-8°C/min and a temperature fluctuation better than ±0.04°C.
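    As an illustration of the incremental (velocity-form) PID idea mentioned above, the following is a minimal sketch; the gains, the toy thermal plant, and the loop constants are invented for demonstration and are not from the paper:

```python
class IncrementalPID:
    """Velocity-form (incremental) PID: each step returns a change in the
    control effort, du, computed from the last three errors, so the loop
    accumulates u itself. Gains are illustrative, not from the paper."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e[k-1]
        self.e2 = 0.0  # e[k-2]

    def step(self, e):
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du

# Toy first-order thermal plant: ambient 25 °C, setpoint 20 °C; the TEC
# cooling effort u is driven toward the value needed at equilibrium.
pid = IncrementalPID(kp=2.0, ki=0.5, kd=0.1)
temp, setpoint, u = 25.0, 20.0, 0.0
for _ in range(200):
    u += pid.step(temp - setpoint)       # positive error -> more cooling
    temp += 0.1 * ((25.0 - temp) - u)    # relax toward ambient minus cooling
```

    Because only the increment is computed, the integral never winds up as a stored sum, which is one reason velocity-form PID is often preferred for overshoot-sensitive thermal loops.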

  20. Uncertainty characterization of particle location from refocused plenoptic images.

    PubMed

    Hall, Elise M; Guildenbecher, Daniel R; Thurow, Brian S

    2017-09-04

    Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.

  1. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor.

    PubMed

    Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping

    2009-11-10

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement; the accuracy of the centroid measurement therefore determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method detects the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction in the digital SHWS, unevenness and instability of the light source, and deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
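    A minimal two-stage centroid estimator in the spirit of adaptive thresholding plus dynamic windowing might look like the following (an illustrative sketch; the threshold rule, window size, and function names are assumptions, not the authors' algorithm):

```python
import numpy as np

def spot_centroid(img, k=0.5, win=7):
    """Two-stage centroid estimate for a single focal spot.

    1. Adaptive threshold: suppress everything below background + k*(peak - background).
    2. Dynamic window: centre a win x win window on the brightest pixel and
       take the intensity-weighted centroid inside it."""
    img = np.asarray(img, dtype=float)
    thr = img.min() + k * (img.max() - img.min())
    work = np.where(img > thr, img - thr, 0.0)   # background/noise suppressed
    py, px = np.unravel_index(np.argmax(work), work.shape)
    h = win // 2
    y0, y1 = max(py - h, 0), min(py + h + 1, work.shape[0])
    x0, x1 = max(px - h, 0), min(px + h + 1, work.shape[1])
    sub = work[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = sub.sum()
    return (ys * sub).sum() / total, (xs * sub).sum() / total

# Synthetic Gaussian spot centred at (row, col) = (12, 8).
yy, xx = np.mgrid[0:24, 0:24]
spot = np.exp(-((yy - 12.0) ** 2 + (xx - 8.0) ** 2) / 4.0)
cy, cx = spot_centroid(spot)
```

    The window confines the weighted sum to pixels near the spot, so faint structure elsewhere in the subaperture cannot bias the centroid.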

  2. UV Spectrophotometric Method for Estimation of Polypeptide-K in Bulk and Tablet Dosage Forms

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Singh, S. Kumar; Gulati, M.; Vaidya, Y.

    2016-01-01

    An analytical method for estimation of polypeptide-k using UV spectrophotometry has been developed and validated for bulk as well as tablet dosage form. The developed method was validated for linearity, precision, accuracy, specificity, robustness, detection, and quantitation limits. The method has shown good linearity over the range from 100.0 to 300.0 μg/ml with a correlation coefficient of 0.9943. The percentage recovery of 99.88% showed that the method was highly accurate. The precision demonstrated relative standard deviation of less than 2.0%. The LOD and LOQ of the method were found to be 4.4 and 13.33, respectively. The study established that the proposed method is reliable, specific, reproducible, and cost-effective for the determination of polypeptide-k.

  3. The prediction in computer color matching of dentistry based on GA+BP neural network.

    PubMed

    Li, Haisheng; Lai, Long; Chen, Li; Lu, Cheng; Cai, Qiang

    2015-01-01

    Although the use of computer color matching can reduce the influence of technicians' subjective factors, matching the color of a natural tooth with a ceramic restoration is still one of the most challenging topics in esthetic prosthodontics. The back-propagation neural network (BPNN) has already been introduced into computer color matching in dentistry, but it has disadvantages such as instability and low accuracy. In our study, we adopt a genetic algorithm (GA) to optimize the initial weights and threshold values of the BPNN in order to improve matching precision. To our knowledge, this is the first combination of a BPNN with a GA for computer color matching in dentistry. Extensive experiments demonstrate that the proposed method improves the precision and prediction robustness of color matching in restorative dentistry.
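    The GA-for-initial-weights idea can be sketched generically; the operators below (truncation selection, blend crossover, Gaussian mutation) and the toy fitness are illustrative stand-ins for the paper's BPNN color-matching error, not its actual configuration:

```python
import random

def ga_init_weights(fitness, dim, pop=20, gens=40, seed=1):
    """Tiny real-coded genetic algorithm searching for good initial weights.
    In the paper's setting, `fitness` would be the BPNN's color-matching
    error for a candidate initial weight/threshold vector."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness)
        parents = scored[: pop // 2]              # keep the best half (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            # Blend crossover plus small Gaussian mutation per coordinate.
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        population = parents + children
    return min(population, key=fitness)

# Toy fitness: squared distance of the weight vector from a known optimum.
target = [0.3, -0.7, 0.5]
best = ga_init_weights(lambda w: sum((x - t) ** 2 for x, t in zip(w, target)),
                       dim=3)
```

    Seeding the BPNN with such a GA-selected vector, rather than random weights, is what reduces the instability of plain back-propagation that the abstract describes.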

  4. An efficient mixed-precision, hybrid CPU-GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Barnes, Daniel C

    2012-01-01

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility and dramatically improves the solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations are employed with the aid of an analytical performance model, the roofline model. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with the double-precision CPU JFNK field solver, overall performance gains of about 100× over the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.

  5. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and an associated regularization process. Such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise, because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or from spurious features in the background or on neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape, i.e., a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging dataset of over 5,000 differently posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.
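    The randomized hypothesis-and-test scheme is easiest to see in miniature; the RANSAC-style line fit below is a toy analogue (hypothesize from a random minimal subset, score by inlier count), not the paper's Bayesian shape model:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Hypothesis-and-test in miniature: repeatedly fit a line y = a*x + b
    to a random 2-point subset, then keep the hypothesis supported by the
    most inliers. Gross outliers never outvote the consensus set."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate vertical hypothesis; skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Eight points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(8)] + [(2.0, 40.0), (5.0, -30.0)]
(a, b), n = ransac_line(pts)
```

    The shape-alignment version replaces the 2-point line hypothesis with a shape-and-pose hypothesis inferred from a feature-point subset, and the inlier count with the shape prediction error.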

  6. Magnetoresistive Current Sensors for High Accuracy, High Bandwidth Current Measurement in Spacecraft Power Electronics

    NASA Astrophysics Data System (ADS)

    Slatter, Rolf; Goffin, Benoit

    2014-08-01

    The usage of magnetoresistive (MR) current sensors is increasing steadily in the field of power electronics. Current sensors must not only be accurate and dynamic, but also compact and robust. The MR effect is the basis for current sensors with a unique combination of precision and bandwidth in a compact package. A space-qualifiable magnetoresistive current sensor with high accuracy and high bandwidth is being jointly developed by the sensor manufacturer Sensitec and the spacecraft power electronics supplier Thales Alenia Space (TAS) Belgium. Test results for breadboards incorporating commercial-off-the-shelf (COTS) sensors are presented, as well as an application example in the electronic control and power unit for the thrust vector actuators of the Ariane 5 ME launcher.

  7. Performance of a proposed determinative method for p-TSA in rainbow trout fillet tissue and bridging the proposed method with a method for total chloramine-T residues in rainbow trout fillet tissue

    USGS Publications Warehouse

    Meinertz, J.R.; Stehly, G.R.; Gingerich, W.H.; Greseth, Shari L.

    2001-01-01

    Chloramine-T is an effective drug for controlling fish mortality caused by bacterial gill disease. As part of the data required for approval of chloramine-T use in aquaculture, depletion of the chloramine-T marker residue (para-toluenesulfonamide; p-TSA) from the edible fillet tissue of fish must be characterized. Declaration of p-TSA as the marker residue for chloramine-T in rainbow trout was based on total residue depletion studies using a method that relied on time-consuming and cumbersome techniques. A recently developed simple and robust method is proposed as a determinative method for p-TSA in fish fillet tissue. The proposed determinative method was evaluated by comparing accuracy and precision data with U.S. Food and Drug Administration criteria and by bridging the method to the former method for chloramine-T residues. The method's accuracy and precision fulfilled the criteria for determinative methods; accuracy was 92.6, 93.4, and 94.6% for samples fortified at 0.5X, 1X, and 2X the expected 1000 ng/g tolerance limit for p-TSA, respectively. Method precision with tissue containing incurred p-TSA at a nominal concentration of 1000 ng/g ranged from 0.80 to 8.4%. The proposed determinative method was successfully bridged with the former method: the p-TSA concentrations obtained with the proposed method were not statistically different (p < 0.05) from those obtained with the former method.

  8. Spectrophotometric method development and validation for determination of chlorpheniramine maleate in bulk and controlled release tablets.

    PubMed

    Ashfaq, Maria; Sial, Ali Akber; Bushra, Rabia; Rehman, Atta-Ur; Baig, Mirza Tasawur; Huma, Ambreen; Ahmed, Maryam

    2018-01-01

    Spectrophotometry is considered the simplest and most operator-friendly of the available analytical methods for pharmaceutical analysis. The objective of this study was to develop a precise, accurate, and rapid UV-spectrophotometric method for the estimation of chlorpheniramine maleate (CPM) in pure form and in a solid pharmaceutical formulation. Drug absorption was measured in various solvent systems, including 0.1N HCl (pH 1.2), acetate buffer (pH 4.5), phosphate buffer (pH 6.8), and distilled water (pH 7.0). Method validation was performed as per the official ICH guidelines (2005). High drug absorption was observed in the 0.1N HCl medium, with a λmax of 261 nm. The drug showed good linearity from 20 to 60 μg/mL, with the linear regression equation Y = 0.1853X + 0.1098 and a correlation coefficient (R²) of 0.9998. Method accuracy was evaluated by percent drug recovery, which exceeded 99% at the three levels assessed. The %RSD values of <1 computed for inter- and intraday analysis indicate the high accuracy and precision of the developed technique. The method is robust, showing no significant variation with minor changes in method parameters. The LOD and LOQ were assessed to be 2.2 μg/mL and 6.6 μg/mL, respectively. The investigated method proved its sensitivity, precision, and accuracy and hence could be successfully used to estimate the CPM content in bulk and in pharmaceutical matrix tablets.

  9. Atlas-based liver segmentation and hepatic fat-fraction assessment for clinical trials.

    PubMed

    Yan, Zhennan; Zhang, Shaoting; Tan, Chaowei; Qin, Hongxing; Belaroussi, Boubakeur; Yu, Hui Jing; Miller, Colin; Metaxas, Dimitris N

    2015-04-01

    Automated assessment of hepatic fat-fraction is clinically important. A robust and precise segmentation would enable accurate, objective, and consistent measurement of hepatic fat-fraction for disease quantification, therapy monitoring, and drug development. However, segmenting the liver in clinical trials is a challenging task due to the variability of liver anatomy as well as the diverse sources from which the images were acquired. In this paper, we propose an automated and robust framework for liver segmentation and assessment. It uses single statistical atlas registration to initialize a robust deformable model to obtain a fine segmentation. A fat-fraction map is then computed using a chemical-shift-based method in the delineated liver region. The proposed method is validated on 14 abdominal magnetic resonance (MR) volumetric scans. Qualitative and quantitative comparisons show that our method achieves better segmentation accuracy with less variance than two other atlas-based methods. Experimental results demonstrate the promise of our assessment framework. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Development and validation of a new HPLC-UV method for the simultaneous determination of triclabendazole and ivermectin B1a in a pharmaceutical formulation.

    PubMed

    Shurbaji, Maher; Abu Al Rub, Mohamad H; Saket, Munib M; Qaisi, Ali M; Salim, Maher L; Abu-Nameh, Eyad S M

    2010-01-01

    A rapid, simple, and sensitive RP-HPLC analytical method was developed for the simultaneous determination of triclabendazole and ivermectin in combination using a C18 RP column. The mobile phase was acetonitrile-methanol-water-acetic acid (56 + 36 + 7.5 + 0.5, v/v/v/v) at a pH of 4.35 and flow rate of 1.0 mL/min. A 245 nm UV detection wavelength was used. Complete validation, including linearity, accuracy, recovery, LOD, LOQ, precision, robustness, stability, and peak purity, was performed. The calibration curve was linear over the range 50.09-150.26 microg/mL for triclabendazole with r = 0.9999 and 27.01-81.02 microg/mL for ivermectin with r = 0.9999. Calculated LOD and LOQ for triclabendazole were 0.03 and 0.08 microg/mL, respectively, and for ivermectin 0.07 and 0.20 microg/mL, respectively. The intraday precision obtained was 98.71% with RSD of 0.87% for triclabendazole and 100.79% with RSD 0.73% for ivermectin. The interday precision obtained was 99.51% with RSD of 0.35% for triclabendazole and 100.55% with RSD of 0.59% for ivermectin. Robustness was also studied, and there was no significant variation of the system suitability of the analytical method with small changes in experimental parameters.

  11. Research progress of on-the-go soil parameter sensors based on NIRS

    NASA Astrophysics Data System (ADS)

    An, Xiaofei; Meng, Zhijun; Wu, Guangwei; Guo, Jianhua

    2014-11-01

    Both the ever-increasing price of fertilizer and growing ecological concern over chemical run-off into sources of drinking water have brought the issues of precision agriculture and site-specific management to the forefront of technological development within agriculture and ecology. Soil is a basic and important element in agricultural production, and acquisition of soil information plays an important role in precision agriculture. The soil parameters of interest include soil total nitrogen, phosphorus, potassium, soil organic matter, soil moisture, electrical conductivity, and pH. Rapid in-field acquisition of these physical and chemical soil parameters is one of the most important research directions, and real-time monitoring of soil parameters is a trend of future development in precision agriculture. While developments in precision agriculture and site-specific management have made significant inroads on these issues, and many researchers have developed effective means to determine soil properties, routinely obtaining robust on-the-go measurements of soil properties reliable enough to drive effective fertilizer application remains a challenge. NIRS technology provides a new, low-cost, and rapid method for obtaining soil parameters. In this paper, research progress on on-the-go soil spectral sensors at home and abroad is reviewed and analyzed. Any successful on-the-go soil spectral sensing system must perform well on at least six key indexes: detection limit, specificity, robustness, accuracy, cost, and ease of use. Both the current research status and open problems are discussed. Finally, considering the national conditions of China, development trends for on-the-go soil spectral sensors are proposed. In the future, sufficiently reliable and sensitive on-the-go soil spectral sensors with continuous detection are expected to become popular in precision agriculture.

  12. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study compares several imputation methods for completing missing values in spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and the NR weighted with correlations comprise the simple ones, whereas a multilayer-perceptron neural network and the multiple imputation strategy based on Markov chain Monte Carlo with expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account when evaluating imputation performance. Based on detailed graphical and quantitative analyses, we find that although the computational methods, particularly EM-MCMC, are computationally inefficient, they are preferable for imputing meteorological time series across the different missingness periods, considering both measures and both series studied. Using the EM-MCMC algorithm to impute missing values before conducting statistical analyses of meteorological data will therefore decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be recommended for evaluating the performance of missing-data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
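    Of the simple methods mentioned above, the normal-ratio (NR) estimate is easy to sketch; the generic form below scales each neighbor's observation by the ratio of long-term station means (illustrative; the study's exact NR variant and its correlation-weighted extension are not reproduced here):

```python
def normal_ratio_impute(target_mean, neighbor_means, neighbor_values):
    """Normal-ratio estimate of a missing value: average the neighbor
    observations after scaling each by the ratio of the target station's
    long-term mean to that neighbor's long-term mean."""
    scaled = [(target_mean / m) * v
              for m, v in zip(neighbor_means, neighbor_values)]
    return sum(scaled) / len(scaled)

# A wet month at two neighbors whose climatologies differ from the target's.
est = normal_ratio_impute(100.0, [50.0, 200.0], [60.0, 180.0])
```

    The scaling makes a drier or wetter neighbor comparable to the target station before averaging, which is why NR outperforms the plain arithmetic average when station climatologies differ.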

  13. An ultra-high performance liquid chromatography method to determine the skin penetration of an octyl methoxycinnamate-loaded liquid crystalline system.

    PubMed

    Prado, A H; Borges, M C; Eloy, J O; Peccinini, R G; Chorilli, M

    2017-10-01

    Cutaneous penetration is a critical factor in the use of sunscreen, as the compounds should not reach systemic circulation, in order to avoid the induction of toxicity. The evaluation of the skin penetration and permeation of the UVB filter octyl methoxycinnamate (OMC) is essential for the development of a successful sunscreen formulation. Liquid-crystalline systems are innovative and potential carriers of OMC, which possess several advantages, including controlled release and protection of the filter from degradation. In this study, a new and effective method was developed using ultra-high performance liquid chromatography (UPLC) with ultraviolet detection (UV) for the quantitative analysis of the penetration of OMC-loaded liquid crystalline systems into the skin. The following parameters were assessed in the method: selectivity, linearity, precision, accuracy, robustness, limit of detection (LOD), and limit of quantification (LOQ). The analytical curve was linear in the range from 0.25 to 250 μg.mL-1; the method was precise, with a standard deviation of 0.05-1.24%; accurate, with recoveries in the range from 96.72 to 105.52%; and robust, with adequate LOD and LOQ values of 0.1 and 0.25 μg.mL-1, respectively. The method was successfully used to determine the in vitro skin permeation of OMC-loaded liquid crystalline systems. The results of the in vitro tests on Franz cells showed low cutaneous permeation and high retention of the OMC, particularly in the stratum corneum, owing to its high lipophilicity, which is desirable for a sunscreen formulation.

  14. Performance characteristics of an ion chromatographic method for the quantitation of citrate and phosphate in pharmaceutical solutions.

    PubMed

    Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances

    2007-01-01

    The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO(2) ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning is required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.

  15. A novel modification of the Turing test for artificial intelligence and robotics in healthcare.

    PubMed

    Ashrafian, Hutan; Darzi, Ara; Athanasiou, Thanos

    2015-03-01

    The increasing demand for higher-quality global healthcare has resulted in a corresponding expansion in the development of computer-based and robotic healthcare tools that rely on artificially intelligent technologies. The Turing test was designed to assess artificial intelligence (AI) in computer technology, and it remains an important qualitative tool for testing the next generation of medical diagnostics and medical robotics. Here, quantifiable diagnostic-accuracy meta-analytical evaluation techniques are developed for the Turing test paradigm: the test is modified to offer quantifiable diagnostic precision and statistical effect-size robustness in the assessment of AI for computer-based and robotic healthcare technologies. Modification of the Turing test to offer robust diagnostic scores for AI can contribute to enhancing and refining the next generation of digital diagnostic technologies and healthcare robotics. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Assessment of a virtual functional prototyping process for the rapid manufacture of passive-dynamic ankle-foot orthoses.

    PubMed

    Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J

    2013-10-01

    Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. 
Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.

  17. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  18. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    PubMed

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies.
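The indirect estimate described above (predictive ability divided by the square root of heritability) can be illustrated with a small simulation; the stand-in predictor and all parameter values below are hypothetical, not the paper's Methods 1 to 7:

```python
import numpy as np

# Minimal simulation sketch: predictive ability is corr(prediction, phenotype);
# dividing by sqrt(h2) approximates the accuracy corr(prediction, true value).
rng = np.random.default_rng(0)
n, h2 = 500, 0.4                             # sample size and heritability (illustrative)
g = rng.normal(0.0, np.sqrt(h2), n)          # true breeding values
y = g + rng.normal(0.0, np.sqrt(1 - h2), n)  # phenotypes with heritability h2
g_hat = g + rng.normal(0.0, 0.5, n)          # a noisy stand-in predictor

predictive_ability = np.corrcoef(g_hat, y)[0, 1]
indirect_accuracy = predictive_ability / np.sqrt(h2)  # the indirect estimate
true_accuracy = np.corrcoef(g_hat, g)[0, 1]           # known only in simulation
```

In simulation the indirect estimate can be checked directly against the true accuracy, which is exactly the benchmark the study uses.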

  19. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
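The coplanar/collinear classification step rests on the dimensionality of a local point neighborhood, which PCA exposes through the eigenvalues of the covariance matrix. The sketch below uses classical (non-robust) PCA and an arbitrary tolerance to illustrate the idea; the paper's robust PCA procedure and thresholds differ:

```python
import numpy as np

def classify_neighborhood(points, tol=0.05):
    """Classify a local point neighborhood as planar, linear or volumetric
    from the normalized eigenvalues of its covariance matrix. This uses
    classical (non-robust) PCA and an arbitrary tolerance; the paper's
    robust PCA procedure replaces this step."""
    centered = points - points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    l1, l2, l3 = eigvals / eigvals.sum()
    if l3 < tol and l2 >= tol:
        return "planar"   # two dominant directions (coplanar points)
    if l2 < tol:
        return "linear"   # one dominant direction (collinear points)
    return "volumetric"

# Synthetic neighborhoods: a noisy plane patch and a straight line segment
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.001, 200)]
line = np.c_[rng.uniform(0, 1, 200), np.full(200, 0.5), np.full(200, 0.2)]
```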

  20. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062

  1. Robust adaptive extended Kalman filtering for real time MR-thermometry guided HIFU interventions.

    PubMed

    Roujol, Sébastien; de Senneville, Baudouin Denis; Hey, Silke; Moonen, Chrit; Ries, Mario

    2012-03-01

Real time magnetic resonance (MR) thermometry is gaining clinical importance for monitoring and guiding high intensity focused ultrasound (HIFU) ablations of tumorous tissue. The temperature information can be employed to adjust the position and the power of the HIFU system in real time and to determine the therapy endpoint. The need to resolve both the physiological motion of mobile organs and the rapid temperature variations induced by state-of-the-art high-power HIFU systems requires fast MRI-acquisition schemes, which are generally hampered by low signal-to-noise ratios (SNRs). This directly limits the precision of real time MR-thermometry and thus, in many cases, the feasibility of sophisticated control algorithms. To overcome these limitations, temporal filtering of the temperature has been suggested in the past, which generally has an adverse impact on the accuracy and latency of the filtered data. Here, we propose a novel filter that aims to improve the precision of MR-thermometry while monitoring and adapting its impact on the accuracy. For this, an adaptive extended Kalman filter using a model describing the heat transfer for acoustic heating in biological tissues was employed, together with an additional outlier rejection to address the problem of sparse, artifacted temperature points. The filter was compared to an efficient matched FIR filter and outperformed the latter in all tested cases. The filter was first evaluated on simulated data and provided, in the worst case (with an approximate configuration of the model), a substantial improvement of the accuracy by factors of 3 and 15 during the heat-up and cool-down periods, respectively. The robustness of the filter was then evaluated during HIFU experiments on a phantom and in vivo in porcine kidney. The presence of strong temperature artifacts did not affect the thermal dose measurement using our filter, whereas a high measurement variation of 70% was observed with the FIR filter.
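The combination of Kalman filtering with outlier rejection can be sketched in one dimension. This toy filter replaces the paper's bioheat transfer model with a random-walk process model and uses a simple innovation gate; all noise parameters are illustrative:

```python
import numpy as np

def kalman_smooth(measurements, q=0.05, r=1.0, gate=3.0):
    """Scalar Kalman filter with innovation gating for outlier rejection.
    Simplified stand-in for the paper's adaptive extended Kalman filter:
    a random-walk process model replaces the bioheat transfer model, and
    measurements whose innovation exceeds `gate` standard deviations of
    the innovation are rejected (the prediction is kept instead)."""
    x, p = measurements[0], r
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                       # predict: random-walk process noise
        innovation = z - x
        s = p + r                       # innovation variance
        if abs(innovation) > gate * np.sqrt(s):
            estimates.append(x)         # outlier rejected, keep prediction
            continue
        k = p / s                       # Kalman gain
        x = x + k * innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Synthetic noisy temperature trace with one artifacted sample
rng = np.random.default_rng(2)
temps = 37.0 + rng.normal(0.0, 0.3, 40)
temps[10] = 60.0                        # sparse temperature artifact
filtered = kalman_smooth(temps)
```

The gate keeps the 60-degree artifact from corrupting the estimate, mirroring the role of the paper's outlier-rejection step.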

  2. A laboratory information management system for the analysis of tritium (3H) in environmental waters.

    PubMed

    Belachew, Dagnachew Legesse; Terzer-Wassmuth, Stefan; Wassenaar, Leonard I; Klaus, Philipp M; Copia, Lorenzo; Araguás, Luis J Araguás; Aggarwal, Pradeep

    2018-07-01

Accurate and precise measurements of low levels of tritium (3H) in environmental waters are difficult to attain due to the complex steps of sample preparation, electrolytic enrichment, liquid scintillation decay counting, and extensive data processing. We present a Microsoft Access™ relational database application, TRIMS (Tritium Information Management System), to assist with the sample and data processing of tritium analysis by managing the processes from sample registration and analysis to reporting and archiving. A complete uncertainty propagation algorithm ensures tritium results are reported with robust uncertainty metrics. TRIMS will help increase laboratory productivity and improve the accuracy and precision of 3H assays. The software supports several enrichment protocols and LSC counter types. TRIMS is available for download at no cost from the IAEA at www.iaea.org/water.
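Although TRIMS's internal algorithm is not described here, uncertainty propagation for a result built from independent multiplicative terms classically combines relative uncertainties in quadrature; the quantities and all numbers below are hypothetical:

```python
import math

def propagate_ratio_uncertainty(value, terms):
    """Combine relative uncertainties of independent multiplicative terms in
    quadrature and scale by the reported value. `terms` is a list of
    (quantity, absolute_uncertainty) pairs. This is the textbook rule, not
    necessarily TRIMS's exact implementation."""
    rel = math.sqrt(sum((u / v) ** 2 for v, u in terms))
    return value * rel

# Illustrative tritium result (TU) from a net count rate, an enrichment
# factor and a counting efficiency; all numbers are invented.
tritium_TU = 12.0
uncertainty = propagate_ratio_uncertainty(
    tritium_TU, [(8.4, 0.3), (15.2, 0.4), (0.25, 0.01)]
)
```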

  3. Simultaneous Determination of Crypto-Chlorogenic Acid, Isoquercetin, and Astragalin Contents in Moringa oleifera Leaf Extracts by TLC-Densitometric Method

    PubMed Central

    Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee

    2013-01-01

Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have previously been found to be major active components in the leaves of this plant. In this study, a thin-layer chromatography (TLC) densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. Linearity was obtained in the range of 100–500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery. The average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC-densitometric method is simple, accurate, precise, and cost-effective for routine quality control of M. oleifera leaf extracts. PMID:23533530

  4. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-level to 12.4% error relative to the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration.
Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
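The accuracy and precision metrics used in the assessment (percent error against a target concentration and relative standard deviation across replicate measurements) can be computed as follows; the five readings are invented for illustration:

```python
import statistics

def accuracy_and_precision(measurements, target):
    """Accuracy as percent error of the mean against the target concentration
    and precision as relative standard deviation, mirroring the study's
    quintuplicate-measurement design (readings below are invented)."""
    mean = statistics.mean(measurements)
    percent_error = abs(mean - target) / target * 100.0
    rsd = statistics.stdev(measurements) / mean * 100.0
    return percent_error, rsd

# Five hypothetical test-strip readings of a nominal 0.5% chlorine solution
err, rsd = accuracy_and_precision([0.48, 0.52, 0.47, 0.50, 0.49], 0.5)
```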

  5. High-precision gauging of metal rings

    NASA Astrophysics Data System (ADS)

    Carlin, Mats; Lillekjendlie, Bjorn

    1994-11-01

Raufoss AS designs and produces air brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which undergoes 100% dimensional control. This article describes a low-cost, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel-resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
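A common way to reach subpixel resolution in image metrology is to refine a peak location with a three-point parabola fit over neighboring samples. This generic sketch illustrates the class of technique; it is an assumption, not SINTEF's actual algorithm:

```python
import numpy as np

def subpixel_peak(profile):
    """Refine the location of a 1-D intensity (or gradient) peak to subpixel
    accuracy with a three-point parabola fit around the integer maximum."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                     # no neighbors to fit at the border
    a, b, c = profile[i - 1], profile[i], profile[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# Synthetic edge profile with a true peak at x = 5.3 pixels
x = np.arange(11)
profile = np.exp(-0.5 * ((x - 5.3) / 1.2) ** 2)
peak = subpixel_peak(profile)
```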

  6. Fast cat-eye effect target recognition based on saliency extraction

    NASA Astrophysics Data System (ADS)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective-attention property that helps it search out salient targets from complex, unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective character of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method outperforms other methods in accuracy, robustness and speed.

  7. Qualitative analysis of pure and adulterated canola oil via SIMCA

    NASA Astrophysics Data System (ADS)

    Basri, Katrul Nadia; Khir, Mohd Fared Abdul; Rani, Rozina Abdul; Sharif, Zaiton; Rusop, M.; Zoolfakar, Ahmad Sabirin

    2018-05-01

This paper demonstrates the use of near infrared (NIR) spectroscopy to classify pure and adulterated samples of canola oil. The Soft Independent Modeling of Class Analogy (SIMCA) algorithm was implemented to discriminate the samples into their classes. The spectral data obtained were divided into training and validation datasets using the Kennard-Stone algorithm at a fixed ratio of 7:3. The accuracy of the resulting model is 0.99, while the sensitivity and precision are 0.92 and 1.00, respectively. The results show that the classification model is robust enough to perform qualitative analysis of canola oil in future applications.
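The Kennard-Stone split mentioned above selects training samples by a farthest-point criterion: start from the two most distant samples, then repeatedly add the sample farthest from the already-selected set. A minimal sketch (not the authors' implementation) is:

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone sample selection on a feature matrix X (n_samples x
    n_features). Returns the indices of the n_train selected samples."""
    # Pairwise Euclidean distances between all samples
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]            # seed with the two most distant samples
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Farthest-point criterion: maximize the minimum distance to the set
        nxt = max(remaining, key=lambda k: dist[k, selected].min())
        selected.append(nxt)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))               # invented spectra-like features
train = kennard_stone(X, 7)                # 7:3 split for 10 samples
```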

  8. The Accuracy and Correction of Fuel Consumption from Controller Area Network Broadcast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeffrey D; Wood, Eric W

Fuel consumption (FC) has always been an important factor in vehicle cost. With the advent of electronically controlled engines, the controller area network (CAN) broadcasts information about engine and vehicle performance, including fuel use. However, the accuracy of the FC estimates is uncertain. In this study, the researchers first compared CAN-broadcast FC against physically measured fuel use for three different types of trucks, which revealed the inaccuracies of CAN-broadcast fueling estimates. To match precise gravimetric fuel-scale measurements, polynomial models were developed to correct the CAN-broadcast FC. Lastly, robustness testing of the correction models was performed. The training cycles in this section included a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. The mean relative differences were reduced noticeably.
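The polynomial correction approach can be sketched with a least-squares fit; the paired CAN and fuel-scale readings below are invented, and the study's actual model order and training data are not specified here:

```python
import numpy as np

# Hypothetical paired readings (L/h): CAN-broadcast vs. gravimetric fuel scale
can = np.array([2.1, 4.0, 6.2, 8.1, 10.3, 12.0])
scale = np.array([2.0, 3.7, 5.9, 7.9, 10.0, 11.9])

# Fit a low-order polynomial mapping CAN readings onto scale measurements,
# in the spirit of the paper's correction models (order and data assumed).
coeffs = np.polyfit(can, scale, deg=2)
corrected = np.polyval(coeffs, can)

# Mean relative difference before and after correction
mean_rel_diff_before = np.mean(np.abs(can - scale) / scale)
mean_rel_diff_after = np.mean(np.abs(corrected - scale) / scale)
```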

  9. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.

  10. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy.

  11. Hierarchical feature selection for erythema severity estimation

    NASA Astrophysics Data System (ADS)

    Wang, Li; Shi, Chenbo; Shu, Chang

    2014-10-01

At present, the PASI scoring system is used for evaluating erythema severity, which can help doctors diagnose psoriasis [1-3]. The system relies on the subjective judgment of doctors, so its accuracy and stability cannot be guaranteed [4]. This paper proposes a stable and precise algorithm for erythema severity estimation. Our contributions are twofold. On the one hand, to extract the multi-scale redness of erythema, we design hierarchical features. Unlike traditional methods, we not only use color statistical features but also divide the detection window into small windows and extract hierarchical features from them. Further, a feature re-ranking step is introduced to guarantee that the extracted features are mutually irrelevant. On the other hand, an adaptive boosting classifier is applied for further feature selection. During training, the classifier seeks out the most valuable features for evaluating erythema severity, owing to its strong learning ability. Experimental results demonstrate the high precision and robustness of our algorithm. The accuracy is 80.1% on a dataset comprising 116 patients' images with various kinds of erythema. Our system has been deployed for erythema medical efficacy evaluation at Union Hospital, China.

  12. Development and validation of a high throughput assay for the quantification of multiple green tea-derived catechins in human plasma.

    PubMed

    Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B

    2018-06-19

A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography - tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter/intra batch precision and accuracies within suitable thresholds (precisions within 13.8% and accuracies within 12.4%). Recoveries of analytes were found to be consistent between different matrix samples, compensated for using suitable internal markers and within the performance of the instrumentation used. Similarly, chromatographic interferences have been corrected using the internal markers selected. Stability of all analytes in matrix is demonstrated over 32 days and throughout extraction conditions. This method is suitable for high throughput sample analysis studies.

  13. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    NASA Astrophysics Data System (ADS)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  14. Analysis of 34S in Individual Organic Compounds by Coupled GC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Sessions, A. L.; Amrani, A.; Adkins, J. F.

    2009-12-01

    The abundances of 2H, 13C, and 15N in organic compounds have been extremely useful in many aspects of biogeochemistry. While sulfur plays an equally important role in many earth-surface processes, the isotopes of sulfur in organic matter have not been extensively employed in large part because there has been no direct route to the analysis of 34S in individual organic compounds. To remedy this, we have developed a highly sensitive and robust method for the analysis of 34S in individual organic compounds by coupled gas chromatography (GC) and multicollector inductively-coupled plasma mass spectrometry (ICP-MS). Isobaric interference from O2+ is minimized by employing dry plasma conditions, and is cleanly resolved at all masses using medium resolution on the Thermo Neptune ICP-MS. Correction for mass bias is accomplished using standard-sample bracketing with peaks of SF6 reference gas. The precision of measured δ34S values approaches 0.1‰ for analytes containing >40 pmol S, and is better than 0.5‰ for those containing as little as 6 pmol S. External accuracy is better than 0.3‰. Integrating only the center of chromatographic peaks, rather than the entire peak, offers significant gain in precision and chromatographic resolution with minimal effect on accuracy, but requires further study for verification as a routine method. Coelution of organic compounds that do not contain S can cause degraded analytical precision and accuracy. As a demonstration of the potential for this new method, we will present data from 3 sample types: individual organosulfur compounds from crude oil, dimethyl sulfide from seawater, and trace H2S from bacterial culture headspace.
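Standard-sample bracketing corrects the measured isotope ratio using reference-gas peaks measured around the sample peak; the delta value is then reported per mil relative to the bracketing standard. A sketch with invented ratios (not the study's data):

```python
def delta34S(r_sample, r_std_before, r_std_after):
    """Per-mil delta value with standard-sample bracketing: instrumental mass
    bias is removed by interpolating the reference-gas 34S/32S ratio measured
    before and after the sample peak. All ratios below are invented."""
    r_std = 0.5 * (r_std_before + r_std_after)   # bracketing interpolation
    return (r_sample / r_std - 1.0) * 1000.0

# Hypothetical sample ratio bracketed by two SF6 reference-gas peaks
d34S = delta34S(0.045002, 0.044950, 0.044960)
```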

  15. Quality by design: a systematic and rapid liquid chromatography and mass spectrometry method for eprosartan mesylate and its related impurities using a superficially porous particle column.

    PubMed

    Kalariya, Pradipbhai D; Kumar Talluri, Murali V N; Gaitonde, Vinay D; Devrukhakar, Prashant S; Srinivas, Ragampeta

    2014-08-01

The present work describes the systematic development of a robust, precise, and rapid reversed-phase liquid chromatography method for the simultaneous determination of eprosartan mesylate and its six impurities using quality-by-design principles. The method was developed in two phases, screening and optimization. During the screening phase, the most suitable stationary phase, organic modifier, and pH were identified. Optimization was then performed for the secondary influential parameters (column temperature, gradient time, and flow rate) using eight experiments to examine the multifactorial effects of these parameters on the critical resolution, generating a design space representing the robust region. A verification experiment was performed within the working design space and the model was found to be accurate. This study also describes other operating features of the column packed with superficially porous particles that allow very fast separations at pressures available in most liquid chromatography instruments. Successful chromatographic separation was achieved in less than 7 min using a fused-core C18 (100 mm × 2.1 mm, 2.6 μm) column with linear gradient elution of 10 mM ammonium formate (pH 3.0) and acetonitrile as the mobile phase. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance with the International Conference on Harmonization Q2 (R1) guidelines. The impurities were identified by liquid chromatography with mass spectrometry.

  16. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  17. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.

  18. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984

  19. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

The measurement and control strategy of a piezo-based platform using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed from a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, a measurement strategy to measure the tip/tilt angles accurately to the order of sub-μrad is presented. A comprehensive composite control strategy, designed to enhance tracking accuracy through a novel driving principle, is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  20. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization

    PubMed Central

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-01-01

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096

  1. A Mobile Outdoor Augmented Reality Method Combining Deep Learning Object Detection and Spatial Relationships for Geovisualization.

    PubMed

    Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun

    2017-08-24

    The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device's built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction.

  2. Accuracy and Precision of Silicone Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies, silicone based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicone based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
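    The accuracy/precision distinction at the heart of this comparison is easy to state numerically: accuracy is the bias of the replica mean against a reference value, precision is the spread of repeated readings. A minimal sketch with invented replicate readings (the medium labels and numbers are illustrative only, not data from the study):

```python
import statistics

# Hypothetical replicate roughness readings (Sa, in micrometres) from a
# reference surface whose "true" value is 0.50; names and numbers are
# invented for illustration, not data from the study.
REFERENCE_SA = 0.50
replicas = {
    "low_viscosity_medium":  [0.51, 0.49, 0.50, 0.52, 0.50],  # accurate and precise
    "high_viscosity_medium": [0.38, 0.61, 0.45, 0.57, 0.40],  # neither
}

for medium, readings in replicas.items():
    accuracy_error = abs(statistics.mean(readings) - REFERENCE_SA)  # bias
    precision = statistics.stdev(readings)                          # spread
    print(f"{medium}: bias={accuracy_error:.3f}, sd={precision:.3f}")
```

    A medium can fail on either axis independently, which is why the study reports the two quantities separately.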

  3. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and its mean inclination estimates are the least biased towards shallow values.
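    The shallowing bias that motivates this method can be reproduced in a few lines: draw Fisher-distributed directions about a steep mean inclination and take the arithmetic mean of their inclinations alone. A hedged sketch (the 75°/κ = 10 values are arbitrary, and the sampling inversion is the standard one for the Fisher distribution, not the authors' Arason-Levi estimator):

```python
import math
import random

random.seed(0)

def sample_inclinations(true_inc_deg, kappa, n):
    """Inclinations of n unit vectors Fisher-distributed about a mean
    direction with inclination true_inc_deg and concentration kappa."""
    i0 = math.radians(true_inc_deg)
    incs = []
    for _ in range(n):
        u = random.random()
        # standard inversion for the off-axis angle psi of a Fisher sample
        cos_psi = 1.0 + math.log(u + (1.0 - u) * math.exp(-2.0 * kappa)) / kappa
        psi = math.acos(max(-1.0, min(1.0, cos_psi)))
        alpha = random.uniform(0.0, 2.0 * math.pi)  # azimuth about the mean
        # spherical trig: "latitude" of a point psi away from the mean direction
        sin_inc = (math.sin(i0) * math.cos(psi)
                   + math.cos(i0) * math.sin(psi) * math.cos(alpha))
        incs.append(math.degrees(math.asin(max(-1.0, min(1.0, sin_inc)))))
    return incs

incs = sample_inclinations(true_inc_deg=75.0, kappa=10.0, n=20000)
arithmetic_mean = sum(incs) / len(incs)
print(f"true inclination: 75.0, arithmetic mean: {arithmetic_mean:.1f}")
# the arithmetic mean comes out several degrees shallower than 75 degrees
```

    Because the declinations are discarded, averaging inclinations of steep, dispersed data systematically underestimates the true inclination, which is exactly the bias the marginal-likelihood methods are designed to remove.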

  4. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. The parameters are estimated using particle swarm optimization (PSO), which searches for the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of the matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function, involving four intrinsic parameters, was minimized by PSO to obtain the optimal solution. To avoid the optimization being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors of less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was better than 1.0 cm. Experimental and simulated results demonstrated that the proposed method is highly accurate and robust.
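    As a sketch of the optimization step, a bare-bones PSO fits in a few dozen lines. This is a generic textbook PSO, not the paper's implementation; the quadratic stand-in cost and the (fx, fy, u0, v0) target values are placeholders for the Kruppa-equation cost and the LiDAR-derived initialization scope:

```python
import random

random.seed(1)

def pso_minimize(cost, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (a generic sketch, not the paper's
    exact variant). bounds = (lo, hi) applied to every dimension; a
    LiDAR-derived prior would simply narrow these bounds."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    gbest_cost = min(pbest_cost)
    gbest = pbest[pbest_cost.index(gbest_cost)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# stand-in cost with a known minimum at (800, 600, 320, 240): a placeholder
# for the Kruppa-equation cost over (fx, fy, u0, v0)
target = [800.0, 600.0, 320.0, 240.0]
cost = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best, best_cost = pso_minimize(cost, dim=4, bounds=(0.0, 1000.0))
print(best, best_cost)
```

    On a smooth bowl like this the swarm converges easily; the point of the paper's LiDAR initialization is precisely that the real Kruppa cost is not this well behaved.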

  5. Accuracy assessment of BDS precision orbit determination and the influence analysis of site distribution

    NASA Astrophysics Data System (ADS)

    Chen, Ming; Guo, Jiming; Li, Zhicai; Zhang, Peng; Wu, Junli; Song, Weiwei

    2017-04-01

    BDS precise orbit determination is a key element of BDS applications, but the limited number of ground stations and the poor distribution of the network are the main reasons for its low accuracy. In this paper, BDS precise orbit determination results are obtained using the IGS MGEX stations and the Chinese national reference stations; the orbit determination accuracy for the GEO, IGSO and MEO satellites is 10.3 cm, 2.8 cm and 3.2 cm, and the radial accuracy is 1.6 cm, 1.9 cm and 1.5 cm, respectively. The influence of the ground reference station distribution on BDS precise orbit determination is studied. The results show that the Chinese national reference stations contribute significantly to BDS orbit determination: the overlap precision of the GEO, IGSO and MEO satellites improved by 15.5%, 57.5% and 5.3%, respectively, after adding the Chinese stations. Finally, the results are verified by ODOP (orbit distribution of precision) and SLR. Key words: BDS precise orbit determination; accuracy assessment; Chinese national reference stations; reference station distribution; orbit distribution of precision

  6. Accurate determination of reference materials and natural isolates by means of quantitative (1)H NMR spectroscopy.

    PubMed

    Frank, Oliver; Kreissl, Johanna Karoline; Daschner, Andreas; Hofmann, Thomas

    2014-03-26

    A fast and precise quantitative proton nuclear magnetic resonance (qHNMR) method for the determination of low molecular weight target molecules in reference materials and natural isolates has been validated using ERETIC 2 (Electronic REference To access In vivo Concentrations), based on the PULCON (PULse length based CONcentration determination) methodology, and compared to gravimetric results. Using an Avance III NMR spectrometer (400 MHz) equipped with a broad band observe (BBO) probe, the qHNMR method was validated by determining its linearity, range, precision, and accuracy as well as its robustness and limit of quantitation. The linearity of the method was assessed by measuring samples of l-tyrosine, caffeine, or benzoic acid in a concentration range between 0.3 and 16.5 mmol/L (r(2) ≥ 0.99), whereas the interday and intraday precisions were found to be ≤2%. The recovery of a range of reference compounds was ≥98.5%, demonstrating the qHNMR method to be a precise tool for the rapid quantitation (~15 min) of food-related target compounds in reference materials and natural isolates such as nucleotides, polyphenols, or cyclic peptides.
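    When acquisition parameters are held identical, PULCON-style quantitation reduces to a ratio of integral-per-proton values scaled by the reference concentration. A simplified sketch (the signal areas, proton counts and 5.0 mM reference value are invented for illustration; the full method also corrects for pulse length, receiver gain and scan count):

```python
def pulcon_concentration(area_unknown, n_protons_unknown,
                         area_ref, n_protons_ref, conc_ref_mM):
    """Simplified PULCON relation assuming identical acquisition parameters
    (pulse length, receiver gain, number of scans) for both spectra; the
    full method corrects for those factors explicitly."""
    return conc_ref_mM * (area_unknown / n_protons_unknown) / (area_ref / n_protons_ref)

# illustrative numbers: a 3-proton singlet of the analyte quantified against
# a 2-proton reference signal of known 5.0 mM concentration
c = pulcon_concentration(area_unknown=2.4, n_protons_unknown=3,
                         area_ref=1.0, n_protons_ref=2, conc_ref_mM=5.0)
print(f"{c:.2f} mM")  # → 8.00 mM
```

    Normalising each integral by its proton count is what makes signals from chemically different molecules directly comparable.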

  7. Anchor-Free Localization Method for Mobile Targets in Coal Mine Wireless Sensor Networks

    PubMed Central

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is applied to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines. PMID:22574048

  8. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is applied to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.
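    The MDS step can be illustrated with its classical (metric) form: given a full matrix of inter-node distances, double-centering and an eigendecomposition recover relative coordinates. This is a sketch of the principle only; the paper uses a non-metric variant driven by rank order rather than raw ranges:

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Classical (metric) MDS: recover relative coordinates from a full
    inter-node distance matrix. A sketch of the principle; the paper's
    NON-metric variant works on rank orders instead of raw distances."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]         # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# synthetic sensor layout (true positions are only used to build distances)
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=(15, 2))
dist = np.linalg.norm(truth[:, None, :] - truth[None, :, :], axis=-1)

coords = classical_mds(dist)
recovered = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(np.max(np.abs(recovered - dist)))  # near zero: geometry recovered up to rotation
```

    MDS fixes the network's shape but not its absolute pose, which is why the method is "anchor-free": any rigid rotation, reflection or translation of the output is an equally valid solution.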

  9. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  10. Three different spectrophotometric methods manipulating ratio spectra for determination of binary mixture of Amlodipine and Atorvastatin

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeiny, Badr A.

    2011-12-01

    Three simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra are developed for the simultaneous determination of Amlodipine besylate (AM) and Atorvastatin calcium (AT) in tablet dosage forms. The first method is first derivative of the ratio spectra (1DD), the second is ratio subtraction and the third is mean centering of the ratio spectra. The calibration curves are linear over the concentration ranges of 3-40 and 8-32 μg/ml for AM and AT, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and are applied to a commercial pharmaceutical preparation of the subject drugs. The standard deviation is <1.5 in the assay of raw materials and tablets. The methods are validated as per ICH guidelines, and accuracy, precision, repeatability and robustness are found to be within the acceptable limits.
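    The ratio-spectra idea behind the first method can be demonstrated on synthetic spectra: dividing the mixture spectrum by the interferent's unit spectrum turns the interferent's contribution into a constant, which the first derivative removes. A sketch with invented Gaussian band shapes (the band centers and widths are not those of AM and AT):

```python
import numpy as np

wl = np.linspace(200, 400, 2001)               # wavelength grid, nm

def gauss(center, width):
    """Unit-absorptivity Gaussian band (illustrative shape only)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

eps_am = gauss(238, 12)     # hypothetical unit spectra; only the fact that
eps_at = gauss(246, 20)     # the two band shapes differ matters here

def dd1_signal(c_am, c_at):
    """First derivative of the ratio spectrum (mixture / AT divisor).
    The AT term becomes the constant c_at and vanishes on differentiation,
    leaving a signal proportional to c_am alone."""
    mixture = c_am * eps_am + c_at * eps_at
    ratio = mixture / eps_at
    deriv = np.gradient(ratio, wl)
    return np.max(np.abs(deriv))

s1 = dd1_signal(c_am=10.0, c_at=20.0)
s2 = dd1_signal(c_am=20.0, c_at=5.0)           # very different AT level
print(s2 / s1)   # ~2: the signal tracks AM concentration, independent of AT
```

    The doubling of the 1DD amplitude despite the large change in AT content is the whole point of the technique: one component is read out free of interference from the other.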

  11. Three different methods for determination of binary mixture of Amlodipine and Atorvastatin using dual wavelength spectrophotometry

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-03-01

    Three simple, specific, accurate and precise spectrophotometric methods depending on the proper selection of two wavelengths are developed for the simultaneous determination of Amlodipine besylate (AML) and Atorvastatin calcium (ATV) in tablet dosage forms. The first method is the new Ratio Difference method, the second method is the Bivariate method and the third one is the Absorbance Ratio method. The calibration curve is linear over the concentration range of 4-40 and 8-32 μg/mL for AML and ATV, respectively. These methods are tested by analyzing synthetic mixtures of the above drugs and they are applied to commercial pharmaceutical preparation of the subjected drugs. Methods are validated according to the ICH guidelines and accuracy, precision, repeatability and robustness are found to be within the acceptable limit. The mathematical explanation of the procedures is illustrated.

  12. A novel stability-indicating UPLC method development and validation for the determination of seven impurities in various diclofenac pharmaceutical dosage forms.

    PubMed

    Azougagh, M; Elkarbane, M; Bakhous, K; Issmaili, S; Skalli, A; Iben Moussad, S; Benaji, B

    2016-09-01

    An innovative, simple, fast, precise and accurate ultra-high performance liquid chromatography (UPLC) method was developed for the determination of diclofenac (Dic) along with its impurities, including the new dimer impurity, in various pharmaceutical dosage forms. An Acquity HSS T3 (C18, 100×2.1 mm, 1.8 μm) column in gradient mode was used with a mobile phase comprising phosphoric acid (pH 2.3) and methanol. The flow rate and the injection volume were set at 0.35 ml·min(-1) and 1 μl, respectively, and UV detection was carried out at 254 nm using a photodiode array detector. Dic was subjected to stress conditions of acid, base, hydrolytic, thermal, oxidative and photolytic degradation. The newly developed method was successfully validated in accordance with the International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection, limit of quantitation, precision, linearity, accuracy and robustness. The degradation products were well resolved from the main peak and its seven impurities, proving the specificity of the method. The method showed good linearity with consistent recoveries for Dic content and its impurities. The relative standard deviation obtained for the repeatability and intermediate precision experiments was less than 3%, and the LOQ was less than 0.5 μg·ml(-1) for all compounds. The new proposed method was found to be accurate, precise, specific, linear and robust. In addition, the method was successfully applied to the assay determination of Dic and its impurities in several pharmaceutical dosage forms. Copyright © 2016 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.

  13. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    NASA Astrophysics Data System (ADS)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in both ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as the species' spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provides a priori unbiased estimates of the spatial structure, global abundance and precision for autocorrelated data. However, its application to non-Gaussian data introduces difficulties into the analysis and can compromise robustness and unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
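    The spatial-structure estimation underlying such geostatistical analyses can be sketched with Matheron's classical semivariogram estimator. On spatially uncorrelated synthetic data the semivariogram should sit flat near the field variance at every lag (the coordinates and values below are simulated, not survey data, and robust estimators would replace the plain mean):

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_semivariogram(coords, values, bin_edges):
    """Matheron's classical estimator: gamma(h) = half the mean squared
    value difference over station pairs whose separation falls in each
    lag bin. A minimal sketch of the first step of a variogram analysis."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)               # all unordered pairs
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        mask = (lags >= bin_edges[b]) & (lags < bin_edges[b + 1])
        if mask.any():
            gamma[b] = 0.5 * sqdiff[mask].mean()
    return gamma

# spatially uncorrelated ("pure nugget") synthetic data: the semivariogram
# should hover near the field variance (1.0) at every lag
coords = rng.uniform(0, 50, size=(400, 2))
values = rng.normal(0.0, 1.0, size=400)
gamma = empirical_semivariogram(coords, values, bin_edges=np.arange(0, 30, 5))
print(gamma)
```

    Structured (autocorrelated) data would instead show the semivariance rising from a small nugget toward a sill, and it is that fitted structure that feeds the kriging-based abundance and precision estimates.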

  14. Illumination Invariant Change Detection (IICD): from Earth to Mars

    NASA Astrophysics Data System (ADS)

    Wan, X.; Liu, J.; Qin, M.; Li, S. Y.

    2018-04-01

    Multi-temporal Earth observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to variation in solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus the proposed IICD method can effectively detect and precisely segment large scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
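    The phase-correlation matching at the core of the method can be sketched for the simplest case, a pure integer translation between two images: the normalized cross-power spectrum turns the shift into a delta-function peak, and its phase-only nature is what makes the estimate insensitive to global brightness changes. This is the basic PC step only; the paper's dense per-pixel matching and saliency map are not reproduced:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two same-size images via
    the normalized cross-power spectrum (the basic phase correlation step)."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half-range to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(7)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))  # circular shift: dy=5, dx=-3
print(phase_correlation_shift(shifted, img))        # → (5, -3)
```

    Sub-pixel accuracy, as claimed in the abstract, is typically obtained by interpolating around this correlation peak rather than taking its integer argmax.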

  15. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867

  16. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
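    The precision effect the study measures can be reproduced with the simplest conjugate example: a normal likelihood updated by a vague versus an informative normal prior. The mortality-parameter framing and all numbers below are invented for illustration:

```python
import math

def posterior(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update with known sampling sd: a minimal
    sketch of how an informative prior tightens the posterior for, say,
    a mortality parameter estimated from sparse data."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / data_sd ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

data_mean, data_sd, n = -3.1, 1.2, 25        # sparse data for one species
vague_mean, vague_sd = posterior(0.0, 10.0, data_mean, data_sd, n)
info_mean, info_sd = posterior(-2.8, 0.4, data_mean, data_sd, n)  # prior from other data
print(f"vague prior:       sd={vague_sd:.3f}")
print(f"informative prior: sd={info_sd:.3f}")   # smaller: greater precision
```

    The posterior mean is also pulled toward the prior mean, which is exactly the study's point about accuracy: the gain or loss depends on whether the prior information is itself close to the truth.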

  17. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  18. Hybrid Orientation Based Human Limbs Motion Tracking Method

    PubMed Central

    Glonek, Grzegorz; Wojciechowski, Adam

    2017-01-01

    One of the key technologies that lies behind human–machine interaction and human motion diagnosis is limb motion tracking. To be efficient, limb tracking must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Their robustness can be improved by fusing data from different tracking modes. This paper defines a novel approach, orientation-based data fusion, instead of the position-based approach dominating the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMU). A detailed analysis of their working characteristics allowed a new method to be elaborated that fuses limb orientation data from both devices more precisely and compensates for their imprecision. The paper presents a series of experiments that verified the method's accuracy. This novel approach outperformed the precision of position-based joint tracking, the method dominating in the literature, by up to 18%. PMID:29232832

  19. Long fiber Bragg grating sensor interrogation using discrete-time microwave photonic filtering techniques.

    PubMed

    Ricchiuti, Amelia Lavinia; Barrera, David; Sales, Salvador; Thevenaz, Luc; Capmany, José

    2013-11-18

    A novel technique for interrogating photonic sensors based on long fiber Bragg gratings (FBGs) is presented and experimentally demonstrated, designed to detect the presence and precise location of several spot events. The principle of operation is based on a technique used to analyze microwave photonics (MWP) filters. The long FBGs are used as quasi-distributed sensors. Several hot-spots can be detected along the FBG with a spatial accuracy under 0.5 mm using a modulator and a photo-detector (PD) with a modest bandwidth of less than 1 GHz. The proposed interrogation system is intrinsically robust against environmental changes.

  20. Diode laser-based thermometry using two-line atomic fluorescence of indium and gallium

    NASA Astrophysics Data System (ADS)

    Borggren, Jesper; Weng, Wubin; Hosseinnia, Ali; Bengtsson, Per-Erik; Aldén, Marcus; Li, Zhongshan

    2017-12-01

    A robust and relatively compact calibration-free thermometric technique using diode-laser-based two-line atomic fluorescence (TLAF) for reactive flows at atmospheric pressure is investigated. TLAF temperature measurements were conducted using indium and, for the first time, gallium atoms as temperature markers. The temperature was measured in a multi-jet burner running methane/air flames, providing variable temperatures ranging from 1600 to 2000 K. Indium and gallium were found to provide a similar accuracy of 2.7% and precision of 1% over the measured temperature range. The reliability of TLAF thermometry was further tested by performing simultaneous rotational CARS measurements in the same experiments.
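    The TLAF temperature inversion rests on the Boltzmann population ratio of the two lower fine-structure levels probed by the two laser lines. A round-trip sketch (the level splitting is an assumed approximate value for indium's 5p ground term, and the instrument calibration constants are folded into the measured ratio):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
DELTA_E = 4.3e-20       # J; rough fine-structure splitting (~0.27 eV) of
                        # indium's 5p ground term -- an assumed value

def population_ratio(T, g1=2, g2=4):
    """Boltzmann ratio n2/n1 of the two lower fine-structure levels
    (g1, g2 are the 2P1/2 and 2P3/2 statistical weights)."""
    return (g2 / g1) * math.exp(-DELTA_E / (K_B * T))

def tlaf_temperature(ratio, g1=2, g2=4):
    """Invert the Boltzmann factor: the measured two-line fluorescence
    ratio is taken as proportional to n2/n1, with calibration constants
    folded in (which is what makes the real technique calibration-free
    only after careful spectroscopic bookkeeping)."""
    return DELTA_E / (K_B * math.log(g2 / (g1 * ratio)))

T_true = 1800.0
r = population_ratio(T_true)
print(tlaf_temperature(r))   # recovers 1800 K
```

    Because temperature enters only through the exponent, the sensitivity of the ratio is greatest when the splitting is comparable to kT, which is one reason different marker atoms suit different temperature ranges.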

  1. Validated HPLC determination of 2-[(dimethylamino)methyl]cyclohexanone, an impurity in tramadol, using a precolumn derivatisation reaction with 2,4-dinitrophenylhydrazine.

    PubMed

    Medvedovici, Andrei; Albu, Florin; Farca, Alexandru; David, Victor

    2004-01-27

    A new method for the determination of 2-[(dimethylamino)methyl]cyclohexanone (DAMC) in Tramadol (as active substance or active ingredient in pharmaceutical formulations) is described. The method is based on the derivatisation of 2-[(dimethylamino)methyl]cyclohexanone with 2,4-dinitrophenylhydrazine (2,4-DNPH) in acidic conditions followed by a reversed-phase liquid chromatographic separation with UV detection. The method is simple, selective, quantitative and allows the determination of 2-[(dimethylamino)methyl]cyclohexanone at the low ppm level. The proposed method was validated with respect to selectivity, precision, linearity, accuracy and robustness.

  2. Simultaneous 3D localization of multiple MR-visible markers in fully reconstructed MR images: proof-of-concept for subsecond position tracking.

    PubMed

    Thörmer, Gregor; Garnov, Nikita; Moche, Michael; Haase, Jürgen; Kahn, Thomas; Busse, Harald

    2012-04-01

    To determine whether a greatly reduced spatial resolution of fully reconstructed projection MR images can be used for the simultaneous 3D localization of multiple MR-visible markers and to assess the feasibility of a subsecond position tracking for clinical purposes. Miniature, inductively coupled RF coils were imaged in three orthogonal planes with a balanced steady-state free precession (SSFP) sequence and automatically localized using a two-dimensional template fitting and a subsequent three-dimensional (3D) matching of the coordinates. Precision, accuracy, speed and robustness of 3D localization were assessed for decreasing in-plane resolutions (0.6-4.7 mm). The feasibility of marker tracking was evaluated at the lowest resolution by following a robotically driven needle on a complex 3D trajectory. Average 3D precision and accuracy, sensitivity and specificity of localization ranged between 0.1 and 0.4 mm, 0.5 and 1.0 mm, 100% and 95%, and 100% and 96%, respectively. At the lowest resolution, imaging and localization took ≈350 ms and provided an accuracy of ≈1.0 mm. In the tracking experiment, the needle was clearly depicted on the oblique scan planes defined by the markers. Image-based marker localization at a greatly reduced spatial resolution is considered a feasible approach to monitor reference points or rigid instruments at subsecond update rates. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Evaluation for Water Conservation in Agriculture: Using a Multi-Method Econometric Approach

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Eaton, D. J.

    2012-12-01

    Since the 1960's, farmers have implemented new irrigation technology to increase crop production and planting acreage. At that time, technology responded to the increasing demand for food due to world population growth. Currently, the problem of decreased water supply threatens to limit agricultural production. Uncertain precipitation patterns, from prolonged droughts to irregular rains, will continue to hamper planting operations, and farmers are further limited by an increased competition for water from rapidly growing urban areas. Irrigation technology promises to reduce water usage while maintaining or increasing farm yields. The challenge for water managers and policy makers is to quantify and redistribute these efficiency gains as a source of 'new water.' Using conservation in farming as a source of 'new water' requires accurately quantifying the efficiency gains of irrigation technology under farmers' actual operations and practices. From a water resource management and policy perspective, the efficiency gains from conservation in farming can be redistributed to municipal, industrial and recreational uses. This paper presents a methodology that water resource managers can use to statistically verify the water savings attributable to conservation technology. The specific conservation technology examined in this study is precision leveling, and the study includes a mixed-methods approach using four different econometric models: Ordinary Least Squares, Fixed Effects, Propensity Score Matching, and Hierarchical Linear Models. These methods are used for ex-post program evaluation where random assignment is not possible, and they could be employed to evaluate agricultural conservation programs, where participation is often self-selected. 
The principal method in this approach is the Hierarchical Linear Model (HLM), a useful model for agriculture because it incorporates the hierarchical structure of the data (fields, tenants, and landowners) as well as crop rotation (fields moving in and out of production). The other three methods provide verification of the accuracy of the HLM model and create a robust comparison of the water savings estimates. Seventeen factors were used to isolate the effect of precision leveling from variations in climate, investments in other irrigation improvements, and farmers' management skills. These statistical analyses yield accurate water savings estimates because they consider farmers' actual irrigation technology and practices. Results suggest that savings from water conservation technology under farmers' actual production systems and management are less than those reported by experimental field studies. These water savings measure the 'in situ' effect of the technology, considering farmers' actual irrigation practices and technology. In terms of the accuracy of the models, HLM provides the most precise estimate of the impact of precision leveling on a field's water usage. The HLM estimate was within the 95% confidence interval of the other three models, thus verifying the accuracy and robustness of the statistical findings and model.
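
One of the estimators named above, the fixed-effects ("within") regression, is compact enough to sketch. The sketch below is a hypothetical illustration only: the farm counts, water-use values, and the assumed -2 acre-feet "true" savings are invented, not taken from the study.

```python
import numpy as np

# Fixed-effects (within) estimator: demean water use and the treatment
# indicator within each farm, then regress. Demeaning removes unobserved
# farm-level effects (e.g. management skill). All data here are synthetic.
rng = np.random.default_rng(0)
n_farms, n_fields = 50, 6
farm = np.repeat(np.arange(n_farms), n_fields)
leveled = rng.integers(0, 2, farm.size).astype(float)   # treatment indicator
farm_effect = rng.normal(0, 2.0, n_farms)[farm]         # unobserved farm skill
# assumed "true" effect: precision leveling reduces water use by 2 acre-feet
water = 12.0 - 2.0 * leveled + farm_effect + rng.normal(0, 0.5, farm.size)

def within_demean(x, groups):
    """Subtract each group's mean (removes farm-level fixed effects)."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

y = within_demean(water, farm)
x = within_demean(leveled, farm)
beta = (x @ y) / (x @ x)        # OLS slope on demeaned data
print(round(beta, 2))           # estimated savings, near -2
```

The naive pooled regression of `water` on `leveled` would be biased if leveling adoption correlates with farm skill; the within transformation is what guards against that.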

  4. Cumulative detection probabilities and range accuracy of a pulsed Geiger-mode avalanche photodiode laser ranging system

    NASA Astrophysics Data System (ADS)

    Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan

    2017-10-01

    Cumulative-pulse detection with an appropriate number of accumulated pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting them are discussed. The results show that cumulative-pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative-pulse detection deteriorates quickly. Range accuracy and precision are further parameters for evaluating detection performance; they are influenced mainly by the echo intensity and pulse width, with higher accuracy and precision obtained for stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width and an echo intensity larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
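
The Poisson/cumulative model described above can be sketched in a few lines. This is a hedged illustration, not the paper's actual parameterization: the photon numbers, pulse count, and threshold below are invented.

```python
from math import comb, exp

# Poisson model: a single pulse fires the GM-APD in a time bin with
# probability p = 1 - exp(-n_photons). Over N pulses, the bin is declared
# a detection if it fires at least `threshold` times (binomial tail).
def fire_prob(n_photons):
    return 1.0 - exp(-n_photons)

def cumulative_detection(p, n_pulses, threshold):
    """P(at least `threshold` fires in `n_pulses` independent pulses)."""
    return sum(comb(n_pulses, i) * p**i * (1 - p)**(n_pulses - i)
               for i in range(threshold, n_pulses + 1))

p_signal = fire_prob(2.0 + 0.05)   # signal + noise photons in the target bin
p_noise = fire_prob(0.05)          # noise-only bin
P_d = cumulative_detection(p_signal, n_pulses=10, threshold=5)
P_fa = cumulative_detection(p_noise, n_pulses=10, threshold=5)
print(round(P_d, 4), round(P_fa, 6))
```

With these made-up numbers the accumulation drives the per-bin false alarm probability several orders of magnitude below the single-pulse value while keeping the detection probability high, which is the trade-off the abstract describes.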

  5. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and a CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scenes. The classification performance of DV-CNN is compared with state-of-the-art methods, including CNN variants, traditional methods, and other deep-learning methods. An experiment analyzing the performance of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  6. Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas

    PubMed Central

    Moreau, Julien; Ambellouis, Sébastien; Ruichek, Yassine

    2017-01-01

    A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to address this problem independently of additional data, which may be outdated, unavailable, or in need of correlation with reality. Our stereoscope is sky-facing with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D points, and robust plane fitting based on the generated cartography and a Hough transform. We use these 3D data together with GPS raw data to estimate the pseudorange delay of NLOS (Non Line Of Sight) reflected signals. We exploit the extracted planes to build a visibility mask for NLOS detection. A simplified 3D canyon model allows computation of the reflections' pseudorange delays. In the end, the GPS position is computed from the corrected pseudoranges. In experiments on real static scenes, we show that the generated 3D models reach metric accuracy and that horizontal GPS positioning accuracy improves by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms CN0-based methods (Carrier-to-receiver Noise density). PMID:28106746

  7. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    NASA Astrophysics Data System (ADS)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not have a significant impact on the construction of the decision tree in terms of removing irrelevant features. A major problem in decision tree classification is over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant, non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction to perform non-correlated feature selection and the Decision Tree C4.5 algorithm for classification. In experiments conducted on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework in terms of accuracy, specificity and precision. Experimental results show that the proposed framework robustly enhances classification accuracy, reaching 90.70%.
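
The two ingredients of the framework above, PCA for de-correlation and entropy-based splitting (the core of C4.5), can be sketched without any ML library. This is a hedged, minimal illustration on synthetic data: a full C4.5 tree is out of scope, so a single best information-gain split stands in for it, and the feature counts are invented rather than those of the cervical cancer data set.

```python
import numpy as np

# PCA via SVD to obtain non-correlated components, then an entropy-based
# split (information gain) on a component, C4.5-style. Data are synthetic.
rng = np.random.default_rng(1)
n = 400
signal = 3.0 * rng.normal(size=n)                 # one informative direction
noise = rng.normal(size=(n, 20))                  # irrelevant features
X = np.column_stack([signal + 0.3 * rng.normal(size=n), noise])
y = (signal > 0).astype(int)

# PCA: centre, SVD, project onto the leading components.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ vt[:3].T                                 # top-3 principal components

def entropy(labels):
    p = np.bincount(labels, minlength=2) / labels.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def best_split(z, y):
    """Threshold on one component maximising information gain."""
    base, best = entropy(y), (0.0, None)
    for t in np.quantile(z, np.linspace(0.05, 0.95, 19)):
        left, right = y[z <= t], y[z > t]
        gain = base - (left.size * entropy(left)
                       + right.size * entropy(right)) / y.size
        if gain > best[0]:
            best = (gain, t)
    return best

gain_pc1, _ = best_split(Z[:, 0], y)   # leading PC carries the signal
gain_pc3, _ = best_split(Z[:, 2], y)   # later PC is essentially noise
print(round(gain_pc1, 2), round(gain_pc3, 2))
```

The point of the sketch: after PCA, the informative variance concentrates in the leading component, so the entropy-based splitter finds a high-gain threshold there and near-zero gain on noise components, which is how dimensionality reduction helps the tree avoid irrelevant features.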

  8. Mineral element analyses of switchgrass biomass: comparison of the accuracy and precision of laboratories

    USDA-ARS?s Scientific Manuscript database

    Mineral concentration of plant biomass can affect its use in thermal conversion to energy. The objective of this study was to compare the precision and accuracy of university and private laboratories that conduct mineral analyses of plant biomass on a fee basis. Accuracy and precision of the laborat...

  9. Timing of the Late Paleozoic Ice Age: A Review of the Status Quo and New U-Pb Zircon Ages From Southern Gondwana

    NASA Astrophysics Data System (ADS)

    Mundil, R.; Griffis, N. P.; Keller, C. B.; Fedorchuk, N.; Montanez, I. P.; Isbell, J.; Vesely, F.; Iannuzzi, R.

    2017-12-01

    Throughout the Carboniferous and Permian Late Paleozoic Ice Age (LPIA), glaciations in southern Gondwana exerted a profound influence on global climate and environment, ocean chemistry, and the nature of sedimentary processes. The LPIA is widely regarded as an analogue for Pleistocene glaciations. Our understanding of the latter, as well as the validity of predictions for the future global climate and environment, depends therefore on our ability to reconstruct the LPIA. A robust chronostratigraphic framework built on high precision/high accuracy geochronology is crucial for the reconstruction of events and processes that occurred during the LPIA, particularly in the absence of high-resolution terrestrial biostratigraphic constraints that apply to both near- and far-field proxy records. The occurrence of volcaniclastic layers containing primary volcanic zircon at many levels throughout southern Gondwana makes such a reconstruction feasible, but complications inevitably arise due to the mixing of older age components with primary volcanic crystals, as well as the potential of unrecognized open system behavior to produce spurious younger ages. These pitfalls cause age dispersion that may be difficult to interpret, or is unrecognized if low precision geochronological techniques are used, resulting in inaccurate radioisotopic ages. Our current efforts in the Parana Basin (Southern Brazil) and the Karoo Basin (South Africa/Namibia) concentrate on building a robust and exportable chronostratigraphic framework based on U-Pb zircon CA-TIMS ages with sub-permil level precision combined with Bayesian approaches for resolving the eruption age of dispersed age spectra to facilitate the reconstruction of glaciogenic processes through the Carboniferous-Permian transition, as well as their implications for global sea level, atmospheric pCO2 and ocean chemistry. 
We will also review currently available geochronological data from contemporaneous Australian successions and their potential for robust correlations and paleo-environmental reconstruction.

  10. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    PubMed

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  11. Potential improvements aimed at high precision δ13C isotopic ratio determinations in CO2 mixtures using optical absorption spectrometry.

    PubMed

    Koulikov, Serguei; Assonov, Sergey; Fajgelj, Ales; Tans, Pieter

    2018-07-01

    The manuscript explores some advantages and limitations of laser-based optical spectroscopy aimed at achieving robust, high-reproducibility 13C16O2/12C16O2 ratio determinations on the VPDB-CO2 δ13C scale by measuring the absorbance of line pairs of 13C16O2 and 12C16O2. In particular, the sensitivities of spectroscopic lines to both pressure (P) and temperature (T) are discussed. Based on the considerations and estimations presented, a reproducibility of the 13C16O2/12C16O2 ratio determinations of about 10^-6 may be achieved. Thus one may establish an optical spectroscopic measurement technique for robust, high-precision 13C16O2 and 12C16O2 ratio measurements aimed at very low uncertainty. (Notably, creating such an optical instrument and developing technical solutions is beyond the scope of this paper.) The total combined uncertainty will also include the uncertainty component(s) related to the accuracy of calibration on the VPDB-CO2 δ13C scale. Addressing high-accuracy calibrations is presently not straightforward: absolute numerical values of 13C/12C for the VPDB-CO2 scale are not well known. Traditional stable-isotope mass spectrometry uses calibrations vs CO2 evolved from the primary carbonate reference materials, which can hardly be used for calibrating commercial optical stable-isotope analysers. In contrast to mass spectrometry, the major advantage of the laser-based spectrometric technique detailed in this paper is its high robustness. Therefore one can introduce a new spectrometric δ13C characterisation method which, once well calibrated on the VPDB-CO2 scale, may not require any further (re-)calibrations. This can be used for characterisation of δ13C in CO2-in-air mixtures with high precision and also with high accuracy.
If this technique can be realised with the estimated long-term reproducibility (order of 10^-6), it could potentially serve as a more convenient Optical Transfer Standard (OTS), characterising large amounts of CO2 gas mixtures on the VPDB-CO2 δ13C scale without having to compare to carbonate-evolved CO2. Furthermore, if the OTS method proves to be successful, it might be considered for re-defining the VPDB-CO2 δ13C scale as the ratio of selected CO2 spectroscopic absorbance lines measured at pre-defined T and P conditions. The approach can also be expanded to δ18O characterisation (using 16O12C18O and 16O12C16O absorbance lines) of CO2 gas mixtures and potentially to other isotope ratios of other gases. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Application of geo-spatial technology in schistosomiasis modelling in Africa: a review.

    PubMed

    Manyangadze, Tawanda; Chimbari, Moses John; Gebreslasie, Michael; Mukaratirwa, Samson

    2015-11-04

    Schistosomiasis continues to impact socio-economic development negatively in sub-Saharan Africa. The advent of spatial technologies, including geographic information systems (GIS), Earth observation (EO) and global positioning systems (GPS), assists modelling efforts. However, there is increasing concern regarding the accuracy and precision of the current spatial models. This paper reviews the literature on the progress and challenges in the development and utilization of spatial technology, with special reference to predictive models for schistosomiasis in Africa. Peer-reviewed papers were identified through a PubMed search using the following keywords: geo-spatial analysis OR remote sensing OR modelling OR earth observation OR geographic information systems OR prediction OR mapping AND schistosomiasis AND Africa. Statistical uncertainty, low spatial and temporal resolution of satellite data, and poor validation were identified as some of the factors that compromise the precision and accuracy of the existing predictive models. The need for high-spatial-resolution remote sensing data in conjunction with ancillary data, viz. ground-measured climatic and environmental information, local presence/absence intermediate-host snail surveys, as well as prevalence and intensity of human infection for model calibration and validation, is discussed. The importance of a multidisciplinary approach in developing robust spatial data capture and modelling techniques and products applicable in epidemiology is highlighted.

  13. Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings

    PubMed Central

    Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo

    2018-01-01

    In this paper, we propose a novel object-based dense matching method, specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which maintains the structural attribute of buildings with parallel or vertical edges; this is very useful for dense matching. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and the matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. In the outline extraction experiments, our fusion method demonstrates its superiority and robustness on panchromatic images from different satellites and at different resolutions. In the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393

  14. Automated brain volumetrics in multiple sclerosis: a step closer to clinical application

    PubMed Central

    Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G

    2016-01-01

    Background: Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. Methods: MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Results: Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. Conclusions: In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. PMID:27071647
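
The precision/accuracy pair reported above (Pearson's r and Cb) is the decomposition used in Lin's concordance correlation (CCC = r × Cb). A hedged numpy sketch with synthetic brain volumes, not the study's data:

```python
import numpy as np

# Lin's decomposition: r measures precision (scatter around the best-fit
# line), Cb measures accuracy (how far that line is from the identity).
def precision_accuracy(x, y):
    r = np.corrcoef(x, y)[0, 1]                 # precision
    v = x.std() / y.std()                       # scale shift
    u = (x.mean() - y.mean()) / np.sqrt(x.std() * y.std())  # location shift
    cb = 2.0 / (v + 1.0 / v + u * u)            # accuracy (Cb <= 1)
    return r, cb

rng = np.random.default_rng(0)
sienax = rng.normal(1.45, 0.08, 60)                       # reference WBV (L)
other = 0.99 * sienax - 0.05 + rng.normal(0, 0.005, 60)   # biased method
r, cb = precision_accuracy(other, sienax)
print(round(r, 3), round(cb, 3))
```

The synthetic "other" method tracks the reference tightly (high r) but carries a constant offset, so Cb drops well below 1, mirroring how a method can be precise yet inaccurate.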

  15. Development and validation of a liquid chromatography method for the simultaneous determination of eight water-soluble vitamins in multivitamin formulations and human urine.

    PubMed

    Patil, Suyog S; Srivastava, Ashwini K

    2013-01-01

    A simple, precise, and rapid RPLC method has been developed without incorporation of any ion-pair reagent for the simultaneous determination of vitamin C (C) and seven B-complex vitamins, viz. thiamine hydrochloride (B1), pyridoxine hydrochloride (B6), nicotinamide (B3), cyanocobalamin (B12), folic acid, riboflavin (B2), and 4-aminobenzoic acid (Bx). Separations were achieved within 12.0 min at 30°C by gradient elution on an RP C18 column using a mobile phase consisting of a mixture of 15 mM ammonium formate buffer and 0.1% triethylamine adjusted to pH 4.0 with formic acid, and acetonitrile. Simultaneous UV detection was performed at 275 and 360 nm. The method was validated for system suitability, LOD, LOQ, linearity, precision, accuracy, specificity, and robustness in accordance with International Conference on Harmonization guidelines. The developed method was implemented successfully for determination of the aforementioned vitamins in pharmaceutical formulations containing an individual vitamin, in their multivitamin combinations, and in human urine samples. The calibration curves for all analytes showed good linearity, with coefficients of correlation higher than 0.9998. Accuracy, intraday repeatability (n = 6), and interday repeatability (n = 7) were found to be satisfactory.

  16. Rapid and sensitive spectrofluorimetric determination of enrofloxacin, levofloxacin and ofloxacin with 2,3,5,6-tetrachloro-p-benzoquinone

    NASA Astrophysics Data System (ADS)

    Ulu, Sevgi Tatar

    2009-06-01

    A highly sensitive spectrofluorimetric method was developed, for the first time, for the analysis of three fluoroquinolone (FQ) antibacterials, namely enrofloxacin (ENR), levofloxacin (LEV) and ofloxacin (OFL), in pharmaceutical preparations through charge transfer (CT) complex formation with 2,3,5,6-tetrachloro-p-benzoquinone (chloranil, CLA). At the optimum reaction conditions, the FQ-CLA complexes showed excitation maxima ranging from 359 to 363 nm and emission maxima ranging from 442 to 488 nm. Rectilinear calibration graphs were obtained in the concentration ranges of 50-1000, 50-1000 and 25-500 ng mL⁻¹ for ENR, LEV and OFL, respectively. The detection limit was found to be 17 ng mL⁻¹ for ENR, 17 ng mL⁻¹ for LEV and 8 ng mL⁻¹ for OFL. Excipients used as additives in commercial formulations did not interfere in the analysis. The method was validated according to the ICH guidelines with respect to specificity, linearity, accuracy, precision and robustness. The proposed method was successfully applied to the analysis of pharmaceutical preparations. The results obtained were in good agreement with those obtained using the official method, with no significant difference in accuracy and precision as revealed by the accepted values of the t- and F-tests, respectively.

  17. Validation and Uncertainty Estimation of an Ecofriendly and Stability-Indicating HPLC Method for Determination of Diltiazem in Pharmaceutical Preparations

    PubMed Central

    Sadeghi, Fahimeh; Navidpour, Latifeh; Bayat, Sima; Afshar, Minoo

    2013-01-01

    A green, simple, and stability-indicating RP-HPLC method was developed for the determination of diltiazem in topical preparations. The separation was based on a C18 analytical column using a mobile phase consisting of ethanol:phosphoric acid solution (pH = 2.5) (35:65, v/v). Column temperature was set at 50°C and quantitation was achieved with UV detection at 240 nm. In forced degradation studies, the drug was subjected to oxidation, hydrolysis, photolysis, and heat. The method was validated for specificity, selectivity, linearity, precision, accuracy, and robustness. The applied procedure was found to be linear in the diltiazem concentration range of 0.5–50 μg/mL (r 2 = 0.9996). Precision was evaluated by replicate analysis, in which % relative standard deviation (RSD) values for areas were found to be below 2.0. The recoveries obtained (99.25%–101.66%) ensured the accuracy of the developed method. The degradation products as well as the pharmaceutical excipients were well resolved from the pure drug. The expanded uncertainty (5.63%) of the method was also estimated from method validation data. Accordingly, the proposed validated and sustainable procedure was proved to be suitable for routine analysis and stability studies of diltiazem in pharmaceutical preparations. PMID:24163778

  18. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of very strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achieving precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
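
One of the mass bias correction models mentioned above, the exponential law, is simple enough to illustrate. This is a hedged sketch: the ratios and certified value below are invented round numbers chosen only to show the calibrate-then-correct step, not data from any reference material.

```python
from math import log

# Exponential mass-bias law: R_true = R_measured * (m_heavy / m_light)^f.
# The exponent f is derived from a pair with a known ("certified") ratio,
# then applied to the analyte ratio at its own masses.
def mass_bias_exponent(r_measured, r_certified, m_heavy, m_light):
    return log(r_certified / r_measured) / log(m_heavy / m_light)

def correct_ratio(r_measured, f, m_heavy, m_light):
    return r_measured * (m_heavy / m_light) ** f

# Calibrate f on an internal-standard pair (hypothetical values), then
# correct a measured analyte ratio at masses 208/206.
f = mass_bias_exponent(0.7300, 0.7219, 205.0, 203.0)
corrected = correct_ratio(1.9500, f, 208.0, 206.0)
print(round(f, 3), round(corrected, 4))
```

The review's caution applies here: the same measured data give different corrected ratios under the linear, power, or exponential laws, so the chosen model must be stated and its uncertainty propagated.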

  19. Students as Toolmakers: Refining the Results in the Accuracy and Precision of a Trigonometric Activity

    ERIC Educational Resources Information Center

    Igoe, D. P.; Parisi, A. V.; Wagner, S.

    2017-01-01

    Smartphones used as tools provide opportunities for the teaching of the concepts of accuracy and precision and the mathematical concept of arctan. The accuracy and precision of a trigonometric experiment using entirely mechanical tools is compared to one using electronic tools, such as a smartphone clinometer application and a laser pointer. This…

  20. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed Central

    El-Amrawy, Fatema

    2015-01-01

    Objectives: The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods: Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers (if applicable), which support heart rate monitoring, and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results: The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. Conclusions: The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039

  1. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed

    El-Amrawy, Fatema; Nounou, Mohamed Ismail

    2015-10-01

    The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers (if applicable), which support heart rate monitoring, and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure.
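
The two metrics used in this study, accuracy against a known step count and precision as the coefficient of variation across repeated trials, can be sketched directly. The trial counts below are made-up examples, not the study's measurements:

```python
import statistics

# Accuracy: how close the mean recorded count is to the true step count.
def accuracy_pct(counts, true_steps):
    mean = statistics.mean(counts)
    return 100.0 * (1.0 - abs(mean - true_steps) / true_steps)

# Precision: coefficient of variation (relative spread) across trials.
def cv_pct(counts):
    return 100.0 * statistics.stdev(counts) / statistics.mean(counts)

trials = [498, 510, 492, 505, 489, 507]   # one tracker, six 500-step walks
acc = accuracy_pct(trials, 500)
cv = cv_pct(trials)
print(round(acc, 1), round(cv, 1))
```

Note the two can disagree: a tracker whose errors cancel across trials scores high accuracy (mean near truth) while still showing a large CV, which is why the study reports both.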

  2. Sensitivity, accuracy, and precision issues in opto-electronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.

    2002-06-01

    Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.

  3. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity.

    PubMed

    Baird, Emily; Fernandez, Diana C; Wcislo, William T; Warrant, Eric J

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion-a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus.

  4. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity

    PubMed Central

    Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion—a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  5. High-Precision In Situ 87Sr/86Sr Analyses through Microsampling on Solid Samples: Applications to Earth and Life Sciences

    PubMed Central

    Di Salvo, Sara; Casalini, Martina; Marchionni, Sara; Adani, Teresa; Ulivi, Maurizio; Tommasini, Simone; Avanzinelli, Riccardo; Mazza, Paul P. A.; Francalanci, Lorella

    2018-01-01

    An analytical protocol for high-precision, in situ microscale isotopic investigations is presented here, which combines the use of a high-performing mechanical microsampling device and high-precision TIMS measurements on micro-Sr samples, allowing for excellent results both in accuracy and precision. The present paper is a detailed methodological description of the whole analytical procedure from sampling to elemental purification and Sr-isotope measurements. The method offers the potential to attain isotope data at the microscale on a wide range of solid materials with the use of minimally invasive sampling. In addition, we present three significant case studies for geological and life sciences, as examples of the various applications of microscale 87Sr/86Sr isotope ratios, concerning (i) the pre-eruptive mechanisms triggering recent eruptions at Nisyros volcano (Greece), (ii) the dynamics involved with the initial magma ascent during Eyjafjallajökull volcano's (Iceland) 2010 eruption, which are usually related to the precursory signals of the eruption, and (iii) the environmental context of a MIS 3 cave bear, Ursus spelaeus. The studied cases show the robustness of the methods, which can also be applied in other areas, such as cultural heritage, archaeology, petrology, and forensic sciences. PMID:29850369

  6. Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.

    PubMed

    Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A

    2010-01-10

    As the bandwidth and linearity of frequency modulated continuous wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed as to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part per billion level are predicted.
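    The scaling invoked above can be sketched with textbook FMCW relations: range resolution goes as c/2B, and a fractional chirp-rate error maps to a proportional range error. The 3 THz and 1 ppm figures below echo the abstract; the functions are a standard back-of-envelope sketch, not the authors' analysis.

```python
# Back-of-envelope FMCW ladar relations (standard theory, not the paper's model).
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Theoretical FMCW range resolution: delta_R = c / (2 B)."""
    return C / (2 * bandwidth_hz)

def nonlinearity_range_error(range_m, chirp_nonlinearity_ppm):
    """A fractional chirp-rate error produces a proportional range error."""
    return range_m * chirp_nonlinearity_ppm * 1e-6

dr = range_resolution(3e12)                # ~50 micrometres at 3 THz bandwidth
err = nonlinearity_range_error(10.0, 1.0)  # ~10 micrometres at 10 m with 1 ppm
```

    This is why pushing both bandwidth (several THz) and linearity (sub-ppm) together yields micrometre-scale precision at metre-scale ranges.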

  7. Double the dates and go for Bayes - Impacts of model choice, dating density and quality on chronologies

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.

    2018-05-01

    Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.
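    As a minimal illustration of the "classical" age-depth models discussed above, piecewise-linear interpolation between dated depths can be sketched as follows. The core and its dates are invented; a Bayesian model (e.g. the authors' Bacon approach) would instead constrain all ages jointly under chronological ordering and propagate dating uncertainty.

```python
# Sketch of a classical age-depth model: piecewise-linear interpolation
# between dated depths. Depths/ages below are hypothetical.
def classical_age_depth(dated, depth):
    """Linearly interpolate the age at `depth` from sorted (depth, age) pairs."""
    dated = sorted(dated)
    for (d0, a0), (d1, a1) in zip(dated, dated[1:]):
        if d0 <= depth <= d1:
            frac = (depth - d0) / (d1 - d0)
            return a0 + frac * (a1 - a0)
    raise ValueError("depth outside dated range")

# Hypothetical core: dates at 0, 120 and 250 cm depth (ages in cal yr BP)
dates = [(0.0, -60.0), (120.0, 2300.0), (250.0, 5100.0)]
age = classical_age_depth(dates, 60.0)  # midpoint of the first dated segment
```

    A model this simple cannot report realistic uncertainty between dates, which is the abstract's core criticism of classical approaches at low dating densities.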

  8. Impact of orbit, clock and EOP errors in GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Hackman, C.

    2012-12-01

    Precise point positioning (PPP; [1]) has gained ever-increasing usage in GNSS carrier-phase positioning, navigation and timing (PNT) since its inception in the late 1990s. In this technique, high-precision satellite clocks, satellite ephemerides and earth-orientation parameters (EOPs) are applied as fixed input by the user in order to estimate receiver/location-specific quantities such as antenna coordinates, troposphere delay and receiver-clock corrections. This is in contrast to "network" solutions, in which (typically) less-precise satellite clocks, satellite ephemerides and EOPs are used as input, and in which these parameters are estimated simultaneously with the receiver/location-specific parameters. The primary reason for increased PPP application is that it offers most of the benefits of a network solution with a smaller computing cost. In addition, the software required to do PPP positioning can be simpler than that required for network solutions. Finally, PPP permits high-precision positioning of single or sparsely spaced receivers that may have few or no GNSS satellites in common view. A drawback of PPP is that the accuracy of the results depends directly on the accuracy of the supplied orbits, clocks and EOPs, since these parameters are not adjusted during the processing. In this study, we will examine the impact of orbit, EOP and satellite clock estimates on PPP solutions. Our primary focus will be the impact of these errors on station coordinates; however the study may be extended to error propagation into receiver-clock corrections and/or troposphere estimates if time permits. Study motivation: the United States Naval Observatory (USNO) began testing PPP processing using its own predicted orbits, clocks and EOPs in Summer 2012 [2]. The results of such processing could be useful for real- or near-real-time applications should they meet accuracy/precision requirements.
Understanding how errors in satellite clocks, satellite orbits and EOPs propagate into PPP positioning and timing results allows researchers to focus their improvement efforts in areas most in need of attention. The initial study will be conducted using the simulation capabilities of Bernese GPS Software and extended to using real data if time permits. [1] J.F. Zumberge, M.B. Heflin, D.C. Jefferson, M.M. Watkins and F.H. Webb, Precise point positioning for the efficient and robust analysis of GPS data from large networks, J. Geophys. Res., 102(B3), 5005-5017, doi:10.1029/96JB03860, 1997. [2] C. Hackman, S.M. Byram, V.J. Slabinski and J.C. Tracey, Near-real-time and other high-precision GNSS-based orbit/clock/earth-orientation/troposphere parameters available from USNO, Proc. 2012 ION Joint Navigation Conference, 15 pp., in press, 2012.

  9. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  10. An accelerated image matching technique for UAV orthoimage registration

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible during feature-point matching, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average of 2.58 m in root mean square error (RMSE) for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.

  11. Posture Detection Based on Smart Cushion for Wheelchair Users

    PubMed Central

    Ma, Congcong; Li, Wenfeng; Gravina, Raffaele; Fortino, Giancarlo

    2017-01-01

    The postures of wheelchair users can reveal their sitting habit, mood, and even predict health risks such as pressure ulcers or lower back pain. Mining the hidden information of the postures can reveal their wellness and general health conditions. In this paper, a cushion-based posture recognition system is used to process pressure sensor signals for the detection of user’s posture in the wheelchair. The proposed posture detection method is composed of three main steps: data level classification for posture detection, backward selection of sensor configuration, and recognition results compared with previous literature. Five supervised classification techniques—Decision Tree (J48), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Naive Bayes, and k-Nearest Neighbor (k-NN)—are compared in terms of classification accuracy, precision, recall, and F-measure. Results indicate that the J48 classifier provides the highest accuracy compared to other techniques. The backward selection method was used to determine the best sensor deployment configuration of the wheelchair. Several kinds of pressure sensor deployments are compared and our new method of deployment is shown to better detect postures of the wheelchair users. Performance analysis also took into account the Body Mass Index (BMI), useful for evaluating the robustness of the method across individual physical differences. Results show that our proposed sensor deployment is effective, achieving 99.47% posture recognition accuracy. Our proposed method is very competitive for posture recognition and robust in comparison with other former research. Accurate posture detection represents a fundamental basic block to develop several applications, including fatigue estimation and activity level assessment. PMID:28353684
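    The four evaluation metrics named above (classification accuracy, precision, recall, F-measure) can be computed directly from true/predicted label pairs, as sketched below; the posture labels are invented, not the paper's cushion data, and real comparisons (as in the study) would use cross-validated predictions per classifier.

```python
# Sketch of per-class classification metrics from label pairs
# (hypothetical posture labels, not the study's dataset).
def metrics(y_true, y_pred, positive):
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = ["lean", "lean", "upright", "upright", "upright", "lean"]
y_pred = ["lean", "upright", "upright", "upright", "lean", "lean"]
acc, prec, rec, f1 = metrics(y_true, y_pred, positive="upright")
```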

  12. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.

    PubMed

    Qi, Jun; Liu, Guo-Ping

    2017-11-06

    This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) for the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, combined with the envelope detection filter, estimates the envelope value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when UIPS works on the line-of-sight (LOS) signal.
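    The geometry step described above (TOF to range, then position from ranges and beacon coordinates) can be sketched with a standard linearized trilateration; the beacon layout is invented, and the paper's envelope-detection TOF estimator is not reproduced here.

```python
# Sketch of TOF-based 2-D positioning: convert time-of-flight to range,
# then solve for position from three beacons by subtracting the first
# range equation from the others. Geometry is hypothetical.
SPEED_OF_SOUND = 343.0  # m/s in air (temperature-dependent in practice)

def tof_to_range(tof_s):
    return SPEED_OF_SOUND * tof_s

def trilaterate(beacons, ranges):
    """Return (x, y) from >= 3 beacons via the linearized range equations."""
    (x0, y0), r0 = beacons[0], ranges[0]
    rows = []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        a = 2 * (xi - x0)
        b = 2 * (yi - y0)
        c = r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows[:2]
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 1.0)
ranges = [((x - true_pos[0])**2 + (y - true_pos[1])**2) ** 0.5
          for x, y in beacons]
est = trilaterate(beacons, ranges)  # recovers (1.0, 1.0)
```

    With millimetre-level pseudo-ranges, as reported, position error is dominated by geometry (dilution of precision) rather than by the ranging itself.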

  13. Multi-oriented windowed harmonic phase reconstruction for robust cardiac strain imaging.

    PubMed

    Cordero-Grande, Lucilio; Royuela-del-Val, Javier; Sanz-Estébanez, Santiago; Martín-Fernández, Marcos; Alberola-López, Carlos

    2016-04-01

    The purpose of this paper is to develop a method for direct estimation of the cardiac strain tensor by extending the harmonic phase reconstruction on tagged magnetic resonance images to obtain more precise and robust measurements. The extension relies on the reconstruction of the local phase of the image by means of the windowed Fourier transform and the acquisition of an overdetermined set of stripe orientations in order to avoid the phase interferences from structures outside the myocardium and the instabilities arising from the application of a gradient operator. Results have shown that increasing the number of acquired orientations provides a significant improvement in the reproducibility of the strain measurements and that the acquisition of an extended set of orientations also improves the reproducibility when compared with acquiring repeated samples from a smaller set of orientations. Additionally, biases in local phase estimation when using the original harmonic phase formulation are greatly diminished by the one here proposed. The ideas here presented allow the design of new methods for motion sensitive magnetic resonance imaging, which could simultaneously improve the resolution, robustness and accuracy of motion estimates. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. On-board orbit determination for low thrust LEO-MEO transfer by Consider Kalman Filtering and multi-constellation GNSS

    NASA Astrophysics Data System (ADS)

    Menzione, Francesco; Renga, Alfredo; Grassi, Michele

    2017-09-01

    In the framework of the novel navigation scenario offered by next-generation satellite low-thrust autonomous LEO-to-MEO orbit transfer, this study proposes and tests a GNSS-based navigation system aimed at providing a precise and robust on-board orbit determination strategy that overcomes the arising criticalities. The analysis introduces the challenging design issues of simultaneously dealing with the variable orbit regime, the electric thrust control and the high-orbit GNSS visibility conditions. The Consider Kalman Filtering approach is proposed here as the filtering scheme to process the GNSS raw data provided by a multi-antenna/multi-constellation receiver in the presence of uncertain parameters affecting measurements, actuation and spacecraft physical properties. Filter robustness and achievable navigation accuracy are verified using a high-fidelity simulation of the low-thrust orbit-raising scenario, and performance is compared with that of a standard Extended Kalman Filtering approach to highlight the advantages of the proposed solution. Performance assessment of the developed navigation solution is accomplished for different transfer phases.

  15. A multi-center study benchmarks software tools for label-free proteome quantification

    PubMed Central

    Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan

    2016-01-01

    The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404

  16. A multicenter study benchmarks software tools for label-free proteome quantification.

    PubMed

    Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan

    2016-11-01

    Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.
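    Because the hybrid proteome samples have a defined quantitative composition, accuracy and precision can be scored against the known mixing ratio. The sketch below illustrates that idea in LFQbench's spirit (bias of measured log2 ratios as accuracy, their spread as precision); the ratios are invented and the exact LFQbench metric definitions are not reproduced.

```python
# Sketch of benchmark metrics against a known spike-in ratio
# (invented measurements; not LFQbench's exact definitions).
import math
import statistics

def benchmark_metrics(measured_ratios, expected_ratio):
    logs = [math.log2(r) for r in measured_ratios]
    expected = math.log2(expected_ratio)
    accuracy_bias = statistics.median(logs) - expected   # 0 is perfect
    precision_spread = statistics.stdev(logs)            # smaller is better
    return accuracy_bias, precision_spread

# Hypothetical proteins spiked at a known 2:1 ratio between samples A and B
ratios = [1.9, 2.1, 2.05, 1.8, 2.2, 2.0]
bias, spread = benchmark_metrics(ratios, 2.0)
```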

  17. A system for real-time measurement of the brachial artery diameter in B-mode ultrasound images.

    PubMed

    Gemignani, Vincenzo; Faita, Francesco; Ghiadoni, Lorenzo; Poggianti, Elisa; Demi, Marcello

    2007-03-01

    The measurement of the brachial artery diameter is frequently used in clinical studies for evaluating the flow-mediated dilation and, in conjunction with the blood pressure value, for assessing arterial stiffness. This paper presents a system for computing the brachial artery diameter in real-time by analyzing B-mode ultrasound images. The method is based on a robust edge detection algorithm which is used to automatically locate the two walls of the vessel. The measure of the diameter is obtained with subpixel precision and with a temporal resolution of 25 samples/s, so that the small dilations induced by the cardiac cycle can also be retrieved. The algorithm is implemented on a standalone video processing board which acquires the analog video signal from the ultrasound equipment. Results are shown in real-time on a graphical user interface. The system was tested both on synthetic ultrasound images and in clinical studies of flow-mediated dilation. Accuracy, robustness, and intra/inter observer variability of the method were evaluated.
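    The abstract does not detail how subpixel precision is obtained; one common technique for locating a wall edge at subpixel resolution is parabolic interpolation through the gradient peak and its two neighbours, sketched below with an invented intensity-gradient profile.

```python
# Sketch of subpixel edge localization by fitting a parabola through
# the gradient peak (a common technique, not necessarily the paper's).
def subpixel_peak(profile):
    """Return the subpixel position of the maximum of a 1-D profile."""
    i = max(range(1, len(profile) - 1), key=lambda k: profile[k])
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom else 0.0
    return i + offset

# Hypothetical intensity-gradient profile across one vessel wall
gradient = [0.1, 0.3, 0.9, 1.0, 0.5, 0.2]
edge = subpixel_peak(gradient)  # lands between pixel 2 and pixel 3
```

    Applying this to both walls and differencing the two edge positions yields a diameter estimate finer than the pixel grid, which is what makes the small cardiac-cycle dilations resolvable.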

  18. Analysis of cocoa flavanols and procyanidins (DP 1-10) in cocoa-containing ingredients and products by rapid resolution liquid chromatography: single-laboratory validation.

    PubMed

    Machonis, Philip R; Jones, Matthew A; Kwik-Uribe, Catherine

    2014-01-01

    Recently, a multilaboratory validation (MLV) of AOAC Official Method 2012.24 for the determination of cocoa flavanols and procyanidins (CF-CP) in cocoa-based ingredients and products determined that the method was robust, reliable, and transferrable. Due to the complexity of the CF-CP molecules, this method required a run time exceeding 1 h to achieve acceptable separations. To address this issue, a rapid resolution normal phase LC method was developed, and a single-laboratory validation (SLV) study conducted. Flavanols and procyanidins with a degree of polymerization (DP) up to 10 were eluted in 15 min using a binary gradient applied to a diol stationary phase, detected using fluorescence detection, and reported as a total sum of DP 1-10. Quantification was achieved using (-)-epicatechin-based relative response factors for DP 2-10. Spike recovery samples and seven different types of cocoa-based samples were analyzed to evaluate the accuracy, precision, LOD, LOQ, and linearity of the method. The within-day precision of the reported content for the samples was 1.15-5.08%, and overall precision was 3.97-13.61%. Spike-recovery experiments demonstrated recoveries of over 98%. The results of this SLV were compared to those previously obtained in the MLV and found to be consistent. The translation to rapid resolution LC allowed for an 80% reduction in analysis time and solvent usage, while retaining the accuracy and reliability of the original method. The savings in both cost and time of this rapid method make it well-suited for routine laboratory use.
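    The quantification rule described above (each oligomer class quantified against the (-)-epicatechin calibration through a relative response factor, then summed over DP 1-10) can be sketched as follows; the areas and RRF values are placeholders, not the method's published factors.

```python
# Sketch of RRF-based quantification and DP summation
# (placeholder peak areas and response factors, not the validated values).
def total_flavanols(peak_areas, epicatechin_response, rrf):
    """Sum content over DP classes: area / (response * RRF_dp) per class."""
    total = 0.0
    for dp, area in peak_areas.items():
        total += area / (epicatechin_response * rrf[dp])
    return total

# Hypothetical peak areas per degree of polymerization and placeholder RRFs
areas = {1: 1200.0, 2: 800.0, 3: 500.0}
rrfs = {1: 1.0, 2: 0.8, 3: 0.6}
content = total_flavanols(areas, epicatechin_response=100.0, rrf=rrfs)
```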

  19. Normal and polar-organic-phase high-performance liquid chromatographic enantioresolution of omeprazole, rabeprazole, lansoprazole and pantoprazole using monochloro-methylated cellulose-based chiral stationary phase and determination of dexrabeprazole.

    PubMed

    Dixit, Shuchi; Dubey, Rituraj; Bhushan, Ravi

    2014-01-01

    Enantioresolution of four anti-ulcer drugs (chiral sulfoxides), namely, omeprazole, rabeprazole, lansoprazole and pantoprazole, was carried out by high-performance liquid chromatography using a polysaccharide-based chiral stationary phase consisting of monochloromethylated cellulose (Lux cellulose-2) under normal and polar-organic-phase conditions with ultraviolet detection at 285 nm. The method was validated for linearity, accuracy, precision, robustness and limit of detection. The optimized enantioresolution method was compared for both the elution modes. The optimized method was further utilized to check the enantiomeric purity of dexrabeprazole. Copyright © 2013 John Wiley & Sons, Ltd.

  20. CrossSim

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plimpton, Steven J.; Agarwal, Sapan; Schiek, Richard

    2016-09-02

    CrossSim is a simulator for modeling neural-inspired machine learning algorithms on analog hardware, such as resistive memory crossbars. It includes noise models for reading and updating the resistances, which can be based on idealized equations or experimental data. It can also introduce noise and finite precision effects when converting values from digital to analog and vice versa. All of these effects can be turned on or off as an algorithm processes a data set and attempts to learn its salient attributes so that it can be categorized in the machine learning training/classification context. CrossSim thus allows the robustness, accuracy, and energy usage of a machine learning algorithm to be tested on simulated hardware.
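    The kind of effect such a simulator models can be sketched as a crossbar matrix-vector product with read noise and finite-precision (quantized) conversion; the noise model and bit width below are illustrative assumptions, not CrossSim's actual implementation or defaults.

```python
# Sketch of a noisy, quantized crossbar matrix-vector product
# (illustrative noise model and bit width; not CrossSim's code).
import random

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform quantization to 2**bits levels over [lo, hi]."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return lo + round((min(max(x, lo), hi) - lo) / step) * step

def noisy_matvec(weights, x, read_sigma, bits, rng):
    """Analog multiply-accumulate with Gaussian read noise, then ADC quantization."""
    out = []
    for row in weights:
        acc = sum(w * xi + rng.gauss(0.0, read_sigma) for w, xi in zip(row, x))
        out.append(quantize(acc, bits))
    return out

rng = random.Random(0)
W = [[0.2, -0.5], [0.4, 0.1]]
y = noisy_matvec(W, [0.5, 0.5], read_sigma=0.01, bits=8, rng=rng)
```

    Toggling `read_sigma` or `bits` to zero noise / high precision recovers the ideal digital result, which is exactly the kind of on/off experiment the simulator supports.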

  1. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  2. A sensitive and selective liquid chromatography/tandem mass spectrometry method for quantitative analysis of efavirenz in human plasma.

    PubMed

    Srivastava, Praveen; Moorthy, Ganesh S; Gross, Robert; Barrett, Jeffrey S

    2013-01-01

    A selective and highly sensitive method for the determination of the non-nucleoside reverse transcriptase inhibitor (NNRTI), efavirenz, in human plasma has been developed and fully validated based on high performance liquid chromatography tandem mass spectrometry (LC-MS/MS). Sample preparation involved protein precipitation followed by one-to-one dilution with water. The analyte, efavirenz, was separated by high performance liquid chromatography and detected with tandem mass spectrometry in negative ionization mode with multiple reaction monitoring (MRM). Efavirenz and ¹³C₆-efavirenz (internal standard) were detected via the MRM transitions m/z 314.2 → 243.9 and m/z 320.2 → 249.9, respectively. A gradient program was used to elute the analytes using 0.1% formic acid in water and 0.1% formic acid in acetonitrile as mobile phase solvents, at a flow rate of 0.3 mL/min. The total run time was 5 min and the retention times for the internal standard (¹³C₆-efavirenz) and efavirenz were approximately 2.6 min. The calibration curves showed linearity (coefficient of regression, r>0.99) over the concentration range of 1.0-2,500 ng/mL. The intra-day precision, based on the standard deviation of replicates, was 9.24% at the lower limit of quantification (LLOQ) and ranged from 2.41% to 6.42% for quality control (QC) samples, with accuracies of 112% for the LLOQ and 100-111% for the QC samples. The inter-day precision was 12.3% for the LLOQ and 3.03-9.18% for QC samples, and the corresponding accuracies were 108% and 95.2-108%. Stability studies showed that efavirenz was stable during the expected conditions for sample preparation and storage. The lower limit of quantification for efavirenz was 1 ng/mL. The analytical method showed excellent sensitivity, precision, and accuracy. This method is robust and is being successfully applied for therapeutic drug monitoring and pharmacokinetic studies in HIV-infected patients.
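    The calibration-curve step described above (linear regression of response on concentration, checked against the r > 0.99 acceptance criterion, then back-calculation of sample concentrations) can be sketched as follows; the standards and responses are invented, not the validated method's data.

```python
# Sketch of a bioanalytical calibration curve: ordinary least squares,
# correlation check, and back-calculated concentration. Data are invented.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5  # correlation coefficient
    return slope, intercept, r

# Hypothetical standards (ng/mL) and peak-area ratios (analyte / IS)
conc = [1, 10, 100, 500, 1000, 2500]
resp = [0.012, 0.101, 0.99, 5.1, 10.2, 25.3]
slope, intercept, r = linear_fit(conc, resp)
back_calc = (resp[2] - intercept) / slope  # back-calculated concentration
```

    In practice the back-calculated concentrations of replicate standards and QC samples feed directly into the accuracy (% of nominal) and precision (CV of replicates) figures reported above.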

  3. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Yu, Jia

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge-Kutta or spectral deferred correction method in time and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. 
Implementation issues in transferring much of the computation to graphics processing units are briefly discussed, and the performance of the algorithm is tested on a number of hardware configurations.
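    The robustness strategy described above, posing time-periodicity as an overdetermined nonlinear system and minimizing a least-squares objective, can be illustrated on a toy problem. This is a minimal Gauss-Newton sketch, not the authors' adjoint-based or trust-region solver, and the linear test system is purely illustrative:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||r(x)||^2 for an overdetermined system r: R^n -> R^m (m > n)
    by Gauss-Newton with least-squares steps; a simple stand-in for the
    paper's adjoint-based or trust-region minimization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Solve the linearized least-squares subproblem J @ step ~ -r
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Toy overdetermined system: 3 equations, 2 unknowns (a, b) for y = a*t + b
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])                 # consistent with a = 2, b = 1
res = lambda x: x[0] * t + x[1] - y
jac = lambda x: np.stack([t, np.ones_like(t)], axis=1)
a, b = gauss_newton(res, jac, [0.0, 0.0])
```

    In the paper's setting the residual measures deviation from periodicity of the evolved solution, and each Jacobian column requires a PDE solve, which is why the authors parallelize and delay Jacobian updates.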

  4. New applications of CRISPR/Cas9 system on mutant DNA detection.

    PubMed

    Jia, Chenqiang; Huai, Cong; Ding, Jiaqi; Hu, Lingna; Su, Bo; Chen, Hongyan; Lu, Daru

    2018-01-30

    The detection of mutant DNA is critical for precision medicine, but low-frequency DNA mutations are very hard to determine. CRISPR/Cas9 is a robust tool for in vivo gene editing and also shows potential for precise in vitro DNA cleavage. Here we developed a CRISPR/Cas9-based DNA mutation detection system that can detect gene mutations efficiently even at low frequency. The in vitro CRISPR/Cas9 cleavage system estimated mutant DNA proportions at normal frequencies with accuracy similar to the traditional T7 endonuclease I (T7E1) assay. The technology was further used for low-frequency detection of EGFR and HBB somatic mutations. To this end, Cas9 was employed to cleave the wild-type (WT) DNA and thereby enrich the mutant DNA. Using amplified fragment length polymorphism analysis (AFLPA) and Sanger sequencing, we assessed the sensitivity of CRISPR/Cas9 cleavage-based PCR, in which mutations at frequencies of 1%-10% could be enriched and detected. When combined with blocker PCR, the sensitivity reached 0.1%. Our results suggest that this new application of the CRISPR/Cas9 system is a robust and promising method for analyzing heterogeneous specimens in clinical diagnosis and treatment management. Copyright © 2017 Elsevier B.V. All rights reserved.
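    The enrichment step admits a simple back-of-the-envelope model: if Cas9 removes a fraction of the wild-type templates before amplification, the mutant fraction rises accordingly. A sketch with a hypothetical cleavage efficiency (the 95% figure below is illustrative, not from the paper):

```python
def enriched_mutant_fraction(f_mut, cleave_eff):
    """Mutant fraction after Cas9 cleaves (and removes from amplification)
    a proportion `cleave_eff` of the wild-type templates."""
    surviving_wt = (1.0 - f_mut) * (1.0 - cleave_eff)
    return f_mut / (f_mut + surviving_wt)

# A 1% mutant population with hypothetical 95% WT cleavage efficiency
print(round(enriched_mutant_fraction(0.01, 0.95), 3))  # → 0.168
```

    Even a modest cleavage efficiency thus lifts a rare variant well above the detection floor of Sanger sequencing.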

  5. Stability-Indicating HPLC Determination of Gemcitabine in Pharmaceutical Formulations

    PubMed Central

    Singh, Rahul; Shakya, Ashok K.; Naik, Rajashri; Shalan, Naeem

    2015-01-01

    A simple, sensitive, inexpensive, and rapid stability-indicating high-performance liquid chromatographic method has been developed for the determination of gemcitabine in injectable dosage forms using theophylline as the internal standard. Chromatographic separation was achieved on a Phenomenex Luna C-18 column (250 mm × 4.6 mm; 5 μm) with a mobile phase consisting of 90% water and 10% acetonitrile (pH 7.00 ± 0.05). The signals of gemcitabine and theophylline were recorded at 275 nm. Calibration curves were linear in the concentration range of 0.5–50 μg/mL, with a correlation coefficient of 0.999 or higher. The limits of detection and quantitation were 0.1498 and 0.4541 μg/mL, respectively. The inter- and intraday precision were less than 2%. The accuracy of the method ranged from 100.2% to 100.4%. Stability studies indicated that the drug was stable to sunlight and UV light. The drug gives six different hydrolytic products under alkaline stress and three under acidic conditions; aqueous and oxidative stress conditions also degrade the drug, with degradation highest under alkaline conditions. The robustness of the method was evaluated using design of experiments. Validation reveals that the proposed method is specific, accurate, precise, reliable, robust, reproducible, and suitable for quantitative analysis. PMID:25838825
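    The reported LOD/LOQ ratio (0.1498/0.4541 ≈ 0.33) is consistent with the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S; a minimal sketch, assuming those formulas were used (the σ and S values below are hypothetical):

```python
def lod_loq(sigma, slope):
    """ICH Q2(R1) estimates: LOD = 3.3*sigma/S and LOQ = 10*sigma/S,
    where sigma is the standard deviation of the response and S is the
    slope of the calibration curve."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical response SD and calibration slope, for illustration
lod, loq = lod_loq(sigma=0.0454, slope=1.0)
```

    With real data, σ is usually taken from the residual standard deviation of the regression or from blank responses.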

  6. Validation of a Method for Cylindrospermopsin Determination in Vegetables: Application to Real Samples Such as Lettuce (Lactuca sativa L.).

    PubMed

    Prieto, Ana I; Guzmán-Guillén, Remedios; Díez-Quijada, Leticia; Campos, Alexandre; Vasconcelos, Vitor; Jos, Ángeles; Cameán, Ana M

    2018-02-01

    Reports of the occurrence of the cyanobacterial toxin cylindrospermopsin (CYN) have increased worldwide because of CYN's toxic effects in humans and animals. If contaminated waters are used for plant irrigation, they could represent a possible route of human exposure to CYN. For the first time, a method employing solid-phase extraction and quantification by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was optimized for CYN in vegetable matrices such as lettuce (Lactuca sativa). The validated method showed a linear range from 5 to 500 ng CYN g⁻¹ of fresh weight (f.w.), and detection and quantitation limits (LOD and LOQ) of 0.22 and 0.42 ng CYN g⁻¹ f.w., respectively. The mean recoveries ranged between 85 and 104%, and the intermediate precision from 12.7 to 14.7%. The method proved robust for the three variables tested. Moreover, it was successfully applied to quantify CYN in edible lettuce leaves exposed to CYN-contaminated water (10 µg L⁻¹), showing that the tolerable daily intake (TDI) for CYN could be exceeded in elderly high consumers. The validated method gave good results in terms of sensitivity, precision, accuracy, and robustness for CYN determination in leaf vegetables such as lettuce. More studies are needed in order to prevent the risks associated with the consumption of CYN-contaminated vegetables.

  7. Performance of search strategies to retrieve systematic reviews of diagnostic test accuracy from the Cochrane Library.

    PubMed

    Huang, Yuansheng; Yang, Zhirong; Wang, Jing; Zhuo, Lin; Li, Zhixia; Zhan, Siyan

    2016-05-06

    To compare the performance of search strategies for retrieving systematic reviews of diagnostic test accuracy from The Cochrane Library. The CDSR and DARE databases in the Cochrane Library were searched for systematic reviews of diagnostic test accuracy published between 2008 and 2012 using nine search strategies. Each strategy consisted of one group, or a combination of groups, of search filters related to diagnostic test accuracy; four groups of diagnostic filters were used. The strategy combining all the filters served as the reference for determining the sensitivity, precision, and sensitivity × precision product of the other eight strategies. The reference strategy retrieved 8029 records, of which 832 were eligible. The strategy composed only of MeSH terms about "accuracy measures" achieved the highest precision (69.71%) and product (52.45%) with moderate sensitivity (75.24%). Combining MeSH terms and free-text words about "accuracy measures" contributed little to increasing sensitivity. Strategies composed of filters about "diagnosis" had similar sensitivity but lower precision and product than those composed of filters about "accuracy measures". The exploded MeSH term "diagnosis" achieved the lowest precision (9.78%) and product (7.91%), while its hyponym retrieved only half the number of records at the expense of missing 53 target articles. Precision was negatively correlated with sensitivity among the nine strategies. Compared with the filters about "diagnosis", the filters about "accuracy measures" achieved similar sensitivity but higher precision, and combining the two filter types markedly enhanced the sensitivity of the strategy. Combining MeSH terms and free-text words for the same concept appeared to add little sensitivity. This article is protected by copyright. All rights reserved.
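    These metrics follow directly from raw retrieval counts: sensitivity is the fraction of the 832 eligible reviews that a strategy retrieved, precision is the fraction of its retrieved records that were eligible, and the product combines the two. The counts below are back-calculated illustrations roughly consistent with the reported 75.24% sensitivity and 69.71% precision, not figures from the paper:

```python
def retrieval_metrics(retrieved, relevant_retrieved, total_relevant):
    """Sensitivity (recall), precision, and their product, as percentages."""
    sens = 100.0 * relevant_retrieved / total_relevant
    prec = 100.0 * relevant_retrieved / retrieved
    return sens, prec, sens * prec / 100.0

# Hypothetical counts: 898 records retrieved, 626 of the 832 eligible found
sens, prec, product = retrieval_metrics(898, 626, 832)  # ≈ 75.2, 69.7, 52.4
```

    The product is a convenient single-number trade-off, which is why the "accuracy measures" MeSH strategy scores best despite only moderate sensitivity.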

  8. Simultaneous determination of diclofenac potassium and methocarbamol in ternary mixture with guaifenesin by reversed phase liquid chromatography.

    PubMed

    Elkady, Ehab F

    2010-09-15

    A new, simple, rapid and precise reversed-phase liquid chromatographic (RP-LC) method has been developed for the simultaneous determination of diclofenac potassium (DP) and methocarbamol (MT) in a ternary mixture with guaifenesin (GF), the degradation product of methocarbamol. Chromatographic separation was achieved on a Symmetry Waters C18 column (150 mm × 4.6 mm, 5 μm). Gradient elution based on phosphate buffer (pH 8)-acetonitrile at a flow rate of 1 mL min⁻¹ was applied. The UV detector was operated at 282 nm for DP and at 274 nm for MT and GF. Linearity, accuracy and precision were found to be acceptable over the concentration ranges of 0.05-16, 0.5-160 and 0.5-160 μg mL⁻¹ for DP, MT and GF, respectively. The optimized method proved to be specific, robust and accurate for the quality control of the cited drugs in pharmaceutical preparations. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  9. Multiplexed MRM-Based Protein Quantitation Using Two Different Stable Isotope-Labeled Peptide Isotopologues for Calibration.

    PubMed

    LeBlanc, André; Michaud, Sarah A; Percy, Andrew J; Hardie, Darryl B; Yang, Juncong; Sinclair, Nicholas J; Proudfoot, Jillaine I; Pistawka, Adam; Smith, Derek S; Borchers, Christoph H

    2017-07-07

    When quantifying endogenous plasma proteins for fundamental and biomedical research, as well as for clinical applications, precise, reproducible, and robust assays are required. Targeted detection of peptides in a bottom-up strategy is the most common and most precise mass spectrometry-based quantitation approach when combined with the use of stable isotope-labeled peptides. However, when measuring proteins in plasma, the unknown endogenous levels prevent the implementation of the best calibration strategies, since no blank matrix is available. Consequently, several alternative calibration strategies are employed by different laboratories. In this study, these methods were compared with a new approach that uses two different stable isotope-labeled standard (SIS) peptide isotopologues for each endogenous peptide to be quantified, enabling the external calibration curve as well as the quality control samples to be prepared in pooled human plasma without interference from endogenous peptides. This strategy improves the analytical performance of the assay and enables its accuracy to be monitored, which can also facilitate method development and validation.
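    One way to read the two-isotopologue strategy is as follows: one SIS isotopologue is spiked at varying levels against a fixed amount of the second, the calibration curve is built from their response ratio in pooled plasma, and the endogenous peptide is then read off the same curve via its ratio to the fixed isotopologue. This is an illustrative sketch of that reading, with entirely hypothetical numbers, not the authors' data or exact workflow:

```python
import numpy as np

# Hypothetical calibration: SIS-A spiked at known levels, SIS-B held fixed
conc_a = np.array([1.0, 2.0, 5.0, 10.0, 20.0])      # ng/mL of spiked SIS-A
ratio_ab = np.array([0.11, 0.21, 0.49, 1.02, 1.98])  # peak-area ratio A/B

# Linear calibration of response ratio vs. concentration
slope, intercept = np.polyfit(conc_a, ratio_ab, 1)

# Endogenous peptide measured against the same fixed SIS-B amount
endo_ratio = 0.75
endo_conc = (endo_ratio - intercept) / slope
```

    Because both curve points and samples are ratioed to the same fixed isotopologue in the same matrix, matrix effects largely cancel and the endogenous background does not distort the curve.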

  10. Reversed phase HPLC for strontium ranelate: Method development and validation applying experimental design.

    PubMed

    Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád

    2018-06-01

    A reversed-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate eluted at 4.4 min and, monitored at 321 nm, showed no interference from the excipients used in the formulation. The method is linear in the range of 20-320 μg mL⁻¹ (R² = 0.99998). Recovery, tested in the range of 40-120 μg mL⁻¹, was found to be 96.1-102.1%. Intra-day and intermediate precision RSDs ranged from 1.0 to 1.4% and from 1.2 to 1.4%, respectively. The limits of detection and quantitation were 0.06 and 0.20 μg mL⁻¹, respectively. The proposed technique is fast, cost-effective, reliable and reproducible, and is proposed for the routine analysis of strontium ranelate.

  11. Development and Validation of RP-LC Method for the Determination of Cinnarizine/Piracetam and Cinnarizine/Heptaminol Acefyllinate in Presence of Cinnarizine Reported Degradation Products

    PubMed Central

    EL-Houssini, Ola M.; Zawilla, Nagwan H.; Mohammad, Mohammad A.

    2013-01-01

    A specific stability-indicating reversed-phase liquid chromatography (RP-LC) assay method (SIAM) was developed for the determination of cinnarizine (Cinn)/piracetam (Pira) and cinnarizine (Cinn)/heptaminol acefyllinate (Hept) in the presence of the reported degradation products of Cinn. A C18 column and a gradient mobile phase were applied for good resolution of all peaks. Detection was performed at 210 nm and 254 nm for Cinn/Pira and Cinn/Hept, respectively. The responses were linear over concentration ranges of 20–200, 20–1000 and 25–1000 μg mL⁻¹ for Cinn, Pira, and Hept, respectively. The proposed method was validated for linearity, accuracy, repeatability, intermediate precision, and robustness via statistical analysis of the data. The method was shown to be precise, accurate, reproducible, sensitive, and selective for the analysis of Cinn/Pira and Cinn/Hept in laboratory-prepared mixtures and in pharmaceutical formulations. PMID:24137049

  12. Determination of rifampicin in human plasma by high-performance liquid chromatography coupled with ultraviolet detection after automatized solid-liquid extraction.

    PubMed

    Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L

    2016-12-01

    A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated using ultraviolet detection after automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer pH 5.7-acetonitrile (35:65, v/v) as the mobile phase. The compounds were detected at a wavelength of 335 nm, with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as the internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method has been successfully applied in routine therapeutic drug monitoring of patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Optimization and validation of a method for the determination of the refractive index of milk serum based on the reaction between milk and copper(II) sulfate to detect milk dilutions.

    PubMed

    Rezende, Patrícia Sueli; Carmo, Geraldo Paulo do; Esteves, Eduardo Gonçalves

    2015-06-01

    We report the use of a method that determines the refractive index of copper(II) serum (RICS) in milk as a tool to detect the fraudulent addition of water. This practice is highly profitable, unlawful, and difficult to deter. The method was optimized and validated, and it is simple, fast and robust. The optimized method yielded results statistically equivalent to those of the reference method, with an accuracy of 0.4%, while quadrupling analytical throughput. Trueness, precision (repeatability and intermediate precision) and ruggedness were determined to be satisfactory at the 95.45% confidence level. The expanded uncertainty of the measurement was ±0.38 °Zeiss at the 95.45% confidence level (k=3.30), corresponding to 1.03% of the minimum measurement expected in adequate samples (>37.00 °Zeiss). Copyright © 2015 Elsevier B.V. All rights reserved.

  14. A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion from stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when feature points in the left image of the next frame are matched with those in the current left image, EDC and RANSAC are performed iteratively. Even after these steps, mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effect of such outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
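    The EDC/RANSAC filtering idea can be illustrated with a minimal sketch: hypothesize a motion model from a randomly chosen correspondence and keep the hypothesis whose Euclidean reprojection error stays below a threshold for the most matches. For brevity the model here is a pure 2-D translation, not the authors' full stereo geometry:

```python
import math
import random

def ransac_translation(pts_a, pts_b, thresh=2.0, iters=100, seed=0):
    """Toy RANSAC: model the inter-frame motion as a 2-D translation,
    hypothesize it from one random correspondence, and keep the hypothesis
    with the most matches whose Euclidean error is below `thresh`."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        i = rng.randrange(len(pts_a))
        dx = pts_b[i][0] - pts_a[i][0]
        dy = pts_b[i][1] - pts_a[i][1]
        inliers = [
            j for j, (p, q) in enumerate(zip(pts_a, pts_b))
            if math.hypot(q[0] - (p[0] + dx), q[1] - (p[1] + dy)) < thresh
        ]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Nine matches shifted by (5, 3) plus one gross mismatch at index 9
a = [(float(i), float(2 * i)) for i in range(10)]
b = [(x + 5.0, y + 3.0) for x, y in a[:9]] + [(100.0, 100.0)]
inliers = ransac_translation(a, b)  # recovers the nine consistent matches
```

    In the paper the same consensus principle is applied to the full six-degree-of-freedom ego-motion estimated by space intersection, and is repeated at each matching stage.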

  15. Quantitative analysis of sitagliptin using the ¹⁹F-NMR method: a universal technique for fluorinated compound detection.

    PubMed

    Zhang, Fen-Fen; Jiang, Meng-Hong; Sun, Lin-Lin; Zheng, Feng; Dong, Lei; Shah, Vishva; Shen, Wen-Bin; Ding, Ya

    2015-01-07

    To expand the application scope of nuclear magnetic resonance (NMR) technology in the quantitative analysis of pharmaceutical ingredients, ¹⁹F nuclear magnetic resonance (¹⁹F-NMR) spectroscopy has been employed as a simple, rapid, and reproducible approach for the detection of a fluorine-containing model drug, sitagliptin phosphate monohydrate (STG). Ciprofloxacin (Cipro) was used as the internal standard (IS). Influential factors impacting the accuracy and precision of the spectral data, including the relaxation delay time (d1) and pulse angle, were systematically optimized. Method validation was carried out in terms of precision and intermediate precision, linearity, limit of detection (LOD) and limit of quantification (LOQ), robustness, and stability. To validate the reliability and feasibility of ¹⁹F-NMR technology for the quantitative analysis of pharmaceutical analytes, the assay result was compared with that of ¹H-NMR. The statistical F-test and Student's t-test at the 95% confidence level indicate that there is no significant difference between the two methods. Owing to the advantages of ¹⁹F-NMR, such as higher resolution and suitability for biological samples, it can be used as a universal technique for the quantitative analysis of other fluorine-containing pharmaceuticals and analytes.
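    Internal-standard qNMR assays of this kind rest on the standard relation P_x = (I_x/I_s)·(N_s/N_x)·(M_x/M_s)·(w_s/w_x)·P_s, relating integrals, nuclei counts, molar masses, and weighed masses. A minimal sketch of that generic formula (no values from the study are used):

```python
def qnmr_purity(i_x, i_s, n_x, n_s, m_x, m_s, w_x, w_s, p_s):
    """Internal-standard qNMR assay (% w/w):
    P_x = (I_x/I_s) * (N_s/N_x) * (M_x/M_s) * (w_s/w_x) * P_s
    I: signal integral, N: number of nuclei contributing to the signal,
    M: molar mass, w: weighed mass, P_s: purity of the internal standard."""
    return (i_x / i_s) * (n_s / n_x) * (m_x / m_s) * (w_s / w_x) * p_s
```

    The same relation applies whether the integrated signals are ¹H or ¹⁹F resonances, which is why the two assays can be compared directly by an F-test and t-test.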

  16. Water vapor δ²H, δ¹⁸O and δ¹⁷O measurements using an off-axis integrated cavity output spectrometer - sensitivity to water vapor concentration, delta value and averaging-time.

    PubMed

    Tian, Chao; Wang, Lixin; Novick, Kimberly A

    2016-10-15

    High-precision analysis of atmospheric water vapor isotope compositions, especially δ¹⁷O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., to differentiate equilibrium from kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of laser-spectroscopy measurements of vapor δ¹⁷O depend on vapor concentration, delta range, and averaging time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ²H, δ¹⁸O and δ¹⁷O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ²H, δ¹⁸O and δ¹⁷O measurements were high. Across all vapor concentrations, the accuracy of δ²H, δ¹⁸O and δ¹⁷O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with the highest accuracy and precision generally observed at moderate vapor concentrations (i.e., 10000-15000 ppm). The precision was also sensitive to the range of delta values, although this effect was smaller than that of concentration, and it was much less sensitive to averaging time than to concentration and delta range. In short, the accuracy and precision of the T-WVIA depend on concentration but depend less on delta value and averaging time. The instrument can simultaneously and continuously measure δ²H, δ¹⁸O and δ¹⁷O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre-scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high-performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy, with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.

  18. A newly validated high-performance liquid chromatography method with diode array ultraviolet detection for analysis of the antimalarial drug primaquine in the blood plasma.

    PubMed

    Carmo, Ana Paula Barbosa do; Borborema, Manoella; Ribeiro, Stephan; De-Oliveira, Ana Cecilia Xavier; Paumgartten, Francisco Jose Roma; Moreira, Davyson de Lima

    2017-01-01

    Primaquine (PQ) diphosphate is an 8-aminoquinoline antimalarial drug with unique therapeutic properties. It is the only drug that prevents relapses of Plasmodium vivax or Plasmodium ovale infections. In this study, a fast, sensitive, cost-effective, and robust method for the extraction and high-performance liquid chromatography with diode array ultraviolet detection (HPLC-DAD-UV) analysis of PQ in blood plasma was developed and validated. After plasma protein precipitation, PQ was obtained by liquid-liquid extraction and analyzed by HPLC-DAD-UV with a modified-silica cyanopropyl column (250 mm × 4.6 mm i.d., 5 μm) as the stationary phase and a mixture of acetonitrile and 10 mM ammonium acetate buffer (pH = 3.80) (45:55) as the mobile phase. The flow rate was 1.0 mL·min⁻¹, the oven temperature was 50 °C, and absorbance was measured at 264 nm. The method was validated for linearity, intra-day and inter-day precision, accuracy, recovery, and robustness. The detection (LOD) and quantification (LOQ) limits were 1.0 and 3.5 ng·mL⁻¹, respectively. The method was used to analyze the plasma of female DBA-2 mice treated with 20 mg·kg⁻¹ (oral) PQ diphosphate. By combining a simple, low-cost extraction procedure with a sensitive, precise, accurate, and robust method, it was possible to analyze PQ in small volumes of plasma. The new method presents lower LOD and LOQ limits and requires a shorter analysis time and smaller plasma volumes than those of previously reported HPLC methods with DAD-UV detection. The new validated method is suitable for kinetic studies of PQ in small rodents, including mouse models for the study of malaria.

  19. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  20. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    PubMed

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First, we discuss the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely accuracy, precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The results of the MWL classification performance comparison indicated that, compared with traditional machine-learning methods, the proposed ECNN framework can effectively improve MWL classification performance and features entirely automatic feature extraction and MWL classification.
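    Of the three aggregation approaches examined, the first two can be sketched directly on per-model class probabilities; the probability matrix below is hypothetical, purely for illustration:

```python
import numpy as np

def weighted_average_vote(probs, weights):
    """probs: (n_models, n_classes) class probabilities from each base model.
    Returns the class chosen by weighted-average aggregation."""
    w = np.asarray(weights, dtype=float)
    avg = (w[:, None] * np.asarray(probs)).sum(axis=0) / w.sum()
    return int(np.argmax(avg))

def majority_vote(probs):
    """Each base model votes for its own argmax class; ties resolve to the
    lowest class index (np.bincount(...).argmax() convention)."""
    votes = np.argmax(np.asarray(probs), axis=1)
    return int(np.bincount(votes).argmax())

# Three hypothetical base models scoring three workload classes
p = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.7, 0.2]]
print(weighted_average_vote(p, [1.0, 1.0, 1.0]), majority_vote(p))  # → 1 1
```

    Stacking, the third approach, instead trains a meta-classifier on the base models' outputs; the resampling strategy mentioned above serves to decorrelate the base models so that aggregation actually helps.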

  1. Adaptive classifier for steel strip surface defects

    NASA Astrophysics Data System (ADS)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li

    2017-01-01

    Surface defect detection systems have received increasing attention for their precision, speed and low cost. One of the main challenges is coping with accuracy deterioration over time caused by aging equipment and changing processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) that updates the model with small samples, making it adaptive to accuracy deterioration. Firstly, abundant features were introduced to cover extensive information about the defects. Secondly, we constructed a series of SVMs on random subspaces of the features. Then, a Bayes classifier was trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we proposed a method for updating the Bayes evolutionary kernel. The proposed algorithm was experimentally compared with different algorithms; the results demonstrate that it can be updated with small samples and fits the changed model well. The experiments also demonstrate its robustness, low sample requirements, and adaptivity.

  2. Increased reliability of nuclear magnetic resonance protein structures by consensus structure bundles.

    PubMed

    Buchner, Lena; Güntert, Peter

    2015-02-03

    Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often produce tight, highly precise bundles whose much lower accuracy is thereby overestimated. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints, but with a realistic precision that is, across a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Field comparison of several commercially available radon detectors.

    PubMed Central

    Field, R W; Kross, B C

    1990-01-01

    To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those awaiting proficiency testing. Charcoal adsorption detectors and diffusion-barrier charcoal adsorption detectors performed very well, and the latter displayed excellent time-averaging ability. In contrast, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and poor precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851

  4. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, an evaluation of the area's suitability as a potential storage site for high-level nuclear waste. Water-level measurements were taken either manually, using water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision ranges and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points; mean precision ranges for these tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical; an overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
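    The quadrature combination described, overall accuracy as the square root of the sum of squared component accuracies, is a one-line calculation; the component values below are hypothetical, since the abstract does not list every individual term:

```python
import math

def overall_accuracy(components):
    """Root-sum-of-squares combination of independent accuracy components,
    as used for the overall water-level measurement accuracy."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical component accuracies (feet), for illustration only
print(round(overall_accuracy([0.3, 0.2]), 3))  # → 0.361
```

    This combination rule assumes the component errors are independent, which is why borehole deviation from vertical, a systematic effect, is treated separately in the report.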

  5. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution approach excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations.
Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
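The convolution representation referred to above can be illustrated with a deliberately simplified sketch: a one-compartment exponential kernel convolved with the arterial input function (AIF) stands in for the full 2CXM solution, and the function name and parameters (`flow`, `kep`) are hypothetical.

```python
import math

def model_curve(aif, dt, flow, kep):
    # Discrete convolution of the AIF with an exponential impulse
    # response, scaled by plasma flow (simplified one-compartment kernel).
    n = len(aif)
    out = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j in range(i + 1):
            acc += aif[j] * math.exp(-kep * (i - j) * dt)
        out[i] = flow * acc * dt
    return out

print(model_curve([1.0, 0.0, 0.0], 1.0, 1.0, 0.0))  # → [1.0, 1.0, 1.0]
```

With a unit impulse as AIF and no washout (kep = 0), the tissue curve is simply the accumulated inflow, which makes the convolution structure easy to verify by hand.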

  6. Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements

    PubMed Central

    Arkenbout, Ewout A.; de Winter, Joost C. F.; Breedveld, Paul

    2015-01-01

    Vision based interfaces for human computer interaction have gained increasing attention over the past decade. This study presents a data fusion approach of the Nimble VR vision based system, using the Kinect camera, with the contact based 5DT Data Glove. Data fusion was achieved through a Kalman filter. The Nimble VR and filter output were compared using measurements performed on (1) a wooden hand model placed in various static postures and orientations; and (2) three differently sized human hands during active finger flexions. Precision and accuracy of joint angle estimates as a function of hand posture and orientation were determined. Moreover, in light of possible self-occlusions of the fingers in the Kinect camera images, data completeness was assessed. Results showed that the integration of the Data Glove through the Kalman filter provided for the proximal interphalangeal (PIP) joints of the fingers a substantial improvement of 79% in precision, from 2.2 deg to 0.9 deg. Moreover, a moderate improvement of 31% in accuracy (being the mean angular deviation from the true joint angle) was established, from 24 deg to 17 deg. The metacarpophalangeal (MCP) joint was relatively unaffected by the Kalman filter. Moreover, the Data Glove increased data completeness, thus providing a substantial advantage over the sole use of the Nimble VR system. PMID:26694395

  7. Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements.

    PubMed

    Arkenbout, Ewout A; de Winter, Joost C F; Breedveld, Paul

    2015-12-15

    Vision based interfaces for human computer interaction have gained increasing attention over the past decade. This study presents a data fusion approach of the Nimble VR vision based system, using the Kinect camera, with the contact based 5DT Data Glove. Data fusion was achieved through a Kalman filter. The Nimble VR and filter output were compared using measurements performed on (1) a wooden hand model placed in various static postures and orientations; and (2) three differently sized human hands during active finger flexions. Precision and accuracy of joint angle estimates as a function of hand posture and orientation were determined. Moreover, in light of possible self-occlusions of the fingers in the Kinect camera images, data completeness was assessed. Results showed that the integration of the Data Glove through the Kalman filter provided for the proximal interphalangeal (PIP) joints of the fingers a substantial improvement of 79% in precision, from 2.2 deg to 0.9 deg. Moreover, a moderate improvement of 31% in accuracy (being the mean angular deviation from the true joint angle) was established, from 24 deg to 17 deg. The metacarpophalangeal (MCP) joint was relatively unaffected by the Kalman filter. Moreover, the Data Glove increased data completeness, thus providing a substantial advantage over the sole use of the Nimble VR system.
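The variance-weighted fusion underlying a Kalman update, as used in the study above, can be sketched for a single joint angle; the camera and glove variances here are illustrative, not the paper's values.

```python
def fuse(angle_cam, var_cam, angle_glove, var_glove):
    # Scalar Kalman-style update: the glove reading corrects the camera
    # estimate in proportion to their relative uncertainties.
    gain = var_cam / (var_cam + var_glove)
    fused = angle_cam + gain * (angle_glove - angle_cam)
    fused_var = (1.0 - gain) * var_cam
    return fused, fused_var

print(fuse(24.0, 1.0, 20.0, 1.0))  # → (22.0, 0.5)
```

Note that the fused variance is always smaller than either input variance, which is the mechanism behind the precision improvements reported above.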

  8. Assessing the Performance of a Network of Low Cost Particulate Matter Sensors Deployed in Sacramento, California

    NASA Astrophysics Data System (ADS)

    Mukherjee, A. D.; Brown, S. G.; McCarthy, M. C.

    2017-12-01

A new generation of low cost air quality sensors has the potential to provide valuable information on the spatial-temporal variability of air pollution - if the measurements have sufficient quality. This study examined the performance of a particulate matter sensor model, the AirBeam (HabitatMap Inc., Brooklyn, NY), over a three month period in the urban environment of Sacramento, California. Nineteen AirBeam sensors were deployed at a regulatory air monitoring site collocated with meteorology measurements and as a local network over an 80 km2 domain in Sacramento, CA. This study presents the methodology to evaluate the precision, accuracy, and reliability of the sensors over a range of meteorological and aerosol conditions. The sensors demonstrated a robust degree of precision during collocated measurement periods (R2 = 0.98 - 0.99) and a moderate degree of correlation against a Beta Attenuation Monitor PM2.5 monitor (R2 = 0.6). A normalization correction is applied during the study period so that each AirBeam sensor in the network reports a comparable value. The role of the meteorological environment on the accuracy of the sensor measurements is investigated, along with the possibility of improving the measurements through a meteorology-weighted correction. The data quality of the network of sensors is examined, and the spatial variability of particulate matter through the study domain derived from the sensor network is presented.
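Collocation precision of the kind reported above (R2 between paired sensors) can be computed as a plain coefficient of determination; the readings below are toy data, not measurements from the study.

```python
def r_squared(x, y):
    # Coefficient of determination between two paired sensor series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

print(r_squared([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # → 1.0
```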

  9. Stability of suxamethonium in pharmaceutical solution for injection by validated stability-indicating chromatographic method.

    PubMed

    Beck, William; Kabiche, Sofiane; Balde, Issa-Bella; Carret, Sandra; Fontan, Jean-Eudes; Cisternino, Salvatore; Schlatter, Joël

    2016-12-01

    To assess the stability of pharmaceutical suxamethonium (succinylcholine) solution for injection by validated stability-indicating chromatographic method in vials stored at room temperature. The chromatographic assay was achieved by using a detector wavelength set at 218 nm, a C18 column, and an isocratic mobile phase (100% of water) at a flow rate of 0.6 mL/min for 5 minutes. The method was validated according to the International Conference on Harmonization guidelines with respect to the stability-indicating capacity of the method including linearity, limits of detection and quantitation, precision, accuracy, system suitability, robustness, and forced degradations. Linearity was achieved in the concentration range of 5 to 40 mg/mL with a correlation coefficient higher than 0.999. The limits of detection and quantification were 0.8 and 0.9 mg/mL, respectively. The percentage relative standard deviation for intraday (1.3-1.7) and interday (0.1-2.0) precision was found to be less than 2.1%. Accuracy was assessed by the recovery test of suxamethonium from solution for injection (99.5%-101.2%). Storage of suxamethonium solution for injection vials at ambient temperature (22°C-26°C) for 17 days demonstrated that at least 95% of original suxamethonium concentration remained stable. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.

    PubMed

    Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin

    2016-01-01

First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates that machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.
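The binary set-up described above can be caricatured in a few lines: record pairs are reduced to numeric similarity features, a classifier labels each pair duplicate or not, and accuracy is the fraction of correct labels. A trivial threshold rule (a hypothetical stand-in, not the paper's learned model) plays the classifier here.

```python
def classify(features, threshold=0.5):
    # Stand-in "model": mean feature similarity above a threshold
    # flags the record pair as a duplicate.
    return 1 if sum(features) / len(features) >= threshold else 0

def accuracy(pairs, labels):
    # Fraction of pairs whose predicted label matches the expert label.
    hits = sum(classify(f) == y for f, y in zip(pairs, labels))
    return hits / len(labels)

print(accuracy([[0.9, 0.8], [0.1, 0.2]], [1, 0]))  # → 1.0
```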

  11. Tightly Coupled Integration of GPS Ambiguity Fixed Precise Point Positioning and MEMS-INS through a Troposphere-Constrained Adaptive Kalman Filter

    PubMed Central

    Han, Houzeng; Xu, Tianhe; Wang, Jian

    2016-01-01

    Precise Point Positioning (PPP) makes use of the undifferenced pseudorange and carrier phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP is largely restricted by the observation environment. The positioning performance will be significantly degraded when GPS operates under challenging environments, if less than five satellites are present. A commonly used approach is integrating a low cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with inertial navigation system (INS) using an Extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere constrained approach, which utilizes external tropospheric delay as virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling considering the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low cost inertial sensor was conducted to validate the improvement on positioning performance with the proposed approach. The results show that the positioning accuracy has been improved with inertial aiding. Centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere constrained solutions, a significant improvement for the height component has been obtained. 
The overall positioning accuracies of the height component are improved by 30.36%, 16.95% and 24.07% for three different convergence times, i.e., 60, 50 and 30 min, respectively. It shows that the ambiguity-fixed horizontal positioning accuracy has been significantly improved. When compared with the conventional PPP solution, it can be seen that position accuracies are improved by 19.51%, 61.11% and 23.53% for the north, east and height components, respectively, after one hour convergence through the troposphere constraint fixed PPP/INS with adaptive covariance model. PMID:27399721

  12. Clinical evaluation of the FreeStyle Precision Pro system.

    PubMed

    Brazg, Ronald; Hughes, Kristen; Martin, Pamela; Coard, Julie; Toffaletti, John; McDonnell, Elizabeth; Taylor, Elizabeth; Farrell, Lausanne; Patel, Mona; Ward, Jeanne; Chen, Ting; Alva, Shridhara; Ng, Ronald

    2013-06-05

A new version of international standard (ISO 15197) and CLSI Guideline (POCT12) with more stringent accuracy criteria are near publication. We evaluated the glucose test performance of the FreeStyle Precision Pro system, a new blood glucose monitoring system (BGMS) designed to enhance accuracy for point-of-care testing (POCT). Precision, interference and system accuracy with 503 blood samples from capillary, venous and arterial sources were evaluated in a multicenter study. Study results were analyzed and presented in accordance with the specifications and recommendations of the final draft ISO 15197 and the new POCT12. The FreeStyle Precision Pro system demonstrated acceptable precision (CV <5%), no interference across a hematocrit range of 15-65%, and, except for xylose, no interference from 24 of 25 potentially interfering substances. It also met all accuracy criteria specified in the final draft ISO 15197 and POCT12, with 97.3-98.9% of the individual results of various blood sample types agreeing within ±12 mg/dl of the laboratory analyzer values at glucose concentrations <100 mg/dl and within ±12.5% of the laboratory analyzer values at glucose concentrations ≥100 mg/dl. The FreeStyle Precision Pro system met the tighter accuracy requirements, providing a means for enhancing accuracy for point-of-care blood glucose monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
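The agreement criterion summarised in the abstract (±12 mg/dl below 100 mg/dl, ±12.5% at or above) can be expressed directly; the function name and sample values are illustrative.

```python
def within_criteria(meter, reference):
    # Agreement rule as summarised in the abstract (final draft ISO 15197):
    # ±12 mg/dl below 100 mg/dl, ±12.5% at or above 100 mg/dl.
    if reference < 100:
        return abs(meter - reference) <= 12
    return abs(meter - reference) <= 0.125 * reference

print(within_criteria(90, 95), within_criteria(230, 200))  # → True False
```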

  13. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn.
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when the visibility of Galileo and BeiDou satellites is substantially long within an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to a position accuracy and convergence time deviating from satisfactory geodetic thresholds. By contrast, respective multi-GNSS PPP solutions not only show improvement, but also lead to geodetic level accuracies even at a 30° elevation cut-off. Finally, the GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases, which are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. Analogous improvement is obtained in respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.

  14. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1 sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  15. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework to 3D. The SIFT feature extractor locates minima and maxima in the difference of Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096 element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
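Descriptor matching by feature-vector distance, as described above, reduces to a nearest-neighbour search; the toy two-element vectors below stand in for the 4096-element SIFT descriptors.

```python
import math

def match(desc_a, desc_b):
    # For each descriptor in scan A, pick the nearest descriptor in
    # scan B by Euclidean distance.
    pairs = []
    for i, a in enumerate(desc_a):
        dists = [math.dist(a, b) for b in desc_b]
        pairs.append((i, min(range(len(dists)), key=dists.__getitem__)))
    return pairs

print(match([[0.0, 0.0], [1.0, 1.0]], [[1.0, 1.0], [0.0, 0.0]]))  # → [(0, 1), (1, 0)]
```

Real pipelines typically add a ratio test or mutual-consistency check before estimating the rigid transform, so ambiguous matches are discarded.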

  16. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network

    PubMed Central

    Qi, Jun; Liu, Guo-Ping

    2017-01-01

This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) for the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides using the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when UIPS works on the line-of-sight (LOS) signal. PMID:29113126
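The TOF-to-position step can be sketched as ranges from time-of-flight followed by a linearised 2D multilateration: subtracting the first range equation from the others yields a 2x2 linear system in (x, y). The beacon layout and the speed of sound are illustrative assumptions, not the paper's configuration.

```python
def locate(beacons, tofs, c=343.0):
    # Ranges from TOF, then linearised multilateration solved by
    # Cramer's rule (three beacons, 2D).
    d = [c * t for t in tofs]
    (x0, y0), (x1, y1), (x2, y2) = beacons
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d[0]**2 - d[1]**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d[0]**2 - d[2]**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Beacons at (0,0), (10,0), (0,10); TOFs for a target at (3, 4).
print(locate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
             [5 / 343, 65 ** 0.5 / 343, 45 ** 0.5 / 343]))
```

For the TOFs above the recovered position is (3, 4) up to floating-point error; with more than three beacons the same linearisation is solved by least squares.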

  17. Active laser radar (lidar) for measurement of corresponding height and reflectance images

    NASA Astrophysics Data System (ADS)

    Froehlich, Christoph; Mettenleiter, M.; Haertl, F.

    1997-08-01

For the survey and inspection of environmental objects, non-tactile, robust and precise imaging of height and depth is the basic sensor technology. For visual inspection, surface classification, and documentation purposes, however, additional information concerning reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar, supporting consistent 3D height and 2D reflectance images. The laser radar is an optical-wavelength system, and is comparable to devices built by ERIM, Odetics, and Perceptron, measuring the range between sensor and target surfaces as well as the reflectance of the target surface, which corresponds to the magnitude of the back scattered laser energy. In contrast to these range sensing devices, the laser radar under consideration is designed for high speed and precise operation in both indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a laser range measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports on design details of the laser radar for surface inspection tasks. It outlines the performance requirements and introduces the measurement principle. The hardware design, including the main modules, such as the laser head, the high frequency unit, the laser beam deflection system, and the digital signal processing unit, is discussed. The signal processing unit consists of dedicated signal processors for real-time sensor data preprocessing as well as a sensor computer for high-level image analysis and feature extraction. The paper focuses on performance data of the system, including noise, drift over time, precision, and accuracy with measurements. It discusses the influences of ambient light, surface material of the target, and ambient temperature for range accuracy and range precision.
Furthermore, experimental results from inspection of buildings, monuments and industrial environments are presented. The paper concludes by summarizing results achieved in industrial environments and gives a short outlook to future work.

  18. Combining simplicity with cost-effectiveness: Investigation of potential counterfeit of proton pump inhibitors through simulated formulations using thin-layer chromatography.

    PubMed

    Bhatt, Nejal M; Chavada, Vijay D; Sanyal, Mallika; Shrivastav, Pranav S

    2016-11-18

A simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the analysis of proton pump inhibitors (PPIs) and their co-formulated drugs, available as binary combinations. Planar chromatographic separation was achieved using a single mobile phase comprising toluene:iso-propanol:acetone:ammonia 5.0:2.3:2.5:0.2 (v/v/v/v) for the analysis of 14 analytes on an aluminium-backed layer of silica gel 60 F254. Densitometric determination of the separated spots was done at 290 nm. The method was validated according to ICH guidelines for linearity, precision and accuracy, sensitivity, specificity and robustness. The method showed good linear response for the selected drugs as indicated by the high values of correlation coefficients (≥0.9993). The limit of detection and limit of quantitation were in the range of 6.9-159.2 ng/band and 20.8-478.1 ng/band respectively for all the analytes. The optimized conditions afforded adequate resolution of each PPI from their co-formulated drugs and provided unambiguous identification of the co-formulated drugs from their retardation factors (hRf). The only limitation of the method was the inability to separate two PPIs, rabeprazole and lansoprazole, from each other. Nevertheless, it is proposed that peak spectra recording and comparison with standard drug spots can be a viable option for assignment of TLC spots. The method performance was assessed by analyzing different laboratory simulated mixtures and some marketed formulations of the selected drugs. The developed method was successfully used to investigate potential counterfeit of PPIs through a series of simulated formulations with good accuracy and precision. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Imaging laser radar for high-speed monitoring of the environment

    NASA Astrophysics Data System (ADS)

    Froehlich, Christoph; Mettenleiter, M.; Haertl, F.

    1998-01-01

In order to establish mobile robot operations and to realize survey and inspection tasks, robust and precise measurement of the geometry of the 3D environment is the basic sensor technology. For visual inspection, surface classification, and documentation purposes, however, additional information concerning reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar, supporting consistent range and reflectance images. The laser radar developed at Zoller + Froehlich (ZF) is an optical-wavelength system measuring the range between sensor and target surface as well as the reflectance of the target surface, which corresponds to the magnitude of the back scattered laser energy. In contrast to other range sensing devices, the ZF system is designed for high-speed and high-performance operation in real indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a single-point laser measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports details of the laser radar, which is designed to cover requirements of medium range applications. It outlines the performance requirements and introduces the two-frequency phase-shift measurement principle. The hardware design of the single-point laser measurement system, including the main modules, such as the laser head, the high frequency unit and the signal processing unit, is discussed in detail. The paper focuses on performance data of the laser radar, including noise, drift over time, precision, and accuracy with measurements. It discusses the influences of ambient light, surface material of the target, and ambient temperature for range accuracy and range precision. Furthermore, experimental results from inspection of tunnels, buildings, monuments and industrial environments are presented.
The paper concludes by summarizing results and gives a short outlook to future work.
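The two-frequency phase-shift principle mentioned above can be sketched numerically: the low modulation frequency resolves which ambiguity interval the target sits in, and the high frequency refines the range within it. The frequencies and function name here are hypothetical, not the ZF system's actual parameters.

```python
import math

def phase_shift_range(phi_low, f_low, phi_high, f_high, c=299792458.0):
    # Coarse range from the low frequency picks the ambiguity interval
    # of the high frequency; the high-frequency phase gives the fine range.
    coarse = (phi_low / (2 * math.pi)) * c / (2 * f_low)
    interval = c / (2 * f_high)          # unambiguous range of f_high
    frac = phi_high / (2 * math.pi)
    n = round((coarse - frac * interval) / interval)
    return (n + frac) * interval
```

For example, a target at 7.3 m probed with assumed modulation frequencies of 1 MHz (coarse) and 100 MHz (fine) is recovered to sub-micrometre numerical error, since the coarse estimate only needs to be accurate to within half of the 1.5 m fine interval.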

  20. Development and validation of high-performance liquid chromatography and high-performance thin-layer chromatography methods for the quantification of khellin in Ammi visnaga seed

    PubMed Central

    Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar

    2015-01-01

Objective: The present study aimed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as a mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as a mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness and accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both HPLC and HPTLC methods, with concentration ranges of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The % relative standard deviation values for method precision were found to be 0.63–1.97% and 0.62–2.05% in HPLC and HPTLC for khellin, respectively. Accuracy of the method was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and several formulations containing it as an ingredient. PMID:26681890

  1. saltPAD: A New Analytical Tool for Monitoring Salt Iodization in Low Resource Settings

    PubMed Central

    Myers, Nicholas M.; Strydom, Emmerentia Elza; Sweet, James; Sweet, Christopher; Spohrer, Rebecca; Dhansay, Muhammad Ali; Lieberman, Marya

    2016-01-01

    We created a paper test card that measures a common iodizing agent, iodate, in salt. To test the analytical metrics, usability, and robustness of the paper test card when it is used in low resource settings, the South African Medical Research Council and GroundWork performed independent validation studies of the device. The accuracy and precision metrics from both studies were comparable. In the SAMRC study, more than 90% of the test results (n=1704) were correctly classified as corresponding to adequately or inadequately iodized salt. The cards are suitable for market and household surveys to determine whether salt is adequately iodized. Further development of the cards will improve their utility for monitoring salt iodization during production. PMID:29942380

  2. Development and Validation of Stability-Indicating Derivative Spectrophotometric Methods for Determination of Dronedarone Hydrochloride

    NASA Astrophysics Data System (ADS)

    Chadha, R.; Bali, A.

    2016-05-01

    Rapid, sensitive, cost-effective and reproducible stability-indicating derivative spectrophotometric methods have been developed for the estimation of dronedarone HCl employing peak-zero (P-0) and peak-peak (P-P) techniques, and their stability-indicating potential was assessed in forced-degraded solutions of the drug. The methods were validated with respect to linearity, accuracy, precision and robustness. Excellent linearity was observed over the concentration range 2-40 μg/ml (r² = 0.9986). LOD and LOQ values for the proposed methods ranged from 0.42-0.46 μg/ml and 1.21-1.27 μg/ml, respectively, and excellent recovery of the drug was obtained in the tablet samples (99.70 ± 0.84%).
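    LOD and LOQ figures like those above are commonly derived from the calibration curve via the ICH conventions LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation of the fit and S its slope. A minimal sketch (the calibration data below are hypothetical, not the study's):

```python
import math

def calibration_stats(conc, resp):
    """Ordinary least-squares fit of response vs. concentration.
    Returns slope, intercept, r^2, and ICH-style LOD/LOQ computed as
    3.3*sd/slope and 10*sd/slope from the residual standard deviation."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, resp)]
    sd = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std. dev.
    syy = sum((y - my) ** 2 for y in resp)
    r2 = (sxy * sxy) / (sxx * syy)
    return slope, intercept, r2, 3.3 * sd / slope, 10 * sd / slope

# Hypothetical calibration points: concentration (ug/ml) vs. absorbance
conc = [2.0, 5.0, 10.0, 20.0, 30.0, 40.0]
resp = [0.041, 0.102, 0.199, 0.405, 0.601, 0.798]
slope, intercept, r2, lod, loq = calibration_stats(conc, resp)
```

By construction LOQ is about three times LOD, which matches the ratio between the reported 0.42-0.46 μg/ml and 1.21-1.27 μg/ml ranges.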

  3. A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.

    PubMed

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang

    2017-06-28

    Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor, with the aid of an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. With the goal of diminishing the impacts of factors such as sensor drift and the devices themselves, the innovative aspect of this method is to employ static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.

  4. A comprehensive evaluation of the PRESAGE/optical-CT 3D dosimetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakhalkar, H. S.; Adamovics, J.; Ibbott, G.

    2009-01-15

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5x scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H and N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1×3 cm²) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response at the three dose levels. The OCTOPUS 5x commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (reduced to approximately 2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels.
    Postirradiation, the dosimeters were all found to exhibit a slight increase in opaqueness with time. However, the relative dose distribution was found to be extremely stable up to 90 h postirradiation, indicating excellent temporal stability. Excellent interdosimeter reproducibility was also observed between the four dosimeters. Gamma comparison maps between each dosimeter and the average distribution of all four dosimeters showed full agreement at the 2% difference, 2 mm distance-to-agreement level. Dose readout from the 3D dosimetry system was found to agree better with independent film measurement than with treatment planning system calculations in penumbral regions and was generally accurate to within 2% dose difference and 2 mm distance-to-agreement. In conclusion, these studies demonstrate excellent precision, accuracy, robustness, and reproducibility of the PRESAGE/optical-CT system for relative 3D dosimetry and support its potential integration with the RPC H and N credentialing phantom for IMRT verification.
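    The gamma comparisons used throughout this evaluation combine a dose-difference criterion (e.g. 2%) with a distance-to-agreement criterion (e.g. 1 mm); a point passes when its gamma value is at most 1. A minimal 1D sketch of the gamma index (an illustration of the concept, not the authors' 3D implementation):

```python
import math

def gamma_1d(positions, ref_dose, eval_dose, dose_crit=0.02, dist_crit=1.0):
    """1D gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((dose diff / dose tol)^2 + (dist / dist tol)^2).
    dose_crit is a fraction of the reference maximum; dist_crit is in mm."""
    d_norm = dose_crit * max(ref_dose)
    gammas = []
    for xr, dr in zip(positions, ref_dose):
        g = min(
            math.sqrt(((de - dr) / d_norm) ** 2 + ((xe - xr) / dist_crit) ** 2)
            for xe, de in zip(positions, eval_dose)
        )
        gammas.append(g)
    return gammas

# Hypothetical reference and evaluated dose profiles on a 1 mm grid
ref = [0.0, 1.0, 2.5, 4.0, 2.5, 1.0, 0.0]
ev = [0.0, 1.05, 2.45, 4.02, 2.55, 0.98, 0.0]
g = gamma_1d(list(range(7)), ref, ev)
pass_rate = sum(gi <= 1.0 for gi in g) / len(g)  # fraction passing 2%/1 mm
```

Reporting the pass rate over all points, as in the 2%/1 mm and 2%/2 mm comparisons above, summarizes agreement between two distributions in a single number.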

  5. Development of a stability-indicating UPLC method for determining olanzapine and its associated degradation products present in active pharmaceutical ingredients and pharmaceutical dosage forms.

    PubMed

    Krishnaiah, Ch; Vishnu Murthy, M; Kumar, Ramesh; Mukkanti, K

    2011-03-25

    A simple, sensitive and reproducible ultra-performance liquid chromatography (UPLC) method coupled with a photodiode array detector was developed for the quantitative determination of olanzapine (OLN) in API and pharmaceutical dosage forms. The method is applicable to the quantification of related substances and assays of drug substances. Chromatographic separation was achieved on an Acquity UPLC BEH C-18 column (100 mm × 2.1 mm, 1.7 μm) with gradient elution within a short runtime of 10.0 min. The eluted compounds were monitored at 250 nm, the flow rate was 0.3 mL/min, and the column oven temperature was maintained at 27°C. The resolution of OLN and eight impurities (potential impurities, by-products and degradation products) was greater than 2.0 for all pairs of components. The high correlation coefficient values (r² > 0.9991) indicated clear correlations between the investigated compound concentrations and their peak areas within the test ranges. The repeatability and intermediate precision, expressed as RSD, were less than 2.4%. The accuracy and validity of the method were further ascertained by performing recovery studies via a spike method. The accuracy of the method, expressed as relative error, was satisfactory. No interference was observed from concomitant substances normally added to the tablets. The drug was subjected to the International Conference on Harmonization (ICH)-prescribed hydrolytic, oxidative, photolytic and thermal stress conditions. The performance of the method was validated according to the present ICH guidelines for specificity, limit of detection, limit of quantification, linearity, accuracy, precision, ruggedness and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. HPLC-MS/MS method for dexmedetomidine quantification with Design of Experiments approach: application to pediatric pharmacokinetic study.

    PubMed

    Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta

    2017-02-01

    The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic Design of Experiments approach was applied to optimize ESI source parameters and to evaluate method robustness; as a result, a rapid, stable and cost-effective assay was developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range of 5-2500 pg/ml (R² > 0.98). The accuracies and intra- and interday precisions were less than 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. The Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as reduced usage of the chemicals necessary for regular method optimization. The proposed technique was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.

  7. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase pose estimation accuracy as well as reduce failure situations. Second, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1-5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.

  8. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  9. The challenge associated with the robust computation of meteor velocities from video and photographic records

    NASA Astrophysics Data System (ADS)

    Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.

    2017-09-01

    The CABERNET project was designed to push the limits for obtaining accurate measurements of meteoroid orbits from photographic and video meteor camera recordings. The discrepancy between the measured and theoretical orbits of these objects heavily depends on the semi-major axis determination, and thus on the reliability of the pre-atmospheric velocity computation. With a spatial resolution of 0.01° per pixel and a temporal resolution of up to 10 ms, CABERNET should be able to provide accurate measurements of velocities and trajectories of meteors. To achieve this, it is necessary to improve the precision of the data reduction processes, and especially the determination of the meteor's velocity. In this work, most of the steps of the velocity computation are thoroughly investigated in order to reduce the uncertainties and error contributions at each stage of the reduction process. The accuracy of the measurement of meteor centroids is established and results in a precision of 0.09 pixels for CABERNET, which corresponds to 3.24″. Several methods to compute the velocity were investigated, based on the trajectory determination algorithms described in Ceplecha (1987) and Borovicka (1990), as well as the multi-parameter fitting (MPF) method proposed by Gural (2012). In the case of the MPF, many optimization methods were implemented in order to find the most efficient and robust technique to solve the minimization problem. The entire data reduction process is assessed using simulated meteors with different geometrical configurations and deceleration behaviors. It is shown that the multi-parameter fitting method proposed by Gural (2012) is the most accurate method to compute the pre-atmospheric velocity in all circumstances. Many techniques that assume constant velocity at the beginning of the path, as derived from the trajectory determination using Ceplecha (1987) or Borovicka (1990), can lead to large errors for decelerating meteors.
The MPF technique also allows one to reliably compute the velocity for very low convergence angles (∼ 1°). Despite the better accuracy of this method, the poor conditioning of the velocity propagation models used in the meteor community and currently employed by the multi-parameter fitting method prevent us from optimally computing the pre-atmospheric velocity. Specifically, the deceleration parameters are particularly difficult to determine. The quality of the data provided by the CABERNET network limits the error induced by this effect to achieve an accuracy of about 1% on the velocity computation. Such a precision would not be achievable with lower resolution camera networks and today's commonly used trajectory reduction algorithms. To improve the performance of the multi-parameter fitting method, a linearly independent deceleration formulation needs to be developed.

  10. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    PubMed

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research in detection of seizures has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation into performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates, and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1.
Numerical results suggest that these models are robust and efficient for detecting epileptic seizure. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
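    The statistical measures listed above all derive from a binary confusion matrix; a minimal sketch with hypothetical counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    accuracy, true/false positive and negative rates, and precision."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    tpr = tp / (tp + fn)        # true positive rate (sensitivity)
    tnr = tn / (tn + fp)        # true negative rate (specificity)
    fpr = fp / (fp + tn)        # false positive rate
    fnr = fn / (fn + tp)        # false negative rate
    precision = tp / (tp + fp)
    return accuracy, tpr, tnr, fpr, fnr, precision

# A perfect classifier, as reported for the best models: TPR = TNR = 1,
# FPR = FNR = 0, and precision = 1 (counts here are hypothetical).
print(classification_metrics(tp=50, fp=0, tn=50, fn=0))
```

Reporting the full set of rates rather than accuracy alone is what guards against the unbalanced-dataset pitfall the abstract mentions: a trivial classifier can reach high accuracy on skewed data while its TPR collapses.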

  11. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    PubMed

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-06-01

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
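    Iteratively re-weighted least squares down-weights points with large residuals on each pass, so a gross outlier barely influences the final calibration line. A minimal straight-line sketch with L1-style weights (an illustration of the idea, not the authors' exact weighting scheme, and with hypothetical data):

```python
def irls_fit(x, y, iters=20, eps=1e-8):
    """Iteratively re-weighted least squares for a straight-line fit.
    Each pass re-weights points by 1/(|residual| + eps), approximating
    an L1 (robust) regression; eps avoids division by zero."""
    n = len(x)
    w = [1.0] * n                      # first pass is plain OLS
    slope = intercept = 0.0
    for _ in range(iters):
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        # Re-weight: points far from the current line get small weights
        w = [1.0 / (abs(yi - (slope * xi + intercept)) + eps)
             for xi, yi in zip(x, y)]
    return slope, intercept

# Hypothetical calibration data following y = 2x, with one gross outlier
x = [1, 2, 3, 4, 5, 6]
y = [2.0, 4.1, 5.9, 8.0, 30.0, 12.1]   # the point at x=5 is an outlier
print(irls_fit(x, y))  # slope stays near 2 despite the outlier
```

An OLS fit of the same data is pulled strongly toward the outlier, which is exactly the distortion of regression parameters (and hence of LOD/LOQ estimates) that the robust fittings in this study are designed to avoid.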

  12. Computer-aided detection of human cone photoreceptor inner segments using multi-scale circular voting

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Dubra, Alfredo; Tam, Johnny

    2016-03-01

    Cone photoreceptors are highly specialized cells responsible for the origin of vision in the human eye. Their inner segments can be noninvasively visualized using adaptive optics scanning light ophthalmoscopes (AOSLOs) with nonconfocal split detection capabilities. Monitoring the number of cones can lead to more precise metrics for real-time diagnosis and assessment of disease progression. Cell identification in split detection AOSLO images is hindered by cell regions with heterogeneous intensity arising from shadowing effects and low contrast boundaries due to overlying blood vessels. Here, we present a multi-scale circular voting approach to overcome these challenges through the novel combination of: 1) iterative circular voting to identify candidate cells based on their circular structures, 2) a multi-scale strategy to identify the optimal circular voting response, and 3) clustering to improve robustness while removing false positives. We acquired images from three healthy subjects at various locations on the retina and manually labeled cell locations to create ground-truth for evaluating the detection accuracy. The images span a large range of cell densities. The overall recall, precision, and F1 score were 91±4%, 84±10%, and 87±7% (Mean±SD). Results showed that our method for the identification of cone photoreceptor inner segments performs well even with low contrast cell boundaries and vessel obscuration. These encouraging results demonstrate that the proposed approach can robustly and accurately identify cells in split detection AOSLO images.

  13. Flow Cytometry: Evolution of Microbiological Methods for Probiotics Enumeration.

    PubMed

    Pane, Marco; Allesina, Serena; Amoruso, Angela; Nicola, Stefania; Deidda, Francesca; Mogna, Luca

    2018-05-14

    The purpose of this trial was to verify that the analytical method ISO 19344:2015 (E)-IDF 232:2015 (E) is valid and reliable for quantifying the concentration of the probiotic Lactobacillus rhamnosus GG (ATCC 53103) in a finished product formulation. Flow cytometry assay is emerging as an alternative rapid method for microbial detection, enumeration, and population profiling. The use of flow cytometry not only permits the determination of viable cell counts but also allows for enumeration of damaged and dead cell subpopulations. Results are expressed as TFU (Total Fluorescent Units) and AFU (Active Fluorescent Units). In December 2015, the International Standard ISO 19344-IDF 232 "Milk and milk products-Starter cultures, probiotics and fermented products-Quantification of lactic acid bacteria by flow cytometry" was published. This particular ISO can be applied universally, regardless of the species of interest. Analytical method validation was conducted on 3 different industrial batches of L. rhamnosus GG according to USP39<1225>/ICH Q2R1 in terms of: accuracy, precision (repeatability), intermediate precision (ruggedness), specificity, limit of quantification, linearity, range, and robustness. The data obtained on the 3 batches of finished product demonstrated the validity and robustness of the cytofluorimetric analysis. On the basis of the results obtained, the ISO 19344:2015 (E)-IDF 232:2015 (E) "Quantification of lactic acid bacteria by flow cytometry" can be used for the enumeration of L. rhamnosus GG in a finished product formulation.

  14. Validation of a Method for Cylindrospermopsin Determination in Vegetables: Application to Real Samples Such as Lettuce (Lactuca sativa L.)

    PubMed Central

    Prieto, Ana I.; Díez-Quijada, Leticia; Campos, Alexandre; Vasconcelos, Vitor

    2018-01-01

    Reports on the occurrence of the cyanobacterial toxin cylindrospermopsin (CYN) have increased worldwide because of CYN's toxic effects in humans and animals. If contaminated waters are used for plant irrigation, these could represent a possible CYN exposure route for humans. For the first time, a method employing solid phase extraction and quantification by ultra-performance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) of CYN was optimized in vegetable matrices such as lettuce (Lactuca sativa). The validated method showed a linear range from 5 to 500 ng CYN g−1 of fresh weight (f.w.), and detection and quantitation limits (LOD and LOQ) of 0.22 and 0.42 ng CYN g−1 f.w., respectively. The mean recoveries ranged between 85 and 104%, and the intermediate precision from 12.7 to 14.7%. The method was shown to be robust for the three different variables tested. Moreover, it was successfully applied to quantify CYN in edible lettuce leaves exposed to CYN-contaminated water (10 µg L−1), showing that the tolerable daily intake (TDI) of CYN could be exceeded in elderly high consumers. The validated method showed good results in terms of sensitivity, precision, accuracy, and robustness for CYN determination in leaf vegetables such as lettuce. More studies are needed in order to prevent the risks associated with the consumption of CYN-contaminated vegetables. PMID:29389882

  15. A Note on "Accuracy" and "Precision"

    ERIC Educational Resources Information Center

    Stallings, William M.; Gillmore, Gerald M.

    1971-01-01

    Advocates the use of "precision" rather than "accuracy" in defining reliability. These terms are consistently differentiated in certain sciences. Review of psychological and measurement literature reveals, however, interchangeable usage of the terms in defining reliability. (Author/GS)

  16. 40 CFR 92.105 - General equipment specifications.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accuracy and precision of 0.1 percent of absolute pressure at point or better. (2) Gauges and transducers used to measure any other pressures shall have an accuracy and precision of 1 percent of absolute...

  17. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.

  18. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  19. Automated Sperm Head Detection Using Intersecting Cortical Model Optimised by Particle Swarm Optimization.

    PubMed

    Tan, Weng Chun; Mat Isa, Nor Ashidi

    2016-01-01

    In human sperm motility analysis, sperm segmentation plays an important role in determining the location of multiple sperm cells. To ensure an improved segmentation result, the Laplacian of Gaussian filter is implemented as a kernel in a pre-processing step before applying the image segmentation process to automatically segment and detect human spermatozoa. This study proposes an intersecting cortical model (ICM), derived from several visual cortex models, to segment the sperm head region. Because the proposed method suffered from parameter selection, the ICM network is optimised using particle swarm optimization, where feature mutual information is introduced as the new fitness function. The final results showed that the proposed method is more accurate and robust than four state-of-the-art segmentation methods. The proposed method achieved rates of 98.14%, 98.82%, 86.46% and 99.81% in accuracy, sensitivity, specificity and precision, respectively, after testing with 1200 sperm cells. The proposed algorithm is expected to be implemented in analysing sperm motility because of its robustness and capability.

  20. A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery

    PubMed Central

    2018-01-01

    Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises localization of the object of interest using texture features and morphological information, and tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of hemorrhage detection in two prostate surgery videos was 98%. The results demonstrate the robustness of the automatic intraoperative object detection and tracking, which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
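    The Kalman-filter tracking in an architecture like this one can be illustrated with a minimal 1D constant-velocity filter that smooths noisy per-frame detections (a sketch of the general technique, not the paper's implementation):

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Track a position from noisy per-frame measurements with a
    constant-velocity Kalman filter. State is [position, velocity];
    q is process noise, r is measurement noise variance."""
    x = [measurements[0], 0.0]                 # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]               # state covariance
    out = []
    for z in measurements:
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with position measurement z (H = [1, 0])
        s = P[0][0] + r                        # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]         # Kalman gain
        y = z - x[0]                           # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out

# Hypothetical noisy detections of an instrument tip along one axis
track = kalman_track([0.0, 1.2, 1.9, 3.1, 4.0, 5.1])
```

Because the filter carries a velocity estimate, it keeps predicting a plausible position through frames where the detector briefly fails, which is the robustness benefit the abstract attributes to the Kalman stage.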

  1. High performance liquid chromatography for simultaneous determination of xipamide, triamterene and hydrochlorothiazide in bulk drug samples and dosage forms.

    PubMed

    Abd El-Hay, Soad S; Hashem, Hisham; Gouda, Ayman A

    2016-03-01

    A novel, simple and robust high-performance liquid chromatography (HPLC) method was developed and validated for simultaneous determination of xipamide (XIP), triamterene (TRI) and hydrochlorothiazide (HCT) in their bulk powders and dosage forms. Chromatographic separation was carried out in less than two minutes. The separation was performed on a RP C-18 stationary phase with an isocratic elution system consisting of 0.03 mol L(-1) orthophosphoric acid (pH 2.3) and acetonitrile (ACN) as the mobile phase in a 50:50 ratio, at a 2.0 mL min(-1) flow rate at room temperature. Detection was performed at 220 nm. Validation covered system suitability, limits of detection and quantitation, accuracy, precision, linearity and robustness. Calibration curves were rectilinear over the range of 0.195-100 μg mL(-1) for all the drugs studied. Recovery values were 99.9, 99.6 and 99.0% for XIP, TRI and HCT, respectively. The method was applied to simultaneous determination of the studied analytes in their pharmaceutical dosage forms.

  2. Robust estimation of adaptive tensors of curvature by tensor voting.

    PubMed

    Tong, Wai-Shun; Tang, Chi-Keung

    2005-03-01

    Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision and rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm was validated through detailed quantitative experiments, performing better on a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
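
    Once a curvature tensor has been estimated, the principal curvatures follow from the eigenvalues of the symmetric 2x2 tensor in the tangent plane. A minimal closed-form sketch of that final step (illustrative only, not the authors' tensor-voting code):

```python
import math

def principal_curvatures(a, b, c):
    """Eigenvalues of the symmetric 2x2 curvature tensor [[a, b], [b, c]]
    expressed in the tangent plane: the principal curvatures k1 >= k2."""
    m = (a + c) / 2.0
    d = math.hypot((a - c) / 2.0, b)   # half the eigenvalue gap
    return m + d, m - d

def gaussian_and_mean(a, b, c):
    """Gaussian curvature K = k1*k2 and mean curvature H = (k1 + k2)/2."""
    k1, k2 = principal_curvatures(a, b, c)
    return k1 * k2, (k1 + k2) / 2.0
```

    The sign of K recovered this way is exactly the qualitative quantity the earlier two-pass algorithm estimated.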

  3. Robust non-rigid registration algorithm based on local affine registration

    NASA Astrophysics Data System (ADS)

    Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng

    2018-04-01

    To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to perform point-set non-rigid registration from coarse to fine. In each iteration, the data and model point sets are partitioned into sub point sets and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between corresponding sub point sets, and this transformation is used to update the sub data point sets and their shape control points. When the algorithm reaches the maximum iteration depth K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
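
    The local affine step — finding the best affine transformation between corresponding sub point sets — is a linear least-squares problem. A minimal 2D sketch assuming the correspondences are already known (the paper's ICP additionally estimates them each iteration):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src[i] -> dst[i].

    Returns ((a, b, tx), (c, d, ty)) with x' = a*x + b*y + tx and
    y' = c*x + d*y + ty, solved via the normal equations; the two
    output coordinates are independent.
    """
    ata = [[0.0] * 3 for _ in range(3)]
    atbx, atby = [0.0] * 3, [0.0] * 3
    for (x, y), (u, v) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atbx[i] += row[i] * u
            atby[i] += row[i] * v
    return tuple(solve3(ata, atbx)), tuple(solve3(ata, atby))
```

    In the hierarchical scheme, one such fit is solved per sub point set per iteration.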

  4. Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs

    PubMed Central

    Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie

    2017-01-01

    Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time-varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured by a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms, and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than detection based on the slow quasi-static component, although both components allow localizing the object. PMID:28119582

  5. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.
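
    The core bookkeeping in CTSA — comparing the pairwise registration transforms and expressing the result in a stem-referenced coordinate system — amounts to composing one rigid transform with the inverse of the other. A 2D sketch of that algebra (the method itself works in 3D; the tuple convention and function names are illustrative):

```python
import math

# A 2D rigid transform is (theta, tx, ty): rotate by theta, then translate.

def apply(T, p):
    th, tx, ty = T
    x, y = p
    return (math.cos(th) * x - math.sin(th) * y + tx,
            math.sin(th) * x + math.cos(th) * y + ty)

def inverse(T):
    """T_inv such that T_inv(T(p)) == p: rotate by -theta, translation -R(-theta)t."""
    th, tx, ty = T
    c, s = math.cos(-th), math.sin(-th)
    return (-th, -(c * tx - s * ty), -(s * tx + c * ty))

def compose(T2, T1):
    """Transform applying T1 first, then T2."""
    th1, tx1, ty1 = T1
    th2 = T2[0]
    x, y = apply(T2, (tx1, ty1))
    return (th1 + th2, x, y)

def relative_migration(T_stem, T_bone):
    """How the bone moved between scans, expressed relative to the stem:
    the identity transform means zero stem-bone migration."""
    return compose(inverse(T_stem), T_bone)
```

    Decomposing the resulting transform yields the translation and rotation parameters reported by CTSA.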

  6. Using Public Data for Comparative Proteome Analysis in Precision Medicine Programs.

    PubMed

    Hughes, Christopher S; Morin, Gregg B

    2018-03-01

    Maximizing the clinical utility of information obtained in longitudinal precision medicine programs would benefit from robust comparative analyses to known information to assess biological features of patient material toward identifying the underlying features driving their disease phenotype. Herein, the potential for utilizing publicly deposited mass-spectrometry-based proteomics data to perform inter-study comparisons of cell-line or tumor-tissue materials is investigated. To investigate the robustness of comparison between MS-based proteomics studies carried out with different methodologies, deposited data representative of label-free (MS1) and isobaric tagging (MS2 and MS3 quantification) are utilized. In-depth quantitative proteomics data acquired from analysis of ovarian cancer cell lines revealed the robust recapitulation of observable gene expression dynamics between individual studies carried out using significantly different methodologies. The observed signatures enable robust inter-study clustering of cell line samples. In addition, the ability to classify and cluster tumor samples based on observed gene expression trends when using a single patient sample is established. With this analysis, relevant gene expression dynamics are obtained from a single patient tumor, in the context of a precision medicine analysis, by leveraging a large cohort of repository data as a comparator. Together, these data establish the potential for state-of-the-art MS-based proteomics data to serve as resources for robust comparative analyses in precision medicine applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. PET guidance for liver radiofrequency ablation: an evaluation

    NASA Astrophysics Data System (ADS)

    Lei, Peng; Dandekar, Omkar; Mahmoud, Faaiza; Widlus, David; Malloy, Patrick; Shekhar, Raj

    2007-03-01

    Radiofrequency ablation (RFA) is emerging as the primary mode of treatment of unresectable malignant liver tumors. With current intraoperative imaging modalities, quick, precise, and complete localization of lesions remains a challenge for liver RFA. Fusion of intraoperative CT and preoperative PET images, which relies on PET and CT registration, can produce a new image with complementary metabolic and anatomic data and thus greatly improve targeting accuracy. Unlike neurological images, abdominal images aligned by a combined PET/CT scanner are prone to errors as a result of large nonrigid misalignment. Our use of a normalized mutual information-based 3D nonrigid registration technique has proven powerful for whole-body PET and CT registration. We demonstrate here that this technique is capable of acceptable abdominal PET and CT registration as well. In five clinical cases, both qualitative and quantitative validation showed that the registration is robust and accurate. Quantitative accuracy was evaluated by comparing the algorithm's results with those of clinical experts. The registration error is well within the allowable margin for liver RFA. Study findings show the technique's potential to enable the augmentation of intraoperative CT with preoperative PET to reduce procedure time, avoid repeating procedures, provide clinicians with complementary functional/anatomic maps, avoid omitting dispersed small lesions, and improve the accuracy of tumor targeting in liver RFA.
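
    The normalized mutual information similarity metric driving the registration can be sketched from discrete intensity histograms (a toy version; real implementations bin continuous intensities and interpolate during optimization):

```python
import math
from collections import Counter

def entropy(counts, n):
    """Shannon entropy (nats) of an empirical distribution given by counts."""
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def nmi(img_a, img_b):
    """Normalized mutual information of two equally sized intensity lists:
    NMI = (H(A) + H(B)) / H(A, B). It is 2.0 for identical non-constant
    images and falls toward 1.0 as the joint histogram disperses."""
    n = len(img_a)
    ha = entropy(Counter(img_a), n)
    hb = entropy(Counter(img_b), n)
    hab = entropy(Counter(zip(img_a, img_b)), n)
    return (ha + hb) / hab
```

    The nonrigid registration searches for the deformation that maximizes this score between the PET and CT volumes.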

  8. Slice profile and B1 corrections in 2D magnetic resonance fingerprinting.

    PubMed

    Ma, Dan; Coppo, Simone; Chen, Yong; McGivney, Debra F; Jiang, Yun; Pahwa, Shivani; Gulani, Vikas; Griswold, Mark A

    2017-11-01

    The goal of this study is to characterize and improve the accuracy of 2D magnetic resonance fingerprinting (MRF) scans in the presence of slice profile (SP) and B1 imperfections, which are two main factors that affect quantitative results in MRF. The SP and B1 imperfections are characterized and corrected separately. The SP effect is corrected by simulating the radiofrequency pulse in the dictionary, and the B1 is corrected by acquiring a B1 map using the Bloch-Siegert method before each scan. The accuracy, precision, and repeatability of the proposed method are evaluated in phantom studies. The effects of both SP and B1 imperfections are also illustrated and corrected in the in vivo studies. The SP and B1 corrections improve the accuracy of the T1 and T2 values, independent of the shape of the radiofrequency pulse. The T1 and T2 values obtained from different excitation patterns become more consistent after corrections, which leads to an improvement of the robustness of the MRF design. This study demonstrates that MRF is sensitive to both SP and B1 effects, and that corrections can be made to improve the accuracy of MRF with only a 2-s increase in acquisition time. Magn Reson Med 78:1781-1789, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  9. Validation of the sperm class analyser CASA system for sperm counting in a busy diagnostic semen analysis laboratory.

    PubMed

    Dearing, Chey G; Kilburn, Sally; Lindsay, Kevin S

    2014-03-01

    Sperm counts have been linked to several fertility outcomes, making them an essential parameter of semen analysis. It has become increasingly recognised that Computer-Assisted Semen Analysis (CASA) provides improved precision over manual methods but that systems are seldom validated robustly for use. The objective of this study was to gather the evidence to validate or reject the Sperm Class Analyser (SCA) as a tool for routine sperm counting in a busy laboratory setting. The criteria examined were comparison with the Improved Neubauer and Leja 20-μm chambers, within- and between-field precision, sperm concentration linearity from a stock diluted in semen and media, accuracy against internal and external quality material, assessment of uneven flow effects and a receiver operating characteristic (ROC) analysis to predict fertility in comparison with the Neubauer method. This work demonstrates that SCA CASA technology is not a standalone 'black box', but rather a tool for well-trained staff that allows rapid, high-number sperm counting provided errors are identified and corrected. The system will produce accurate, linear, precise results, with less analytical variance than manual methods, that correlate well against the Improved Neubauer chamber. The system provides superior predictive potential for diagnosing fertility problems.

  10. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes is investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and S193 radiometer brightness temperature data analyzed. The precision was expressed as the standard deviation of the radiometer acquired brightness temperature. Precision was determined to be 2.40 K or better depending on mode and target temperature.

  11. Research on the impact factors of GRACE precise orbit determination by dynamic method

    NASA Astrophysics Data System (ADS)

    Guo, Nan-nan; Zhou, Xu-hua; Li, Kai; Wu, Bin

    2018-07-01

    With the successful use of GPS-only-based POD (precise orbit determination), more and more satellites carry onboard GPS receivers to meet their orbit accuracy requirements. Onboard GPS provides continuous, high-precision observations and has become an indispensable means of obtaining the orbits of LEO satellites, for which precise orbit determination plays an important role in applications. Numerous factors must be considered in POD processing. In this paper, several factors that affect precise orbit determination are analyzed: satellite altitude, the time-variable Earth gravity field, GPS satellite clock error, and accelerometer observations. The GRACE satellites provide an ideal platform for studying the influence of these factors on precise orbit determination using zero-difference GPS data. Their effect on dynamic orbit accuracy is quantitatively analyzed using GRACE observations from 2005 to 2011 with the SHORDE software. The study indicates that: (1) as the altitude of the GRACE satellites decreased from 480 km to 460 km over seven years, the 3D (three-dimensional) position accuracy of the GRACE orbits is about 3-4 cm based on long data spans; (2) accelerometer data improves the 3D position accuracy by about 1 cm; (3) the accuracy of the zero-difference dynamic orbit is about 6 cm with GPS satellite clock error products at a 5 min sampling interval, and can be raised to 4 cm if clock products with a 30 s sampling interval are adopted; and (4) the time-variable part of the Earth gravity field model improves the 3D position accuracy by about 0.5-1.5 cm. This quantitative analysis of the factors affecting POD of LEO satellites helps improve the accuracy of LEO satellite orbit determination.

  12. Precision and accuracy of luminescence lifetime-based phosphor thermometry: A case study of Eu(III):YSZ

    NASA Astrophysics Data System (ADS)

    Heeg, B.; Jenkins, T. P.

    2013-09-01

    Laser-induced phosphor thermometry as a reliable technique requires an analysis of the factors controlling or contributing to the precision and accuracy of a measurement. In this paper, we discuss several critical design parameters in the development of luminescence lifetime-based phosphor thermometry instrumentation for use at elevated temperatures such as encountered in hot sections of gas turbine engines. As precision is predominantly governed by signal and background photon shot noise and detector noise, a brief summary is presented of how these noise contributions may affect the measurement. Accuracy, on the other hand, is governed by a range of effects including, but not limited to, detector response characteristics, laser-induced effects, the photo-physics of the sensor materials, and the method of data reduction. The various possible outcomes of measurement precision and accuracy are discussed with luminescence lifetime measurements on Eu(III):YSZ sensor coatings.
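
    Lifetime-based thermometry recovers the decay constant τ from the measured luminescence decay; for an idealized noiseless single-exponential decay I(t) = I0·exp(-t/τ) this reduces to a log-linear fit (a sketch of one common data-reduction approach, not necessarily the one used in this study):

```python
import math

def lifetime_from_decay(times, intensities):
    """Estimate the luminescence lifetime tau from I(t) = I0 * exp(-t / tau)
    by a least-squares line fit to ln(I) versus t (slope = -1/tau)."""
    n = len(times)
    logs = [math.log(i) for i in intensities]
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope
```

    With real signals, shot and detector noise bias this simple estimator, which is exactly why the paper analyzes noise contributions and data-reduction choices.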

  13. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometries of a multi-lens imaging system to one sensor geometry, combined with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance: the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets, acquired at different target distances, dates, and locations, are also used to prove its reliability and applicability. The results show that RABBIT is feasible for different types of Mini-MSCs, achieving accurate, robust, and rapid image processing.
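
    A projective transformation maps pixels between lens geometries; at its core is a 3x3 homography applied in homogeneous coordinates. A minimal sketch of that mapping (illustrative only, not the RABBIT implementation, which additionally corrects systematic errors):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 projective transform H.

    Works in homogeneous coordinates: (x, y, 1) -> (u, v, w),
    then divides by w to return to the image plane.
    """
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w
```

    Resampling every pixel of one band through such a transform places it in the reference band's geometry, which is the essence of band-to-band co-registration.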

  14. Validation of HPLC and UV spectrophotometric methods for the determination of meropenem in pharmaceutical dosage form.

    PubMed

    Mendez, Andreas S L; Steppe, Martin; Schapoval, Elfrides E S

    2003-12-04

    A high-performance liquid chromatographic method and a UV spectrophotometric method for the quantitative determination of meropenem, a highly active carbapenem antibiotic, in powder for injection were developed in the present work. The parameters linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation were studied according to International Conference on Harmonization guidelines. Chromatography was carried out by reversed-phase technique on an RP-18 column with a mobile phase composed of 30 mM monobasic phosphate buffer and acetonitrile (90:10; v/v), adjusted to pH 3.0 with orthophosphoric acid. The UV spectrophotometric method was performed at 298 nm. The samples were prepared in water and the stability of meropenem in aqueous solution at 4 and 25 degrees C was studied. The results were satisfactory, with good stability after 24 h at 4 degrees C. Statistical analysis by Student's t-test showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and can be used for the reliable quantitation of meropenem in pharmaceutical dosage form.

  15. Development and validation of liquid chromatographic and UV derivative spectrophotometric methods for the determination of famciclovir in pharmaceutical dosage forms.

    PubMed

    Srinubabu, Gedela; Sudharani, Batchu; Sridhar, Lade; Rao, Jvln Seshagiri

    2006-06-01

    A high-performance liquid chromatographic method and a UV derivative spectrophotometric method for the determination of famciclovir, a highly active antiviral agent, in tablets were developed in the present work. The various parameters, such as linearity, precision, accuracy, specificity, robustness, limit of detection and limit of quantitation, were studied according to International Conference on Harmonization guidelines. HPLC was carried out using the reversed-phase technique on an RP-18 column with a mobile phase composed of 50 mM monobasic phosphate buffer and methanol (50:50; v/v), adjusted to pH 3.05 with orthophosphoric acid. The mobile phase was pumped at a flow rate of 1 ml/min and detection was made at 242 nm with a UV dual-absorbance detector. The first derivative UV spectrophotometric method was performed at 226.5 nm. Statistical analysis by Student's t-test and F-test showed no significant difference between the results obtained by the two methods. The proposed methods are highly sensitive, precise and accurate and can therefore be used for their intended purpose.

  16. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. Traditional calibration methods, however, rely on ground information and are invalid without a priori information; uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed to estimate the focal length and principal point and to evaluate system properties, called coarse calibration. Then an unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, principal point and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameters are required. Precise star sensor parameter estimation can thus be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate and achieves high accuracy and robustness, satisfying the stringent requirements of most star sensors.

  17. Ring Image Analyzer

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.

    2012-01-01

    Ring Image Analyzer software analyzes images to recognize elliptical patterns. It determines the ellipse parameters (axes ratio, centroid coordinate, tilt angle). The program attempts to recognize elliptical fringes (e.g., Newton rings) on a photograph and determine their centroid position, the short-to-long-axis ratio, and the angle of rotation of the long axis relative to the horizontal direction on the photograph. These capabilities are important in interferometric imaging and control of surfaces. In particular, this program has been developed and applied for determining the rim shape of precision-machined optical whispering gallery mode resonators. The program relies on a unique image recognition algorithm aimed at recognizing elliptical shapes, but it can be easily adapted to other geometric shapes. It is robust against non-elliptical details of the image and against noise. Interferometric analysis of precision-machined surfaces remains an important technological instrument in hardware development and quality analysis. This software automates and increases the accuracy of this technique. The software has been developed for the needs of an R&TD-funded project and has become an important asset for future research proposals to NASA as well as other agencies.
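
    The reported ellipse parameters (centroid, axis ratio, tilt) can be recovered from the second central moments of the detected fringe points. A sketch of one standard moment-based approach (the program's own algorithm is not described in this abstract):

```python
import math

def ellipse_params(points):
    """Centroid, short-to-long-axis ratio, and tilt of the long axis,
    estimated from the second central moments of points sampled
    (roughly uniformly in parameter) along an elliptical fringe."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # Eigen-decomposition of the symmetric 2x2 covariance matrix.
    m = (sxx + syy) / 2.0
    d = math.hypot((sxx - syy) / 2.0, sxy)
    lam_long, lam_short = m + d, m - d          # axis variances
    ratio = math.sqrt(lam_short / lam_long)     # short/long axis ratio
    tilt = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), ratio, tilt
```

    The eigenvectors of the covariance matrix give the axis directions, which is why half the atan2 of the moment combination yields the tilt angle.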

  18. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form.

    PubMed

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S

    2011-07-01

    To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO with correlation coefficients r(2) = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7-101.2 %. The proposed method was optimized and validated as per the ICH guidelines.

  19. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  20. Evaluation and selection of anatomic sites for magnetic resonance imaging-guided mild hyperthermia therapy: a healthy volunteer study.

    PubMed

    V V N Kothapalli, Satya; Altman, Michael B; Zhu, Lifei; Partanen, Ari; Cheng, Galen; Gach, H Michael; Straube, William; Zoberi, Imran; Hallahan, Dennis E; Chen, Hong

    2018-01-04

    Since mild hyperthermia therapy (MHT) requires maintaining the temperature within a narrow window (e.g. 40-43 °C) for an extended duration (up to 1 h), accurate and precise temperature measurements are essential for ensuring safe and effective treatment. This study evaluated the precision and accuracy of MR thermometry in healthy volunteers at different anatomical sites over long scan times. A proton resonance frequency shift method was used for MR thermometry. Eight volunteers were subjected to a 5-min scanning protocol targeting the chest wall, bladder wall, and leg muscles. Six volunteers were subjected to a 30-min scanning protocol and three volunteers to a 60-min scanning protocol, both targeting the leg muscles. The precision and accuracy of the MR thermometry were quantified, with mean precision and accuracy below 1 °C both used as criteria for acceptable thermometry. Drift-corrected MR thermometry measurements based on 5-min scans of the chest wall, bladder wall, and leg muscles had accuracies of 1.41 ± 0.65, 1.86 ± 1.20, and 0.34 ± 0.44 °C, and precisions of 2.30 ± 1.21, 1.64 ± 0.56, and 0.48 ± 0.05 °C, respectively. Measurements based on 30-min scans of the leg muscles had accuracy and precision of 0.56 ± 0.05 °C and 0.42 ± 0.50 °C, respectively, while the 60-min scans had accuracy and precision of 0.49 ± 0.03 °C and 0.56 ± 0.05 °C, respectively. Respiratory, cardiac, and digestive motion poses challenges for MR thermometry of the chest wall and bladder wall. The leg muscles had satisfactory temperature accuracy and precision per the chosen criteria. These results indicate that extremity locations may be preferable targets for MR-guided MHT using the existing MR thermometry technique.
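
    The proton resonance frequency (PRF) shift method converts the phase difference between two gradient-echo images into a temperature change. A minimal sketch using typical literature constants (the gyromagnetic ratio and thermal coefficient below are assumed generic values, not those of this study):

```python
import math

GAMMA_HZ_PER_T = 42.577e6    # proton gyromagnetic ratio / (2*pi), Hz/T
ALPHA_PPM_PER_C = -0.01      # typical PRF thermal coefficient of water, ppm/degC

def phase_to_temp_change(dphi_rad, b0_tesla, te_s):
    """Temperature change from the PRF phase difference between two
    gradient-echo images: dphi = 2*pi * gamma * alpha * B0 * TE * dT."""
    alpha = ALPHA_PPM_PER_C * 1e-6
    return dphi_rad / (2 * math.pi * GAMMA_HZ_PER_T * alpha * b0_tesla * te_s)
```

    Field drift adds a spurious phase term over long scans, which is why the study applies drift correction before quantifying precision and accuracy.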

  1. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of the technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and the demand for their maintenance is rising accordingly. Following the guiding principle of simulating spatial distance with time delay in testing pulsed range finders, the precision of distance simulation depends on an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the circuit's counting frequency. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was verified by measurement with a high-sampling-rate oscilloscope. The measurements show that the accuracy of the distance simulated by the circuit delay improves from +/- 0.75 m to +/- 0.15 m, a substantial improvement for simulated testing of high-precision pulsed range finders.
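
    The "time simulates distance" principle rests on the round-trip time-of-flight relation d = c·Δt/2, so a delay step of 1 ns corresponds to about 0.15 m of simulated range. A sketch of the conversion (illustrative helper functions, not the paper's circuitry):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_distance(d_m):
    """Round-trip delay (s) that simulates a target at range d_m (m)."""
    return 2.0 * d_m / C

def distance_for_delay(t_s):
    """Simulated range (m) corresponding to a round-trip delay t_s (s)."""
    return C * t_s / 2.0
```

    The reported improvement from +/- 0.75 m to +/- 0.15 m thus corresponds to tightening the delay error from roughly +/- 5 ns to +/- 1 ns.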

  2. Validated spectrofluorimetric methods for the determination of apixaban and tirofiban hydrochloride in pharmaceutical formulations.

    PubMed

    El-Bagary, Ramzia I; Elkady, Ehab F; Farid, Naira A; Youssef, Nadia F

    2017-03-05

    Apixaban and tirofiban hydrochloride are low-molecular-weight anticoagulants. Both drugs exhibit native fluorescence, which allows the development of simple, valid spectrofluorimetric methods for their determination in aqueous media: apixaban at λex/λem = 284/450 nm and tirofiban HCl at λex/λem = 227/300 nm. Different experimental parameters affecting fluorescence intensity were carefully studied and optimized. The fluorescence intensity-concentration plots were linear over the ranges 0.2-6 μg ml(-1) for apixaban and 0.2-5 μg ml(-1) for tirofiban HCl. The limits of detection were 0.017 and 0.019 μg ml(-1) and the quantification limits 0.057 and 0.066 μg ml(-1) for apixaban and tirofiban HCl, respectively. The fluorescence quantum yields of apixaban and tirofiban were calculated as 0.43 and 0.49. Method validation covered linearity, specificity, accuracy, precision and robustness as per ICH guidelines. The proposed spectrofluorimetric methods were successfully applied to the determination of apixaban in Eliquis tablets and tirofiban HCl in Aggrastat intravenous infusion. Tolerance ratios were tested to study the effect of foreign interference from dosage-form excipients. Student's t- and F-tests revealed no statistically significant difference between the developed spectrofluorimetric methods and the comparison methods regarding accuracy and precision, so they can serve as alternative methods for the analysis of apixaban and tirofiban HCl in QC laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Determination of calcium, copper, iron, magnesium, manganese, potassium, phosphorus, sodium, and zinc in fortified food products by microwave digestion and inductively coupled plasma-optical emission spectrometry: single-laboratory validation and ring trial.

    PubMed

    Poitevin, Eric

    2012-01-01

    A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-optical emission spectrometry in order to modernize AOAC Official Method 984.27. The improvements involved extension of the scope to all food matrixes (including infant formula), optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed by microwave-assisted wet digestion with either closed- or open-vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method were proven through a successful RT using experienced independent food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD, and HorRat values) regarding SLVs and RTs. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to the ISO 17025 Norm and AOAC acceptability criteria, and is proposed as an extended, updated version of AOAC Official Method 984.27 for fortified food products, including infant formula.

  4. Simultaneous densitometric determination of anthelmintic drug albendazole and its metabolite albendazole sulfoxide by HPTLC in human plasma and pharmaceutical formulations.

    PubMed

    Pandya, Jui J; Sanyal, Mallika; Shrivastav, Pranav S

    2017-09-01

    A new, simple, accurate and precise high-performance thin-layer chromatographic method has been developed and validated for the simultaneous determination of an anthelmintic drug, albendazole, and its active metabolite, albendazole sulfoxide. Planar chromatographic separation was performed on an aluminum-backed layer of silica gel 60G F254 using a mixture of toluene-acetonitrile-glacial acetic acid (7.0:2.9:0.1, v/v/v) as the mobile phase. For quantitation, the separated spots were scanned densitometrically at 225 nm. The retention factors (Rf) obtained under the established conditions were 0.76 ± 0.01 and 0.50 ± 0.01, and the regression plots were linear (r² ≥ 0.9997) in the concentration ranges 50-350 and 100-700 ng/band for albendazole and albendazole sulfoxide, respectively. The method was validated for linearity, specificity, accuracy (recovery), precision, repeatability, stability and robustness. The limits of detection and quantitation were 9.84 and 29.81 ng/band for albendazole and 21.60 and 65.45 ng/band for albendazole sulfoxide, respectively. For plasma samples, solid-phase extraction of the analytes yielded mean extraction recoveries of 87.59 and 87.13% for albendazole and albendazole sulfoxide, respectively. The method was successfully applied to the analysis of albendazole in pharmaceutical formulations with accuracy ≥99.32%. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Automated brain volumetrics in multiple sclerosis: a step closer to clinical application.

    PubMed

    Wang, C; Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G; Barnett, M H

    2016-07-01

    Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
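The "precision (Pearson's r)" and "accuracy (Cb)" reported in this record are the two factors of Lin's concordance correlation coefficient (CCC = r · Cb): r measures scatter about the best-fit line, while the bias-correction factor Cb penalizes location and scale shifts between methods. A minimal sketch of that decomposition, using hypothetical brain-volume pairs rather than the study data:

```python
import numpy as np

def concordance_components(x, y):
    """Precision (Pearson's r) and accuracy (Lin's bias-correction factor Cb).

    Uses population moments, as in Lin's original definition; CCC = r * Cb.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(), y.std()
    u = (x.mean() - y.mean()) / np.sqrt(sx * sy)   # location shift
    v = sx / sy                                    # scale shift
    cb = 2.0 / (v + 1.0 / v + u * u)               # always in (0, 1]
    return r, cb

# Hypothetical whole-brain volumes (litres) from two pipelines
a = np.array([1.10, 1.25, 1.02, 1.31, 1.18])
b = np.array([1.12, 1.27, 1.05, 1.33, 1.21])
r, cb = concordance_components(a, b)
```

A systematic offset between pipelines lowers Cb (accuracy) while leaving r (precision) high, exactly the pattern seen when two volumetry tools track each other closely but differ by a constant percentage.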

  6. Improvement of AOAC Official Method 984.27 for the determination of nine nutritional elements in food products by Inductively coupled plasma-atomic emission spectroscopy after microwave digestion: single-laboratory validation and ring trial.

    PubMed

    Poitevin, Eric; Nicolas, Marine; Graveleau, Laetitia; Richoz, Janique; Andrey, Daniel; Monard, Florence

    2009-01-01

    A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-atomic emission spectroscopy in order to improve and update AOAC Official Method 984.27. The improvements involved optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed by microwave-assisted wet digestion with either closed- or open-vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method were proven through a successful internal RT using experienced food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD and HorRat values) regarding SLV and RT. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to the ISO 17025 Norm and AOAC acceptability criteria, and is proposed as an improved version of AOAC Official Method 984.27 for fortified food products, including infant formula.

  7. A Sensitive and Selective Liquid Chromatography/Tandem Mass Spectrometry Method for Quantitative Analysis of Efavirenz in Human Plasma

    PubMed Central

    Srivastava, Praveen; Moorthy, Ganesh S.; Gross, Robert; Barrett, Jeffrey S.

    2013-01-01

    A selective and highly sensitive method for the determination of the non-nucleoside reverse transcriptase inhibitor (NNRTI) efavirenz in human plasma has been developed and fully validated based on high performance liquid chromatography tandem mass spectrometry (LC-MS/MS). Sample preparation involved protein precipitation followed by one-to-one dilution with water. The analyte, efavirenz, was separated by high performance liquid chromatography and detected with tandem mass spectrometry in negative ionization mode with multiple reaction monitoring (MRM). Efavirenz and 13C6-efavirenz (internal standard) were detected via the MRM transitions m/z 314.20 → 243.90 and m/z 320.20 → 249.90, respectively. A gradient program was used to elute the analytes using 0.1% formic acid in water and 0.1% formic acid in acetonitrile as mobile phase solvents, at a flow rate of 0.3 mL/min. The total run time was 5 min and the retention times for the internal standard (13C6-efavirenz) and efavirenz were approximately 2.6 min. The calibration curves showed linearity (coefficient of regression, r>0.99) over the concentration range of 1.0–2,500 ng/mL. The intraday precision, based on the standard deviation of replicates, was 9.24% at the lower limit of quantification (LLOQ) and ranged from 2.41% to 6.42% for quality control (QC) samples, with accuracies of 112% and 100–111% for LLOQ and QC samples, respectively. The interday precision was 12.3% and 3.03–9.18% for LLOQ and QC samples, and the accuracy was 108% and 95.2–108% for LLOQ and QC samples, respectively. Stability studies showed that efavirenz was stable under the expected conditions of sample preparation and storage. The lower limit of quantification for efavirenz was 1 ng/mL. The analytical method showed excellent sensitivity, precision, and accuracy. This method is robust and is being successfully applied for therapeutic drug monitoring and pharmacokinetic studies in HIV-infected patients. PMID:23755102
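Precision (%RSD) and accuracy (percent of nominal) figures of the kind quoted for the LLOQ and QC samples are computed from replicate measurements at a known nominal concentration. A sketch with hypothetical replicate values:

```python
import numpy as np

def precision_accuracy(measured, nominal):
    """Bioanalytical precision (%RSD, sample SD) and accuracy (mean % of nominal)."""
    m = np.asarray(measured, float)
    rsd = 100.0 * m.std(ddof=1) / m.mean()   # relative standard deviation
    acc = 100.0 * m.mean() / nominal          # accuracy as % of nominal
    return rsd, acc

# Hypothetical LLOQ replicates at a nominal 1.0 ng/mL
reps = [1.08, 1.15, 1.05, 1.20, 1.12]
rsd, acc = precision_accuracy(reps, 1.0)
```

Typical acceptance criteria allow up to 20% RSD and 80–120% accuracy at the LLOQ, and 15% / 85–115% at higher QC levels.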

  8. Isotope ratios of trace elements in samples from human nutrition studies determined by TIMS and ICP-MS: precision and accuracy compared.

    PubMed

    Turnlund, Judith R; Keyes, William R

    2002-09-01

    Stable isotopes are used with increasing frequency to trace the metabolic fate of minerals in human nutrition studies. The precision of the analytical methods used must be sufficient to permit reliable measurement of low enrichments and the accuracy should permit comparisons between studies. Two methods most frequently used today are thermal ionization mass spectrometry (TIMS) and inductively coupled plasma mass spectrometry (ICP-MS). This study was conducted to compare the two methods. Multiple natural samples of copper, zinc, molybdenum, and magnesium were analyzed by both methods to compare their internal and external precision. Samples with a range of isotopic enrichments that were collected from human studies or prepared from standards were analyzed to compare their accuracy. TIMS was more precise and accurate than ICP-MS. However, the cost, ease, and speed of analysis were better for ICP-MS. Therefore, for most purposes, ICP-MS is the method of choice, but when the highest degrees of precision and accuracy are required and when enrichments are very low, TIMS is the method of choice.

  9. Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions.

    PubMed

    Tang, Jun; Zhang, Nan; Li, Dalin; Wang, Fei; Zhang, Binzhen; Wang, Chenguang; Shen, Chong; Ren, Jianbin; Xue, Chenyang; Liu, Jun

    2016-07-11

    A novel method based on a Pulse Coupled Neural Network (PCNN) algorithm is proposed for highly accurate and robust compass-heading calculation from polarized skylight imaging; it showed good accuracy and reliability even under cloudy weather, surrounding shielding, and moonlight. The degree of polarization (DOP) combined with the angle of polarization (AOP), calculated from the full-sky polarization image, were used for the compass calculation. Because of its high sensitivity to the environment, the DOP was used to judge the corruption of polarization information using the PCNN algorithm. Only areas with high AOP accuracy were kept after the DOP PCNN filtering, thereby greatly increasing the compass accuracy and robustness. The experimental results showed that the compass accuracy was 0.1805° under clear weather. The method was also proven applicable under shielding by clouds, trees, and buildings, with a compass accuracy better than 1°. With weak polarization sources such as moonlight, the method was shown experimentally to have an accuracy of 0.878°.
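The DOP and AOP used by this method are conventionally derived from the linear Stokes parameters, which a polarization imager samples as intensities behind polarizers at 0°, 45°, 90°, and 135°. A sketch of that standard conversion (the paper's PCNN filtering step is not reproduced here):

```python
import numpy as np

def dop_aop(i0, i45, i90, i135):
    """Degree and angle of linear polarization from four polarizer-angle intensities."""
    s0 = i0 + i90             # total intensity
    s1 = i0 - i90             # Stokes Q
    s2 = i45 - i135           # Stokes U
    dop = np.sqrt(s1 ** 2 + s2 ** 2) / s0
    aop = 0.5 * np.arctan2(s2, s1)   # radians, in (-pi/2, pi/2]
    return dop, aop

# Hypothetical pixel intensities
dop, aop = dop_aop(0.8, 0.5, 0.2, 0.5)
```

Per-pixel DOP computed this way is what a filtering stage can threshold to discard regions where the skylight polarization pattern is destroyed by clouds or obstructions.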

  10. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  11. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable compared with more obsolete technologies.

  12. Precision, accuracy, and efficiency of four tools for measuring soil bulk density or strength.

    Treesearch

    Richard E. Miller; John Hazard; Steven Howes

    2001-01-01

    Monitoring soil compaction is time consuming. A desire for speed and lower costs, however, must be balanced with the appropriate precision and accuracy required of the monitoring task. We compared three core samplers and a cone penetrometer for measuring soil compaction after clearcut harvest on a stone-free and a stony soil. Precision (i.e., consistency) of each tool...

  13. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
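For reference, the classical Hopfield model whose annealer implementation is studied stores patterns in Hebbian synaptic weights and recalls them by iterated sign updates from a noisy probe. A minimal classical (non-quantum) sketch with two hypothetical patterns:

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=10):
    """Hebbian-weight Hopfield network; synchronous sign updates until a fixed point."""
    p = np.asarray(patterns, float)          # each row is a +/-1 pattern
    w = p.T @ p / p.shape[1]                 # Hebbian outer-product weights
    np.fill_diagonal(w, 0.0)                 # no self-coupling
    s = np.asarray(probe, float)
    for _ in range(steps):
        s_new = np.where(w @ s >= 0, 1.0, -1.0)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
noisy = [1, -1, 1, -1, 1, 1]                 # first pattern with one flipped bit
out = hopfield_recall(stored, noisy)
```

On an annealer the same energy function is minimized physically, with the weights w mapped onto qubit couplers, which is where the precision and embedding limits discussed above enter.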

  14. Multi-beam range imager for autonomous operations

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.

    1993-01-01

    For space operations from Space Station Freedom, a real-time range imager will be very valuable for refuelling, docking, and space exploration operations. For these applications, as well as many other robotics and remote-ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of kilometers with 10 cm accuracy is needed. The system developed is based on a well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range-resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease stringent systems engineering requirements while achieving very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-principle experiment.
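The PN ranging principle can be illustrated by correlating the received echo against the transmitted pseudo-random code: the correlation peak gives the round-trip delay τ, and the range is c·τ/2. A sketch with hypothetical parameters (a 1023-chip random code at a 4 MHz chip rate, i.e. the "few MHz" regime mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
chip_rate = 4e6                         # 4 MHz modulation
code = rng.choice([-1.0, 1.0], 1023)    # pseudo-random +/-1 chip sequence

true_delay = 137                        # round-trip delay, in chips
echo = np.roll(code, true_delay) + 0.5 * rng.standard_normal(code.size)

# Circular cross-correlation via FFT; the peak index estimates the delay
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
est_delay = int(np.argmax(corr))

c = 3e8                                 # speed of light, m/s
range_m = c * (est_delay / chip_rate) / 2.0
```

The processing gain of the 1023-chip correlation is what lets a low-power, low-frequency modulation still produce a sharp, noise-robust delay estimate; sub-chip resolution then comes from the clock-based enhancement described in the paper.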

  15. Smartphone-Based Real-Time Indoor Location Tracking With 1-m Precision.

    PubMed

    Liang, Po-Chou; Krause, Paul

    2016-05-01

    Monitoring the activities of daily living of the elderly at home is widely recognized as useful for the detection of new or deteriorating health conditions. However, the accuracy of existing indoor location tracking systems remains unsatisfactory. The aim of this study was, therefore, to develop a localization system that can identify a patient's real-time location in a home environment with maximum estimation error of 2 m at a 95% confidence level. A proof-of-concept system based on a sensor fusion approach was built with considerations for lower cost, reduced intrusiveness, and higher mobility, deployability, and portability. This involved the development of both a step detector using the accelerometer and compass of an iPhone 5, and a radio-based localization subsystem using a Kalman filter and received signal strength indication to tackle issues that had been identified as limiting accuracy. The results of our experiments were promising with an average estimation error of 0.47 m. We are confident that with the proposed future work, our design can be adapted to a home-like environment with a more robust localization solution.

  16. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is very critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals would induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the TOF determination’s accuracy. An improved variable step size adaptive algorithm with comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. Simulation results manifested the performance advantage of proposed TOF determination method over existing TOF determination methods. When comparing with the conventional fixed step size, and Kwong and Aboulnasr algorithms, the steady state mean square deviation of the proposed algorithm was generally lower, which makes the proposed algorithm more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates with various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and the ultrasonic thickness measurement accuracy could be significantly improved.
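The finite-sampling-interval restriction mentioned above is commonly relaxed by interpolating around the correlation peak. The paper uses cubic-spline fitting; the simpler three-point parabolic refinement below illustrates the same idea on a hypothetical pulse/echo pair:

```python
import numpy as np

def subsample_peak(corr):
    """Refine the argmax of a sampled correlation with a three-point parabolic fit."""
    k = int(np.argmax(corr))
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex of fitted parabola
    return k

# Hypothetical: a Gaussian pulse and its echo delayed by 10.3 samples
fs = 100e6                                    # 100 MS/s sampling rate
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 40.0) / 4.0) ** 2)
echo = np.exp(-0.5 * ((t - 50.3) / 4.0) ** 2)
corr = np.correlate(echo, pulse, mode="full")
lag = subsample_peak(corr) - (len(pulse) - 1)  # fractional-sample delay
tof = lag / fs                                 # time of flight, seconds
```

The integer argmax alone would be off by up to half a sample (5 ns here); the interpolated vertex recovers the 0.3-sample fraction, which is the effect the paper's spline fitting exploits more thoroughly.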

  17. iXora: exact haplotype inferencing and trait association.

    PubMed

    Utro, Filippo; Haiminen, Niina; Livingstone, Donald; Cornejo, Omar E; Royaert, Stefan; Schnell, Raymond J; Motamayor, Juan Carlos; Kuhn, David N; Parida, Laxmi

    2013-06-06

    We address the task of extracting accurate haplotypes from genotype data of individuals of large F1 populations for mapping studies. While methods for inferring parental haplotype assignments on large F1 populations exist in theory, these approaches do not work in practice at high levels of accuracy. We have designed iXora (Identifying crossovers and recombining alleles), a robust method for extracting reliable haplotypes of a mapping population, as well as parental haplotypes, that runs in linear time. Each allele in the progeny is assigned not just to a parent, but more precisely to a haplotype inherited from the parent. iXora shows an improvement of at least 15% in accuracy over similar systems in literature. Furthermore, iXora provides an easy-to-use, comprehensive environment for association studies and hypothesis checking in populations of related individuals. iXora provides detailed resolution in parental inheritance, along with the capability of handling very large populations, which allows for accurate haplotype extraction and trait association. iXora is available for non-commercial use from http://researcher.ibm.com/project/3430.

  18. Superpixel-based graph cuts for accurate stereo matching

    NASA Astrophysics Data System (ADS)

    Feng, Liting; Qin, Kaihuai

    2017-06-01

    Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to obtain a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality which is easy to perform in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.

  19. Ultrasonic Multiple-Access Ranging System Using Spread Spectrum and MEMS Technology for Indoor Localization

    PubMed Central

    Segers, Laurent; Tiete, Jelmer; Braeken, An; Touhafi, Abdellah

    2014-01-01

    Indoor localization of persons and objects poses a great engineering challenge. Previously developed localization systems demonstrate the use of wideband techniques in ultrasound ranging systems. Direct sequence and frequency hopping spread spectrum ultrasound signals have been proven to achieve a high level of accuracy. A novel ranging method using the frequency hopping spread spectrum with finite impulse response filtering will be investigated and compared against the direct sequence spread spectrum. In the first setup, distances are estimated in a single-access environment, while in the second setup, two senders and one receiver are used. During the experiments, the micro-electromechanical systems are used as ultrasonic sensors, while the senders were implemented using field programmable gate arrays. Results show that in a single-access environment, the direct sequence spread spectrum method offers slightly better accuracy and precision performance compared to the frequency hopping spread spectrum. When two senders are used, measurements point out that the frequency hopping spread spectrum is more robust to near-far effects than the direct sequence spread spectrum. PMID:24553084

  20. Application of Central Upwind Scheme for Solving Special Relativistic Hydrodynamic Equations

    PubMed Central

    Yousaf, Muhammad; Ghaffar, Tayabia; Qamar, Shamsul

    2015-01-01

    The accurate modeling of various features in high energy astrophysical scenarios requires the solution of the Einstein equations together with those of special relativistic hydrodynamics (SRHD). Such models are more complicated than the non-relativistic ones due to the nonlinear relations between the conserved and state variables. A high-resolution shock-capturing central upwind scheme is implemented to solve the given set of equations. The proposed technique uses the precise information of local propagation speeds to avoid excessive numerical diffusion. The second-order accuracy of the scheme is obtained with the use of MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. After a discussion of the equations solved and of the techniques employed, a series of one- and two-dimensional test problems are carried out. To validate the method and assess its accuracy, the staggered central and the kinetic flux-vector splitting schemes are also applied to the same model. The scheme is robust and efficient. Its results are comparable to those obtained from sophisticated algorithms, even in the case of highly relativistic two-dimensional test problems. PMID:26070067
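The scheme's two ingredients, minmod-limited MUSCL reconstruction and a central-upwind (Kurganov-type) numerical flux built from local propagation speeds, can be sketched on the scalar Burgers equation rather than the full SRHD system, and with forward Euler in place of the Runge-Kutta stepping used in the paper:

```python
import numpy as np

def minmod(a, b):
    """MUSCL slope limiter: smallest-magnitude slope, zero across extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def central_upwind_step(u, dx, dt):
    """One forward-Euler step of a central-upwind scheme for u_t + (u^2/2)_x = 0,
    on a periodic grid (a scalar stand-in for the SRHD system)."""
    f = lambda q: 0.5 * q * q
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))       # limited slopes
    uL = u + 0.5 * s                                        # left state at i+1/2
    uR = np.roll(u - 0.5 * s, -1)                           # right state at i+1/2
    ap = np.maximum.reduce([uL, uR, np.zeros_like(u)])      # local right speed
    am = np.minimum.reduce([uL, uR, np.zeros_like(u)])      # local left speed
    F = (ap * f(uL) - am * f(uR) + ap * am * (uR - uL)) / (ap - am + 1e-30)
    return u - dt / dx * (F - np.roll(F, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)        # Riemann initial data (a moving shock)
for _ in range(50):
    u = central_upwind_step(u, dx=0.01, dt=0.004)
```

The flux uses only the one-sided local speeds ap and am, which is what keeps numerical diffusion smaller than in the staggered central scheme it is compared against.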

  1. False star detection and isolation during star tracking based on improved chi-square tests.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Yang, Yanqiang; Su, Guohua

    2017-08-01

    The star sensor is a precise attitude measurement device for spacecraft. Star tracking is the main and key working mode of a star sensor. However, during star tracking, false stars are an inevitable interference for star sensor applications and may degrade measurement accuracy. A false star detection and isolation algorithm for star tracking, based on improved chi-square tests, is proposed in this paper. Two estimations are established based on a Kalman filter and on a priori information, respectively. False star detection is performed by applying a global state chi-square test in the Kalman filter. False star isolation is achieved using a local state chi-square test. Semi-physical experiments under different trajectories with various false stars were designed for verification. The experimental results show that various false stars can be detected and isolated from navigation stars during star tracking, and that the attitude measurement accuracy is hardly influenced by false stars. The proposed algorithm is shown to have excellent performance in terms of speed, stability, and robustness.
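The global state chi-square test referred to above gates each measurement by its normalized innovation squared, γ = νᵀS⁻¹ν, where ν is the Kalman innovation and S its covariance; γ is compared against a χ² threshold for the innovation's degrees of freedom. A minimal sketch with hypothetical numbers (the 99% threshold for 2 degrees of freedom is about 9.21):

```python
import numpy as np

def innovation_gate(nu, S, threshold):
    """Chi-square consistency test: flag a measurement whose normalized
    innovation squared exceeds the threshold (a candidate false star)."""
    nu = np.atleast_1d(np.asarray(nu, float))
    S = np.atleast_2d(np.asarray(S, float))
    gamma = float(nu @ np.linalg.solve(S, nu))
    return gamma, gamma > threshold

CHI2_99_DF2 = 9.21   # approx. chi-square 99% quantile, 2 dof
g1, reject1 = innovation_gate([0.5, -0.3], np.eye(2) * 0.25, CHI2_99_DF2)  # consistent
g2, reject2 = innovation_gate([3.0, 2.0], np.eye(2) * 0.25, CHI2_99_DF2)   # outlier
```

A consistent star's innovation stays well under the threshold, while a false star produces a large γ and is excluded from the attitude update, which is why the filter's accuracy is barely affected.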

  2. Achieving optimum diffraction based overlay performance

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Laidler, David; Cheng, Shaunee; Coogans, Martyn; Fuchs, Andreas; Ponomarenko, Mariya; van der Schaar, Maurits; Vanoppen, Peter

    2010-03-01

    Diffraction Based Overlay (DBO) metrology has been shown to have significantly reduced Total Measurement Uncertainty (TMU) compared to Image Based Overlay (IBO), primarily due to having no measurable Tool Induced Shift (TIS). However, the advantages of having no measurable TIS can be outweighed by increased susceptibility to WIS (Wafer Induced Shift) caused by target damage, process non-uniformities and variations. The path to optimum DBO performance lies in having well characterized metrology targets, which are insensitive to process non-uniformities and variations, in combination with optimized recipes which take advantage of advanced DBO designs. In this work we examine the impact of different degrees of process non-uniformity and target damage on DBO measurement gratings and study their impact on overlay measurement accuracy and precision. Multiple wavelength and dual polarization scatterometry are used to characterize the DBO design performance over the range of process variation. In conclusion, we describe the robustness of DBO metrology to target damage and show how to exploit the measurement capability of a multiple wavelength, dual polarization scatterometry tool to ensure the required measurement accuracy for current and future technology nodes.

  3. Determination of Antimycin-A in water by liquid chromatographic/mass spectrometry: single-laboratory validation

    USGS Publications Warehouse

    Bernardy, Jeffry A.; Hubert, Terrance D.; Ogorek, Jacob M.; Schmidt, Larry J.

    2013-01-01

    An LC/MS method was developed and validated for the quantitative determination and confirmation of antimycin-A (ANT-A) in water from lakes or streams. Three different water sample volumes (25, 50, and 250 mL) were evaluated. ANT-A was stabilized in the field by immediately extracting it from water into anhydrous acetone using SPE. The stabilized concentrated samples were then transported to a laboratory and analyzed by LC/MS using negative electrospray ionization. The method was determined to have adequate accuracy (78 to 113% recovery), precision (0.77 to 7.5% RSD with samples ≥500 ng/L and 4.8 to 17% RSD with samples ≤100 ng/L), linearity, and robustness over an LOQ range from 8 to 51 600 ng/L.

  4. Simultaneous assay of multiple antibiotics in human plasma by LC-MS/MS: importance of optimizing formic acid concentration.

    PubMed

    Chen, Feng; Hu, Zhe-Yi; Laizure, S Casey; Hudson, Joanna Q

    2017-03-01

    Optimal dosing of antibiotics in critically ill patients is complicated by the development of resistant organisms requiring treatment with multiple antibiotics and alterations in systemic exposure due to diseases and extracorporeal drug removal. Developing guidelines for optimal antibiotic dosing is an important therapeutic goal requiring robust analytical methods to simultaneously measure multiple antibiotics. An LC-MS/MS assay using protein precipitation for cleanup followed by a 6-min gradient separation was developed to simultaneously determine five antibiotics in human plasma. The precision and accuracy were within the 15% acceptance range. The formic acid concentration was an important determinant of signal intensity, peak shape and matrix effects. The method was designed to be simple and successfully applied to a clinical pharmacokinetic study.

  5. Integrated and differential accuracy in resummed cross sections

    DOE PAGES

    Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.

    2017-03-30

    Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show the application to the thrust distribution in e+e- collisions at NLL'+NLO and NNLL'+NNLO.

  6. Laser-Interferometric Broadband Seismometer for Epicenter Location Estimation

    PubMed Central

    Lee, Kyunghyun; Kwon, Hyungkwan; You, Kwanho

    2017-01-01

    In this paper, we suggest a seismic signal measurement system that uses a laser interferometer. The heterodyne laser interferometer is used as a seismometer due to its high accuracy and robustness. Seismic data measured by the laser interferometer are used to analyze crucial earthquake characteristics. To measure the P-S time more precisely, short-time Fourier transform and instantaneous frequency estimation methods are applied to the intensity signal (Iy) of the laser interferometer. To estimate the epicenter location, the range difference of arrival algorithm is applied to the P-S time result. The linear matrix equation of the epicenter localization can be derived using P-S time data obtained from more than three observatories. We prove the performance of the proposed algorithm through simulation and experimental results. PMID:29065515
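    The linearized epicenter solve that the abstract alludes to can be sketched in a few lines. This is an illustrative pure-Python sketch, not the authors' implementation: the station coordinates and distances below are hypothetical, and converting each station's P-S time into an epicentral distance (e.g. distance = k · t_ps) is assumed to have been done already.

```python
def locate_epicenter(stations, distances):
    """Estimate the (x, y) epicenter from three or more stations and their
    epicentral distances. Subtracting the circle equation of station 0 from
    that of each other station cancels the quadratic terms and yields a
    linear system A [x, y]^T = b (stations must not all be collinear)."""
    (x0, y0), d0 = stations[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(stations[1:], distances[1:]):
        A.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        b.append((xi**2 - x0**2) + (yi**2 - y0**2) - di**2 + d0**2)
    # Least-squares via the 2x2 normal equations, solved by Cramer's rule.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

    With exact distances from three stations the true epicenter is recovered; with more than three stations the same normal equations give a least-squares fit to noisy P-S-derived distances.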

  7. Determination of antimycin-A in water by liquid chromatographic/mass spectrometry: single-laboratory validation.

    PubMed

    Bernardy, Jeffry A; Hubert, Terrance D; Ogorek, Jacob M; Schmidt, Larry J

    2013-01-01

    An LC/MS method was developed and validated for the quantitative determination and confirmation of antimycin-A (ANT-A) in water from lakes or streams. Three different water sample volumes (25, 50, and 250 mL) were evaluated. ANT-A was stabilized in the field by immediately extracting it from water into anhydrous acetone using SPE. The stabilized concentrated samples were then transported to a laboratory and analyzed by LC/MS using negative electrospray ionization. The method was determined to have adequate accuracy (78 to 113% recovery), precision (0.77 to 7.5% RSD with samples ≥ 500 ng/L and 4.8 to 17% RSD with samples ≤ 100 ng/L), linearity, and robustness over an LOQ range from 8 to 51 600 ng/L.

  8. Advances in compact proton spectrometers for inertial-confinement fusion and plasma nuclear science.

    PubMed

    Seguin, F H; Sinenian, N; Rosenberg, M; Zylstra, A; Manuel, M J-E; Sio, H; Waugh, C; Rinderknecht, H G; Johnson, M Gatu; Frenje, J; Li, C K; Petrasso, R; Sangster, T C; Roberts, S

    2012-10-01

    Compact wedge-range-filter proton spectrometers cover proton energies ∼3-20 MeV. They have been used at the OMEGA laser facility for more than a decade for measuring spectra of primary D³He protons in D³He implosions, secondary D³He protons in DD implosions, and ablator protons in DT implosions; they are now being used also at the National Ignition Facility. The spectra are used to determine proton yields, shell areal density at shock-bang time and compression-bang time, fuel areal density, and implosion symmetry. There have been changes in fabrication and in analysis algorithms, resulting in a wider energy range, better accuracy and precision, and better robustness for survivability with indirect-drive inertial-confinement-fusion experiments.

  9. Optimization and Validation of the TZM-bl Assay for Standardized Assessments of Neutralizing Antibodies Against HIV-1

    PubMed Central

    Sarzotti-Kelsoe, Marcella; Bailer, Robert T; Turk, Ellen; Lin, Chen-li; Bilska, Miroslawa; Greene, Kelli M.; Gao, Hongmei; Todd, Christopher A.; Ozaki, Daniel A.; Seaman, Michael S.; Mascola, John R.; Montefiori, David C.

    2014-01-01

    The TZM-bl assay measures antibody-mediated neutralization of HIV-1 as a function of reductions in HIV-1 Tat-regulated firefly luciferase (Luc) reporter gene expression after a single round of infection with Env-pseudotyped viruses. This assay has become the main endpoint neutralization assay used for the assessment of preclinical and clinical trial samples by a growing number of laboratories worldwide. Here we present the results of the formal optimization and validation of the TZM-bl assay, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. The assay was evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness. The validated manual TZM-bl assay was also adapted, optimized and qualified to an automated 384-well format. PMID:24291345

  10. Spectroscopic characterization and quantitative determination of atorvastatin calcium impurities by novel HPLC method

    NASA Astrophysics Data System (ADS)

    Gupta, Lokesh Kumar

    2012-11-01

    Seven process-related impurities in the atorvastatin calcium drug substance were identified by LC-MS. The structures of the impurities were confirmed by modern spectroscopic techniques such as 1H NMR and IR, and by physicochemical studies conducted using synthesized authentic reference compounds. The synthesized reference samples of the impurity compounds were used for the quantitative HPLC determination. The impurities were detected by a newly developed gradient reverse-phase high-performance liquid chromatographic (HPLC) method. The system suitability of the HPLC analysis established the validity of the separation. The analytical method was validated according to International Conference on Harmonisation (ICH) guidelines with respect to specificity, precision, accuracy, linearity, robustness and stability of analytical solutions, demonstrating the power of the newly developed HPLC method.

  11. Wlan-Based Indoor Localization Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Saleem, Fasiha; Wyne, Shurjeel

    2016-07-01

    Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with that of a probabilistic fingerprinting method based on maximum likelihood estimation (MLE) of the user location. We incorporate spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.
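    The probabilistic (MLE) fingerprinting variant can be sketched as follows. This is a minimal illustration, not the paper's implementation: the radio map, RSS values, and the i.i.d. Gaussian shadowing model with a common standard deviation are all assumptions for the example.

```python
import math

# Offline radio map: location -> mean RSS (dBm) per access point (hypothetical values).
RADIO_MAP = {
    "room_A": [-40.0, -70.0, -80.0],
    "room_B": [-75.0, -45.0, -72.0],
    "room_C": [-80.0, -68.0, -42.0],
}
SIGMA = 4.0  # assumed i.i.d. Gaussian shadowing std-dev in dB

def mle_location(observed):
    """Pick the fingerprint that maximizes the Gaussian log-likelihood of the
    observed RSS vector; with equal sigma at every AP this reduces to choosing
    the fingerprint with the smallest sum of squared RSS differences."""
    def log_lik(mean):
        return sum(-((o - m) ** 2) / (2.0 * SIGMA ** 2)
                   - math.log(SIGMA * math.sqrt(2.0 * math.pi))
                   for o, m in zip(observed, mean))
    return max(RADIO_MAP, key=lambda loc: log_lik(RADIO_MAP[loc]))
```

    In the online stage, an observed scan such as [-42, -68, -79] dBm is matched against the map; missing APs would need one of the handling strategies the paper compares (e.g. dropping the AP's term from the likelihood).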

  12. Hardware accuracy counters for application precision and quality feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
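    The bit-analysis step can be illustrated for integer operands. This is a hypothetical software sketch of the kind of classification the hardware counter logic might perform, not the patent's actual circuit; the type names and widths are illustrative.

```python
def min_int_type(value):
    """Return the narrowest signed integer type that can represent `value`,
    by counting its significant magnitude bits plus one sign bit. For
    negative values, two's complement can hold -(2**(w-1)), so the magnitude
    bits are counted on (-value - 1)."""
    mag_bits = value.bit_length() if value >= 0 else (-value - 1).bit_length()
    bits = mag_bits + 1  # +1 for the sign bit
    for name, width in (("int8", 8), ("int16", 16), ("int32", 32), ("int64", 64)):
        if bits <= width:
            return name
    raise OverflowError("value needs more than 64 bits")
```

    A hardware counter would then be incremented whenever, say, a 32-bit operand was observed to fit in 16 bits, giving the runtime a precision-reduction signal.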

  13. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only at the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy) and to provide a value for the precision by means of a confidence interval of the specific measurement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
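    For the ellipse method specifically, the idealized relation between defect shape and angle of incidence can be sketched as follows. This is the textbook geometric approximation only, not the per-material calibrations measured in the study, which quantify exactly how far real defects deviate from it.

```python
import math

def incidence_angle_deg(major_axis, minor_axis):
    """Ellipse method (idealized): a bullet striking a flat surface at angle
    theta leaves an elliptical defect whose axes satisfy
    sin(theta) = minor_axis / major_axis. Returns theta in degrees,
    measured from the surface plane? No: from grazing, i.e. 90 deg means
    perpendicular impact (circular defect)."""
    if not 0 < minor_axis <= major_axis:
        raise ValueError("expect 0 < minor_axis <= major_axis")
    return math.degrees(math.asin(minor_axis / major_axis))
```

    A 10 mm by 5 mm defect thus implies a ~30° angle of incidence; an equal-axis (circular) defect implies perpendicular impact.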

  14. Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation

    NASA Technical Reports Server (NTRS)

    Pollmeier, V. M.; Thurman, S. W.

    1992-01-01

    The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.

  15. Study on high-precision measurement of long radius of curvature

    NASA Astrophysics Data System (ADS)

    Wu, Dongcheng; Peng, Shijun; Gao, Songtao

    2016-09-01

    High-precision measurement of a long radius of curvature (ROC) is difficult because many factors affect the measurement accuracy, and for long radii some factors matter more than others. This paper first investigates which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support condition and of the adjustment error at the cat's eye and confocal positions. Finally, a convex surface with a 1055 micrometer radius of curvature was measured in a high-precision laboratory. Experimental results show that a proper steady support (three-point support) can guarantee high-precision ROC measurement, and that calibrating the gain at the cat's eye and confocal positions helps to locate these positions precisely and thus increases the measurement accuracy. With this process, high-precision long-ROC measurement is realized.

  16. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of this filtered simulation data leads to dramatic computational savings by allowing for shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  18. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  20. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using Powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is quite efficient, easy to use and recommended in the ISO 12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is therefore proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of the parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit accurate parameters adaptively, as it is insensitive to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction and leaning angle conditions. Experimental results using images of the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
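    The ESF-fitting step can be sketched as follows. This is a simplified stand-in, not the paper's method: a derivative-free compass (coordinate) search replaces the full Powell algorithm (conjugate-direction updates omitted), the Fermi model is reduced to two parameters (unit amplitude, zero baseline), and the noiseless synthetic edge data in the test are illustrative.

```python
import math

def fermi_esf(x, x0, w):
    # Fermi-function edge-spread model: 1 / (1 + exp(-(x - x0) / w)).
    return 1.0 / (1.0 + math.exp(-(x - x0) / w))

def fit_esf(xs, ys, x0=0.0, w=1.0, step=1.0, tol=1e-5):
    """Fit (x0, w) by minimizing the sum of squared residuals with a
    derivative-free compass search: try +/- step moves along each
    coordinate, keep improvements, halve the step when stuck."""
    def sse(p):
        return sum((fermi_esf(x, p[0], p[1]) - y) ** 2 for x, y in zip(xs, ys))
    p = [x0, w]
    best = sse(p)
    while step > tol:
        improved = False
        for i in (0, 1):
            for d in (step, -step):
                q = list(p)
                q[i] += d
                if q[1] <= 0:  # keep the edge width positive
                    continue
                s = sse(q)
                if s < best:
                    p, best, improved = q, s, True
        if not improved:
            step /= 2.0
    return p
```

    Like Powell's method, this search needs no derivatives and tolerates rough initial guesses, which is the property the abstract highlights; the recovered (x0, w) would then feed the usual ESF → LSF → FFT → MTF chain.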

  1. Randomized Subspace Learning for Proline Cis-Trans Isomerization Prediction.

    PubMed

    Al-Jarrah, Omar Y; Yoo, Paul D; Taha, Kamal; Muhaidat, Sami; Shami, Abdallah; Zaki, Nazar

    2015-01-01

    Proline residues are a common source of kinetic complications during folding. The X-Pro peptide bond is the only peptide bond for which the stability of the cis and trans conformations is comparable. The cis-trans isomerization (CTI) of X-Pro peptide bonds is a widely recognized rate-limiting factor, which can not only induce additional slow phases in protein folding but also modify the millisecond and sub-millisecond dynamics of the protein. An accurate computational prediction of proline CTI is of great importance for the understanding of protein folding, splicing, cell signaling, and transmembrane active transport in both humans and animals. In our earlier work, we successfully developed a biophysically motivated proline CTI predictor utilizing a novel tree-based consensus model with a powerful meta-learning technique, achieving 86.58 percent Q2 accuracy and a 0.74 Mcc, a better result than the 70-73 percent Q2 accuracies reported in the literature on the well-referenced benchmark dataset. In this paper, we describe experiments with novel randomized subspace learning and bootstrap seeding techniques as an extension to our earlier work, alongside the consensus models and entropy-based learning methods, to obtain better accuracy through a precise and robust learning scheme for proline CTI prediction.

  2. Opto-mechanical system design of test system for near-infrared and visible target

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhu, Guodong; Wang, Yuchao

    2014-12-01

    Guidance precision is a key index of guided-weapon shooting. The factors affecting guidance precision include information-processing precision, control-system accuracy, laser-irradiation accuracy and so on, of which laser-irradiation precision is an important one. Aimed at the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain telescope, a wide-range CCD camera, a tracking turntable and an industrial PC, and images visible-light and near-infrared targets simultaneously using a near-IR camera. Analysis of the design results shows that when illuminating a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.

  3. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and that its effect is robust to different voltage levels of output signals. The total integration error is limited to within ±0.09 mV s⁻¹, or 0.005% s⁻¹ of full scale, at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.

  4. New Stability-Indicating RP-HPLC Method for Determination of Diclofenac Potassium and Metaxalone from their Combined Dosage Form

    PubMed Central

    Panda, Sagar Suman; Patanaik, Debasis; Ravi Kumar, Bera V. V.

    2012-01-01

    A simple, precise and accurate isocratic RP-HPLC stability-indicating assay method has been developed to determine diclofenac potassium and metaxalone in their combined dosage forms. Isocratic separation was achieved on a Hibar-C18, Lichrosphere-100® (250 mm × 4.6 mm i.d., particle size 5 μm) column at room temperature. The mobile phase consisted of methanol:water (80:20, v/v) at a flow rate of 1.0 ml/min, the injection volume was 20 μl and UV detection was carried out at 280 nm. The drug was subjected to acid and alkali hydrolysis, oxidation, photolysis and heat as stress conditions. The method was validated for specificity, linearity, precision, accuracy, robustness and system suitability. The method was linear in the drug concentration ranges of 2.5–30 μg/ml and 20–240 μg/ml for diclofenac potassium and metaxalone, respectively. The precision (RSD) of six samples was 0.83 and 0.93% for repeatability, and the intermediate precision (RSD) among six sample preparations was 1.63 and 0.49% for diclofenac potassium and metaxalone, respectively. The mean recoveries were between 100.99–102.58% and 99.97–100.01% for diclofenac potassium and metaxalone, respectively. The proposed method can be used successfully for routine analysis of the drug in bulk and combined pharmaceutical dosage forms. PMID:22396909

  5. Simultaneous Determination of Soyasaponins and Isoflavones in Soy (Glycine max L.) Products by HPTLC-densitometry-Multiple Detection.

    PubMed

    Shawky, Eman; Sallam, Shaimaa M

    2017-11-01

    A new high-throughput method was developed for the simultaneous analysis of isoflavones and soyasaponins in soy (Glycine max L.) products by high-performance thin-layer chromatography with densitometry and multiple detection. Silica gel was used as the stationary phase and ethyl acetate:methanol:water:acetic acid (100:20:16:1, v/v/v/v) as the mobile phase. After chromatographic development, multi-wavelength scanning was carried out by: (i) UV-absorbance measurement at 265 nm for genistin, daidzin and glycitin, and (ii) Vis-absorbance measurement at 650 nm for soyasaponins I and III, after post-chromatographic derivatization with anisaldehyde/sulfuric acid reagent. Validation of the developed method was found to meet the acceptance criteria delineated by the ICH guidelines with respect to linearity, accuracy, precision, specificity and robustness. Calibrations were linear, with correlation coefficients of >0.994. Intra-day precisions (relative standard deviation, RSD%) of all substances in matrix were determined to be between 0.7 and 0.9%, while inter-day precisions (RSD%) ranged between 1.2 and 1.8%. The validated method was successfully applied for determination of the studied analytes in soy-based infant formula and soybean products. The new method compares favorably to other reported methods in being as accurate and precise while at the same time more feasible and cost-effective. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain high-precision geolocation of the images. Precise image geolocation is important during the geometric correction process, both to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, aimed at realizing the solution of the RD model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined by a high-precision orthophoto, results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs are given.

  7. Precision and accuracy of suggested maxillary and mandibular landmarks with cone-beam computed tomography for regional superimpositions: An in vitro study.

    PubMed

    Lemieux, Genevieve; Carey, Jason P; Flores-Mir, Carlos; Secanell, Marc; Hart, Adam; Lagravère, Manuel O

    2016-01-01

    Our objective was to identify and evaluate the accuracy and precision (intrarater and interrater reliabilities) of various anatomic landmarks for use in 3-dimensional maxillary and mandibular regional superimpositions. We used cone-beam computed tomography reconstructions of 10 human dried skulls to locate 10 landmarks in the maxilla and the mandible. Precision and accuracy were assessed with intrarater and interrater readings. Three examiners located these landmarks in the cone-beam computed tomography images 3 times with readings scheduled at 1-week intervals. Three-dimensional coordinates were determined (x, y, and z coordinates), and the intraclass correlation coefficient was computed to determine intrarater and interrater reliabilities, as well as the mean error difference and confidence intervals for each measurement. Bilateral mental foramina, bilateral infraorbital foramina, anterior nasal spine, incisive canal, and nasion showed the highest precision and accuracy in both intrarater and interrater reliabilities. Subspinale and bilateral lingulae had the lowest precision and accuracy in both intrarater and interrater reliabilities. When choosing the most accurate and precise landmarks for 3-dimensional cephalometric analysis or plane-derived maxillary and mandibular superimpositions, bilateral mental and infraorbital foramina, landmarks in the anterior region of the maxilla, and nasion appeared to be the best options of the analyzed landmarks. Caution is needed when using subspinale and bilateral lingulae because of their higher mean errors in location. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  8. Comparing conventional and computer-assisted surgery baseplate and screw placement in reverse shoulder arthroplasty.

    PubMed

    Venne, Gabriel; Rasquinha, Brian J; Pichora, David; Ellis, Randy E; Bicknell, Ryan

    2015-07-01

    Preoperative planning and intraoperative navigation technologies have each been shown separately to be beneficial for optimizing screw and baseplate positioning in reverse shoulder arthroplasty (RSA) but to date have not been combined. This study describes development of a system for performing computer-assisted RSA glenoid baseplate and screw placement, including preoperative planning, intraoperative navigation, and postoperative evaluation, and compares this system with a conventional approach. We used a custom-designed system allowing computed tomography (CT)-based preoperative planning, intraoperative navigation, and postoperative evaluation. Five orthopedic surgeons defined common preoperative plans on 3-dimensional CT reconstructed cadaveric shoulders. Each surgeon performed 3 computer-assisted and 3 conventional simulated procedures. The 3-dimensional CT reconstructed postoperative units were digitally matched to the preoperative model for evaluation of entry points, end points, and angulations of screws and baseplate. Values were used to find accuracy and precision of the 2 groups with respect to the defined placement. Statistical analysis was performed by t tests (α = .05). Comparison of the groups revealed no difference in accuracy or precision of screws or baseplate entry points (P > .05). Accuracy and precision were improved with use of navigation for end points and angulations of 3 screws (P < .05). Accuracy of the inferior screw showed a trend of improvement with navigation (P > .05). Navigated baseplate end point precision was improved (P < .05), with a trend toward improved accuracy (P > .05). We conclude that CT-based preoperative planning and intraoperative navigation allow improved accuracy and precision for screw placement and precision for baseplate positioning with respect to a predefined placement compared with conventional techniques in RSA. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  9. Analytical Method Development and Validation for the Simultaneous Estimation of Abacavir and Lamivudine by Reversed-phase High-performance Liquid Chromatography in Bulk and Tablet Dosage Forms.

    PubMed

    Raees Ahmad, Sufiyan Ahmad; Patil, Lalit; Mohammed Usman, Mohammed Rageeb; Imran, Mohammad; Akhtar, Rashid

    2018-01-01

    A simple, rapid, accurate, precise, and reproducible validated reversed-phase high-performance liquid chromatography (RP-HPLC) method was developed for the determination of Abacavir (ABAC) and Lamivudine (LAMI) in bulk and tablet dosage forms. The quantification was carried out using a Symmetry Premsil C18 (250 mm × 4.6 mm, 5 μm) column run in isocratic mode with a mobile phase comprising methanol:water (0.05% orthophosphoric acid, pH 3) 83:17 v/v, a detection wavelength of 245 nm, an injection volume of 20 μl, and a flow rate of 1 ml/min. In the developed method, the retention times of ABAC and LAMI were found to be 3.5 min and 7.4 min, respectively. The method was validated in terms of linearity, precision, accuracy, limits of detection, limits of quantitation, and robustness in accordance with the International Conference on Harmonization (ICH) guidelines. The assay of the proposed method was found to be 99%-101%. Recovery studies were also carried out, and the mean % recovery was found to be 99%-101%. The % relative standard deviation from reproducibility was found to be <2%. The linearity, precision, range, and robustness were within the limits specified by the ICH guidelines. Hence, the method was found to be simple, accurate, precise, economical, and reproducible, and the proposed method can be used for routine quality control analysis of ABAC and LAMI in bulk drug as well as in tablet formulations.
Abbreviations Used: HPLC: High-performance liquid chromatography, UV: Ultraviolet, ICH: International Conference on Harmonization, ABAC: Abacavir, LAMI: Lamivudine, HIV: Human immunodeficiency virus, AIDS: Acquired immunodeficiency syndrome, NRTI: Nucleoside reverse transcriptase inhibitors, ARV: Antiretroviral, RSD: Relative standard deviation, RT: Retention time, SD: Standard deviation.
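    The headline validation figures quoted above (mean % recovery and % RSD) are simple statistics. A minimal sketch with hypothetical replicate values, not data from the study:

```python
import statistics

def percent_recovery(measured, nominal):
    """Mean percent recovery of measured amounts against the nominal amount."""
    return statistics.mean(m / nominal * 100 for m in measured)

def percent_rsd(values):
    """Percent relative standard deviation (sample SD / mean * 100)."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical replicate results for a 100 ug/mL standard (illustrative only)
replicates = [99.2, 100.4, 99.8, 100.9, 99.5, 100.1]
rec = percent_recovery(replicates, 100.0)
rsd = percent_rsd(replicates)
print(f"mean recovery = {rec:.1f}%, RSD = {rsd:.2f}%")
```

    Against the acceptance limits reported above, these illustrative replicates would pass: recovery within 99%-101% and RSD below 2%.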

  10. An Approach for High-precision Stand-alone Positioning in a Dynamic Environment

    NASA Astrophysics Data System (ADS)

    Halis Saka, M.; Metin Alkan, Reha; Ozpercin, Alişir

    2015-04-01

    In this study, an algorithm is developed for precise positioning in a dynamic environment using carrier phase data from a single geodetic GNSS receiver. In this method, users start the measurement on a known point near the project area for a couple of seconds, making use of a single dual-frequency geodetic-grade receiver. The technique employs ionosphere-free carrier phase observations together with precise products. The core equation of the algorithm is

    Sm(t(i+1)) = SC(ti) + [ΦIF(t(i+1)) - ΦIF(ti)]

    where Sm(t(i+1)) is the phase-range between the satellite and the receiver, SC(ti) is the initial range computed from the known point coordinates and the satellite coordinates, and ΦIF is the ionosphere-free phase measurement (in meters). Tropospheric path delays are modelled using the standard tropospheric model. To accomplish the process, an in-house program was coded, with some functions adopted from Easy-Suite, available at http://kom.aau.dk/~borre/easy. To assess the performance of the introduced algorithm in a dynamic environment, a dataset from a kinematic test measurement in Istanbul, Turkey was used. In the test measurement, a geodetic dual-frequency GNSS receiver, an Ashtech Z-Xtreme, was set up on a known point on the shore and a couple of epochs were recorded for initialization. The receiver was then moved to a vessel, data were collected for approximately 2.5 hours, and the measurement was finalized on a known point on the shore. While the kinematic measurement on the vessel was carried out, another GNSS receiver was set up on a geodetic point with known coordinates on the shore and collected data in static mode, so that the reference trajectory of the vessel could be calculated using the differential technique. The coordinates of the vessel were calculated for each measurement epoch with the introduced method.
    To obtain more robust results, all coordinates were calculated once again in the reverse direction, i.e. from the last epoch to the first one; in this way, the estimated coordinates were also checked. The average of the forward and backward solutions was used as the vessel coordinates and then compared, epoch by epoch, with the reference trajectory from the geodetic receiver. The results indicate that the coordinates from the introduced method are consistent with the reference trajectory to an accuracy of about 1 decimeter. In contrast, the height component shows a lower accuracy of about 2 decimeters. This accuracy level meets the requirements of many applications, including precise hydrographic surveying, dredging, attitude control of ships, buoys and floating platforms, marine geodesy, navigation and oceanography.
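    The epoch-difference update above can be sketched in a few lines. This is an illustrative reimplementation with hypothetical numbers, not the authors' in-house program; in practice the initial range would come from the known point and precise satellite coordinates:

```python
def propagate_phase_ranges(initial_range, phase_if):
    """Propagate the satellite-receiver phase-range from a known starting
    range using epoch-to-epoch differences of the ionosphere-free carrier
    phase (all values in meters):
        Sm(t(i+1)) = SC(ti) + [PhiIF(t(i+1)) - PhiIF(ti)]
    """
    ranges = [initial_range]
    for i in range(1, len(phase_if)):
        ranges.append(ranges[-1] + (phase_if[i] - phase_if[i - 1]))
    return ranges

# Hypothetical initial range (m) and ionosphere-free phase series (m)
ranges = propagate_phase_ranges(22_000_000.0, [0.00, 1.25, 2.40, 3.10])
print(ranges)
```

    Note that the differences telescope, so each epoch's range equals the initial range plus the total phase change accumulated since initialization; running the same recursion backwards from the final epoch provides the independent check described above.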

  11. Accuracy and Measurement Error of the Medial Clear Space of the Ankle.

    PubMed

    Metitiri, Ogheneochuko; Ghorbanhoseini, Mohammad; Zurakowski, David; Hochman, Mary G; Nazarian, Ara; Kwon, John Y

    2017-04-01

    Measurement of the medial clear space (MCS) is commonly used to assess deltoid ligament competency and mortise stability when managing ankle fractures. Because the true anatomic width being measured has been unknown, previous studies have been unable to assess measurement accuracy. The purpose of this study was to determine MCS measurement error and accuracy and any influencing factors. Using 3 normal transtibial ankle cadaver specimens, the deltoid and syndesmotic ligaments were transected and the mortise widened and affixed at a width of 6 mm (specimen 1) and 4 mm (specimen 2). The mortise was left intact in specimen 3. Radiographs were obtained of each cadaver at varying degrees of rotation. Radiographs were randomized, and providers measured the MCS using a standardized technique. Lack of accuracy as well as lack of precision in measurement of the medial clear space compared to a known anatomic value was present for all 3 specimens tested. There were no significant differences in mean delta with regard to level of training for specimens 1 and 2; however, with specimen 3, staff physicians showed increased measurement accuracy compared with trainees. Accuracy and precision of MCS measurements are poor. Provider experience did not appear to influence accuracy and precision of measurements for the displaced mortise. This high degree of measurement error and lack of precision should be considered when deciding treatment options based on MCS measurements.

  12. Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns

    PubMed Central

    Teng, Dongdong; Chen, Dihu; Tan, Hongzhou

    2015-01-01

    The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by profile (non-frontal) faces. In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of candidate locations for the actual position of each eye center. Among these candidates, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation of the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been addressed in previous works. Through extensive experimentation, we show that the proposed method achieves a significant improvement in accuracy and robustness over state-of-the-art techniques, ranking second in terms of accuracy. In our implementation on a PC with a 2.5 GHz Xeon CPU, the eye tracking process achieves a frame rate of 38 Hz. PMID:26426929
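    The isophote curvature underlying the accuracy stage has a closed form in image derivatives, k = -(Ly^2*Lxx - 2*Lx*Ly*Lxy + Lx^2*Lyy) / (Lx^2 + Ly^2)^(3/2), whose reciprocal estimates the local isophote radius. A numpy sketch of this standard formula (an illustration, not the paper's implementation):

```python
import numpy as np

def isophote_curvature(img):
    """Isophote curvature at each pixel. 1/|k| estimates the local isophote
    radius, which eye-center methods use to vote for the center of circular
    patterns such as the iris/pupil boundary."""
    Ly, Lx = np.gradient(img.astype(float))   # axis 0 is y (rows), axis 1 is x
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    num = -(Ly**2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx**2 * Lyy)
    den = (Lx**2 + Ly**2) ** 1.5
    return np.divide(num, den, out=np.zeros_like(num), where=den > 1e-12)

# Synthetic image whose isophotes are circles around (50, 50): I = x^2 + y^2
yy, xx = np.mgrid[0:101, 0:101]
img = (xx - 50.0) ** 2 + (yy - 50.0) ** 2
k = isophote_curvature(img)
# 20 px from the center the isophote radius is 20, so |k| should be ~1/20
print(abs(k[50, 70]))
```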

  13. Variation of Static-PPP Positioning Accuracy Using GPS-Single Frequency Observations (Aswan, Egypt)

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf

    2017-06-01

    Precise Point Positioning (PPP) is a technique used to compute positions with high accuracy using only one GNSS receiver. It depends on highly accurate satellite position and clock data rather than the broadcast ephemerides. PPP precision varies with the positioning technique (static or kinematic), the observation type (single or dual frequency) and the duration of the collected observations. PPP with dual-frequency receivers offers accuracy comparable to differential GPS. PPP with single-frequency receivers has many applications, such as infrastructure monitoring, hydrography and precision agriculture, and PPP using low-cost GPS single-frequency receivers is an area of great interest for millions of users in developing countries such as Egypt. This research presents a study of the variability of static single-frequency GPS-PPP precision based on different observation durations.

  14. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    Considering the computational precision and efficiency of robust optimization for a complex mechanical assembly relationship such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to blade-tip clearance, and a set of samples of design parameters and the objective response mean and/or standard deviation is then generated using this system approximation model together with the design-of-experiments method. Finally, a new response surface approximation model is constructed from those samples and used in the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way for the robust optimization design of turbine blade-tip radial running clearance.
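    The two-stage surrogate idea (an inner statistical sampling step feeding an outer response-surface fit of the response mean and standard deviation) can be sketched as follows. The clearance model, input scatter, and weighting factor are hypothetical stand-ins for illustration, not the paper's turbine model:

```python
import numpy as np

rng = np.random.default_rng(0)

def clearance(x):
    """Hypothetical stand-in for an expensive blade-tip clearance analysis."""
    return 0.5 + 0.8 * (x - 0.3) ** 2

# Stage 1: at each design point, estimate the response mean and standard
# deviation under random scatter of the input parameter
designs = np.linspace(0.0, 1.0, 9)
means, stds = [], []
for d in designs:
    samples = clearance(d + rng.normal(0.0, 0.05, 200))
    means.append(samples.mean())
    stds.append(samples.std())

# Stage 2: fit cheap quadratic response surfaces to those statistics and
# minimize a robust objective (mean + 3 * standard deviation) on them
mean_rs = np.polyfit(designs, means, 2)
std_rs = np.polyfit(designs, stds, 2)
grid = np.linspace(0.0, 1.0, 1001)
robust_obj = np.polyval(mean_rs, grid) + 3.0 * np.polyval(std_rs, grid)
best = grid[np.argmin(robust_obj)]
print(f"robust optimum near x = {best:.2f}")
```

    The expensive model is sampled only at the nine design points; all subsequent optimization runs on the fitted surfaces, which is the source of the cost reduction claimed above.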

  15. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines and outline its precision and the accuracy limiting factors.

  16. HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.

    PubMed

    Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R

    2007-08-01

    We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C), is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar coupling (RDC) data are collected adaptively in order to offer an intelligent trade-off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determination of couplings in a shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection. These factors include the size of the protein, the experimental setup, and the coupling being measured, among others.
To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic "worst case analysis". These couplings are among the smallest currently measured, and their determination in both isotropic and anisotropic media demands the highest measurement precision. The new approach yielded excellent quantitative agreement with values determined independently by the conventional 3D quantitative J NMR method (in cases where sample stability in oriented media permitted these measurements) but with a factor of 2-5 in time savings. The statistical measure of reliability, measuring the quality of each RDC value, offers valuable adjunct information even in cases where modest time savings may be realized.

  17. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For the 2D RSA methods, accuracy was below 0.055 mm and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA and PolyWare respectively. PolyWare was less accurate compared with RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
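    The two figures of merit used in this study can be computed directly. A small sketch with hypothetical wear measurements, taking accuracy as the root mean square error against the known phantom value and precision as a 95% repeatability limit; the 1.96 x SD convention used here is an assumption, as the abstract does not spell out its formula:

```python
import math
import statistics

def rmse(measured, truth):
    """Accuracy as the root mean square error against the known value."""
    return math.sqrt(sum((m - truth) ** 2 for m in measured) / len(measured))

def repeatability_limit_95(paired_diffs):
    """95% repeatability limit as 1.96 * SD of repeated-measurement
    differences (one common convention; assumed here for illustration)."""
    return 1.96 * statistics.stdev(paired_diffs)

# Hypothetical repeated 2D wear measurements (mm) of a phantom set to 0.5 mm
meas = [0.52, 0.47, 0.55, 0.49, 0.51, 0.46]
accuracy = rmse(meas, 0.5)
precision = repeatability_limit_95([a - b for a, b in zip(meas[:-1], meas[1:])])
print(f"accuracy (RMSE) = {accuracy:.3f} mm, precision = {precision:.3f} mm")
```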

  18. Evaluation and attribution of OCO-2 XCO2 uncertainties

    NASA Astrophysics Data System (ADS)

    Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin

    2017-07-01

    Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (˜ 100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, each comprising approximately 190 observations, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm respectively for calculated precisions larger than ˜ 0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise ratio increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, as they vary less over ocean than land. The accuracy as derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.
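    The small-neighborhood test can be illustrated with synthetic soundings: within one neighborhood the true XCO2 is assumed constant, so the empirical scatter estimates the single-sounding precision and can be compared against the reported uncertainty. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# One synthetic "small neighborhood": a constant true XCO2 of 405 ppm plus
# measurement noise with the reported single-sounding sigma (hypothetical)
true_xco2, reported_sigma = 405.0, 0.35
soundings = true_xco2 + rng.normal(0.0, reported_sigma, 190)

# Empirical single-sounding precision: the scatter within the neighborhood,
# valid where natural CO2 variability is negligible by assumption
empirical_sigma = soundings.std(ddof=1)
print(f"reported {reported_sigma:.2f} ppm vs empirical {empirical_sigma:.2f} ppm")
```

    Repeating this over tens of thousands of neighborhoods, as in the study, turns the comparison of reported versus empirical sigma into a statistical test of the uncertainty model.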

  19. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    PubMed

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and to assess the in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions; further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference scans with the industrial scanner ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken with 3M Impregum Penta Soft, and the poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of the reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as the reference. Precision of IOS was evaluated through intra-system comparison. Precision of the ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS; however, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from the midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
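    The best-fit alignment step is, at heart, rigid least-squares registration of corresponding points. A sketch using the Kabsch algorithm; this is an illustrative assumption, since the commercial software's alignment method is not specified:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid best-fit (least-squares) alignment of point set P onto Q.
    Returns rotation R and translation t such that R @ p + t ~ q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical reference-body centroids before/after a known rigid motion
P = np.array([[0., 0., 0.], [10., 0., 0.], [0., 8., 0.], [0., 0., 6.]])
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
Q = P @ R_true.T + np.array([0.2, -0.1, 0.05])
R, t = kabsch_align(P, Q)
residual = np.abs((P @ R.T + t) - Q).max()
print(residual)
```

    With noise-free correspondences, as here, the recovered motion matches the known one exactly; with real scan data the residual after alignment is what a "3D compare" deviation analysis reports.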

  20. Stability-indicating LC assay for butenafine hydrochloride in creams using an experimental design for robustness evaluation and photodegradation kinetics study.

    PubMed

    Barth, Aline Bergesch; de Oliveira, Gabriela Bolfe; Malesuik, Marcelo Donadel; Paim, Clésio Soldatelli; Volpato, Nadia Maria

    2011-08-01

    A stability-indicating liquid chromatography method for the determination of the antifungal agent butenafine hydrochloride (BTF) in a cream was developed and validated, using the Plackett-Burman experimental design for robustness evaluation. The drug's photodegradation kinetics was also determined. The analytical column was operated with acetonitrile, methanol and a solution of 0.3% triethylamine adjusted to pH 4.0 (6:3:1) at a flow rate of 1 mL/min and detection at 283 nm. BTF extraction from the cream was done with n-butyl alcohol and methanol in an ultrasonic bath. The degradation conditions were: acid and basic media with 1 M HCl and 1 M NaOH, respectively, oxidation with 10% H2O2, and exposure to UV-C light. No interference in the BTF elution was verified. Linearity was assessed (r2 = 0.9999) and ANOVA showed non-significant linearity deviation (p > 0.05). Adequate results were obtained for repeatability, intra-day precision, and accuracy. Critical factors were selected to examine the method's robustness with the two-level Plackett-Burman experimental design, and no significant factors were detected (p > 0.05). The BTF photodegradation kinetics was determined for the standard and for the cream, both in methanolic solution, under UV light at 254 nm. The degradation process can be described by first-order kinetics in both cases.
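    The Plackett-Burman screening matrix used for robustness evaluation can be generated programmatically. A sketch of the standard 12-run two-level design (the study's actual factor assignments are not reproduced here):

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors, built
    from cyclic shifts of the standard generator row plus a row of -1s.
    Columns are balanced (six +1, six -1) and mutually orthogonal."""
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows, dtype=int)

D = plackett_burman_12()
print(D.shape)   # (12, 11): 12 runs, up to 11 factors
```

    Each column is assigned to one (real or dummy) robustness factor at its low/high level; orthogonality is what lets main effects be estimated independently from only 12 runs.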

  1. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining genotypes of multiple genetic markers in individuals. It plays an important role in the emerging shift from traditional one-size-fits-all medical treatments toward personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  2. Comprehensive validation scheme for in situ fiber optics dissolution method for pharmaceutical drug product testing.

    PubMed

    Mirza, Tahseen; Liu, Qian Julie; Vivilecchia, Richard; Joshi, Yatindra

    2009-03-01

    There has been a growing interest during the past decade in the use of fiber optics dissolution testing. Use of this novel technology is mainly confined to research and development laboratories. It has not yet emerged as a tool for end product release testing despite its ability to generate in situ results and efficiency improvement. One potential reason may be the lack of clear validation guidelines that can be applied for the assessment of suitability of fiber optics. This article describes a comprehensive validation scheme and development of a reliable, robust, reproducible and cost-effective dissolution test using fiber optics technology. The test was successfully applied for characterizing the dissolution behavior of a 40-mg immediate-release tablet dosage form that is under development at Novartis Pharmaceuticals, East Hanover, New Jersey. The method was validated for the following parameters: linearity, precision, accuracy, specificity, and robustness. In particular, robustness was evaluated in terms of probe sampling depth and probe orientation. The in situ fiber optic method was found to be comparable to the existing manual sampling dissolution method. Finally, the fiber optic dissolution test was successfully performed by different operators on different days, to further enhance the validity of the method. The results demonstrate that the fiber optics technology can be successfully validated for end product dissolution/release testing. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association

  3. A robust motion estimation system for minimal invasive laparoscopy

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; von Öhsen, Udo; Grigat, Rolf-Rainer

    2012-02-01

    Laparoscopy is a reliable imaging method to examine the liver. However, due to the limited field of view, a lot of experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ surfaces provides valuable additional information to the surgeon for a reliable diagnosis. Without an additional external tracking system, the structure can be recovered from feature correspondences between different frames. In laparoscopic images, blurred frames, specular reflections and inhomogeneous illumination make feature tracking a challenging task. We propose an ego-motion estimation system for minimally invasive laparoscopy that can cope with specular reflections, inhomogeneous illumination and blurred frames. To obtain robust feature correspondences, the approach combines SIFT and specular reflection segmentation with a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to compute the motion of the endoscope from multi-frame correspondences. The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope limit the motion in minimally invasive laparoscopy. These limitations are considered in our evaluation and are used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved by a robotic system and the ground truth motion is recorded. The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the proposed pose estimation system.
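    The geometry behind the five-point/MSAC step can be illustrated with numpy: the essential matrix E = [t]x R links normalized image points through the epipolar constraint x2' E x1 = 0, which MSAC uses (with a capped residual) to score inliers. The motion and points below are hypothetical, not from the paper's phantom data:

```python
import numpy as np

def essential_from_motion(R, t):
    """Essential matrix E = [t]_x R; normalized points satisfy x2^T E x1 = 0."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

rng = np.random.default_rng(2)
a = np.deg2rad(3.0)   # hypothetical small endoscope rotation about the y-axis
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.01, 0.0, 0.002])   # hypothetical translation (scene units)
E = essential_from_motion(R, t)

# Random 3D points in front of the camera, projected to normalized coordinates
X = rng.uniform([-0.2, -0.2, 0.5], [0.2, 0.2, 1.5], (50, 3))
x1 = X / X[:, 2:]                  # view 1: camera at the origin
X2 = X @ R.T + t                   # the same points in the second camera frame
x2 = X2 / X2[:, 2:]

# MSAC scores each correspondence by its (capped) epipolar residual x2^T E x1;
# for noise-free inliers the residual is essentially zero
residuals = np.abs(np.einsum('ij,jk,ik->i', x2, E, x1))
print(residuals.max())
```

    The five-point algorithm solves the inverse problem, recovering E (and hence R, t up to scale) from five such correspondences inside the MSAC sampling loop.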

  4. Extraction optimization and UHPLC method development for determination of the 20-hydroxyecdysone in Sida tuberculata leaves.

    PubMed

    da Rosa, Hemerson S; Koetz, Mariana; Santos, Marí Castro; Jandrey, Elisa Helena Farias; Folmer, Vanderlei; Henriques, Amélia Teresinha; Mendez, Andreas Sebastian Loureiro

    2018-04-01

    Sida tuberculata (ST) is a Malvaceae species widely distributed in Southern Brazil. In traditional medicine, ST has been employed as a hypoglycemic, hypocholesterolemic, anti-inflammatory and antimicrobial agent. This species is chemically characterized mainly by flavonoids, alkaloids and phytoecdysteroids. The present work aimed to optimize the extraction technique and to validate a UHPLC method for the determination of 20-hydroxyecdysone (20HE) in ST leaves. A Box-Behnken Design (BBD) was used in method optimization. The extraction methods tested were: static and dynamic maceration, ultrasound, ultra-turrax and reflux. In the Box-Behnken design, three parameters (particle size, time and plant:solvent ratio) were evaluated at three levels (-1, 0, +1). In method validation, the parameters of selectivity, specificity, linearity, limits of detection and quantification (LOD, LOQ), precision, accuracy and robustness were evaluated. The results indicate static maceration as the best technique for obtaining the 20HE peak area in ST extract. The optimal extraction from response surface methodology was achieved with a granulometry of 710 nm, 9 days of maceration and a plant:solvent ratio of 1:54 (w/v). The developed UHPLC-PDA analytical method showed full viability of performance, proving to be selective, linear, precise, accurate and robust for 20HE detection in ST leaves. The average content of 20HE was 0.56% per dry extract. Thus, the optimization of the extraction method for ST leaves increased the concentration of 20HE in the crude extract, and a reliable method was successfully developed according to validation requirements and in agreement with current legislation. Copyright © 2018 Elsevier Inc. All rights reserved.
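    A Box-Behnken design for three factors at coded levels (-1, 0, +1) consists of the four corner combinations of each factor pair with the remaining factor held at its midpoint, plus replicate center points. A sketch of the design matrix (the number of center points is an assumption for illustration):

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    """Box-Behnken design in coded units: for every pair of factors take
    the four (+/-1, +/-1) combinations with all other factors at 0, then
    append replicate center points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken(3)
print(len(design))   # 3 factor pairs x 4 corners + 3 center points = 15 runs
```

    Each coded run is then decoded to real settings (e.g. particle size, maceration time, plant:solvent ratio) and the responses are fitted with a quadratic model for the response-surface optimization described above.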

  5. Number-Density Measurements of CO2 in Real Time with an Optical Frequency Comb for High Accuracy and Precision

    NASA Astrophysics Data System (ADS)

    Scholten, Sarah K.; Perrella, Christopher; Anstie, James D.; White, Richard T.; Al-Ashwal, Waddah; Hébert, Nicolas Bourbeau; Genest, Jérôme; Luiten, Andre N.

    2018-05-01

    Real-time and accurate measurements of gas properties are highly desirable for numerous real-world applications. Here, we use an optical-frequency comb to demonstrate absolute number-density and temperature measurements of a sample gas with state-of-the-art precision and accuracy. The technique is demonstrated by measuring the number density of 12C16O2 with an accuracy of better than 1% and a precision of 0.04% in a measurement and analysis cycle of less than 1 s. This technique is transferable to numerous molecular species, thus offering an avenue for near-universal gas concentration measurements.

  6. Complexity in estimation of esomeprazole and its related impurities' stability in various stress conditions in low-dose aspirin and esomeprazole magnesium capsules.

    PubMed

    Reddy, Palavai Sripal; Hotha, Kishore Kumar; Sait, Shakil

    2013-01-01

    A complex, sensitive, and precise high-performance liquid chromatographic method for the profiling of impurities of esomeprazole in low-dose aspirin and esomeprazole capsules has been developed, validated, and used for the determination of impurities in pharmaceutical products. Method development for esomeprazole and its related impurities in the presence of aspirin has traditionally been difficult, because aspirin is sensitive to basic conditions and esomeprazole to acidic conditions. When aspirin is exposed to basic, humid, and extreme temperature conditions, it produces salicylic acid and acetic acid moieties. These two byproducts create an acidic environment for the esomeprazole. Due to the volatility and migration of the acetic acid and salicylic acid produced from aspirin in the capsule dosage form, esomeprazole's purity, stability, and quantification are affected. The objective of the present research work was to develop a gradient reversed-phase liquid chromatographic method to separate all the degradation products and process-related impurities from the main peak. The impurities were well separated on an RP8 column (X-Terra RP8, 150 mm × 4.6 mm, 3.5 μm) by a gradient program using a glycine buffer (0.08 M, pH adjusted to 9.0 with 50% NaOH), acetonitrile, and methanol at a flow rate of 1.0 mL min(-1), with a detection wavelength of 305 nm and a column temperature of 30°C. The developed method was found to be specific, precise, linear, accurate, rugged, and robust. LOQ values for all of the known impurities were below reporting thresholds. The drug was subjected to stress conditions of hydrolysis, oxidation, photolysis, and thermal degradation in the presence of aspirin. The developed RP-HPLC method was validated according to the current ICH guidelines for specificity, linearity, accuracy, precision, limit of detection, limit of quantification, ruggedness, and robustness.

  7. Development of a Stability-Indicating Stereoselective Method for Quantification of the Enantiomer in the Drug Substance and Pharmaceutical Dosage Form of Rosuvastatin Calcium by an Enhanced Approach

    PubMed Central

    Rajendra Reddy, Gangireddy; Ravindra Reddy, Papammagari; Siva Jyothi, Polisetty

    2015-01-01

A novel, simple, precise, and stability-indicating stereoselective method was developed and validated for the accurate quantification of the enantiomer in the drug substance and pharmaceutical dosage forms of Rosuvastatin Calcium. The method is capable of quantifying the enantiomer in the presence of other related substances. The chromatographic separation was achieved with an immobilized cellulose stationary phase (Chiralpak IB, 250 mm × 4.6 mm, 5.0 μm particle size) column with a mobile phase containing a mixture of n-hexane, dichloromethane, 2-propanol, and trifluoroacetic acid in the ratio 82:10:8:0.2 (v/v/v/v). The eluted compounds were monitored at 243 nm and the run time was 18 min. Multivariate analysis and statistical tools were used to develop this highly robust method in a short span of time. The stability-indicating power of the method was established by subjecting Rosuvastatin Calcium to the stress conditions (forced degradation) of acid, base, oxidative, thermal, humidity, and photolytic degradation. Major degradation products were identified and found to be well resolved from the enantiomer peak, proving the stability-indicating power of the method. The developed method was validated as per International Conference on Harmonization (ICH) guidelines with respect to specificity, limit of detection and limit of quantification, precision, linearity, accuracy, and robustness. The method exhibited consistent, high-quality recoveries (100 ± 10%) with high precision for the enantiomer. Linear regression analysis revealed an excellent correlation between the peak responses and concentrations (r2 value of 0.9977) for the enantiomer. The method is sensitive enough to quantify the enantiomer above 0.04% and detect the enantiomer above 0.015% in Rosuvastatin Calcium. The stability tests were also performed on the drug substances as per ICH norms. PMID:26839815

  8. Accuracy and precision of occlusal contacts of stereolithographic casts mounted by digital interocclusal registrations.

    PubMed

    Krahenbuhl, Jason T; Cho, Seok-Hwan; Irelan, Jon; Bansal, Naveen K

    2016-08-01

    Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After receiving a total of 60 SLA casts, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For precision analysis, the coefficient of variation (CoV) was used. The data was analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). The accuracy analysis showed a statistically significant difference between the SP and the SLA cast of each dentoform (P<.05). For the AC in all dentoforms, a significant increase was found in the areas of actual contact of SLA casts compared with the contacts present in the SP (P<.05). Conversely, for the NC in all dentoforms, a significant decrease was found in the occlusal contact areas of the SLA casts compared with the contacts in the SP (P<.05). The precision analysis demonstrated the different CoV values between AC (5.8 to 8.8%) and NC (21.4 to 44.6%) of digitally mounted SLA casts, indicating that the overall precision of the SLA cast was low. 
For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA cast groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated casts' inability to reproduce uniform occlusal contacts.
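The coefficient of variation used above for the precision analysis is simply the relative spread of repeated measurements. A minimal sketch, with hypothetical contact-area values (not the study's data):

```python
import statistics

def cov_percent(values):
    # Coefficient of variation: sample standard deviation as a percent of the mean
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical actual-contact (AC) areas (mm^2) from 15 repeated digital mountings
ac_areas = [10.2, 10.9, 9.8, 10.5, 11.0, 10.1, 10.4, 10.7, 9.9, 10.3,
            10.6, 10.0, 10.8, 10.2, 10.5]
ac_cov = cov_percent(ac_areas)  # a few percent: relatively repeatable
```

Under this definition, the study's AC CoV of 5.8 to 8.8% indicates moderately repeatable contact areas, whereas the NC CoV of 21.4 to 44.6% indicates poor repeatability.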

  9. IEEE-1588(Trademark) Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems

    DTIC Science & Technology

    2002-12-01

34th Annual Precise Time and Time Interval (PTTI) Meeting. IEEE-1588™ STANDARD FOR A PRECISION CLOCK SYNCHRONIZATION PROTOCOL FOR... synchronization. 2. Cyclic-systems. In cyclic-systems, timing is periodic and is usually defined by the characteristics of a cyclic network or bus... incommensurate, timing schedules for each device are easily implemented. In addition, synchronization accuracy depends on the accuracy of the common
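The core of the IEEE-1588 (PTP) protocol referenced above is a two-way timestamp exchange from which a slave clock solves for its offset and the path delay, assuming a symmetric network path. A minimal sketch of the standard computation (the numeric timestamps are illustrative only):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    # IEEE-1588 two-way exchange, assuming a symmetric network path:
    #   t1: master sends Sync          t2: slave receives Sync
    #   t3: slave sends Delay_Req      t4: master receives Delay_Req
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example in microseconds: true offset +100, true one-way delay 40
off_us, delay_us = ptp_offset_and_delay(t1=0.0, t2=140.0, t3=200.0, t4=140.0)
```

Any asymmetry between the two path directions appears directly as an error in the recovered offset, which is one reason synchronization accuracy depends on the network characteristics.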

  10. Presentation Accuracy of the Web Revisited: Animation Methods in the HTML5 Era

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego

    2014-01-01

Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of the Web for these purposes. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about the presentation accuracy and precision of the Web and extends the study of accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision of the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies. PMID:25302791

  11. Analysis and improvements of Adaptive Particle Refinement (APR) through CPU time, accuracy and robustness considerations

    NASA Astrophysics Data System (ADS)

    Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.

    2018-02-01

While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the newly proposed formalism achieve an accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.

  12. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    NASA Astrophysics Data System (ADS)

    Perera, Dimuthu

Diffusion weighted (DW) imaging is a non-invasive MR technique that provides information about the tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide the optimal parameter combination, depending on the percentage accuracy and precision, for prostate peripheral-region cancer applications; and to suggest parameter choices for any type of tissue, while providing the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated using a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC samples were collected for each parameter setting to determine the mean and the standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated using the difference between the known and calculated ADC. The precision was calculated using the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision improved with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation in total scan time. The optimal NEX combination for tumor and normal tissue in the prostate peripheral region was 1:9. Also, the minimum percentage accuracy and percentage precision were obtained when the low b-value is 0 and the high b-value is 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination of SNR = 20 and b-value pair 0, 800 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Also, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the parameter combination of SNR = 20 and b-value pair 0, 1400 s/mm2 with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
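The two-point, mono-exponential ADC estimate and the percentage accuracy/precision metrics described above can be sketched as follows. This is a minimal illustration, not the study's simulation code; the noise scaling sigma = S0/SNR and the single-acquisition setup are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rician(signal, sigma, n, rng):
    # Rician-distributed magnitude image: |signal + complex Gaussian noise|
    return np.hypot(signal + rng.normal(0.0, sigma, n), rng.normal(0.0, sigma, n))

def adc_accuracy_precision(true_adc, b_low, b_high, snr, n=40000, s0=1.0):
    sigma = s0 / snr  # noise level tied to the b = 0 image SNR (an assumption)
    s_low = rician(s0 * np.exp(-b_low * true_adc), sigma, n, rng)
    s_high = rician(s0 * np.exp(-b_high * true_adc), sigma, n, rng)
    adc = np.log(s_low / s_high) / (b_high - b_low)  # two-point ADC estimate
    pct_accuracy = 100.0 * abs(adc.mean() - true_adc) / true_adc  # bias
    pct_precision = 100.0 * adc.std() / true_adc                  # spread
    return pct_accuracy, pct_precision

# Tumor-like tissue (ADC = 0.00102 mm2/s) with the b-value pair 0/1400 s/mm2
acc_pct, prec_pct = adc_accuracy_precision(1.02e-3, 0.0, 1400.0, snr=20)
```

With a single acquisition per b-value the spread is larger than the 6-8% the study reports for NEX = 1:9; averaging nine acquisitions at the high b-value would tighten it correspondingly.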

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weston, Brian T.

This dissertation focuses on the development of a fully-implicit, high-order compressible flow solver with phase change. The work is motivated by laser-induced phase change applications, particularly by the need to develop large-scale multi-physics simulations of the selective laser melting (SLM) process in metal additive manufacturing (3D printing). Simulations of the SLM process require precise tracking of multi-material solid-liquid-gas interfaces, due to laser-induced melting/solidification and evaporation/condensation of metal powder in an ambient gas. These rapid density variations and phase change processes tightly couple the governing equations, requiring a fully compressible framework to robustly capture the rapid density variations of the ambient gas and the melting/evaporation of the metal powder. For non-isothermal phase change, the velocity is gradually suppressed through the mushy region by a variable viscosity and a Darcy source term model. The governing equations are discretized up to 4th-order accuracy with our reconstructed Discontinuous Galerkin spatial discretization scheme and up to 5th-order accuracy with L-stable fully implicit time discretization schemes (BDF2 and ESDIRK3-5). The resulting set of non-linear equations is solved using a robust Newton-Krylov method, with the Jacobian-free version of the GMRES solver for linear iterations. Due to the stiffness associated with the acoustic waves and thermal and viscous/material strength effects, preconditioning the GMRES solver is essential. A robust and scalable approximate block factorization preconditioner was developed, which utilizes the velocity-pressure (vP) and velocity-temperature (vT) Schur complement systems. This multigrid block reduction preconditioning technique converges for high CFL/Fourier numbers and exhibits excellent parallel and algorithmic scalability on classic benchmark problems in fluid dynamics (lid-driven cavity flow and natural convection heat transfer) as well as for laser-induced phase change problems in 2D and 3D.
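The Jacobian-free GMRES iteration mentioned above relies on the fact that a Krylov solver never needs the Jacobian matrix itself, only its action on a vector, which can be approximated by a finite difference of the residual. A minimal sketch of that building block (independent of the dissertation's solver, using a toy residual F):

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    # Jacobian-free Newton-Krylov building block: GMRES only needs J(u) @ v,
    # approximated by a one-sided finite difference of the residual F.
    return (F(u + eps * v) - F(u)) / eps

# Tiny check against an analytic Jacobian for F(u) = [u0^2 + u1, sin(u1)]
def F(u):
    return np.array([u[0] ** 2 + u[1], np.sin(u[1])])

u = np.array([1.0, 0.5])
v = np.array([1.0, 0.0])
Jv = jfnk_matvec(F, u, v)  # analytic J(u) @ v = [2*u0, 0] = [2.0, 0.0]
```

Since the Jacobian is never formed, preconditioning (such as the block factorization described above) must likewise be built from physics-based approximations rather than from the assembled matrix.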

  14. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

Accurate measurements of knee and hip motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (differences of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall, digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
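The accuracy and precision definitions used above (difference from the reference standard, and proportion of measurements within a tolerance) can be computed directly. A minimal sketch with hypothetical readings, not the study's data:

```python
import numpy as np

def accuracy_and_precision(measured, reference, tol=5.0):
    # Accuracy: mean absolute difference from the motion-capture reference.
    # Precision (as defined above): proportion of measurements within tol degrees.
    diffs = np.abs(np.asarray(measured, dtype=float) - reference)
    return diffs.mean(), (diffs <= tol).mean()

# Hypothetical knee-flexion readings (degrees) against a 130-degree reference
acc_deg, prec_frac = accuracy_and_precision([128, 131, 127, 135, 130, 126], 130.0)
```

Note that under these definitions a technique can be accurate on average yet imprecise (wide scatter), or precise yet biased, which is why the study evaluates the two properties separately.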

  15. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume

    2014-05-01

The Fast Precise Point Positioning (Fast-PPP) technique provides quick high-accuracy navigation with ambiguity-fixing capability, thanks to an accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, were focussed on user-domain performance. Recently, a mature evolution of the technique consists of a dual service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely-accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. The Fast-PPP user-domain positioning benefits from this precise ionospheric modelling. The convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes, not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement of ionospheric modelling is directly translated into the accuracy of single-frequency mass-market users, achieving 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence level (sigma), such high-accuracy navigation is protected with safety integrity bounds.
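To see why TECU-level ionospheric accuracy matters for positioning, the standard first-order conversion from electron content to range delay can be sketched as follows (general GNSS background, not taken from this abstract):

```python
def iono_delay_m(tec_tecu, freq_hz):
    # First-order ionospheric range delay: 40.3 * TEC / f^2, with TEC in
    # electrons/m^2 (1 TECU = 1e16 electrons/m^2)
    return 40.3 * (tec_tecu * 1e16) / freq_hz ** 2

GPS_L1 = 1575.42e6                          # GPS L1 carrier frequency (Hz)
delay_per_tecu = iono_delay_m(1.0, GPS_L1)  # roughly 0.16 m per TECU
```

Since 1 TECU corresponds to roughly 16 cm of range error at L1, improving ionospheric corrections from the 2-8 TECU of standard GIMs to about 1 TECU is what enables the decimetre-level single-frequency positioning quoted above.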

  16. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  17. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems.

    PubMed

    Lyu, Weiwei; Cheng, Xianghong

    2017-11-28

Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper, a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and has an effect on the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.

  18. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and whose mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
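The shallowing bias that motivates this work is easy to reproduce numerically: draw directions from a Fisher distribution about a steep mean direction and take the arithmetic mean of the sample inclinations. A minimal sketch (an illustration of the bias only, not the authors' maximum likelihood estimator; the inversion-method sampler and all numeric choices are standard textbook material):

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_inclinations(mean_inc_deg, kappa, n, rng):
    # Draw unit vectors from a Fisher distribution about a mean direction with
    # the given inclination (declination taken as 0), then return each sample's
    # inclination in degrees.
    u = rng.random(n)
    cos_t = 1.0 + np.log(1.0 - u * (1.0 - np.exp(-2.0 * kappa))) / kappa
    sin_t = np.sqrt(np.maximum(0.0, 1.0 - cos_t ** 2))
    phi = rng.random(n) * 2.0 * np.pi
    x, z = sin_t * np.cos(phi), cos_t          # samples about the +z axis
    inc = np.radians(mean_inc_deg)
    zr = -x * np.cos(inc) + z * np.sin(inc)    # rotate +z onto the mean direction
    return np.degrees(np.arcsin(np.clip(zr, -1.0, 1.0)))

incs = fisher_inclinations(70.0, kappa=10.0, n=20000, rng=rng)
arith_mean = incs.mean()  # noticeably shallower than the true 70 degrees
```

For a steep mean (70 degrees) and moderate dispersion (kappa = 10), the arithmetic mean of the inclinations falls several degrees shallow of the true value, which is exactly the bias the maximum likelihood treatment is designed to remove.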

  19. Facile quantitation of free thiols in a recombinant monoclonal antibody by reversed-phase high performance liquid chromatography with hydrophobicity-tailored thiol derivatization.

    PubMed

    Welch, Leslie; Dong, Xiao; Hewitt, Daniel; Irwin, Michelle; McCarty, Luke; Tsai, Christina; Baginski, Tomasz

    2018-06-02

Free thiol content, and its consistency, is one of the product quality attributes of interest during technical development of manufactured recombinant monoclonal antibodies (mAbs). We describe a new, mid/high-throughput reversed-phase high performance liquid chromatography (RP-HPLC) method, coupled with derivatization of free thiols, for the determination of total free thiol content in an E. coli-expressed therapeutic monovalent monoclonal antibody, mAb1. Initial selection of the derivatization reagent used a hydrophobicity-tailored approach. Maleimide-based thiol-reactive reagents with varying degrees of hydrophobicity were assessed to identify and select one that provided adequate chromatographic resolution and robust quantitation of free thiol-containing mAb1 forms. The method relies on covalent derivatization of free thiols in denatured mAb1 with an N-tert-butylmaleimide (NtBM) label, followed by RP-HPLC separation with UV-based quantitation of native (disulfide-containing) and labeled (free thiol-containing) forms. The method demonstrated good specificity, precision, linearity, accuracy and robustness. Accuracy of the method, for samples with a wide range of free thiol content, was demonstrated using admixtures as well as by comparison to an orthogonal LC-MS peptide mapping method with isotope tagging of free thiols. The developed method has a facile workflow which fits well into both R&D characterization and quality control (QC) testing environments. The hydrophobicity-tailored approach to the selection of the free thiol derivatization reagent is easily applied to the rapid development of free thiol quantitation methods for full-length recombinant antibodies.

  20. Atmosphere Mitigation in Precise Point Positioning Ambiguity Resolution for Earthquake Early Warning in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Geng, J.; Bock, Y.; Reuveni, Y.

    2014-12-01

    Earthquake early warning (EEW) is a time-critical system and typically relies on seismic instruments in the area around the source to detect P waves (or S waves) and rapidly issue alerts. Thanks to the rapid development of real-time Global Navigation Satellite Systems (GNSS), a good number of sensors have been deployed in seismic zones, such as the western U.S. where over 600 GPS stations are collecting 1-Hz high-rate data along the Cascadia subduction zone, San Francisco Bay area, San Andreas fault, etc. GNSS sensors complement the seismic sensors by recording the static offsets while seismic data provide highly-precise higher frequency motions. An optimal combination of GNSS and accelerometer data (seismogeodesy) has advantages compared to GNSS-only or seismic-only methods and provides seismic velocity and displacement waveforms that are precise enough to detect P wave arrivals, in particular in the near source region. Robust real-time GNSS and seismogeodetic analysis is challenging because it requires a period of initialization and continuous phase ambiguity resolution. One of the limiting factors is unmodeled atmospheric effects, both of tropospheric and ionospheric origin. One mitigation approach is to introduce atmospheric corrections into precise point positioning with ambiguity resolution (PPP-AR) of clients/stations within the monitored regions. NOAA generates hourly predictions of zenith troposphere delays at an accuracy of a few centimeters, and 15-minute slant ionospheric delays of a few TECU (Total Electron Content Unit) accuracy from both geodetic and meteorological data collected at hundreds of stations across the U.S. The Scripps Orbit and Permanent Array Center (SOPAC) is experimenting with a regional ionosphere grid using a few hundred stations in southern California, and the International GNSS Service (IGS) routinely estimates a Global Ionosphere Map using over 100 GNSS stations. 
With these troposphere and ionosphere data as additional observations, we can shorten the initialization period and improve the ambiguity resolution efficiency of PPP-AR. We demonstrate this with data collected by a cluster of Real-Time Earthquake Analysis for Disaster mItigation (READI) network stations in southern California operated by UNAVCO/PBO and SOPAC.

  1. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System.

    PubMed

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-03-18

Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. What is more, due to the complementary navigation information obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be enhanced effectively. Thus, the SINS/CNS system is widely used in the marine navigation field. However, the CNS is easily interfered with by its surroundings, which leads to discontinuous output. Thus, the uncertainty problem caused by the lost measurements will reduce the system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. By taking the uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results and analysis show that the attitude errors can be reduced effectively by utilizing the proposed robust filter when measurements are missing or discontinuous. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter.

  2. Basic investigation of dual-energy x-ray absorptiometry for bone densitometry using computed radiography

    NASA Astrophysics Data System (ADS)

    Shimura, Kazuo; Nakajima, Nobuyoshi; Tanaka, Hiroshi; Ishida, Masamitsu; Kato, Hisatoyo

    1993-09-01

Dual-energy X-ray absorptiometry (DXA) is one of the bone densitometry techniques used to diagnose osteoporosis, and it has gradually become popular due to its high degree of precision. However, DXA involves a time-consuming examination because of its pencil-beam scan, and the equipment is expensive. In this study, we examined a new bone densitometry technique (CR-DXA) utilizing an X-ray imaging system and Computed Radiography (CR) used for medical X-ray image diagnosis. A high level of measurement precision and accuracy could be achieved by X-ray tube voltage/filter optimization and various nonuniformity corrections based on simulation and experiment. A phantom study using a bone mineral block showed a precision of 0.83% c.v. (coefficient of variation) and an accuracy of 0.01 g/cm2, suggesting that a degree of measurement precision and accuracy practically equivalent to that of the DXA approach is achieved. CR-DXA is thus considered to enable simple, quick and precise bone mineral density measurement.
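The dual-energy principle behind DXA is a two-unknowns, two-measurements problem: attenuation at two beam energies is inverted for the areal densities of bone mineral and soft tissue. A minimal sketch under illustrative assumptions (the attenuation coefficients below are hypothetical placeholders, not calibrated values):

```python
import numpy as np

# Hypothetical mass attenuation coefficients (cm^2/g): rows are the low- and
# high-energy beams, columns are [bone mineral, soft tissue]
MU = np.array([[0.60, 0.25],
               [0.30, 0.18]])

def areal_densities(atten_low, atten_high):
    # Solve the 2x2 dual-energy system MU @ [d_bone, d_soft] = ln(I0/I) values
    return np.linalg.solve(MU, np.array([atten_low, atten_high]))

# Forward-simulate 1.0 g/cm^2 of bone over 20.0 g/cm^2 of soft tissue, then recover
atten_lo, atten_hi = MU @ np.array([1.0, 20.0])
d_bone, d_soft = areal_densities(atten_lo, atten_hi)
```

In practice the inversion is ill-conditioned when the two energies are too close, which is why tube voltage/filter optimization and nonuniformity corrections matter for the precision figures quoted above.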

  3. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    PubMed Central

    Yang, Bingwei; Xie, Xinhao; Li, Duan

    2018-01-01

Time of flight (TOF) based light detection and ranging (LiDAR) is a technology that calculates distance from the time of flight between start and stop signals. In a lab-built LiDAR, two ranging systems for measuring the flight time between start/stop signals are a time-to-digital converter (TDC), which counts time between trigger signals, and an analog-to-digital converter (ADC), which processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the ranging accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return-based ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on a novel statistical method, the maximal information coefficient (MIC), WR-PK precision has a highly linear relationship with the received pulse width standard deviation. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
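The peak-detection (WR-PK-style) ranging idea can be sketched on synthetic waveforms: locate the start and stop pulse peaks in the sampled record, difference their times, and convert to range. A minimal illustration with assumed pulse shapes and sampling rate, not the authors' implementation:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def gaussian_pulse(t, t0, width):
    # Idealized received echo: a Gaussian pulse centred at time t0
    return np.exp(-0.5 * ((t - t0) / width) ** 2)

def wr_pk_range(t, start_wave, stop_wave):
    # Peak detection on the sampled waveforms: time of flight is the spacing
    # between the start and stop pulse peaks; divide by 2 for two-way travel.
    tof = t[np.argmax(stop_wave)] - t[np.argmax(start_wave)]
    return C * tof / 2.0

fs = 5e9                              # hypothetical 5 GS/s waveform sampling
t = np.arange(0, 2e-6, 1.0 / fs)
start = gaussian_pulse(t, 50e-9, 3e-9)
stop = gaussian_pulse(t, 50e-9 + 2.0 * 150.0 / C, 3e-9)  # target at 150 m
est = wr_pk_range(t, start, stop)
```

With noise-free pulses the residual error is set by the sample spacing; in practice, pulse-width variation broadens and shifts the detected peak, which is why the study links WR-PK precision to the received pulse width standard deviation.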

  4. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  5. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
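For illustration, the quoted "approximately 1% or better precision (relative standard deviation)" corresponds to the RSD of repeated ratio measurements; a minimal sketch with invented shot values (not data from the patent):

```python
import statistics

# Hypothetical repeated single-shot isotope-ratio readings (e.g. 235U/238U);
# the values are made up to illustrate the RSD precision metric.
shots = [0.00725, 0.00731, 0.00719, 0.00728, 0.00722]

mean = statistics.mean(shots)
rsd_percent = 100 * statistics.stdev(shots) / mean  # relative std. deviation
print(f"mean ratio = {mean:.5f}, RSD = {rsd_percent:.2f}%")
```

With these numbers the RSD comes out below 1%, the precision level the record describes.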

  6. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment, by adding constraints on 3-D points and poses in the optimization process, on the other. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  7. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-09-11

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, GNSS cannot provide effective PNT services where signals are physically blocked, such as in natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the error of clock coasting increases over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, which is also more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
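The "weighted linear optimal estimation" of two time sources can be illustrated with inverse-variance weighting, a standard form of such estimators; the readings and variances below are invented, and the paper's exact estimator may differ:

```python
# Toy fusion of two clock estimates: CSAC "clock coasting" (error grows
# with coasting time) and GNSS time (stable but noisy). Inverse-variance
# weighting gives the minimum-variance linear combination.

def fuse(t_csac, var_csac, t_gnss, var_gnss):
    """Weighted linear optimal estimate of two independent time readings."""
    w_csac = 1.0 / var_csac
    w_gnss = 1.0 / var_gnss
    t_hat = (w_csac * t_csac + w_gnss * t_gnss) / (w_csac + w_gnss)
    var_hat = 1.0 / (w_csac + w_gnss)   # always below either input variance
    return t_hat, var_hat

# Invented numbers: CSAC has drifted more than the GNSS noise floor
t_hat, var_hat = fuse(t_csac=100.000003, var_csac=4e-12,
                      t_gnss=100.000001, var_gnss=1e-12)
print(t_hat, var_hat)
```

The fused variance is smaller than either input's, which is the sense in which the combination is "optimal" among linear estimators.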

  8. Deep Coupled Integration of CSAC and GNSS for Robust PNT

    PubMed Central

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, GNSS cannot provide effective PNT services where signals are physically blocked, such as in natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. “Clock coasting” of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the error of clock coasting increases over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, which is also more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542

  9. Passive versus active hazard detection and avoidance systems

    NASA Astrophysics Data System (ADS)

    Neveu, D.; Mercier, G.; Hamel, J.-F.; Simard Bilodeau, V.; Woicke, S.; Alger, M.; Beaudette, D.

    2015-06-01

    Upcoming planetary exploration missions will require advanced guidance, navigation and control technologies to reach landing sites with high precision and safety. Various technologies are currently in development to meet that goal. Some rely on passive sensors and benefit from the low mass and power of such solutions, while others rely on active sensors and benefit from improved robustness and accuracy. This paper presents two different hazard detection and avoidance (HDA) system design approaches. The first architecture relies only on a camera as the passive HDA sensor, while the second relies, in addition, on a Lidar as the active HDA sensor. Both options share an innovative hazard-map fusion algorithm aimed at identifying the safest landing locations. This paper presents the simulation tools and reports the closed-loop software simulation results obtained with each design option, along with the Monte Carlo simulation campaign used to assess the robustness of each option. The design options are compared against each other in terms of performance criteria such as percentage of success and mean distance to the nearest hazard. The applicability of each design option to planetary exploration missions is also discussed.
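The Monte Carlo performance criteria named in this record (percentage of success, mean distance to nearest hazard) can be sketched with a toy simulation; the landing-dispersion model below is entirely invented:

```python
import random

# Toy Monte Carlo campaign: draw many simulated landings, score each by
# its distance to the nearest hazard, and summarize the campaign with the
# two criteria from the abstract. The dispersion model is made up.
random.seed(7)

def landing_clearance_m():
    # Hypothetical touchdown-to-nearest-hazard distance, in meters
    return abs(random.gauss(12.0, 6.0))

runs = [landing_clearance_m() for _ in range(10_000)]
safe = [d for d in runs if d > 3.0]   # assume >3 m clearance counts as safe

print(f"success = {100 * len(safe) / len(runs):.1f}%")
print(f"mean distance to nearest hazard = {sum(runs) / len(runs):.1f} m")
```

Real campaigns would draw full initial-state and sensor-noise dispersions and run the closed-loop GNC simulation per sample; only the scoring step is shown here.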

  10. Optimization and qualification of an Fc Array assay for assessments of antibodies against HIV-1/SIV.

    PubMed

    Brown, Eric P; Weiner, Joshua A; Lin, Shu; Natarajan, Harini; Normandin, Erica; Barouch, Dan H; Alter, Galit; Sarzotti-Kelsoe, Marcella; Ackerman, Margaret E

    2018-04-01

    The Fc Array is a multiplexed assay that assesses the Fc domain characteristics of antigen-specific antibodies with the potential to evaluate up to 500 antigen specificities simultaneously. Antigen-specific antibodies are captured on antigen-conjugated beads and their functional capacity is probed via an array of Fc-binding proteins including antibody subclassing reagents, Fcγ receptors, complement proteins, and lectins. Here we present the results of the optimization and formal qualification of the Fc Array, performed in compliance with Good Clinical Laboratory Practice (GCLP) guidelines. Assay conditions were optimized for performance and reproducibility, and the final version of the assay was then evaluated for specificity, accuracy, precision, limits of detection and quantitation, linearity, range and robustness.

  11. Analytical validation of a psychiatric pharmacogenomic test.

    PubMed

    Jablonski, Michael R; King, Nina; Wang, Yongbao; Winner, Joel G; Watterson, Lucas R; Gunselman, Sandra; Dechairo, Bryan M

    2018-05-01

    The aim of this study was to validate the analytical performance of a combinatorial pharmacogenomics test designed to aid in appropriate medication selection for neuropsychiatric conditions. Genomic DNA was isolated from buccal swabs. Twelve genes (65 variants/alleles) associated with psychotropic medication metabolism, side effects, and mechanisms of action were evaluated by bead array, MALDI-TOF mass spectrometry, and/or capillary electrophoresis methods (GeneSight Psychotropic, Assurex Health, Inc.). The combinatorial pharmacogenomics test has a dynamic range of 2.5-20 ng/μl of input genomic DNA, with comparable performance for all assays included in the test. Both the precision and accuracy of the test were >99.9%, with individual gene components between 99.4 and 100%. This study demonstrates that the combinatorial pharmacogenomics test is robust and reproducible, making it suitable for clinical use.

  12. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions or limited conditions. In view of this, and given the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or by skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, reduce the bias, and increase the precision of alpha estimates with non-normal data. PMID:22363306
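For reference, coefficient alpha itself is straightforward to compute from a respondents-by-items score table; a minimal sketch with made-up scores (not data from the study):

```python
from statistics import variance  # sample variance, n-1 denominator

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = len(items[0])                                  # number of items
    item_vars = [variance(col) for col in zip(*items)] # per-item variance
    total_var = variance([sum(row) for row in items])  # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 4-respondent x 3-item score table
scores = [
    [2, 3, 3],
    [4, 4, 5],
    [3, 3, 4],
    [5, 5, 5],
]
print(round(cronbach_alpha(scores), 3))
```

The simulation issue the record describes is that this sample statistic becomes biased or imprecise when the underlying true/error score distributions depart from normality.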

  13. A Robust Static Headspace GC-FID Method to Detect and Quantify Formaldehyde Impurity in Pharmaceutical Excipients

    PubMed Central

    Al-Khayat, Mohammad Ammar; Karabet, Francois; Al-Mardini, Mohammad Amer

    2018-01-01

    Formaldehyde is a highly reactive impurity that can be found in many pharmaceutical excipients. Trace levels of this impurity may affect drug product stability, safety, efficacy, and performance. A static headspace gas chromatographic method was developed and validated to determine formaldehyde in pharmaceutical excipients after an effective derivatization procedure using acidified ethanol. Diethoxymethane, the derivative of formaldehyde, was then directly analyzed by GC-FID. Despite its simplicity, the developed method is characterized by its specificity, accuracy, and precision. The limits of detection and quantification of formaldehyde in the samples were 2.44 and 8.12 µg/g, respectively. The method uses the simple and economical GC-FID technique instead of MS detection, and it was successfully used to analyze formaldehyde in commonly used pharmaceutical excipients. PMID:29686930
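Detection and quantification limits like those quoted above are commonly derived, per ICH Q2 conventions, as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of a calibration line and S its slope; a sketch with invented calibration data (not the paper's):

```python
# ICH-style LOD/LOQ estimation from a least-squares calibration line.
# Concentrations and responses below are made up for illustration.

def linreg(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def residual_sd(x, y, slope, intercept):
    # Standard deviation of regression residuals, n-2 degrees of freedom
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    return (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5

conc = [5, 10, 20, 40, 80]        # µg/g formaldehyde (invented)
area = [51, 99, 203, 398, 801]    # detector response (invented)

s, b = linreg(conc, area)
sigma = residual_sd(conc, area, s, b)
lod, loq = 3.3 * sigma / s, 10 * sigma / s
print(f"LOD = {lod:.2f} µg/g, LOQ = {loq:.2f} µg/g")
```

Other LOD/LOQ estimators (signal-to-noise, blank standard deviation) exist; the calibration-based form is just one option the guideline allows.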

  14. Quantitative analysis of factor P (Properdin) in monkey serum using immunoaffinity capturing in combination with LC-MS/MS.

    PubMed

    Gao, Xinliu; Lin, Hui; Krantz, Carsten; Garnier, Arlette; Flarakos, Jimmy; Tse, Francis L S; Li, Wenkui

    2016-01-01

    Factor P (Properdin), an endogenous glycoprotein, plays a key role in innate immune defense, and its quantification is important for understanding the pharmacodynamics (PD) of drug candidates. In the present work, an immunoaffinity-capture LC-MS/MS method has been developed and validated for the first time for the quantification of factor P in monkey serum, with a dynamic range of 125 to 25,000 ng/ml, using calibration standards and QCs prepared in factor P-depleted monkey serum. The intra- and inter-run precision was ≤7.2% (CV) and the accuracy was within ±16.8% (%bias) across all QC levels evaluated. Results of other evaluations (e.g., stability) all met the acceptance criteria. The validated method was robust and was implemented in support of a preclinical PK/PD study.
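The precision (%CV) and accuracy (%bias) acceptance metrics quoted in this record are computed from QC replicates; a minimal sketch with invented replicate values:

```python
import statistics

# Run precision and accuracy from quality-control (QC) replicates:
# %CV measures scatter of the replicates, %bias their offset from nominal.
def cv_and_bias(measured, nominal):
    cv = 100 * statistics.stdev(measured) / statistics.mean(measured)
    bias = 100 * (statistics.mean(measured) - nominal) / nominal
    return cv, bias

qc_low = [131, 127, 138, 125, 133]   # ng/ml replicates, nominal 125 (invented)
cv, bias = cv_and_bias(qc_low, 125.0)
print(f"CV = {cv:.1f}%, bias = {bias:+.1f}%")
```

Against the abstract's criteria, this invented run would pass: CV is under 7.2% and the bias is within ±16.8%.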

  15. Sensitivity analysis of gene ranking methods in phenotype prediction.

    PubMed

    deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan L; Sonis, Stephen T

    2016-12-01

    It has become clear that noise generated during the assay and analytical processes has the ability to disrupt accurate interpretation of genomic studies. Not only does such noise impact the scientific validity and costs of studies, but when assessed in the context of clinically translatable indications such as phenotype prediction, it can lead to inaccurate conclusions that could ultimately impact patients. We applied a sequence of ranking methods to damp noise associated with microarray outputs, and then tested the utility of the approach in three disease indications using publicly available datasets. This study was performed in three phases. We first theoretically analyzed the effect of noise in phenotype prediction problems, showing that it can be expressed as a modeling error that partially falsifies the pathways. Secondly, via synthetic modeling, we performed a sensitivity analysis of the main gene ranking methods under different types of noise. Finally, we studied the predictive accuracy of the gene lists provided by these ranking methods in synthetic data and in three different datasets related to cancer, rare and neurodegenerative diseases, to better understand the translational aspects of our findings. In the case of synthetic modeling, we showed that Fisher's Ratio (FR) was the most robust gene ranking method in terms of precision for all types of noise at different levels. Significance Analysis of Microarrays (SAM) provided slightly lower performance, and the rest of the methods (fold change, entropy and maximum percentile distance) were much less precise and accurate. The predictive accuracy of the smallest set of high-discriminatory probes was similar for all the methods in the case of Gaussian and Log-Gaussian noise; in the case of class assignment noise, the predictive accuracy of SAM and FR was higher. Finally, for real datasets (Chronic Lymphocytic Leukemia, Inclusion Body Myositis and Amyotrophic Lateral Sclerosis) we found that FR and SAM provided the highest predictive accuracies with the smallest number of genes. Biological pathways were found with an expanded list of genes whose discriminatory power had been established via FR. We have shown that noise in expression data and class assignment partially falsifies the sets of discriminatory probes in phenotype prediction problems. FR and SAM better exploit the principle of parsimony and are able to find subsets with fewer highly discriminatory genes. Predictive accuracy and precision are two different metrics for selecting the important genes, since in the presence of noise the most predictive genes do not completely coincide with those that are related to the phenotype. Based on the synthetic results, FR and SAM are recommended to unravel the biological pathways involved in disease development.

  16. Stir bar sorptive extraction of diclofenac from liquid formulations: a proof of concept study.

    PubMed

    Kole, Prashant Laxman; Millership, Jeff; McElnay, James C

    2011-03-25

    A new stir bar sorptive extraction (SBSE) technique coupled with an HPLC-UV method for quantification of diclofenac in pharmaceutical formulations has been developed and validated as a proof of concept study. Commercially available polydimethylsiloxane stir bars (Twister™) were used for method development, and the SBSE extraction (pH, phase ratio, stirring speed, temperature, ionic strength and time) and liquid desorption (solvents, desorption method, stirring time, etc.) procedures were optimised. The method was validated as per ICH guidelines and was successfully applied to the estimation of diclofenac in three liquid formulations, viz. Voltarol(®) Ophtha single-dose eye drops, Voltarol(®) Ophtha multidose eye drops and Voltarol(®) ampoules. The developed method was found to be linear (r=0.9999) over the 100-2000 ng/ml concentration range with acceptable accuracy and precision (tested over three QC concentrations). The SBSE extraction recovery of diclofenac was found to be 70%, and the LOD and LOQ of the validated method were found to be 16.06 and 48.68 ng/ml, respectively. Furthermore, a forced degradation study of a diclofenac formulation, leading to the formation of the structurally similar cyclic impurity (indolinone), was carried out. The developed extraction method showed results comparable to those of the reference method, i.e. the method was capable of selectively extracting the indolinone and diclofenac from the liquid matrix. Data on inter- and intra-stir-bar accuracy and precision further confirmed the robustness of the method, supporting the multiple re-use of the stir bars.

  17. Determination of valproic acid in human plasma using dispersive liquid-liquid microextraction followed by gas chromatography-flame ionization detection

    PubMed Central

    Fazeli-Bakhtiyari, Rana; Panahi-Azar, Vahid; Sorouraddin, Mohammad Hossein; Jouyban, Abolghasem

    2015-01-01

    Objective(s): Dispersive liquid-liquid microextraction coupled with gas chromatography (GC)-flame ionization detection was developed for the determination of valproic acid (VPA) in human plasma. Materials and Methods: Using a syringe, a mixture of a suitable extraction solvent (40 µl chloroform) and disperser (1 ml acetone) was quickly added to 10 ml of diluted plasma sample containing VPA (pH, 1.0; concentration of NaCl, 4% (w/v)), resulting in a cloudy solution. After centrifugation (6000 rpm for 6 min), an aliquot (1 µl) of the sedimented organic phase was removed using a 1-µl GC microsyringe and injected into the GC system for analysis. A one-variable-at-a-time optimization method was used to study the various parameters affecting the extraction efficiency of the target analyte. The developed method was then fully validated for accuracy, precision, recovery, stability, and robustness. Results: Under the optimum extraction conditions, a good linear range was obtained for the calibration graph, with a correlation coefficient higher than 0.998. The limit of detection and lower limit of quantitation were 3.2 and 6 μg/ml, respectively. The relative standard deviations of intra- and inter-day analyses of the examined compound were less than 11.5%. The relative recoveries were in the range of 97 to 107.5%. Finally, the validated method was successfully applied to the analysis of VPA in patient samples. Conclusion: The presented method has acceptable levels of precision, accuracy and relative recovery and could be used for therapeutic drug monitoring of VPA in human plasma. PMID:26730332

  18. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions.

    PubMed

    Belal, Tarek S; El-Kafrawy, Dina S; Mahrous, Mohamed S; Abdel-Khalek, Magdi M; Abo-Gharam, Amira H

    2016-02-15

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  19. Validated spectrophotometric methods for determination of sodium valproate based on charge transfer complexation reactions

    NASA Astrophysics Data System (ADS)

    Belal, Tarek S.; El-Kafrawy, Dina S.; Mahrous, Mohamed S.; Abdel-Khalek, Magdi M.; Abo-Gharam, Amira H.

    2016-02-01

    This work presents the development, validation and application of four simple and direct spectrophotometric methods for determination of sodium valproate (VP) through charge transfer complexation reactions. The first method is based on the reaction of the drug with p-chloranilic acid (p-CA) in acetone to give a purple colored product with maximum absorbance at 524 nm. The second method depends on the reaction of VP with dichlone (DC) in dimethylformamide forming a reddish orange product measured at 490 nm. The third method is based upon the interaction of VP and picric acid (PA) in chloroform resulting in the formation of a yellow complex measured at 415 nm. The fourth method involves the formation of a yellow complex peaking at 361 nm upon the reaction of the drug with iodine in chloroform. Experimental conditions affecting the color development were studied and optimized. Stoichiometry of the reactions was determined. The proposed spectrophotometric procedures were effectively validated with respect to linearity, ranges, precision, accuracy, specificity, robustness, detection and quantification limits. Calibration curves of the formed color products with p-CA, DC, PA and iodine showed good linear relationships over the concentration ranges 24-144, 40-200, 2-20 and 1-8 μg/mL respectively. The proposed methods were successfully applied to the assay of sodium valproate in tablets and oral solution dosage forms with good accuracy and precision. Assay results were statistically compared to a reference pharmacopoeial HPLC method where no significant differences were observed between the proposed methods and reference method.

  20. Evaluation of a third party enzymatic ammonia method for use on the Roche Cobas 6000 (c501) automated platform.

    PubMed

    Seiden-Long, Isolde; Schnabl, Kareena; Skoropadyk, Wendy; Lennon, Nola; McKeage, Arlayne

    2014-08-01

    We describe the adaptation of the Randox Enzymatic Manual UV Ammonia method for use on the Roche Cobas 6000 (c501) automated analyzer platform. The Randox ammonia reagent was evaluated for precision, linearity, accuracy and interference from hemolysis, icterus and lipemia on the Roche c501 analyzer. Comparison studies were conducted for the Randox reagent between the Roche c501, Siemens Vista, Ortho Vitros 250, and Beckman DxC methods. The Randox reagent demonstrates acceptable within-run precision (L1 = 65 μmol/L, CV 3.4%; L2 = 168 μmol/L, CV 1.9%), between-run precision (L1 = 29 μmol/L, CV 7.3%; L2 = 102 μmol/L, CV 3.0%), analytical measurement range (7-940 μmol/L), and accuracy. The interference profile of the Randox method (hemolysis index up to 600, icteric index up to 60, lipemic index up to 100) is superior to that of the Roche method (hemolysis index up to 200, icteric index up to 10, lipemic index up to 50). Comparison was very good between the Randox reagent and two other wet chemistry platforms. The Randox Enzymatic Manual UV Ammonia reagent is an available alternative to the Roche Cobas c501 reagent. The method is more robust to endogenous interferences and less prone to instrument error flags, thus allowing the majority of clinical specimens to be reported without additional sample handling at our institution.

  1. Robust inverse-consistent affine CT-MR registration in MRI-assisted and MRI-alone prostate radiation therapy.

    PubMed

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter B; Fripp, Jurgen; Dowling, Jason A

    2015-07-01

    CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning.

  2. Simultaneous determination of kolliphor HS15 and miglyol 812 in microemulsion formulation by ultra-high performance liquid chromatography coupled with nano quantity analyte detector.

    PubMed

    Zhang, Honggen; Wang, Zhenyu; Liu, Oscar

    2016-02-01

    A novel method for simultaneous determination of kolliphor HS15 and miglyol 812 in microemulsion formulation was developed using ultra-high performance liquid chromatography coupled with a nano quantity analyte detector (UHPLC-NQAD). All components of kolliphor HS15 and miglyol 812 were well separated on an Acquity BEH C18 column. Mobile phase A was 0.1% trifluoroacetic acid (TFA) in water and mobile phase B was acetonitrile. The gradient elution started at 60% organic solvent and was slowly increased to 100% within 8 min; the flow rate was 0.7 mL/min. Good linearity (r > 0.95) was obtained in the range of 27.6-1381.1 μg/mL for polyoxyl 15 hydroxystearate in kolliphor HS15, 0.8-202.0 μg/mL for caprylic acid triglyceride and 2.7-221.9 μg/mL for capric acid triglyceride in miglyol 812. The relative standard deviations (RSD) ranged from 0.6% to 1.7% for intra-day precision and from 0.4% to 2.7% for inter-day precision. The overall recoveries (accuracy) were 99.7%-101.4% for polyoxyl 15 hydroxystearate in kolliphor HS15, 96.7%-99.6% for caprylic acid triglyceride, and 94.1%-103.3% for capric acid triglyceride in miglyol 812. Quantification limits (QL) were determined as 27.6 μg/mL for polyoxyl 15 hydroxystearate in kolliphor HS15, 0.8 μg/mL for caprylic acid triglyceride, and 2.7 μg/mL for capric acid triglyceride in miglyol 812. No interferences were observed in the retention time ranges of kolliphor HS15 and miglyol 812. The method was validated in terms of specificity, linearity, precision, accuracy, QL, and robustness. The proposed method has been applied to microemulsion formulation analyses with good recoveries (82.2%-103.4%).

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kearney, Sean Patrick

    A hybrid fs/ps pure-rotational coherent anti-Stokes Raman scattering (CARS) scheme is systematically evaluated over a wide range of flame conditions in the product gases of two canonical flat-flame burners. Near-transform-limited, broadband femtosecond pump and Stokes pulses impulsively prepare a rotational Raman coherence, which is later probed using a high-energy, frequency-narrow picosecond beam generated by the second-harmonic bandwidth compression scheme that has recently been demonstrated for rotational CARS generation in H2/air flat flames. The measured spectra are free of collision effects and nonresonant background and can be obtained on a single-shot basis at 1 kHz. The technique is evaluated for temperature/oxygen measurements in near-adiabatic H2/air flames stabilized on the Hencken burner for equivalence ratios of φ = 0.20–1.20. Thermometry is demonstrated in hydrocarbon/air products for φ = 0.75–3.14 in premixed C2H4/air flat flames on the McKenna burner. Reliable spectral fitting is demonstrated for both shot-averaged and single-laser-shot data using a simple phenomenological model. Measurement accuracy is benchmarked by comparison to adiabatic-equilibrium calculations for the H2/air flames, and by comparison with nanosecond CARS measurements for the C2H4/air flames. Quantitative accuracy comparable to nanosecond rotational CARS measurements is observed, while the observed precision in both the temperature and oxygen data is extraordinarily high, exceeding nanosecond CARS and on par with the best published thermometric precision by femtosecond vibrational CARS in flames and rotational femtosecond CARS at low temperature. Threshold levels of signal-to-noise ratio needed to achieve 1–2% precision in temperature and O2/N2 ratio are identified. 
    Our results show that pure-rotational fs/ps CARS is a robust and quantitative tool when applied across a wide range of flame conditions, spanning lean H2/air combustion to fuel-rich sooting hydrocarbon flames.

  4. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles

    PubMed Central

    Meng, Xiaoli

    2017-01-01

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify that the proposed algorithms are capable of providing precise and robust vehicle localization. PMID:28926996
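    The multi-constraint fault-detection idea can be illustrated with a toy innovation gate: compare each GNSS fix against a dead-reckoning prediction and reject fixes that jump too far. This is a hypothetical sketch, not the paper's algorithm; the constant-velocity prediction and the threshold are invented for illustration.

```python
# Illustrative sketch (not the paper's algorithm): a simple innovation gate
# that flags GNSS position jumps against a dead-reckoning prediction.
import math

def gate_gnss(prev_pos, velocity, dt, gnss_fix, threshold_m=5.0):
    """Return (accepted_position, jump_flag). Positions are (x, y) in metres."""
    # Dead-reckoning prediction from IMU/DMI-style velocity integration.
    pred = (prev_pos[0] + velocity[0] * dt, prev_pos[1] + velocity[1] * dt)
    innovation = math.hypot(gnss_fix[0] - pred[0], gnss_fix[1] - pred[1])
    if innovation > threshold_m:
        return pred, True       # GNSS jump: fall back on the prediction
    return gnss_fix, False      # consistent fix: accept GNSS

pos, jumped = gate_gnss((0.0, 0.0), (1.0, 0.0), 1.0, (30.0, 0.0))
print(pos, jumped)  # the 30 m jump is rejected in favour of the prediction
```

A full system would replace the hard threshold with a statistical (e.g. chi-square) test on the innovation and fuse the accepted fix in a filter.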

  5. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles.

    PubMed

    Meng, Xiaoli; Wang, Heng; Liu, Bingbing

    2017-09-18

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify that the proposed algorithms are capable of providing precise and robust vehicle localization.

  6. 40 CFR 98.154 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... equipment and methods (e.g., gas chromatography) with an accuracy and precision of 5 percent or better at... equipment and methods (e.g., gas chromatography) with an accuracy and precision of 5 percent or better at... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING HCFC-22 Production and HFC-23 Destruction § 98.154 Monitoring...

  7. Making and Measuring a Model of a Salt Marsh

    ERIC Educational Resources Information Center

    Fogleman, Tara; Curran, Mary Carla

    2007-01-01

    Students are often confused by the difference between the terms "accuracy" and "precision." In the following activities, students explore the definitions of accuracy and precision while learning about salt marsh ecology and the methods used by scientists to assess salt marsh health. The activities also address the concept that the ocean supports a…

  8. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
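    The statistical comparison described above rests on the Hellinger distance between output distributions. A minimal sketch of that metric follows; the histograms are invented, not data from the paper.

```python
# Compare a reference and a reduced-precision run by the Hellinger distance
# between output histograms (illustrative data, not the paper's).
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Histograms of a model variable from a reference and a reduced-precision run.
ref     = np.array([5, 20, 50, 20, 5])
reduced = np.array([6, 19, 49, 21, 5])
print(hellinger(ref, ref))      # 0.0: identical statistics
print(hellinger(ref, reduced))  # small value: statistically similar behaviour
```

A small distance indicates the reduced-precision run is statistically indistinguishable from the reference, even though individual trajectories diverge.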

  9. Development, optimization and validation of a rapid colorimetric microplate bioassay for neomycin sulfate in pharmaceutical drug products.

    PubMed

    Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Pinto, Terezinha de Jesus Andreoli; Lourenço, Felipe Rebello

    2014-08-01

    Microbiological assays have been used to evaluate antimicrobial activity since the discovery of the first antibiotics. Despite their limitations, microbiological assays are widely employed to determine antibiotic potency of pharmaceutical dosage forms, since they provide a measure of biological activity. The aim of this work is to develop, optimize and validate a rapid colorimetric microplate bioassay for the potency of neomycin in pharmaceutical drug products. Factorial and response surface methodologies were used in the development and optimization of the choice of microorganism, culture medium composition, amount of inoculum, triphenyltetrazolium chloride (TTC) concentration and neomycin concentration. The optimized bioassay method was validated by the assessment of linearity (range 3.0 to 5.0μg/mL, r=0.998 and 0.994 for standard and sample curves, respectively), precision (relative standard deviation (RSD) of 2.8% and 4.0% for repeatability and intermediate precision, respectively), accuracy (mean recovery=100.2%) and robustness. Statistical analysis showed equivalency between the agar diffusion microbiological assay and the rapid colorimetric microplate bioassay. In addition, the microplate bioassay had advantages concerning the sensitivity of response, time of incubation, and amount of culture medium and solutions required. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    PubMed Central

    Macropol, Kathy; Can, Tolga; Singh, Ambuj K

    2009-01-01

    Background We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters. PMID:19740439
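    The primitive underlying RRW-style clustering is a random walk with restart, whose stationary distribution concentrates probability mass on the seed node's local cluster. A small illustrative sketch on a toy graph (not the paper's yeast network):

```python
# Random walk with restart (RWR) on a toy weighted graph: the stationary
# visit probabilities highlight the seed's local cluster.
import numpy as np

def rwr(adj, seed, restart=0.3, tol=1e-10):
    """Stationary visit probabilities of a walk that restarts at `seed`."""
    W = adj / adj.sum(axis=0, keepdims=True)   # column-normalise edge weights
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0
    p = e.copy()
    while True:
        p_next = (1 - restart) * (W @ p) + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Two triangles joined by a single bridge edge: nodes 0-2 vs nodes 3-5.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
p = rwr(A, seed=0)
print(p)  # probability mass concentrates on the seed's triangle {0, 1, 2}
```

RRW repeats such walks from many seeds and merges the resulting high-probability node sets into (possibly overlapping) clusters.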

  11. A validated HPLC determination of the flavone aglycone diosmetin in human plasma.

    PubMed

    Kanaze, Feras Imad; Bounartzi, Melpomeni I; Niopas, Ioannis

    2004-12-01

    Diosmetin, 3',5,7-trihydroxy-4'-methoxy flavone, is the aglycone of the flavonoid glycoside diosmin that occurs naturally in foods of plant origin. Diosmin exhibits antioxidant and anti-inflammatory activities, improves venous tone, and is used for the treatment of chronic venous insufficiency. Diosmin is hydrolyzed by enzymes of the intestinal microflora before absorption of its aglycone diosmetin. A specific, sensitive, precise, accurate and robust HPLC assay for the determination of diosmetin in human plasma was developed and validated. Diosmetin and the internal standard 7-ethoxycoumarin were isolated from plasma by liquid-liquid extraction and separated on a C8 reversed-phase column with methanol-water-acetic acid (55:43:2, v/v/v) as the mobile phase at 43 degrees C. Peaks were monitored at 344 nm. The method was linear in the 10-300 ng/mL concentration range (r > 0.999). Recovery for diosmetin and the internal standard was greater than 89.7 and 86.8%, respectively. Intra-day and inter-day precision for diosmetin ranged from 1.6 to 4.6 and from 2.2 to 5.3%, respectively, and accuracy was better than 97.9%. Copyright 2004 John Wiley & Sons, Ltd.

  12. Analysis of Error Sources in STEP Astrometry

    NASA Astrophysics Data System (ADS)

    Liu, S. Y.; Liu, J. C.; Zhu, Z.

    2017-11-01

    The space telescope Search for Terrestrial Exo-Planets (STEP) employs sub-pixel technology that ensures an astrometric accuracy on the focal plane of the order of 1 μas. This level of astrometric precision is promising for detecting Earth-like planets beyond the solar system. In this paper, we analyze the influence of some key factors, including errors in the stellar proper motions, parallax, the optical center of the system, and the velocities and positions of the satellite, on the detection of exo-planets. We propose a relative angular distance method to evaluate the non-linear terms in stellar distance caused by possibly existing exo-planets. This method avoids the direct influence of measurement errors in the positions and proper motions of the reference stars. Supposing that there are eight reference stars and a star with a planetary system in the same field of view, we simulate five years of observational data and use the least-squares method to obtain the parameters of the planet's orbit. Our results show that the method is robust for detecting terrestrial planets at the 1 μas precision of STEP.

  13. Development and validation of a HPTLC method for simultaneous estimation of lornoxicam and thiocolchicoside in combined dosage form

    PubMed Central

    Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A.; Raut, Rahul P.; Choudhari, Vishnu P.; Kuchekar, Bhanudas S.

    2011-01-01

    Aim: To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. Materials and Methods: The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. Results: The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60–360 ng/band for LOR and 30–180 ng/band for THIO with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7–101.2 %. Conclusion: The proposed method was optimized and validated as per the ICH guidelines. PMID:23781452

  14. Development and application of a robust speciation method for determination of six arsenic compounds present in human urine.

    PubMed Central

    Milstein, Lisa S; Essader, Amal; Pellizzari, Edo D; Fernando, Reshan A; Raymer, James H; Levine, Keith E; Akinbo, Olujide

    2003-01-01

    Six arsenic species [arsenate, arsenite, arsenocholine, arsenobetaine, monomethyl arsonic acid, and dimethyl arsinic acid] present in human urine were determined using ion-exchange chromatography combined with inductively coupled plasma mass spectrometry (IC-ICP-MS). Baseline separation was achieved for all six species as well as for the internal standard (potassium hexahydroxy antimonate V) in a single chromatographic run of less than 30 min, using an ammonium carbonate buffer gradient (between 10 and 50 mM) at ambient temperature, in conjunction with cation- and anion-exchange columns in series. The performance of the method was evaluated with respect to linearity, precision, accuracy, and detection limits. This method was applied to determine the concentration of these six arsenic species in human urine samples (n = 251) collected from a population-based exposure assessment survey. Method precision was demonstrated by the analysis of duplicate samples that were prepared over a 2-year analysis period. Total arsenic was also determined for the urine samples using flow injection analysis coupled to ICP-MS. The summed concentration of the arsenic species was compared with the measured arsenic total to demonstrate mass balance. PMID:12611657

  15. Development and Validation of Different Ultraviolet-Spectrophotometric Methods for the Estimation of Besifloxacin in Different Simulated Body Fluids.

    PubMed

    Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K

    2015-01-01

    In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows an absorbance maximum (λmax) at 289 nm in distilled water, simulated tears and phosphate-buffered saline. The developed methods were linear in the range of 3-30 μg/ml of drug, with correlation coefficients (r(2)) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate-buffered saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/ml, respectively. The limit of quantification was found to be 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonisation guidelines with respect to specificity, linearity, range, accuracy, precision and robustness. The validated methods were found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
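    LOD and LOQ figures like those reported here are, under ICH Q2-style validation, commonly estimated as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A hedged sketch with invented calibration data:

```python
# ICH-style LOD/LOQ estimates from a linear calibration curve:
# LOD = 3.3*sigma/S, LOQ = 10*sigma/S (data below are invented).
import numpy as np

conc = np.array([3, 6, 12, 18, 24, 30], dtype=float)         # ug/mL
absorbance = np.array([0.10, 0.21, 0.40, 0.62, 0.80, 1.01])  # measured response

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual SD (n - 2 degrees of freedom)

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
```

The same regression statistics also yield the correlation coefficient used to judge linearity over the working range.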

  16. Relationships Between the Performance of Time/Frequency Standards and Navigation/Communication Systems

    NASA Technical Reports Server (NTRS)

    Hellwig, H.; Stein, S. R.; Walls, F. L.; Kahan, A.

    1978-01-01

    The relationship between system performance and clock or oscillator performance is discussed. Tradeoffs discussed include: short-term stability versus bandwidth requirements; frequency accuracy versus signal acquisition time; flicker of frequency and drift versus resynchronization time; frequency precision versus communications traffic volume; spectral purity versus bit error rate; and frequency standard stability versus frequency selection and adjustability. The benefits and tradeoffs of using precise frequency and time signals at various levels of precision and accuracy are emphasized.

  17. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are suitable as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be accounted for in the treatment planning volume margins.

  18. Feasibility of Ultrasound-Based Computational Fluid Dynamics as a Mitral Valve Regurgitation Quantification Technique: Comparison with 2-D and 3-D Proximal Isovelocity Surface Area-Based Methods.

    PubMed

    Jamil, Muhammad; Ahmad, Omar; Poh, Kian Keong; Yap, Choon Hwai

    2017-07-01

    Current Doppler echocardiography quantification of mitral regurgitation (MR) severity has shortcomings. Proximal isovelocity surface area (PISA)-based methods, for example, are unable to account for the fact that ultrasound Doppler can measure only one velocity component: toward or away from the transducer. In the present study, we used ultrasound-based computational fluid dynamics (Ub-CFD) to quantify mitral regurgitation and study its advantages and disadvantages compared with 2-D and 3-D PISA methods. For Ub-CFD, patient-specific mitral valve geometry and velocity data were obtained from clinical ultrasound followed by 3-D CFD simulations at an assumed flow rate. We then obtained the average ratio of the ultrasound Doppler velocities to CFD velocities in the flow convergence region, and scaled CFD flow rate with this ratio as the final measured flow rate. We evaluated Ub-CFD, 2-D PISA and 3-D PISA with an in vitro flow loop, which featured regurgitation flow through (i) a simplified flat plate with round orifice and (ii) a 3-D printed realistic mitral valve and regurgitation orifice. The Ub-CFD and 3-D PISA methods had higher precision than the 2-D PISA method. Ub-CFD had consistent accuracy under all conditions tested, whereas 2-D PISA had the lowest overall accuracy. In vitro investigations indicated that the accuracy of 2-D and 3-D PISA depended significantly on the choice of aliasing velocity. Evaluation of these techniques was also performed for two clinical cases, and the dependency of PISA on aliasing velocity was similarly observed. Ub-CFD was robustly accurate and precise and has promise for future translation to clinical practice. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
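    For reference, the hemispheric 2-D PISA arithmetic that the study compares against is the textbook relation Q = 2πr²·Va with EROA = Q/Vmax; the numbers below are illustrative, not from the study.

```python
# Standard hemispheric 2-D PISA arithmetic: flow rate Q = 2*pi*r^2*Va,
# effective regurgitant orifice area EROA = Q / Vmax (illustrative numbers).
import math

r = 0.9          # PISA radius at the aliasing shell, cm
va = 40.0        # aliasing velocity, cm/s
vmax = 500.0     # peak regurgitant jet velocity, cm/s

q = 2.0 * math.pi * r**2 * va   # instantaneous flow rate, mL/s (cm^3/s)
eroa = q / vmax                 # effective regurgitant orifice area, cm^2
print(f"Q = {q:.0f} mL/s, EROA = {eroa:.2f} cm^2")
```

The strong dependence of r (and hence Q, which scales as r²) on the chosen aliasing velocity is exactly the sensitivity the in vitro comparison above highlights.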

  19. Application of Genetic Algorithm (GA) Assisted Partial Least Square (PLS) Analysis on Trilinear and Non-trilinear Fluorescence Data Sets to Quantify the Fluorophores in Multifluorophoric Mixtures: Improving Quantification Accuracy of Fluorimetric Estimations of Dilute Aqueous Mixtures.

    PubMed

    Kumar, Keshav

    2018-03-01

    Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. The two techniques are conceptually different and provide certain advantages over each other. Manual analysis of such large volumes of highly correlated EEMF and TSFS data to develop a calibration model is difficult. Partial least squares (PLS) analysis can handle large EEMF and TSFS data sets by finding the factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, applying PLS analysis to the entire data set often does not provide a robust calibration model, and a suitable pre-processing step is required. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets for improving the precision and accuracy of the calibration model. The GA essentially combines the advantages of stochastic methods with those of deterministic approaches, and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). The present work shows that the GA can significantly improve the accuracy and precision of the PLS calibration models developed for both EEMF and TSFS data sets, and hence should be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
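    The GA-before-regression idea can be sketched as binary-mask variable selection. In this illustrative toy, synthetic data replace real spectra and ordinary least squares stands in for PLS so the sketch stays self-contained; a small GA evolves masks over the variables and should recover the informative columns.

```python
# GA variable selection sketch: evolve binary masks over spectral variables;
# fitness rewards a good least-squares fit and penalises extra variables.
# (Synthetic data; OLS stands in for PLS purely for self-containment.)
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_vars = 60, 30
X = rng.normal(size=(n_samples, n_vars))
coef_true = np.zeros(n_vars)
coef_true[[2, 7, 19]] = [1.5, -2.0, 1.0]            # informative variables
y = X @ coef_true + 0.05 * rng.normal(size=n_samples)

def fitness(mask):
    """-SSE of a least-squares fit on the selected columns, minus a
    small per-variable penalty that discourages spurious selections."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.sum((y - Xs @ coef) ** 2) - 0.5 * mask.sum()

pop = rng.random((40, n_vars)) < 0.5                # population of binary masks
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[scores.argmax()].copy()
    # tournament selection: the better of two random masks becomes a parent
    idx = [max(rng.choice(len(pop), 2, replace=False), key=lambda i: scores[i])
           for _ in range(len(pop))]
    parents = pop[idx]
    children = parents.copy()
    for k in range(0, len(pop) - 1, 2):             # one-point crossover
        c = rng.integers(1, n_vars)
        children[k, c:], children[k + 1, c:] = parents[k + 1, c:], parents[k, c:]
    children ^= rng.random(children.shape) < 0.02   # bit-flip mutation
    children[0] = elite                             # elitism
    pop = children

best = pop[np.array([fitness(m) for m in pop]).argmax()]
print(sorted(np.flatnonzero(best).tolist()))        # informative columns 2, 7, 19 should appear
```

In the paper's setting the fitness would instead be a cross-validated PLS error on the masked EEMF or TSFS variables, but the selection/crossover/mutation loop is the same.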

  20. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

    The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied for surveys and tracking moving objects in a GIS environment. The PPP data is collected using a software application we developed previously, which allows different sets of attribute data to be attached to the measurements and their accuracy. The results from the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data: measurements obtained by total stations, and by real-time kinematic and static GNSS.

  1. Precision time distribution within a deep space communications complex

    NASA Technical Reports Server (NTRS)

    Curtright, J. B.

    1972-01-01

    The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly, and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two-station relative) improved by an order of magnitude. The validation of the PTDS mechanization is now considered complete. Other facilities with site dispersion and synchronization accuracy requirements similar to Goldstone's may find the PTDS mechanization useful in solving their problem. At present, the two-station relative synchronization accuracy at Goldstone is better than one microsecond.

  2. Conversion of radius of curvature to power (and vice versa)

    NASA Astrophysics Data System (ADS)

    Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.

    2015-09-01

    Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is form accuracy. In practice, form deviation from the ideal surface consists of low-frequency errors, with the form error most often amounting to no more than a few undulations across a surface. These errors are measured in a variety of ways, including interferometry and tactile methods such as profilometry, the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculation of the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different manufacturing cases that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. The results include all equations necessary for conversion, giving optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios by surface type, accuracy required, and metrology method employed.
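    As a worked illustration of the radius-to-power relationship the paper formalizes, one common convention expresses a radius deviation as the sag change it produces over the test aperture, counted in interferometric fringes (one fringe corresponding to λ/2 of sag in a reflection test). The numbers below are illustrative, not from the paper.

```python
# Convert a radius-of-curvature deviation to power in fringes via the sag
# change over the test aperture (illustrative numbers; one fringe = lambda/2
# of sag in a reflection test).
import math

def sag(radius_mm, semi_diameter_mm):
    """Sagitta of a spherical surface over the given semi-diameter."""
    return radius_mm - math.sqrt(radius_mm**2 - semi_diameter_mm**2)

R_nom  = 100.0      # nominal radius of curvature, mm
dR     = 0.05       # measured radius deviation, mm
h      = 12.5       # test semi-diameter (25 mm aperture), mm
lam_mm = 632.8e-6   # HeNe test wavelength, mm

delta_sag = abs(sag(R_nom, h) - sag(R_nom + dR, h))
fringes = delta_sag / (lam_mm / 2.0)
print(f"delta sag = {delta_sag * 1e6:.1f} nm -> power = {fringes:.2f} fringes")
```

The conversion is aperture-dependent: the same ΔR produces roughly four times the power when the test diameter doubles, which is why the preferred metric depends on the measurement setup.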

  3. Optimetrics for Precise Navigation

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Heckler, Gregory; Gramling, Cheryl

    2017-01-01

    Optimetrics for Precise Navigation will be implemented on existing optical communication links, with ranging and Doppler measurements conducted over the communication data frames and clock. The measurement accuracy is two orders of magnitude better than TDRSS. The approach offers further advantages. The high optical carrier frequency provides (1) immunity from the ionospheric and interplanetary plasma noise floor, which limits the performance of RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on a CubeSat or a deep-space smallsat. High optical pointing precision provides spacecraft orientation, and minimal additional hardware is needed to implement precise optimetrics over the optical comm link. Finally, continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.

  4. A Novel Robust H∞ Filter Based on Krein Space Theory in the SINS/CNS Attitude Reference System

    PubMed Central

    Yu, Fei; Lv, Chongyang; Dong, Qianhui

    2016-01-01

    Owing to their numerous merits, such as compactness, autonomy and independence, the strapdown inertial navigation system (SINS) and the celestial navigation system (CNS) can be used in marine applications. Moreover, because complementary navigation information is obtained from two different kinds of sensors, the accuracy of the SINS/CNS integrated navigation system can be effectively enhanced, and the SINS/CNS system is therefore widely used in the marine navigation field. However, the CNS is easily disturbed by its surroundings, which can make its output discontinuous, and the uncertainty caused by the lost measurements reduces system accuracy. In this paper, a robust H∞ filter based on Krein space theory is proposed. The Krein space theory is introduced first, and then the linear state and observation models of the SINS/CNS integrated navigation system are established. Taking the measurement-uncertainty problem into account, a new robust H∞ filter is proposed to improve the robustness of the integrated system. Finally, this new robust filter based on Krein space theory is evaluated by numerical simulations and actual experiments. The simulation and experiment results show that attitude errors can be effectively reduced by utilizing the proposed robust filter when measurements are intermittently missing. Compared to the traditional Kalman filter (KF) method, the accuracy of the SINS/CNS integrated system is improved, verifying the robustness and availability of the proposed robust H∞ filter. PMID:26999153

  5. Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping

    NASA Astrophysics Data System (ADS)

    Mostafa, M. M. R.

    2017-08-01

    Recently, sophisticated multi-sensor systems have been implemented on board modern Unmanned Aerial Systems. This allows a variety of mapping products to be produced for different mapping applications, with accuracies matching those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor payloads for direct georeferencing. A number of parameters, either individually or collectively, affect the quality and accuracy of a final airborne mapping product; this paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. The final ground object positioning accuracy is assessed through eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.

  6. 40 CFR Appendix D to Part 136 - Precision and Recovery Statements for Methods for Measuring Metals

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Accuracy Section with the following: Precision and Accuracy An interlaboratory study on metal analyses by... details are found in “USEPA Method Study 7, Analyses for Trace Methods in water by Atomic Absorption... study on metal analyses by this method was conducted by the Quality Assurance Branch (QAB) of the...

  7. A new LC-MS based method to quantitate exogenous recombinant transferrin in cerebrospinal fluid: a potential approach for pharmacokinetic studies of transferrin-based therapeutics in the central nervous system

    PubMed Central

    Wang, Shunhai; Bobst, Cedric E.; Kaltashov, Igor A.

    2018-01-01

    Transferrin (Tf) is an 80 kDa iron-binding protein which is viewed as a promising drug carrier to target the central nervous system due to its ability to penetrate the blood-brain barrier (BBB). Among the many challenges during the development of Tf-based therapeutics, sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult due to the presence of abundant endogenous Tf. Herein, we describe the development of a new LC-MS based method for sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous hTf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed O18-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation. PMID:26307718

  8. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    PubMed

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
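    The weighted least-squares device described above can be sketched in a few lines: with heteroscedastic data, 1/x² weights stop the high concentrations from dominating the fit and improve accuracy at the low end of the calibration curve. The calibration data below are invented for illustration.

```python
# Weighted vs ordinary least squares for a calibration line y = a*x + b.
# With heteroscedastic data, 1/x^2 weights protect the low end of the curve.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # spiked levels (invented)
resp = np.array([1.1, 4.8, 10.4, 52.0, 97.0])    # instrument response

def fit_line(x, y, w):
    """Weighted least squares for y = a*x + b with weights w."""
    W = np.diag(w)
    A = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return a, b

a_ols, b_ols = fit_line(conc, resp, np.ones_like(conc))  # unweighted
a_wls, b_wls = fit_line(conc, resp, 1.0 / conc**2)       # 1/x^2 weighting
print(f"OLS: slope={a_ols:.3f} intercept={b_ols:.3f}")
print(f"WLS: slope={a_wls:.3f} intercept={b_wls:.3f}")
```

At the lowest level the weighted line passes much closer to the measured response than the unweighted one, which is precisely the accuracy gain the abstract reports.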

  9. Location estimation in a smart home: system implementation and evaluation using experimental data.

    PubMed

    Rahal, Youcef; Pigot, Hélène; Mabilleau, Philippe

    2008-01-01

    In the context of a constantly increasing aging population with cognitive deficiencies, ensuring the autonomy of elders at home becomes a priority. The DOMUS laboratory is addressing this issue by designing a smart home which can both assist people and preserve their quality of life. Obviously, the ability to properly monitor the occupant's activities and thus provide pertinent assistance depends heavily on location information inside the smart home. This paper proposes a solution to localize the occupant using Bayesian filtering and a set of anonymous sensors disseminated throughout the house. The localization system is designed for a single person inside the house. It could however be used in conjunction with other localization systems in case more people are present. Our solution is functional in real conditions. We conceived an experiment to precisely estimate its accuracy and evaluate its robustness. The experiment consists of a scenario of daily routine meant to maximize the occupant's motion in meaningful activities. It was performed by 14 subjects, one subject at a time. The results are satisfactory: the system's accuracy exceeds 85% and is independent of the occupant's profile. The system works in real time and behaves well in the presence of noise.
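    A minimal discrete Bayes filter conveys the flavor of the localization approach above. The rooms, the sensor model, and all probabilities here are invented; a real system would also apply a motion (transition) model between sensor updates.

```python
rooms = ["kitchen", "living", "bedroom", "bathroom"]

# Prior belief: the occupant is equally likely to be in any room.
belief = {r: 1.0 / len(rooms) for r in rooms}

# P(sensor fires | occupant in room) for one anonymous motion sensor
# mounted in the kitchen, with a small false-positive rate elsewhere.
likelihood = {"kitchen": 0.9, "living": 0.1, "bedroom": 0.05, "bathroom": 0.05}

def bayes_update(belief, likelihood):
    """Multiply the prior by the sensor likelihood, then renormalize."""
    posterior = {r: belief[r] * likelihood[r] for r in belief}
    total = sum(posterior.values())
    return {r: p / total for r, p in posterior.items()}

belief = bayes_update(belief, likelihood)   # kitchen now dominates the belief
```

    Each anonymous sensor event sharpens the belief distribution this way, which is how coarse, non-identifying sensors can still yield high localization accuracy in aggregate.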

  10. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain replicated and domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
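    The core numerical problem, and the integer-tally remedy, can be demonstrated in a few lines. This is a toy fixed-point sketch of the idea rather than the paper's implementation, and the scale factor is arbitrary.

```python
import random

# Values spanning many orders of magnitude, as particle tallies might.
random.seed(2)
values = [random.uniform(0.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(1000)]
shuffled = list(values)
random.shuffle(shuffled)

# Double precision addition is not associative: summing the same values in
# a different order typically changes the last bits of the result.
diff = sum(values) - sum(shuffled)

# Integer (fixed-point) tallies restore associativity: the same multiset of
# addends gives the same total in any order, at the cost of rounding each
# addend to the chosen resolution.
SCALE = 2 ** 30

def fixed_point_sum(xs):
    return sum(int(round(v * SCALE)) for v in xs) / SCALE

assert fixed_point_sum(values) == fixed_point_sum(shuffled)
```

    The trade-off the abstract describes is visible here: the integer tally is bit-for-bit reproducible under any summation order, but each addend loses precision beyond the fixed-point resolution.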

  11. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems

    PubMed Central

    Lyu, Weiwei

    2017-01-01

    Transfer alignment is a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. Then the H∞ filtering theory and the robust mechanism of the H∞ filter are deduced and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method. PMID:29182592

  12. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance the robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation produced results similar to those of the Monte Carlo simulations. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.

  13. Accuracy and precision of four value-added blood glucose meters: the Abbott Optium, the DDI Prodigy, the HDI True Track, and the HypoGuard Assure Pro.

    PubMed

    Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A

    2009-09-01

    This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (χ² analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients, of whom 90.4% were Caucasian, 51.2% were female, and 83.2% had type 2 diabetes; the average age was 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD ±65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of the Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13%, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
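    The within-x% accuracy criterion used above is straightforward to compute; the paired readings below are invented for illustration, not taken from the study.

```python
def within_pct(meter, reference, pct):
    """Fraction of paired readings within ±pct% of the laboratory reference."""
    hits = sum(1 for m, r in zip(meter, reference)
               if abs(m - r) <= pct / 100.0 * r)
    return hits / len(reference)

# Hypothetical paired readings (mg/dL): meter vs. venipuncture reference.
lab = [100, 150, 200, 250]
meter = [104, 160, 230, 251]

assert within_pct(meter, lab, 15) == 1.0   # all within 15% of reference
assert within_pct(meter, lab, 5) == 0.5    # half within 5% of reference
```

    Tightening the threshold from 15% to 5% drops the pass rate, which is why the study reports accuracy at several thresholds rather than a single cut-off.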

  14. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  15. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  16. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder internal rotation, and elbow flexion. However, digital photography was only more precise than goniometry for measurements of elbow flexion. Overall digital photography shows equivalent accuracy to visual estimation and goniometry, but with higher precision than visual estimation. Copyright © 2017. Published by Elsevier B.V.

  17. Automated statistical experimental design approach for rapid separation of coenzyme Q10 and identification of its biotechnological process related impurities using UHPLC and UHPLC-APCI-MS.

    PubMed

    Talluri, Murali V N Kumar; Kalariya, Pradipbhai D; Dharavath, Shireesha; Shaikh, Naeem; Garg, Prabha; Ramisetti, Nageswara Rao; Ragampeta, Srinivas

    2016-09-01

    A novel ultra high performance liquid chromatography method development strategy was established by applying a quality by design approach. The developed systematic approach was divided into five steps: (i) Analytical Target Profile, (ii) Critical Quality Attributes, (iii) Risk Assessment of Critical Parameters using design of experiments (screening and optimization phases), (iv) Generation of the Design Space, and (v) Process Capability Analysis (Cp) for the robustness study using Monte Carlo simulation. The complete quality-by-design-based method development was automated and expedited by employing a column with sub-2 μm particles on an ultra high performance liquid chromatography system. Successful chromatographic separation of coenzyme Q10 from its biotechnological process related impurities was achieved on a Waters Acquity phenyl hexyl (100 mm × 2.1 mm, 1.7 μm) column with gradient elution of 10 mM ammonium acetate buffer (pH 4.0) and a mixture of acetonitrile/2-propanol (1:1) as the mobile phase. Through this study, a fast and organized method development workflow was established, and the robustness of the method was demonstrated. The method was validated for specificity, linearity, accuracy, precision, and robustness in compliance with the International Conference on Harmonisation Q2(R1) guidelines. The impurities were identified by the atmospheric pressure chemical ionization mass spectrometry technique. Further, the in silico toxicity of the impurities was analyzed using TOPKAT and DEREK software. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
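    Step (v), the Monte Carlo process capability analysis, can be sketched as follows. The specification limits, operating point, and noise level are invented for illustration; Cp here is the standard (USL − LSL)/6σ index, with values well above 1.33 conventionally read as a robust operating region.

```python
import random
import statistics

random.seed(7)

# Hypothetical specification limits on some method response (e.g. a
# resolution criterion) and simulated responses around a setpoint, as a
# Monte Carlo robustness study might generate by perturbing parameters.
LSL, USL = 9.0, 11.0
simulated = [10.0 + random.gauss(0.0, 0.15) for _ in range(10_000)]

# Process capability index Cp = (USL - LSL) / (6 * sigma).
sigma = statistics.stdev(simulated)
cp = (USL - LSL) / (6 * sigma)
```

    In a real quality-by-design workflow, the perturbations would be drawn from the ranges established in the design space, so Cp directly quantifies how much of the simulated variation stays within specification.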

  18. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
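    The precision penalty from within-cluster correlation discussed above is often summarized by the Kish design effect, deff = 1 + (m − 1)ρ, with m sample elements per cluster and ρ the intraclass correlation of classification error. The numbers below are illustrative, not taken from the NLCD assessments.

```python
def design_effect(m, rho):
    """Kish design effect for clusters of size m with intraclass correlation rho."""
    return 1.0 + (m - 1) * rho

# With 10 reference pixels per cluster and positively correlated
# classification error, variance inflates by deff; equivalently, the
# effective sample size shrinks from n to n / deff.
deff = design_effect(10, 0.2)
n_effective = 1000 / deff
```

    This is the basic trade-off an a priori evaluation weighs: clustering lowers data-collection cost per element, but positive spatial correlation shrinks the effective sample size, especially for rare land-cover classes.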

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W. S.

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
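    The mixed-precision idea above, doing the repeated cheap work in reduced precision while correcting in full precision, is not specific to SDC; the classic analogue is iterative refinement of a linear solve, sketched here with an invented well-conditioned system rather than anything from S3D.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)   # well-conditioned
b = rng.standard_normal(50)

# Initial solve in single precision (the "cheap" reduced-precision work).
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Refinement sweeps: residual evaluated in double precision, correction
# solved in single precision, mirroring how some SDC sweeps can use
# reduced-precision function values without hurting the final accuracy.
for _ in range(3):
    r = b - A @ x
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x = x + dx

residual = np.linalg.norm(b - A @ x)   # reaches double precision accuracy
```

    The appeal for performance-limited solvers is that the expensive repeated operation runs at the cheaper precision (and half the memory traffic), while the high-precision residual evaluations keep the converged answer at full accuracy.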

  20. CD-Based Indices for Link Prediction in Complex Network.

    PubMed

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Many similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. Firstly, a node coordinate matrix, which differs from the distance matrix, is obtained from node distances, and the row vectors of the matrix are regarded as the coordinates of nodes. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. Then, a series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks. PMID:26752405
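    The core of the CD idea can be illustrated on a toy graph. Here each node's row of shortest-path distances stands in for its coordinate vector, a simplification of the paper's coordinate-matrix construction, and candidate links are scored by the cosine between vectors.

```python
import math

# Shortest-path distance rows for the 4-node path graph a-b-c-d,
# computed by hand and used here as node "coordinates".
coords = {
    "a": [0, 1, 2, 3],
    "b": [1, 0, 1, 2],
    "c": [2, 1, 0, 1],
    "d": [3, 2, 1, 0],
}

def cosine(u, v):
    """Cosine of the angle between two coordinate vectors."""
    dot = sum(p * q for p, q in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Nearby node pairs score higher than distant ones, so a-b would be ranked
# as a more likely link than a-d.
assert cosine(coords["a"], coords["b"]) > cosine(coords["a"], coords["d"])
```

    Because nearby nodes see the rest of the network from similar vantage points, their distance rows point in similar directions, which is exactly what the cosine score rewards.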

  2. Geometrical accuracy of metallic objects produced with additive or subtractive manufacturing: A comparative in vitro study.

    PubMed

    Braian, Michael; Jönsson, David; Kevci, Mir; Wennerberg, Ann

    2018-07-01

    To evaluate the accuracy and precision of objects produced by additive manufacturing systems (AM) for use in dentistry and to compare with subtractive manufacturing systems (SM). Ten specimens of two geometrical objects were produced by five different AM machines and one SM machine. Object A mimics an inlay-shaped object, while object B imitates a four-unit bridge model. All the objects were sorted into different measurement dimensions (x, y, z), linear distances, angles and corner radius. None of the additive manufacturing or subtractive manufacturing groups presented a perfect match to the CAD file with regard to all parameters included in the present study. Considering linear measurements, the precision for the subtractive manufacturing group was consistent in all axes for object A, presenting results of <0.050 mm. The additive manufacturing groups had consistent precision in the x-axis and y-axis but not in the z-axis. With regard to corner radius measurements, the SM group had the best overall accuracy and precision for both objects A and B when compared to the AM groups. Within the limitations of this in vitro study, the conclusion can be made that subtractive manufacturing presented overall precision on all measurements below 0.050 mm. The AM machines also presented fairly good precision, <0.150 mm, on all axes except for the z-axis. Knowledge regarding accuracy and precision for different production techniques utilized in dentistry is of great clinical importance. The dental community has moved from casting to milling, and additive techniques are now being implemented. Thus all these production techniques need to be tested, compared and validated. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  3. Robust Decision-making Applied to Model Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model from among a family of models that meet the simulation requirements presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  4. Accuracy and precision of two indirect methods for estimating canopy fuels

    Treesearch

    Abran Steele-Feldman; Elizabeth Reinhardt; Russell A. Parsons

    2006-01-01

    We compared the accuracy and precision of digital hemispherical photography and the LI-COR LAI-2000 plant canopy analyzer as predictors of canopy fuels. We collected data on 12 plots in western Montana under a variety of lighting and sky conditions, and used a variety of processing methods to compute estimates. Repeated measurements from each method displayed...

  5. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  6. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    NASA Astrophysics Data System (ADS)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    As workpieces become more precise and specialized, with more sophisticated structures and tighter tolerances, higher demands are placed on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. During the calibration of a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used to fix the standard artifact was found to be an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods are employed, using an existing high-precision CMM, gauge blocks, a dial gauge, and a marble platform, and the measurement results are compared. Experiments show that all three methods reach micron-level accuracy and meet the measurement requirements. The three approaches suit different measurement conditions, providing a basis for rapid, high-precision measurement under different equipment conditions.

  7. Application of high precision two-way S-band ranging to the navigation of the Galileo Earth encounters

    NASA Technical Reports Server (NTRS)

    Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.

    1993-01-01

    The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. When compared against post-flyby reconstructions, the accuracy achieved using the precision range filtering strategy proved markedly better than that of solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.

  8. Robust interferometry against imperfections based on weak value amplification

    NASA Astrophysics Data System (ADS)

    Fang, Chen; Huang, Jing-Zheng; Zeng, Guihua

    2018-06-01

    Optical interferometry has been widely used in various high-precision applications. In practice, the attainable precision of an interferometer is usually limited by various technical noises. To suppress such noises, we propose a scheme which combines weak measurement with standard interferometry. The proposed scheme dramatically outperforms standard interferometry in signal-to-noise ratio and in robustness against noises caused by the optical elements' reflections and the offset fluctuation between the two paths. A proof-of-principle experiment is demonstrated to validate the amplification theory.

  9. Breath analysis using external cavity diode lasers: a review

    NASA Astrophysics Data System (ADS)

    Bayrakli, Ismail

    2017-04-01

    Most techniques used for the diagnosis and therapy of diseases are invasive. Reliable noninvasive methods are always needed for the comfort of patients. Owing to its noninvasiveness, ease of use, and easy repeatability, exhaled breath analysis is a very good candidate for this purpose. Breath analysis can be performed using different techniques, such as gas chromatography mass spectrometry (MS), proton transfer reaction-MS, and selected ion flow tube-MS. However, these devices are bulky and require complicated procedures for sample collection and preconcentration, so they are not practical for routine applications in hospitals. Laser-based techniques, with their small size, robustness, low cost, fast response time, accuracy, precision, high sensitivity and selectivity, low detection limits, and capacity for real-time, point-of-care detection, have great potential for routine use in hospitals. In this review paper, the recent advances in the fields of external cavity lasers and breath analysis for the detection of diseases are presented.

  10. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
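    A one-dimensional sketch conveys the spring analogy described above: segments where a (hypothetical) error indicator is large get stiffer springs, so relaxation pulls grid points into those regions while a user-specified minimum spacing is enforced. The grid, stiffness values, and iteration count below are all invented for illustration.

```python
def redistribute(x, stiffness, n_iter=500, dx_min=0.01):
    """Gauss-Seidel relaxation of a 1-D spring network with fixed endpoints.

    stiffness[i] is the spring constant of segment (x[i], x[i+1]); stiffer
    segments contract, clustering points where the error indicator is large.
    """
    x = list(x)
    for _ in range(n_iter):
        for i in range(1, len(x) - 1):
            kl, kr = stiffness[i - 1], stiffness[i]
            xi = (kl * x[i - 1] + kr * x[i + 1]) / (kl + kr)  # force balance
            x[i] = max(xi, x[i - 1] + dx_min)  # user-specified minimum spacing
    return x

# Uniform grid on [0, 1]; pretend the error indicator is large on the left
# half, so those segments get stiff springs and points migrate leftward.
n = 11
grid = [i / (n - 1) for i in range(n)]
springs = [10.0 if i < 5 else 1.0 for i in range(n - 1)]
adapted = redistribute(grid, springs)
```

    At equilibrium the spring tensions balance, so segment lengths scale like 1/k: with this stiffness profile the five stiff segments each shrink toward 1/55 of the domain while the soft ones grow toward 10/55, clustering points where the indicator is large.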

  11. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed

    2013-01-01

    Optical coherence tomography (OCT) is a recently established imaging technique that reveals the internal structure of an object and can image various aspects of biological tissues. OCT image segmentation has mostly been applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We may classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation focuses mostly on improving accuracy and precision and on reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137

  12. A wearable computing platform for developing cloud-based machine learning models for health monitoring applications.

    PubMed

    Patel, Shyamal; McGinnis, Ryan S; Silva, Ikaro; DiCristofaro, Steve; Mahadevan, Nikhil; Jortberg, Elise; Franco, Jaime; Martin, Albert; Lust, Joseph; Raj, Milan; McGrane, Bryan; DePetrillo, Paolo; Aranyosi, A J; Ceruolo, Melissa; Pindado, Jesus; Ghaffari, Roozbeh

    2016-08-01

    Wearable sensors have the potential to enable clinical-grade ambulatory health monitoring outside the clinic. Technological advances have enabled the development of devices that can measure vital signs with great precision, and significant progress has been made towards extracting clinically meaningful information from these devices in research studies. However, translating measurement accuracies achieved in controlled settings, such as the lab and clinic, to unconstrained environments, such as the home, remains a challenge. In this paper, we present a novel wearable computing platform for unobtrusive collection of labeled datasets and a new paradigm for continuous development, deployment, and evaluation of machine learning models to ensure robust model performance as we transition from the lab to the home. Using this system, we train activity classification models across two studies and track changes in model performance as we go from constrained to unconstrained settings.

  13. Robust design of an inkjet-printed capacitive sensor for position tracking of a MOEMS-mirror in a Michelson interferometer setup

    NASA Astrophysics Data System (ADS)

    Faller, Lisa-Marie; Zangl, Hubert

    2017-05-01

    To guarantee high performance of Micro Optical Electro Mechanical Systems (MOEMS), precise position feedback is crucial. To overcome the drawbacks of widely used optical feedback, we propose an inkjet-printed capacitive position sensor as a smart packaging solution. Printing processes suffer from tolerances in excess of those of standard processes. Thus, FEM simulations covering the assumed tolerances of the system are adopted. These simulations are structured following a Design of Computer Experiments (DOCE) and are then employed to determine an optimal sensor design. Based on the simulation results, statistical models are adopted for the dynamic system. These models are to be used together with specifically designed hardware to cope with the challenging requirements of ≈50 nm position accuracy at 10 MS/s over a 1000 μm measurement range. Noise analysis is performed considering the influence of uncertainties to assess the resolution and bandwidth capabilities.
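As a first-order illustration of how such a capacitive sensor maps displacement to a measurable quantity, a parallel-plate estimate can be sketched (a generic model with hypothetical parameter values; the printed sensor's actual geometry and fringing fields are not captured):

```python
def plate_capacitance_fF(area_mm2, gap_um, eps_r=1.0):
    """Parallel-plate estimate C = eps0 * eps_r * A / d for a capacitive
    position sensor: a change in gap (or electrode overlap) maps mirror
    displacement to a capacitance change. Generic first-order model only."""
    eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
    area_m2 = area_mm2 * 1e-6        # mm^2 -> m^2
    gap_m = gap_um * 1e-6            # um -> m
    return eps0 * eps_r * area_m2 / gap_m * 1e15  # farads -> femtofarads
```

For a 1 mm² electrode and a 10 μm air gap this gives roughly 0.9 pF, which sets the scale the readout electronics must resolve at nanometer displacement sensitivity.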

  14. Simultaneous Determination of 10 Ultratrace Elements in Infant Formula, Adult Nutritionals, and Milk Products by ICP/MS After Pressure Digestion: Single-Laboratory Validation.

    PubMed

    Dubascoux, Stephane; Nicolas, Marine; Rime, Celine Fragniere; Payot, Janique Richoz; Poitevin, Eric

    2015-01-01

    A single-laboratory validation (SLV) is presented for the simultaneous determination of 10 ultratrace elements (UTEs) including aluminum (Al), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), mercury (Hg), molybdenum (Mo), lead (Pb), selenium (Se), and tin (Sn) in infant formulas, adult nutritionals, and milk-based products by inductively coupled plasma (ICP)/MS after acidic pressure digestion. This robust and routine multielemental method is based on several official methods, with modifications of sample preparation using either microwave digestion or high-pressure ashing and of analytical conditions using ICP/MS with collision cell technology. This SLV fulfills AOAC method performance criteria in terms of linearity, specificity, sensitivity, precision, and accuracy, and fully meets most international regulatory limits for trace contaminants and/or recommended nutrient levels established for the 10 UTEs in the targeted matrixes.

  15. Competitive region orientation code for palmprint verification and identification

    NASA Astrophysics Data System (ADS)

    Tang, Wenliang

    2015-11-01

    Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually use discrete filters to extract the orientation features of the palmprint. In practice, however, the orientations of the filters are not always consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method precisely and robustly describes the orientation features of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases are performed, and the results show that the proposed method achieves promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.

  16. Ship detection in optical remote sensing images based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Jiang, Zhiguo; Zhang, Haopeng; Zhao, Danpei; Cai, Bowen

    2017-10-01

    Automatic ship detection in optical remote sensing images has attracted wide attention for its broad applications. Major challenges for this task include interference from clouds, waves, and wakes, and the high computational expense. We propose a fast and robust ship detection algorithm to address these issues. The framework for ship detection is designed based on deep convolutional neural networks (CNNs), which provide the accurate locations of ship targets in an efficient way. First, the deep CNN is designed to extract features. Then, a region proposal network (RPN) is applied to discriminate ship targets and regress the detection bounding boxes, in which the anchors are designed according to the intrinsic shape of ship targets. Experimental results on numerous panchromatic images demonstrate that, in comparison with other state-of-the-art ship detection methods, our method is more efficient and achieves higher detection accuracy and more precise bounding boxes in different complex backgrounds.
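Bounding-box precision of the kind reported above is conventionally scored with intersection-over-union (IoU); a minimal sketch of the standard definition (not code from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the standard measure of bounding-box accuracy
    used when comparing detectors or matching predictions to truth."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])   # overlap corners
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, so "more precise bounding boxes" translates directly into higher IoU scores.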

  17. A broadband cavity enhanced absorption spectrometer for aircraft measurements of glyoxal, methylglyoxal, nitrous acid, nitrogen dioxide, and water vapor

    NASA Astrophysics Data System (ADS)

    Min, K.-E.; Washenfelder, R. A.; Dubé, W. P.; Langford, A. O.; Edwards, P. M.; Zarzana, K. J.; Stutz, J.; Lu, K.; Rohrer, F.; Zhang, Y.; Brown, S. S.

    2015-10-01

    We describe a two-channel broadband cavity enhanced absorption spectrometer (BBCEAS) for aircraft measurements of glyoxal (CHOCHO), methylglyoxal (CH3COCHO), nitrous acid (HONO), nitrogen dioxide (NO2), and water (H2O). The instrument spans 361-389 and 438-468 nm, using two light emitting diodes (LEDs) and a grating spectrometer with a charge-coupled device (CCD) detector. Robust performance is achieved using a custom optical mounting system, high-power LEDs with electronic on/off modulation, state-of-the-art cavity mirrors, and materials that minimize analyte surface losses. We have successfully deployed this instrument during two aircraft and two ground-based field campaigns to date. The demonstrated precisions (2σ) for retrievals of CHOCHO, HONO, and NO2 are 34, 350, and 80 pptv in 5 s. The accuracies are 5.8, 9.0, and 5.0%, respectively, limited mainly by the available absorption cross sections.
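Retrievals of this kind commonly rest on the standard BBCEAS relation α = (I0/I − 1)(1 − R)/d; a minimal sketch under that assumption (the generic CEAS formula for weak absorption, not the instrument-specific spectral fit):

```python
def bbceas_extinction(I0, I, R, d):
    """Standard broadband CEAS retrieval for weak absorbers:
    alpha = (I0/I - 1) * (1 - R) / d, where I0 and I are the intensities
    with and without absorber, R is the mirror reflectivity, and d is the
    cavity length (alpha comes out in inverse units of d)."""
    return (I0 / I - 1.0) * (1.0 - R) / d
```

The (1 − R) factor is why very high-reflectivity mirrors give the long effective path lengths that make pptv-level precisions achievable.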

  18. Progress toward Brazilian cesium fountain second generation

    NASA Astrophysics Data System (ADS)

    Bueno, Caio; Rodriguez Salas, Andrés; Torres Müller, Stella; Bagnato, Vanderlei Salvador; Varela Magalhães, Daniel

    2018-03-01

    The operation of a cesium fountain primary frequency standard is strongly influenced by the characteristics of two important subsystems. The first is a stable frequency reference and the second is the frequency-transfer system. A stable standard frequency reference is a key factor for experiments that require high accuracy and precision. The frequency stability of this reference has a significant impact on the procedures for evaluating certain systematic biases in frequency standards. This paper presents the second generation of the Brazilian Cesium Fountain (Br-CsF) through the opto-mechanical assembly and the vacuum chamber used to trap atoms. We used a square-section glass profile to build the region where the atoms are trapped and cooled by the magneto-optical technique. The opto-mechanical system was reduced in size to increase stability and robustness. This new atomic fountain is essential to the development of time and frequency metrology systems.

  19. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-03-31

    The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing a strong tracking filter (STF) into the SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted online, improving the robustness of the filter and its capability to deal with uncertainty. In this way, the proposed algorithm combines the STF's strong robustness with the SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking.
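The fading-factor mechanism can be sketched for an ordinary linear Kalman filter (an illustrative sketch with assumed variable names and a linear measurement model; the paper's filter propagates the statistics with the SSR cubature rule instead):

```python
import numpy as np

def stf_step(x, P, z, F, H, Q, R, V0=None, rho=0.95, beta=1.0):
    """One predict/update cycle of a linear Kalman filter with a strong-
    tracking fading factor lam >= 1 that inflates the predicted covariance
    when the innovation sequence grows, keeping the gain responsive to
    abrupt state changes. Illustrative sketch only."""
    x_pred = F @ x
    nu = z - H @ x_pred                              # innovation
    if V0 is None:                                   # smoothed innovation covariance
        V = np.outer(nu, nu)
    else:
        V = (rho * V0 + np.outer(nu, nu)) / (1.0 + rho)
    M = H @ F @ P @ F.T @ H.T
    N = V - H @ Q @ H.T - beta * R
    lam = max(1.0, float(np.trace(N) / np.trace(M)))  # time-varying fading factor
    P_pred = lam * (F @ P @ F.T) + Q                  # inflated predicted covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)               # gain adapts with lam
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, V, lam
```

When the target maneuvers, the innovation grows, lam rises above 1, and the inflated predicted covariance forces a larger gain; during steady tracking lam stays at 1 and the filter reduces to the ordinary form.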

  20. Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    The conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named the strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain higher accuracy than the cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing a strong tracking filter (STF) into the SSRCKF and modifying the predicted states’ error covariance with a time-varying fading factor, the gain matrix is adjusted online, improving the robustness of the filter and its capability to deal with uncertainty. In this way, the proposed algorithm combines the STF’s strong robustness with the SSRCKF’s high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm achieves better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347

  1. Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.

    PubMed

    Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim

    2017-09-01

    Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). 
Conventional impressions were more accurate (mean trueness 17.0 ±6.6 μm; mean precision 16.9 ±5.8 μm) than optical impressions (mean trueness 46.2 ±11.4 μm; mean precision 61.1 ±4.9 μm). Complete-arch (first molar to first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
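The trueness and precision definitions used in this study design reduce to simple averages of absolute deviations; a minimal numeric sketch, treating each scan as a list of aligned surface deviations (an assumption for illustration, the real comparison operates on registered 3D meshes):

```python
from itertools import combinations
from statistics import mean

def trueness(replicates, reference):
    """Average, over replicate scans, of the mean absolute deviation of
    each scan from the 3D reference model (per-participant trueness)."""
    return mean(mean(abs(r, ) if False else abs(r - t) for r, t in zip(rep, reference))
                for rep in replicates)

def precision(replicates):
    """Mean absolute deviation averaged over all replicate pairs
    (5 replicates give the 10 pairs per participant in the abstract)."""
    return mean(mean(abs(a - b) for a, b in zip(r1, r2))
                for r1, r2 in combinations(replicates, 2))
```

Trueness needs a reference of known accuracy (here, the scanned reference appliance), which is exactly why in vivo trueness had not been measurable before; precision only needs repeated impressions.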

  2. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating the registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia GeForce GTX 690). 
Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy and interventional radiology, and as an aid to target localization (e.g., vertebral labeling) in image-guided spine surgery.

  3. Method validation for control determination of mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry.

    PubMed

    Torres, Daiane Placido; Martins-Teixeira, Maristela Braga; Cadore, Solange; Queiroz, Helena Müller

    2015-01-01

    A method for the determination of total mercury in fresh fish and shrimp samples by solid sampling thermal decomposition/amalgamation atomic absorption spectrometry (TDA AAS) has been validated following international foodstuff protocols in order to fulfill the Brazilian National Residue Control Plan. The experimental parameters were first studied and optimized according to specific legislation on validation and inorganic contaminants in foodstuff. Linearity, sensitivity, specificity, detection and quantification limits, precision (repeatability and within-laboratory reproducibility), robustness, and accuracy of the method were evaluated. Linearity of response was satisfactory for the two concentration ranges available on the TDA AAS equipment, between approximately 25.0 and 200.0 μg kg(-1) (quadratic regression) and 250.0 and 2000.0 μg kg(-1) (linear regression) of mercury. The residuals for both ranges were homoscedastic and independent, with normal distribution. Correlation coefficients obtained for these ranges were higher than 0.995. The limits of quantification (LOQ) and of detection of the method (LDM), based on the signal standard deviation (SD) for a low-in-mercury sample, were 3.0 and 1.0 μg kg(-1), respectively. Repeatability of the method was better than 4%. Within-laboratory reproducibility achieved a relative SD better than 6%. Robustness of the method was evaluated and identified sample mass as a significant factor. Accuracy (assessed as analyte recovery) was calculated on the basis of the repeatability and ranged from 89% to 99%. The obtained results showed the suitability of the present method for direct mercury measurement in fresh fish and shrimp samples and the importance of monitoring the analysis conditions for food control purposes. Additionally, the competence of this method was recognized by accreditation under the standard ISO/IEC 17025.
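The detection and quantification limits quoted above are based on the signal SD of a low-mercury sample; a common convention (assumed here, since the abstract does not state the exact factors used) is LOD = 3s/slope and LOQ = 10s/slope:

```python
from statistics import stdev

def detection_limits(low_level_signals, slope):
    """IUPAC-style limits from replicate blank or low-level signals:
    LOD = 3*s/slope, LOQ = 10*s/slope, where s is the signal SD and
    slope converts signal to concentration. The factors 3 and 10 are
    a common convention, not necessarily those used in the paper."""
    s = stdev(low_level_signals)
    return 3 * s / slope, 10 * s / slope
```

With the slope from the calibration function, these formulas convert replicate low-level measurements directly into concentration-unit limits such as the 1.0 and 3.0 μg kg(-1) values reported.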

  4. Evaluating Machine Learning Regression Algorithms for Operational Retrieval of Biophysical Parameters: Opportunities for Sentinel

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, J. P.; Alonso, L.; Guanter, L.; Moreno, J.

    2012-04-01

    ESA’s upcoming satellites Sentinel-2 (S2) and Sentinel-3 (S3) aim to ensure continuity for Landsat 5/7, SPOT-5, SPOT-Vegetation and Envisat MERIS observations by providing superspectral images of high spatial and temporal resolution. S2 and S3 will deliver near real-time operational products with a high accuracy for land monitoring. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods. Machine learning regression algorithms could be powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. By using data from the ESA-led field campaign SPARC (Barrax, Spain), it was recently found [1] that Gaussian process regression (GPR) outperformed competitive machine learning algorithms such as neural networks, support vector regression, and kernel ridge regression, both in terms of accuracy and computational speed. For various Sentinel configurations (S2-10m, S2-20m, S2-60m and S3-300m) three important biophysical parameters were estimated: leaf chlorophyll content (Chl), leaf area index (LAI) and fractional vegetation cover (FVC). GPR was the only method that reached the 10% precision required by end users in the estimation of Chl. In view of implementing the regressor into operational monitoring applications, here the portability of locally trained GPR models to other images was evaluated. The associated confidence maps proved to be a good indicator for evaluating the robustness of the trained models. Consistent retrievals were obtained across the different images, particularly over agricultural sites. To make the method suitable for operational use, however, the poorer confidences over bare soil areas suggest that the training dataset should be expanded with inputs from various land cover types.
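The confidence maps mentioned above come from the GPR predictive variance, which grows for inputs unlike the training data; a minimal numpy sketch of GPR with an RBF kernel (1-D inputs and all hyperparameter values are assumptions for illustration):

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Minimal Gaussian process regression with an RBF kernel: returns
    posterior mean and standard deviation at test inputs Xs. The predictive
    std is the per-pixel 'confidence' that flags unreliable retrievals.
    Illustrative sketch for 1-D inputs."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))     # noisy training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mu = Ks.T @ alpha                              # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(k(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))       # posterior std = confidence
```

Near the training data the predictive std is small; far from it (e.g. bare-soil spectra absent from the training set) the std reverts to the prior scale, which is exactly the behavior exploited by the confidence maps.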

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoegg, Edward D.; Barinaga, Charles J.; Hager, George J.

    In order to meet a growing need for fieldable mass spectrometer systems for precise elemental and isotopic analyses, the liquid sampling-atmospheric pressure glow discharge (LS-APGD) has a number of very promising characteristics. One key set of attributes that awaits validation concerns the performance relative to isotope ratio precision and accuracy. Owing to its availability and this research team's prior experience with it, a Thermo Scientific Exactive Orbitrap instrument was used for the initial evaluation of isotope ratio (IR) performance. While the mass accuracy and resolution performance of orbitrap analyzers are very well documented, no detailed evaluations of their IR performance have been published. Efforts described here involve two variables: the inherent IR precision and accuracy delivered by the LS-APGD microplasma, and the inherent IR measurement qualities of orbitrap analyzers. Important to the IR performance, the various operating parameters of the orbitrap sampling interface, the HCD dissociation stage, and ion injection/data acquisition have been evaluated. The IR performance for a range of other elements, including natural, depleted, and enriched uranium isotopes, was determined. In all cases the precision and accuracy are degraded when measuring low-abundance (<0.1%) isotope fractions. In the best case, IR precision on the order of 0.1 %RSD can be achieved, with values of 1-3 %RSD observed for low-abundance species. The results suggest that the LS-APGD is a very good candidate for field-deployable MS analysis and that the high resolving powers of the orbitrap may be complemented with a heretofore unknown capacity to deliver high-precision isotope ratios.
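The %RSD figure of merit used above is simply the relative standard deviation of per-scan isotope ratios; a minimal sketch (the paired ion-count lists are hypothetical):

```python
from statistics import mean, stdev

def ratio_rsd(counts_a, counts_b):
    """Per-scan isotope ratios from paired ion counts and their precision
    expressed as %RSD (relative standard deviation), the figure of merit
    quoted for isotope ratio measurements."""
    ratios = [a / b for a, b in zip(counts_a, counts_b)]
    m = mean(ratios)
    return m, 100.0 * stdev(ratios) / m
```

Counting statistics make this metric abundance-sensitive: when the minor isotope contributes few counts per scan, the scan-to-scan ratio scatter (and hence the %RSD) grows, consistent with the degradation reported below 0.1% isotope fractions.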

  6. Determining oxygen relaxations at an interface: A comparative study between transmission electron microscopy techniques.

    PubMed

    Gauquelin, N; van den Bos, K H W; Béché, A; Krause, F F; Lobato, I; Lazar, S; Rosenauer, A; Van Aert, S; Verbeeck, J

    2017-10-01

    Nowadays, aberration-corrected transmission electron microscopy (TEM) is a popular method for characterising nanomaterials at the atomic scale. Here, atomically resolved images of nanomaterials are acquired, where the contrast depends on the illumination, imaging, and detector conditions of the microscope. Visualization of light elements is possible when using low angle annular dark field (LAADF) STEM, annular bright field (ABF) STEM, integrated differential phase contrast (iDPC) STEM, negative spherical aberration imaging (NCSI) and imaging STEM (ISTEM). In this work, images of a NdGaO3-La0.67Sr0.33MnO3 (NGO-LSMO) interface are quantitatively evaluated by using statistical parameter estimation theory. For imaging light elements, all techniques provide reliable results, while the interference-contrast-based techniques, NCSI and ISTEM, are less robust in terms of accuracy when extracting heavy-column locations. In terms of precision, sample drift and scan distortions mainly limit the STEM-based techniques as compared to NCSI. Post-processing techniques can, however, partially compensate for this. In order to provide an outlook to the future, simulated images of NGO, in which the unavoidable presence of Poisson noise is taken into account, are used to determine the ultimate precision. In this future counting-noise-limited scenario, NCSI and ISTEM imaging will provide more precise values than the other techniques, which can be related to the mechanisms behind the image recording. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. NCLscan: accurate identification of non-co-linear transcripts (fusion, trans-splicing and circular RNA) with a good balance between sensitivity and precision.

    PubMed

    Chuang, Trees-Juen; Wu, Chan-Shuo; Chen, Chia-Ying; Hung, Li-Yuan; Chiang, Tai-Wei; Yang, Min-Yu

    2016-02-18

    Analysis of RNA-seq data often detects numerous 'non-co-linear' (NCL) transcripts, which comprise sequence segments that are topologically inconsistent with their corresponding DNA sequences in the reference genome. However, detection of NCL transcripts involves two major challenges: removal of false positives arising from alignment artifacts and discrimination between different types of NCL transcripts (trans-spliced, circular or fusion transcripts). Here, we developed a new NCL-transcript-detecting method ('NCLscan'), which utilized a stepwise alignment strategy to almost completely eliminate false calls (>98% precision) without sacrificing true positives, enabling NCLscan to outperform 18 other publicly available tools (including fusion- and circular-RNA-detecting tools) in terms of sensitivity and precision, regardless of the generation strategy of the simulated dataset, type of intragenic or intergenic NCL event, read depth of coverage, read length, or expression level of the NCL transcript. Given this high accuracy, NCLscan was applied to distinguishing between trans-spliced, circular and fusion transcripts on the basis of poly(A)- and nonpoly(A)-selected RNA-seq data. We showed that circular RNAs were expressed more ubiquitously, more abundantly and less cell-type-specifically than trans-spliced and fusion transcripts. Our study thus describes a robust pipeline for the discovery of NCL transcripts, and sheds light on the fundamental biology of these non-canonical RNA events in the human transcriptome. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
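Sensitivity and precision as used above follow the standard detection-count definitions; a minimal sketch:

```python
def sensitivity_precision(tp, fp, fn):
    """Standard detection-count definitions: sensitivity (recall) is the
    fraction of true events recovered, precision the fraction of calls
    that are correct. '>98% precision' therefore means fewer than 2% of
    reported NCL events are false positives."""
    return tp / (tp + fn), tp / (tp + fp)
```

The balance the title refers to is the tradeoff between these two numbers: aggressive filtering of alignment artifacts raises precision but risks discarding true NCL events and lowering sensitivity.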

  8. Establishment of National Gravity Base Network of Iran

    NASA Astrophysics Data System (ADS)

    Hatam Chavari, Y.; Bayer, R.; Hinderer, J.; Ghazavi, K.; Sedighi, M.; Luck, B.; Djamour, Y.; Le Moign, N.; Saadat, R.; Cheraghi, H.

    2009-04-01

    A gravity base network is a set of benchmarks uniformly distributed across the country at which the absolute gravity values are known to the best accessible accuracy. The gravity at the benchmark stations is either measured directly with absolute devices or transferred by gravity-difference measurements made with gravimeters from known stations. To limit the accumulation of random measuring errors arising from these transfers, the number of base stations distributed across the country should be as small as possible. This is feasible if the stations are selected near the national airports: long distances apart but quickly accessible and measurable with a gravimeter carried by airplane between the stations. To show the importance of such a network, its various applications are first reviewed. A gravity base network is the reference frame required for establishing 1st-, 2nd-, and 3rd-order gravity networks. Such a gravity network is used for the following purposes:
    a. Mapping the structure of the upper crust in geological maps. The required accuracy of the measured gravity values is about 0.2 to 0.4 mGal.
    b. Oil and mineral exploration. The required accuracy of the measured gravity values is about 5 µGal.
    c. Geotechnical studies in mining areas for exploring underground cavities, as well as archaeological studies. The required accuracy is about 5 µGal or better.
    d. Subsurface water resource exploration and mapping of the crustal layers that absorb it. An accuracy at the same level as the previous applications is required here too.
    e. Studying the tectonics of the Earth's crust. Repeated precise gravity measurements at the gravity network stations can assist in identifying systematic height changes. An accuracy of the order of 5 µGal or better is required.
    f. Studying volcanoes and their evolution. Repeated precise gravity measurements at the gravity network stations can provide valuable information on the gradual upward movement of lava.
    g. Producing precise mean gravity anomalies for precise geoid determination. Replacing precise spirit leveling by GPS leveling using a precise geoid model is one of the forthcoming applications of the precise geoid.
    A gravity base network of 28 stations was established over Iran. The stations were built mainly on bedrock. All stations were measured with an FG5 absolute gravimeter, for at least 12 hours at each station, to obtain an accuracy of a few microgals. Several stations were remeasured several times during recent years to estimate gravity changes.

  9. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
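The bias-corrected, transformed-linear model can be sketched as a log-log least-squares fit with a smearing-type back-transform correction exp(s²/2) (one common bias-correction factor, assumed here for illustration; the study compares several estimators):

```python
import math

def fit_rating_curve(Q, C):
    """Least-squares fit of ln C = b0 + b1*ln Q (a sediment rating curve),
    returning the coefficients, the bias-correction factor exp(s^2/2)
    applied when back-transforming to concentration units, and a predictor
    that uses it. Plain exp(b0 + b1*ln Q) would underestimate the mean
    because the fit minimizes error in log space."""
    x = [math.log(q) for q in Q]
    y = [math.log(c) for c in C]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    s2 = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    bcf = math.exp(s2 / 2.0)                 # bias-correction factor

    def predict(q):                          # back-transformed, bias-corrected
        return bcf * math.exp(b0 + b1 * math.log(q))

    return b0, b1, bcf, predict
```

With scatter-free power-law data the residual variance is zero and the correction factor collapses to 1; with real data it inflates the back-transformed predictions to offset the retransformation bias.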

  10. Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair

    NASA Astrophysics Data System (ADS)

    Sasou, Akira; Kojima, Hiroaki

    2009-12-01

    Conventional voice-driven wheelchairs usually employ headset microphones, which can achieve sufficient recognition accuracy even in the presence of surrounding noise. However, such interfaces require users to wear a sensor such as a headset microphone, which can be an impediment, especially for users with hand disabilities. Conversely, it is also well known that speech recognition accuracy degrades drastically when the microphone is placed far from the user. In this paper, we develop a noise-robust speech recognition system for a voice-driven wheelchair that can achieve almost the same recognition accuracy as a headset microphone without requiring the user to wear any sensors. We verified the effectiveness of our system in experiments in different environments.

  11. LAI-2000 Accuracy, Precision, and Application to Visual Estimation of Leaf Area Index of Loblolly Pine

    Treesearch

    Jason A. Gatch; Timothy B. Harrington; James P. Castleberry

    2002-01-01

    Leaf area index (LAI) is an important parameter of forest stand productivity that has been used to diagnose stand vigor and potential fertilizer response of southern pines. The LAI-2000 was tested for its ability to provide accurate and precise estimates of LAI of loblolly pine (Pinus taeda L.). To test instrument accuracy, regression was used to...

  12. 40 CFR 86.110-90 - Exhaust gas sampling system; diesel vehicles.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... probe. The sensor shall have an accuracy and precision of ±2 °F (1.1 °C). (14) The dilute exhaust gas... probe. The sensor shall have an accuracy and precision of ±2 °F (1.1 °C). (14) The dilute exhaust gas... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Exhaust gas sampling system; diesel...

  13. 40 CFR 86.110-90 - Exhaust gas sampling system; diesel vehicles.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... probe. The sensor shall have an accuracy and precision of ±2 °F (1.1 °C). (14) The dilute exhaust gas... probe. The sensor shall have an accuracy and precision of ±2 °F (1.1 °C). (14) The dilute exhaust gas... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Exhaust gas sampling system; diesel...

  14. Determination of 21 drugs in oral fluid using fully automated supported liquid extraction and UHPLC-MS/MS.

    PubMed

    Valen, Anja; Leere Øiestad, Åse Marit; Strand, Dag Helge; Skari, Ragnhild; Berg, Thomas

    2017-05-01

    Collection of oral fluid (OF) is easy and non-invasive compared to the collection of urine and blood, and interest in OF for drug screening and diagnostic purposes is increasing. A high-throughput ultra-high-performance liquid chromatography-tandem mass spectrometry method for the determination of 21 drugs in OF, using fully automated 96-well plate supported liquid extraction for sample preparation, is presented. The method covers a selection of classic drugs of abuse, including amphetamines, cocaine, cannabis, opioids, and benzodiazepines. The method was fully validated for a 200 μL OF/buffer mix using an Intercept OF sampling kit; validation included linearity, sensitivity, precision, accuracy, extraction recovery, matrix effects, stability, and carry-over. Inter-assay precision (RSD) and accuracy (relative error) were <15% and -13 to 5%, respectively, for all compounds at concentrations equal to or higher than the lower limit of quantification. Extraction recoveries were between 58 and 76% (RSD < 8%), except for tetrahydrocannabinol and three 7-amino benzodiazepine metabolites, with recoveries between 23 and 33% (RSD 51-52% and 11-25%, respectively). Ion enhancement or ion suppression effects were observed for a few compounds; however, they were largely compensated for by the internal standards used. Deuterium-labelled and 13C-labelled internal standards were used for 8 and 11 of the compounds, respectively. In a comparison between Intercept and Quantisal OF kits, better recoveries and fewer matrix effects were observed for some compounds using Quantisal. The method is sensitive and robust for its purposes and has been used successfully since February 2015 for the analysis of Intercept OF samples from 2600 cases over a 12-month period. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-01

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the maxima 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is suitable for the assay of these drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing.

  16. Simultaneous determination of dextromethorphan, dextrorphan and doxylamine in human plasma by HPLC coupled to electrospray ionization tandem mass spectrometry: application to a pharmacokinetic study.

    PubMed

    Donato, J L; Koizumi, F; Pereira, A S; Mendes, G D; De Nucci, G

    2012-06-15

    In the present study, a fast, sensitive and robust method to quantify dextromethorphan, dextrorphan and doxylamine in human plasma using deuterated internal standards (IS) is described. The analytes and the IS were extracted from plasma by liquid-liquid extraction (LLE) using diethyl ether/hexane (80/20, v/v). Extracted samples were analyzed by high-performance liquid chromatography coupled to electrospray ionization tandem mass spectrometry (HPLC-ESI-MS/MS). Chromatographic separation was performed by pumping the mobile phase (acetonitrile/water/formic acid, 90/9/1, v/v/v) for 4.0 min at a flow rate of 1.5 mL min⁻¹ through a Phenomenex Gemini® C18, 5 μm analytical column (150 × 4.6 mm i.d.). The calibration curve was linear over the range 0.2 to 200 ng mL⁻¹ for dextromethorphan and doxylamine and 0.05 to 10 ng mL⁻¹ for dextrorphan. The intra-batch precision (%CV) and accuracy of the method ranged from 2.5 to 9.5% and from 88.9 to 105.1%, respectively. The inter-batch precision (%CV) and accuracy ranged from 6.7 to 10.3% and from 92.2 to 107.1%, respectively. The run time was 4 min. The analytical procedure described herein was used to assess the pharmacokinetics of dextromethorphan, dextrorphan and doxylamine in healthy volunteers after a single oral dose of a formulation containing 30 mg of dextromethorphan hydrobromide and 12.5 mg of doxylamine succinate. The method has high sensitivity and specificity and allows the high-throughput analysis required for a pharmacokinetic study. Copyright © 2012 Elsevier B.V. All rights reserved.
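    The precision and accuracy figures quoted in validation abstracts like this one come from simple statistics over replicate quality-control measurements: precision as the coefficient of variation and accuracy as the measured mean relative to the nominal concentration. A minimal sketch with illustrative numbers (not the study's data):

```python
from statistics import mean, stdev

def cv_percent(values):
    """Precision reported as the coefficient of variation (%CV)."""
    return 100.0 * stdev(values) / mean(values)

def accuracy_percent(values, nominal):
    """Accuracy reported as the measured mean in percent of nominal."""
    return 100.0 * mean(values) / nominal

# Hypothetical back-calculated concentrations (ng/mL) of five replicate
# QC samples at a nominal 10 ng/mL level (values are illustrative only).
replicates = [9.6, 10.2, 9.9, 10.4, 9.8]
precision_cv = cv_percent(replicates)
accuracy_pct = accuracy_percent(replicates, 10.0)
```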

  17. Simultaneous determination of chromones and coumarins in Radix Saposhnikoviae by high performance liquid chromatography with diode array and tandem mass detectors.

    PubMed

    Kim, Min Kyung; Yang, Dong-Hyug; Jung, Mihye; Jung, Eun Ha; Eom, Han Young; Suh, Joon Hyuk; Min, Jung Won; Kim, Unyong; Min, Hyeyoung; Kim, Jinwoong; Han, Sang Beom

    2011-09-16

    Methods using high performance liquid chromatography with diode array detection (HPLC-DAD) and tandem mass spectrometry (HPLC-MS/MS) were developed and validated for the simultaneous determination of 5 chromones and 6 coumarins: prim-O-glucosylcimifugin (1), cimifugin (2), nodakenin (3), 4'-O-β-d-glucosyl-5-O-methylvisamminol (4), sec-O-glucosylhamaudol (5), psoralen (6), bergapten (7), imperatorin (8), phellopterin (9), 3'-O-angeloylhamaudol (10) and anomalin (11), in Radix Saposhnikoviae. The separation conditions for HPLC-DAD were optimized using an Ascentis Express C18 (4.6 mm × 100 mm, 2.7 μm particle size) fused-core column. The mobile phase was composed of 10% aqueous acetonitrile (A) and 90% acetonitrile (B), and elution was performed in gradient mode at a flow rate of 1.0 mL/min. The detection wavelength was set at 300 nm. The HPLC-DAD method yielded a baseline separation of the 11 components in a 50% methanol extract of Radix Saposhnikoviae with no interfering peaks detected. The HPLC-DAD method was validated in terms of linearity, accuracy and precision (intra- and inter-day), limit of quantification (LOQ), recovery, and robustness. Specific determination of the 11 components was also accomplished by a triple quadrupole tandem mass spectrometer equipped with an electrospray ionization (ESI) source. This HPLC-MS/MS method was likewise validated by determining the linearity, limit of quantification, accuracy, and precision. Quantification of the 11 components in 51 commercial Radix Saposhnikoviae samples was successfully performed using the developed HPLC-DAD method. The identity, batch-to-batch consistency, and authenticity of Radix Saposhnikoviae were successfully monitored by the proposed HPLC-DAD and HPLC-MS/MS methods. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. High-resolution myocardial T1 mapping using single-shot inversion recovery fast low-angle shot MRI with radial undersampling and iterative reconstruction

    PubMed Central

    Joseph, Arun A; Kalentev, Oleksandr; Merboldt, Klaus-Dietmar; Voit, Dirk; Roeloffs, Volkert B; van Zalk, Maaike; Frahm, Jens

    2016-01-01

    Objective: To develop a novel method for rapid myocardial T1 mapping at high spatial resolution. Methods: The proposed strategy represents a single-shot inversion recovery experiment triggered to early diastole during a brief breath-hold. The measurement combines an adiabatic inversion pulse with a real-time readout by highly undersampled radial FLASH, iterative image reconstruction and T1 fitting with automatic deletion of systolic frames. The method was implemented on a 3-T MRI system using a graphics processing unit-equipped bypass computer for online application. Validations employed a T1 reference phantom including analyses at simulated heart rates from 40 to 100 beats per minute. In vivo applications involved myocardial T1 mapping in short-axis views of healthy young volunteers. Results: At 1-mm in-plane resolution and 6-mm section thickness, the inversion recovery measurement could be shortened to 3 s without compromising T1 quantitation. Phantom studies demonstrated T1 accuracy and high precision for values ranging from 300 to 1500 ms and up to a heart rate of 100 beats per minute. Similar results were obtained in vivo yielding septal T1 values of 1246 ± 24 ms (base), 1256 ± 33 ms (mid-ventricular) and 1288 ± 30 ms (apex), respectively (mean ± standard deviation, n = 6). Conclusion: Diastolic myocardial T1 mapping with use of single-shot inversion recovery FLASH offers high spatial resolution, T1 accuracy and precision, and practical robustness and speed. Advances in knowledge: The proposed method will be beneficial for clinical applications relying on native and post-contrast T1 quantitation. PMID:27759423
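    The T1 fitting step described above can be illustrated with a generic signed inversion-recovery model. The paper's actual pipeline (undersampled radial FLASH with iterative reconstruction) is far more involved; this sketch only shows the model-fitting idea, assuming a signed signal S(t) = A - B·exp(-t/T1*) and the standard Look-Locker correction T1 = T1*(B/A - 1):

```python
import math

def fit_ir_t1(times_ms, signal):
    """Fit the signed inversion-recovery model S(t) = A - B*exp(-t/T1*)
    by scanning candidate T1* values and solving A, B linearly at each;
    the Look-Locker correction T1 = T1*(B/A - 1) is then applied."""
    best = None
    for t1s in range(100, 3001, 2):  # candidate apparent T1* (ms)
        e = [math.exp(-t / t1s) for t in times_ms]
        n = len(e)
        ebar = sum(e) / n
        sbar = sum(signal) / n
        var = sum((ei - ebar) ** 2 for ei in e)
        cov = sum((ei - ebar) * (si - sbar) for ei, si in zip(e, signal))
        slope = cov / var            # equals -B in S = A + slope*e
        A = sbar - slope * ebar
        sse = sum((si - (A + slope * ei)) ** 2 for si, ei in zip(signal, e))
        if best is None or sse < best[0]:
            best = (sse, t1s, A, -slope)
    _, t1s, A, B = best
    return t1s * (B / A - 1.0)

# Synthetic noiseless data: T1 = 1250 ms with A = 1, B = 2, so the
# apparent T1* = T1/(B/A - 1) = 1250 ms as well.
times = [100, 300, 600, 1000, 1500, 2200, 3000]
sig = [1.0 - 2.0 * math.exp(-t / 1250.0) for t in times]
t1_est = fit_ir_t1(times, sig)
```

    The recovered value matches the septal T1 range reported in the abstract; in the real method, systolic frames are additionally discarded before fitting.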

  19. Absolute GPS Positioning Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute position in the Earth's geocentric reference system. For comparison/validation, GA results are compared to those obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500 m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in the different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests of the ability of GA to recover acceptable coordinates in the presence of important levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement error are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.
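    The GA idea above (minimize the sum of squared pseudo-range residuals over candidate XYZ positions) can be sketched in a few lines. This toy uses an assumed satellite geometry, noise-free ranges, no receiver clock bias, and simpler GA parameters than the paper's (N=1000, Pc=60-70%, Pm=30-40%); it is an illustration of the technique, not the author's implementation:

```python
import math, random

random.seed(42)

# Toy geometry (assumed for illustration, not the paper's data): four
# satellites at known positions and noise-free pseudo-ranges to an
# unknown receiver; receiver clock bias is ignored in this sketch.
SATS = [(2.0e7, 0.0, 0.0), (0.0, 2.0e7, 0.0),
        (0.0, 0.0, 2.0e7), (1.4e7, 1.4e7, 1.4e7)]
TRUE = (1000.0, -2000.0, 500.0)
RANGES = [math.dist(s, TRUE) for s in SATS]

def cost(p):
    """Sum of squared pseudo-range residuals (m^2), the GA fitness."""
    return sum((math.dist(s, p) - r) ** 2 for s, r in zip(SATS, RANGES))

def solve_ga(pop_size=200, gens=200, box=5000.0):
    # random initial population of candidate XYZ positions in a box
    pop = [tuple(random.uniform(-box, box) for _ in range(3))
           for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=cost)
        elite = pop[:pop_size // 4]        # elitist selection
        sigma = box * 0.5 ** (g / 20)      # shrinking mutation scale
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()            # blend crossover of two parents
            children.append(tuple(w * ai + (1 - w) * bi
                                  + random.gauss(0, sigma)
                                  for ai, bi in zip(a, b)))
        pop = elite + children
    return min(pop, key=cost)

est = solve_ga()
```

    Because the cost function is evaluated directly, the same loop works with only three pseudo-ranges, which is exactly the situation where the linearized Bancroft scheme cannot be applied.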

  20. Quantification of mevalonate-5-phosphate using UPLC-MS/MS for determination of mevalonate kinase activity.

    PubMed

    Reitzle, Lukas; Maier, Barbara; Stojanov, Silvia; Teupser, Daniel; Muntau, Ania C; Vogeser, Michael; Gersting, Søren W

    2015-08-01

    Mevalonate kinase deficiency, a rare autosomal recessive autoinflammatory disease, is caused by mutations in the MVK gene encoding mevalonate kinase (MK). MK catalyzes the phosphorylation of mevalonic acid to mevalonate-5-phosphate (MVAP) in the pathway of isoprenoid and sterol synthesis. The disease phenotype correlates with residual activity, ranging from <0.5% for mevalonic aciduria to 1-7% for the milder hyperimmunoglobulinemia D and periodic fever syndrome (HIDS). Hence, assessment of loss of function requires high-accuracy measurements. We describe a method using isotope dilution UPLC-MS/MS for precise and sensitive determination of MK activity. Wild-type MK and the variant V261A, which is associated with HIDS, were recombinantly expressed in Escherichia coli. Enzyme activity was determined from the formation of MVAP over time, quantified by isotope dilution UPLC-MS/MS. The method was validated according to the FDA Guidance for Bioanalytical Method Validation. Sensitivity for detection of MVAP by UPLC-MS/MS was improved by derivatization with butanol-HCl (LLOQ, 5.0 fmol), and the method was linear from 0.5 to 250 μmol/L (R² > 0.99) with a precision of ≥89% and an accuracy of ±2.7%. The imprecision of the activity assay, including the enzymatic reaction and the UPLC-MS/MS quantification, was 8.3%. The variant V261A showed a significantly decreased activity of 53.1%. Accurate determination of MK activity was enabled by sensitive and reproducible detection of MVAP using UPLC-MS/MS. The novel method may improve the molecular characterization of MVK mutations, provide robust genotype-phenotype correlations, and accelerate compound screening for drug candidates restoring variant MK activity. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  1. In situ monitoring using Lab on Chip devices, with particular reference to dissolved silica.

    NASA Astrophysics Data System (ADS)

    Turner, G. S. C.; Loucaides, S.; Slavik, G. J.; Owsianka, D. R.; Beaton, A.; Nightingale, A.; Mowlem, M. C.

    2016-02-01

    In situ sensors are attractive alternatives to discrete sampling of natural waters, offering the potential for sustained long-term monitoring and eliminating the need for sample handling, which can reduce sample contamination and degradation. In addition, sensors can be clustered into multi-parameter observatories and networked to provide both spatial and time-series coverage. High resolution, low cost, and long-term monitoring are the biggest advantages of these technologies for oceanographers. Microfluidic technology miniaturises bench-top assay systems into portable devices, known as 'lab on a chip' (LOC) devices. The principal advantages of this technology are low power consumption, simplicity, speed, and stability, without compromising on quality (accuracy, precision, selectivity, sensitivity). We have successfully demonstrated in situ sensors based on this technology for the measurement of pH, nitrate and nitrite. Dissolved silica (dSi) is an important macro-nutrient supporting a major fraction of oceanic primary production carried out by diatoms. The biogeochemical Si cycle is undergoing significant modifications due to human activities, which affects the availability of dSi and consequently primary production. Monitoring dSi concentrations is therefore critical to increasing our understanding of the biogeochemical Si cycle in order to predict and manage anthropogenic perturbations. The standard bench-top air-segmented flow technique, utilising the reduction of silicomolybdic acid with spectrophotometric detection, has been miniaturised into a LOC system; the target limit of detection is 1 nM, with ±5% accuracy and 3% precision. Results from the assay optimisation are presented along with reagent shelf life to demonstrate the robustness of the chemistry. Laboratory trials of the sensor using ideal solutions and environmental samples under environmentally relevant conditions (temperature, pressure) are discussed, along with an overview of our current LOC analytical capabilities.
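    A detection-limit target like the 1 nM quoted above is conventionally assessed with the 3-sigma criterion: three times the standard deviation of replicate blanks divided by the calibration slope. A minimal sketch with hypothetical blank readings and an assumed slope (neither is from the record):

```python
from statistics import stdev

def limit_of_detection(blank_signals, slope):
    """3-sigma limit of detection: LOD = 3 * sd(blanks) / slope."""
    return 3.0 * stdev(blank_signals) / slope

# Hypothetical absorbance readings of replicate reagent blanks and an
# assumed calibration slope in absorbance units per nM of dSi.
blanks = [0.0010, 0.0012, 0.0009, 0.0011, 0.0010]
slope_au_per_nM = 0.0004
lod_nM = limit_of_detection(blanks, slope_au_per_nM)
```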

  2. Automated extraction and validation of children's gait parameters with the Kinect.

    PubMed

    Motiian, Saeid; Pergami, Paola; Guffey, Keegan; Mancinelli, Corrie A; Doretto, Gianfranco

    2015-12-02

    Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age was performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method achieves excellent accuracy and good precision in segmenting temporal sequences of body-joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach takes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children.

  3. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    PubMed

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either a normalized or a factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures was achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at the maxima 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is suitable for the assay of these drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes

    NASA Astrophysics Data System (ADS)

    Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.

    2017-11-01

    Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating the partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria and, for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. The resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median, depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates, with less than one third of the total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. The multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given the poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from the freshwater components of terrestrial carbon balances, future efforts should focus on improving the accuracy and precision of measurements of CO2-related parameters (including direct pCO2) and of the associated pCO2 calculations.
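    The error-propagation step described above can be sketched as a Monte Carlo for one parameter pair, pH and DIC. This is a simplified version of the calculation: approximate freshwater equilibrium constants at 25 °C are assumed, and the input uncertainties are illustrative rather than the study's values:

```python
import math, random

random.seed(1)

# Approximate freshwater equilibrium constants at 25 deg C (assumed):
K0 = 0.034         # Henry's law constant, mol L^-1 atm^-1
K1 = 10 ** -6.35   # first dissociation constant of carbonic acid
K2 = 10 ** -10.33  # second dissociation constant

def pco2_uatm(pH, dic_umol):
    """pCO2 (micro-atm) from the pH/DIC parameter pair via carbonate
    equilibria: the CO2* fraction of DIC follows from speciation."""
    h = 10.0 ** -pH
    co2_frac = h * h / (h * h + K1 * h + K1 * K2)
    return dic_umol * 1e-6 * co2_frac / K0 * 1e6

# Monte Carlo propagation of plausible random measurement errors
# (pH 8.00 +/- 0.02, DIC 2000 +/- 20 umol/L; illustrative values).
draws = [pco2_uatm(random.gauss(8.00, 0.02), random.gauss(2000.0, 20.0))
         for _ in range(5000)]
mean_pco2 = sum(draws) / len(draws)
sd_pco2 = math.sqrt(sum((d - mean_pco2) ** 2 for d in draws)
                    / (len(draws) - 1))
rel_err_pct = 100.0 * sd_pco2 / mean_pco2
```

    Even a ±0.02 pH error alone maps to roughly a ±5% pCO2 error (since pCO2 scales nearly as 10^-pH in this regime), which illustrates why small input uncertainties inflate the calculated pCO2.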

  5. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    PubMed

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts has been promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster, and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg-count-negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. With regard to the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ (p<0.0001). The smartphone and ImageJ performed with equal variance. With regard to the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster (p<0.0001) and Mini-FLOTAC (p<0.0001) methods, and the Mini-FLOTAC was significantly more precise than the McMaster (p=0.0228). Mean accuracies for the Mini-FLOTAC, McMaster, and smartphone system were 64.51%, 21.67%, and 32.53%, respectively. The Mini-FLOTAC was significantly more accurate than the McMaster (p<0.0001) and the smartphone system (p<0.0001), while the smartphone and McMaster counts did not have statistically different accuracies. Overall, the smartphone system compared favorably to the manual methods with regard to precision, and reasonably with regard to accuracy. With further refinement, this system could become useful in veterinary practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Sensor-Based Inspection of the Formation Accuracy in Ultra-Precision Grinding (UPG) of Aspheric Surface Considering the Chatter Vibration

    NASA Astrophysics Data System (ADS)

    Lei, Yao; Bai, Yue; Xu, Zhijun

    2018-06-01

    This paper proposes an experimental approach for monitoring and inspection of the formation accuracy in ultra-precision grinding (UPG) with respect to chatter vibration. Two factors related to the grinding process, the rotational speeds of the grinding wheel and spindle and the oil pressure of the hydrostatic bearing, are taken into account in determining the accuracy. Meanwhile, a mathematical model of the radius deviation caused by the micro-vibration is also established and applied in the experiments. The results show that the accuracy is sensitive to the vibration and that the forming accuracy is much improved with proper processing parameters. It is found that the accuracy of the aspheric surface can be less than 4 μm when the grinding speed is 1400 r/min and the wheel speed is 100 r/min, with the oil pressure being 1.1 MPa.

  7. Adaptive nonsingular terminal sliding mode controller for micro/nanopositioning systems driven by linear piezoelectric ceramic motors.

    PubMed

    Safa, Alireza; Abdolmalaki, Reza Yazdanpanah; Shafiee, Saeed; Sadeghi, Behzad

    2018-06-01

    In the field of nanotechnology, there is a growing demand to provide precision control and manipulation of devices with the ability to interact with complex and unstructured environments at micro/nano-scale. As a result, ultrahigh-precision positioning stages have been turned into a key requirement of nanotechnology. In this paper, linear piezoelectric ceramic motors (LPCMs) are adopted to drive micro/nanopositioning stages since they have the ability to achieve high precision in addition to being versatile to be implemented over a wide range of applications. In the establishment of a control scheme for such manipulation systems, the presence of friction, parameter uncertainties, and external disturbances prevent the systems from providing the desired positioning accuracy. The work in this paper focuses on the development of a control framework that addresses these issues as it uses the nonsingular terminal sliding mode technique for the precise position tracking problem of an LPCM-driven positioning stage with friction, uncertain parameters, and external disturbances. The developed control algorithm exhibits the following two attractive features. First, upper bounds of system uncertainties/perturbations are adaptively estimated in the proposed controller; thus, prior knowledge about uncertainty/disturbance bounds is not necessary. Second, the discontinuous signum function is transferred to the time derivative of the control input and the continuous control signal is obtained after integration; consequently, the chattering phenomenon, which presents a major handicap to the implementation of conventional sliding mode control in real applications, is alleviated without deteriorating the robustness of the system. The stability of the controlled system is analyzed, and the convergence of the position tracking error to zero is analytically proven. The proposed control strategy is experimentally validated and compared to the existing control approaches. 
Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement are affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision was less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016.

  9. A literature review of anthropometric studies of school students for ergonomics purposes: Are accuracy, precision and reliability being considered?

    PubMed

    Bravo, G; Bragança, S; Arezes, P M; Molenbroek, J F M; Castellucci, H I

    2018-05-22

    Despite offering many benefits, the direct manual anthropometric measurement method can be problematic due to its vulnerability to measurement errors. The purpose of this literature review was to determine whether currently published anthropometric studies of school children, related to ergonomics, mentioned or evaluated the precision, reliability or accuracy of the direct manual measurement method. Two bibliographic databases, together with the bibliographic references of all the selected papers, were used to find relevant published papers in the fields considered in this study. Forty-six (46) studies met the criteria previously defined for this literature review. However, only ten (10) studies mentioned at least one of the analyzed variables, and none evaluated all of them; reliability was assessed in only three papers. Moreover, with regard to the factors that affect precision, reliability and accuracy, the reviewed papers differed widely. This was particularly clear in the instruments used for the measurements, which were not consistent across the studies. Additionally, there was a clear lack of information regarding the evaluators' training and the procedures for anthropometric data collection, which are assumed to be the most important issues affecting precision, reliability and accuracy. Based on the review of the literature, it was possible to conclude that the considered anthropometric studies had not focused on the analysis of the precision, reliability and accuracy of the manual measurement methods. Hence, with the aim of avoiding measurement errors and misleading data, anthropometric studies should put more effort and care into testing measurement error and defining the procedures used to collect anthropometric data.

  10. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order DG methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh (h-) refinement and order (p-) enrichment is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the RANS and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.

  11. A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services

    NASA Astrophysics Data System (ADS)

    Malinowski, Marcin; Kwiecień, Janusz

    2016-12-01

    Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative to differential measurements, which require maintaining a connection with a single RTK station or a regional RTN network of reference stations. This situation is especially common in areas with poorly developed ground-station infrastructure. Most research conducted so far on the PPP technique has concerned the processing of entire-day observation sessions. This paper, however, presents a comparative analysis of the accuracy of absolute position determination from observation sessions lasting between 1 and 7 hours, using four permanent online services that perform PPP computations: the Automatic Precise Positioning Service (APPS), the Canadian Spatial Reference System Precise Point Positioning service (CSRS-PPP), the GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). The results indicate that sessions of at least two hours allow an absolute position to be obtained with an accuracy of 2-4 cm. The impact of simultaneous positioning of a three-point test network on the derived horizontal distances and relative height differences between the measured triangle vertices was also evaluated, with distances and relative height differences measured with a Leica TDRA6000 laser station adopted as references. The analyses show that measurement sessions of at least two hours can be used to determine horizontal distances or height differences with an accuracy of 1-2 cm.
PPP calculations based on Rapid products reached coordinate accuracies close to those of solutions employing Final products.

  12. Species discrimination of Radix Bupleuri through the simultaneous determination of ten saikosaponins by high performance liquid chromatography with evaporative light scattering detection and electrospray ionization mass spectrometry.

    PubMed

    Lee, Jaehyun; Yang, Dong-Hyug; Suh, Joon Hyuk; Kim, Unyong; Eom, Han Young; Kim, Junghyun; Lee, Mi Young; Kim, Jinwoong; Han, Sang Beom

    2011-12-15

    A simple, rapid and robust high performance liquid chromatography-evaporative light scattering detection (HPLC-ELSD) method was established for the species discrimination and quality evaluation of Radix Bupleuri through the simultaneous determination of ten saikosaponins, namely saikosaponin-a, -b(1), -b(2), -b(3), -b(4), -c, -d, -g, -h, and -i. These compounds were chromatographed on an Ascentis® Express C18 column with a gradient elution of acetonitrile and water containing 0.1% acetic acid at a flow rate of 1.0 mL/min. Saikosaponins were monitored by ELSD, operated at a 50°C drift tube temperature and 3.0 bar nebulizer gas (N2) pressure. The developed method was validated with respect to linearity, intra- and inter-day accuracy and precision, limit of quantification (LOQ), recovery, robustness and stability, showing good precision and accuracy, with intra- and inter-assay coefficients of variation less than 15% at all concentrations. Furthermore, a high performance liquid chromatography-electrospray ionization mass spectrometry (HPLC-ESI-MS) method was developed to confirm the presence of the ten saikosaponins, as well as the reliability of ELSD. The extraction of saikosaponins from Radix Bupleuri was also optimized by investigating the effect of extraction methods (sonication, reflux and maceration) and various solvents on the extraction efficiencies. Sonication with 70% methanol for 40 min was found to be simple and effective for extraction of the major saikosaponins. The analytical method was applied to determine saikosaponin profiles in 20 real samples covering four Bupleurum species, namely B. falcatum, B. chinense, B. sibiricum and the poisonous B. longiradiatum. Saikosaponin-a, -c and -d were the major constituents in B. falcatum, B. chinense, and B. longiradiatum, whereas saikosaponin-c was not detected in B. sibiricum.
In addition, no saikosaponin-b(3) was detected in B. longiradiatum samples, indicating that the toxic B. longiradiatum may be tentatively distinguished from officially listed Bupleurum species (B. falcatum and B. chinense) based on their saikosaponin profiles. Overall the simultaneous determination of ten saikosaponins in Radix Bupleuri was shown to be a promising tool to adopt for the discrimination and quality control of closely related Bupleurum species. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. A universal approach to determine footfall timings from kinematics of a single foot marker in hoofed animals

    PubMed Central

    Clayton, Hilary M.

    2015-01-01

    The study of animal movement commonly requires the segmentation of continuous data streams into individual strides. The use of forceplates and foot-mounted accelerometers readily allows the detection of the foot-on and foot-off events that define a stride. However, when relying on optical methods such as motion capture, there is a lack of validated, robust, universally applicable stride event detection methods. To date, no method has been validated for movement on a circle, and algorithms are commonly specific to front/hind limbs or gait. In this study, we aimed to develop and validate kinematic stride segmentation methods applicable to movement on a straight line and circle at walk and trot, relying exclusively on a single, dorsal hoof marker. The advantage of such marker placement is its robustness to marker loss and occlusion. Eight horses walked and trotted on a straight line and in a circle over an array of multiple forceplates. Kinetic events were detected based on the vertical force profile and used as the reference values. Kinematic events were detected from the displacement, velocity or acceleration signals of the dorsal hoof marker, depending on the algorithm, using (i) defined thresholds associated with the derived movement signals and (ii) specific events in the derived movement signals. Method comparison was performed by calculating limits of agreement, accuracy, between-horse precision and within-horse precision based on the differences between kinetic and kinematic events. In addition, we examined the effect of force thresholds ranging from 50 to 150 N on the timings of kinetic events. The two approaches resulted in very good and comparable performance: of the 3,074 processed footfall events, 95% of individual foot-on and foot-off events differed by no more than 26 ms from the kinetic event, with average accuracy between −11 and 10 ms and average within- and between-horse precision ≤8 ms.
While the event-based method may be less likely to suffer from scaling effects, on soft ground the threshold-based method may prove more valuable. While we found that use of velocity thresholds for foot on detection results in biased event estimates for the foot on the inside of the circle at trot, adjusting thresholds for this condition negated the effect. For the final four algorithms, we found no noteworthy bias between conditions or between front- and hind-foot timings. Different force thresholds in the range of 50 to 150 N had the greatest systematic effect on foot-off estimates in the hind limbs (up to on average 16 ms per condition), being greater than the effect on foot-on estimates or foot-off estimates in the forelimbs (up to on average ±7 ms per condition). PMID:26157641
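As a concrete illustration of the threshold-based approach (i), foot-on candidates can be detected where the speed of a single foot marker drops below a threshold at the end of the swing phase. This is a generic sketch on a synthetic signal; the threshold value, sampling rate and gait profile are placeholders, not the study's actual parameters.

```python
import numpy as np

def detect_foot_on(pos, fs, v_thresh=0.05):
    """Return sample indices where the marker speed crosses below
    v_thresh (foot-on candidates). pos: 1-D marker position [m],
    fs: sampling rate [Hz], v_thresh: speed threshold [m/s]."""
    speed = np.abs(np.gradient(pos, 1.0 / fs))       # finite-difference speed
    below = speed < v_thresh
    return np.where(~below[:-1] & below[1:])[0] + 1  # downward crossings

# synthetic gait: 0.5 s strides in which the hoof advances for 0.25 s, then rests
fs = 200
t = np.arange(0, 2, 1.0 / fs)
pos = np.floor(t / 0.5) + np.clip((t % 0.5) / 0.25, 0.0, 1.0)
events = detect_foot_on(pos, fs)   # one event per stride, near t = 0.25, 0.75, ...
```

Foot-off detection would use the symmetric upward crossing; the event-based variant (ii) would instead look for extrema in the derived signals.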

  14. Quality by design approach for the separation of naproxcinod and its related substances by fused core particle technology column.

    PubMed

    Inugala, Ugandar Reddy; Pothuraju, Nageswara Rao; Vangala, Ranga Reddy

    2013-01-01

    This paper describes the development of a rapid, novel, stability-indicating gradient reversed-phase high-performance liquid chromatographic method, and associated system suitability parameters, for the analysis of naproxcinod in the presence of its related substances and degradants using a quality-by-design approach. All of the factors that affect the separation of naproxcinod and its impurities, and their mutual interactions, were investigated, and the robustness of the method was ensured. The method was developed using an Ascentis Express C8 150 × 4.6 mm, 2.7 µm column with a mobile phase containing a gradient mixture of two solvents. The eluted compounds were monitored at 230 nm; the run time was 20 min, within which naproxcinod and its eight impurities were satisfactorily separated. Naproxcinod was subjected to oxidative, acid, base, hydrolytic, thermal and photolytic stress conditions. It was found to degrade significantly under acidic and basic conditions and to be stable under thermal, photolytic, oxidative and aqueous degradation conditions. The degradation products were satisfactorily resolved from the primary peak and its impurities, proving the stability-indicating power of the method. The developed method was validated per International Conference on Harmonization guidelines with respect to specificity, linearity, limit of detection, limit of quantification, accuracy, precision and robustness.

  15. A novel monolithic piezoelectric actuated flexure-mechanism based wire clamp for microelectronic device packaging.

    PubMed

    Liang, Cunman; Wang, Fujun; Tian, Yanling; Zhao, Xingyu; Zhang, Hongjie; Cui, Liangyu; Zhang, Dawei; Ferreira, Placid

    2015-04-01

    A novel monolithic piezoelectric actuated wire clamp is presented in this paper to achieve fast, accurate, and robust microelectronic device packaging. The wire clamp has a compact, flexure-based mechanical structure and low weight. To obtain large and robust jaw displacements and ensure parallel jaw grasping, a two-stage amplification mechanism composed of a homothetic bridge-type mechanism and a parallelogram leverage mechanism was designed. Pseudo-rigid-body modeling and Lagrange approaches were employed for the kinematic, static, and dynamic modeling of the wire clamp, and design optimization was carried out. The displacement amplification ratio, maximum allowable stress, and natural frequency were calculated. Finite element analysis (FEA) was conducted to evaluate the characteristics of the wire clamp, and the wire electro-discharge machining technique was utilized to fabricate the monolithic structure. Experimental tests were carried out to investigate the performance, and the experimental results match well with the theoretical calculations and FEA. The amplification ratio of the clamp is 20.96 and the working mode frequency is 895 Hz. Step response tests show that the wire clamp has a fast response and high accuracy, with a motion resolution of 0.2 μm. High-speed precision grasping of gold and copper wires was realized using the wire clamp.

  16. Robust Hidden Markov Model based intelligent blood vessel detection of fundus images.

    PubMed

    Hassan, Mehdi; Amin, Muhammad; Murtza, Iqbal; Khan, Asifullah; Chaudhry, Asmatullah

    2017-11-01

    In this paper, we consider the challenging problem of detecting retinal vessel networks. Precise detection of retinal vessel networks is vital for accurate eye disease diagnosis. Most blood vessel tracking techniques cannot properly track vessels in the presence of occlusion: owing to limited sensor resolution or artifacts in fundus image acquisition, part of a vessel may be occluded, and accurately tracing these vital vessels then becomes a challenging task. For this purpose, we propose a new robust and intelligent retinal vessel detection technique based on a Hidden Markov Model. The proposed model is able to successfully track vessels in the presence of occlusion. The effectiveness of the proposed technique is evaluated on the publicly available standard DRIVE dataset of fundus images. The experiments show that the proposed technique not only outperforms state-of-the-art retinal blood vessel segmentation methods, but is also capable of accurate occlusion handling in retinal vessel networks. The proposed technique offers better average classification accuracy, sensitivity, specificity, and area under the curve (AUC) of 95.7%, 81.0%, 97.0%, and 90.0% respectively, which shows its usefulness. Copyright © 2017 Elsevier B.V. All rights reserved.
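The dynamic-programming core that HMM-based trackers build on is the Viterbi algorithm, which recovers the most likely hidden state sequence given per-step observation likelihoods. The sketch below is a generic log-domain Viterbi implementation, not the paper's specific vessel model; the two-state toy probabilities are illustrative only.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B_obs):
    """Most likely HMM state path. log_pi: (S,) initial log-probs;
    log_A: (S, S) transition log-probs; log_B_obs: (T, S) per-step
    emission log-likelihoods of the observed sequence."""
    T, S = log_B_obs.shape
    delta = log_pi + log_B_obs[0]            # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A      # scores[i, j]: transition i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_B_obs[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):            # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy 2-state example: observations first favor state 0, then state 1
log_pi = np.log([0.9, 0.1])
log_A = np.log([[0.8, 0.2], [0.2, 0.8]])
log_B = np.log([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])
```

In a vessel tracker the "states" would encode local vessel hypotheses (e.g. present/occluded, or candidate centerline positions) and the emissions would come from image evidence along the track.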

  17. A Robust Self-Alignment Method for Ship's Strapdown INS Under Mooring Conditions

    PubMed Central

    Sun, Feng; Lan, Haiyu; Yu, Chunyang; El-Sheimy, Naser; Zhou, Guangtao; Cao, Tong; Liu, Hang

    2013-01-01

    Strapdown inertial navigation systems (INS) need an alignment process to determine the initial attitude matrix between the body frame and the navigation frame. The conventional alignment process computes the initial attitude matrix from gravity and Earth rotational rate measurements. However, under mooring conditions, the inertial measurement unit (IMU) employed in a ship's strapdown INS suffers from both intrinsic sensor noise and external disturbance components caused by the motions of sea waves and wind waves, so a rapid and precise alignment of a ship's strapdown INS without any auxiliary information is hard to achieve. A robust solution is given in this paper to solve this problem. The inertial-frame-based alignment method is utilized to accommodate the mooring condition; most of the periodic low-frequency external disturbance components can be removed by the mathematical integration and averaging characteristic of this method. A novel prefilter, named the hidden Markov model based Kalman filter (HMM-KF), is proposed to remove the relatively high-frequency error components. Unlike digital filters, the HMM-KF introduces hardly any time delay. The turntable, mooring and sea experiments favorably validate the rapidness and accuracy of the proposed self-alignment method and the good de-noising performance of the HMM-KF. PMID:23799492
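The conventional baseline mentioned above, computing the initial attitude from gravity and Earth-rate measurements, can be sketched with the classical TRIAD construction. This is a textbook illustration of that baseline, not the paper's inertial-frame method; the latitude and the 90-degree test rotation are made-up example values.

```python
import numpy as np

def triad(g_b, w_b, g_n, w_n):
    """Body-to-navigation DCM from one vector pair: gravity (g) and the
    Earth rotation rate (w), each expressed in the body frame (_b) and
    the navigation frame (_n). Vectors must not be parallel."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])   # orthonormal triad
    return frame(g_n, w_n) @ frame(g_b, w_b).T  # C_b^n

# example: body frame rotated 90 deg about the vertical, latitude ~45 deg
C_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
g_n = np.array([0.0, 0.0, -9.81])                # gravity in nav frame
w_n = np.array([5.157e-5, 0.0, -5.157e-5])       # Earth rate in nav frame
g_b, w_b = C_true.T @ g_n, C_true.T @ w_n        # what the IMU would sense
C_est = triad(g_b, w_b, g_n, w_n)
```

The point of the paper is precisely that, under mooring disturbances, the raw g_b and w_b are too noisy for this direct construction, motivating the integration/averaging and the HMM-KF prefilter.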

  18. On-the-fly Locata/inertial navigation system integration for precise maritime application

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Li, Yong; Rizos, Chris

    2013-10-01

    The application of Global Navigation Satellite System (GNSS) technology has meant that marine navigators have greater access to a more consistent and accurate positioning capability than ever before. However, GNSS may not be able to meet all emerging navigation performance requirements for maritime applications with respect to service robustness, accuracy, integrity and availability. In particular, applications in port areas (for example automated docking) and in constricted waterways have very stringent performance requirements. Even when an integrated inertial navigation system (INS)/GNSS device is used there may still be performance gaps. GNSS signals are easily blocked or interfered with, and sometimes the satellite geometry may not be good enough for high accuracy and high reliability applications. Furthermore, INS accuracy degrades rapidly during GNSS outages. This paper investigates the use of a portable ground-based positioning system, known as 'Locata', integrated with an INS to provide accurate navigation in a marine environment without reliance on GNSS signals. An 'on-the-fly' Locata ambiguity resolution algorithm that takes advantage of geometry change via an extended Kalman filter is proposed in this paper. Single-differenced Locata carrier phase measurements are utilized to achieve accurate and reliable solutions. A 'loosely coupled' decentralized Locata/INS integration architecture based on the Kalman filter is used for data processing. In order to evaluate the system performance, a field trial was conducted on Sydney Harbour. A Locata network consisting of eight Locata transmitters was set up near the Sydney Harbour Bridge. The experiment demonstrated that the Locata on-the-fly (OTF) algorithm is effective and improves system accuracy in comparison with the conventional 'known point initialization' (KPI) method.
After the OTF and KPI comparison, the OTF Locata/INS integration is then assessed further and its performance improvement on both stand-alone OTF Locata and INS is shown. The Locata/INS integration can achieve centimetre-level accuracy for position solutions, and centimetre-per-second accuracy for velocity determination.

  19. Electromagnetic tracking (EMT) technology for improved treatment quality assurance in interstitial brachytherapy.

    PubMed

    Kellermeier, Markus; Herbolzheimer, Jens; Kreppner, Stephan; Lotter, Michael; Strnad, Vratislav; Bert, Christoph

    2017-01-01

    Electromagnetic tracking (EMT) is a novel technique for error detection and quality assurance (QA) in interstitial high-dose-rate brachytherapy (HDR-iBT). The purpose of this study is to provide a concept for data acquisition, developed as part of a clinical evaluation study on the use of EMT during interstitial treatment of breast cancer patients. The stability, accuracy, and precision of EMT-determined dwell positions were quantified. Dwell position reconstruction based on EMT was investigated on a CT table, an HDR table and a PDR bed to examine the influence of a typical clinical workflow on precision and accuracy. All investigations were performed using a precise PMMA phantom. The track of catheters inserted in the phantom was measured by manually inserting a 5 degree-of-freedom (DoF) sensor while recording the positions of three 6DoF fiducial sensors on the phantom surface to correct for motion. From the corrected data, dwell positions were reconstructed along each catheter's track. The accuracy of the EMT-determined dwell positions was quantified by the residual distances to reference dwell positions after a rigid registration. Precision and accuracy were investigated for different phantom-table and sensor-field generator (FG) distances. The measured precision of the EMT-determined dwell positions was ≤ 0.28 mm (95th percentile). Stability tests showed a drift of 0.03 mm in the first 20 min of use. Sudden shaking of the FG or (large) metallic objects close to the FG degrade the precision. The accuracy with respect to the reference dwell positions was < 1 mm on all clinical tables at 200 mm FG distance and 120 mm phantom-table distance. Phantom measurements showed that EMT-determined localization of dwell positions in HDR-iBT is stable, precise, and sufficiently accurate for clinical assessment. The presented method may be viable for clinical applications in HDR-iBT, such as implant definition, error detection or quantification of uncertainties.
Further clinical investigations are needed. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
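The accuracy metric used above, residual distances to reference dwell positions after a rigid registration, rests on a standard least-squares rigid alignment. Below is a minimal sketch using the SVD-based (Kabsch) solution; it is a generic implementation, not the study's software, and assumes known point correspondences.

```python
import numpy as np

def rigid_rms_residual(P, Q):
    """RMS residual after the best rigid (rotation + translation)
    alignment of measured points P onto reference points Q.
    P, Q: (N, 3) arrays of corresponding points."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # center both clouds
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)               # Kabsch cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # forbid reflection
    R = (U @ D @ Vt).T                                # optimal rotation
    aligned = Pc @ R.T + Q.mean(axis=0)               # registered measured points
    return float(np.sqrt(np.mean(np.sum((aligned - Q) ** 2, axis=1))))
```

For point sets related by an exact rigid transform the residual is zero, so measurement error shows up directly as a nonzero RMS residual, which is the sense in which the study quantifies accuracy.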

  20. Absolute determination of single-stranded and self-complementary adeno-associated viral vector genome titers by droplet digital PCR.

    PubMed

    Lock, Martin; Alvira, Mauricio R; Chen, Shu-Jen; Wilson, James M

    2014-04-01

    Accurate titration of adeno-associated viral (AAV) vector genome copies is critical for ensuring correct and reproducible dosing in both preclinical and clinical settings. Quantitative PCR (qPCR) is the current method of choice for titrating AAV genomes because of the simplicity, accuracy, and robustness of the assay. However, issues with qPCR-based determination of self-complementary AAV vector genome titers, due to primer-probe exclusion through genome self-annealing or through packaging of prematurely terminated defective interfering (DI) genomes, have been reported. Alternative qPCR, gel-based, or Southern blotting titering methods have been designed to overcome these issues but may represent a backward step from standard qPCR methods in terms of simplicity, robustness, and precision. Droplet digital PCR (ddPCR) is a new PCR technique that directly quantifies DNA copies with an unparalleled degree of precision and without the need for a standard curve or for a high degree of amplification efficiency; all properties that lend themselves to the accurate quantification of both single-stranded and self-complementary AAV genomes. Here we compare a ddPCR-based AAV genome titer assay with a standard and an optimized qPCR assay for the titration of both single-stranded and self-complementary AAV genomes. We demonstrate absolute quantification of single-stranded AAV vector genomes by ddPCR with up to 4-fold increases in titer over a standard qPCR titration but with equivalent readout to an optimized qPCR assay. In the case of self-complementary vectors, ddPCR titers were on average 5-, 1.9-, and 2.3-fold higher than those determined by standard qPCR, optimized qPCR, and agarose gel assays, respectively. 
Droplet digital PCR-based genome titering was superior to qPCR in terms of both intra- and interassay precision and is more resistant to PCR inhibitors, a desirable feature for in-process monitoring of early-stage vector production and for vector genome biodistribution analysis in inhibitory tissues.
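The standard-curve-free quantification that ddPCR provides rests on a simple Poisson correction: if a fraction p of droplets is positive, the mean number of target copies per droplet is -ln(1 - p). A minimal sketch follows; the 0.85 nL droplet volume and the example counts are assumptions for illustration, not values from this study.

```python
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_nl=0.85, dilution=1.0):
    """Absolute concentration (copies/uL of reaction) from droplet counts.
    Poisson correction: lambda = -ln(1 - p) mean copies per droplet, so
    no standard curve or perfect amplification efficiency is required."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                      # mean copies per droplet
    return lam / droplet_nl * 1000.0 * dilution   # nL -> uL

# e.g. 4000 positive out of 15000 accepted droplets
conc = ddpcr_copies_per_ul(4000, 15000)
```

Because the readout is an endpoint count rather than a cycle threshold, partial PCR inhibition that merely dims positive droplets does not change the titer, consistent with the inhibitor resistance noted above.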

  1. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes, with potential for better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from the CDDIS + IGN + BKG archives for the realization of hourly five-constellation ultra-rapid orbit updates. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy is clearly improved for all five satellite systems in comparison with the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites is improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions achieve the shortest time to first fix (TTFF) and the highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min at a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively.
As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions decrease dramatically, but there is no evident decrease for the accuracy of GCRE fixed solutions which can still achieve an accuracy of a few centimeters in the east and north components.

  2. Improved blood glucose estimation through multi-sensor fusion.

    PubMed

    Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe

    2011-01-01

    Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
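
    The abstract does not specify the fusion rule; as a hedged illustration of how independent sensor readings can be combined, the sketch below uses inverse-variance weighting, a standard textbook fusion rule. The glucose values and noise levels are invented for illustration.

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted average of independent sensor readings.

    A generic fusion rule, not necessarily the algorithm used in the
    paper; noisier sensors receive proportionally less weight.
    """
    m = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weight = 1/variance
    estimate = np.sum(w * m) / np.sum(w)           # fused estimate
    fused_var = 1.0 / np.sum(w)                    # variance of the estimate
    return estimate, fused_var

# Two hypothetical glucose readings (mg/dL) with different noise levels;
# the fused value sits closer to the more reliable first sensor:
est, var = fuse([100.0, 110.0], [4.0, 16.0])
```

    The fused variance is always smaller than the smallest individual sensor variance, which is the sense in which fusion improves robustness.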

  3. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
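
    A 2^(5-1) fractional factorial is one standard way to screen five two-level factors in 16 runs instead of 32. The sketch below generates such a design with the defining relation I = ABCDE, a common textbook generator and not necessarily the exact design used in the paper.

```python
import itertools

def fractional_factorial_2_5_1():
    """Generate a 16-run 2^(5-1) fractional factorial design.

    Factors are coded -1/+1. The fifth factor is aliased with the
    four-factor interaction (generator E = ABCD), giving a
    resolution-V design in which main effects and two-factor
    interactions are not aliased with each other.
    """
    runs = []
    for a, b, c, d in itertools.product((-1, 1), repeat=4):
        e = a * b * c * d          # defining relation I = ABCDE
        runs.append((a, b, c, d, e))
    return runs

design = fractional_factorial_2_5_1()   # 16 runs instead of 2^5 = 32
```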

  4. Calorimetry of low mass Pu239 items

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cremers, Teresa L; Sampson, Thomas E

    2010-01-01

    Calorimetric assay has the reputation of providing the highest precision and accuracy of all nondestructive assay measurements. Unfortunately, non-destructive assay practitioners and measurement consumers often extend, inappropriately, the high precision and accuracy of calorimetric assay to very low mass items. One purpose of this document is to present more realistic expectations for the random uncertainties associated with calorimetric assay for weapons grade plutonium items with masses of 200 grams or less.

  5. Location Technologies for Apparel Assembly

    DTIC Science & Technology

    1991-09-01

    School of Textile & Fiber Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0295, 206 O’Keefe ... at a cost of less than $500. A review is also given of state-of-the-art vision systems. These systems have the necessary accuracy and precision for apparel manufacturing applications and could ...

  6. [Precision and accuracy of "a pocket" pulse oximeter in Mexico City].

    PubMed

    Torre-Bouscoulet, Luis; Chávez-Plascencia, Elizabeth; Vázquez-García, Juan Carlos; Pérez-Padilla, Rogelio

    2006-01-01

    Pulse oximeters are frequently used in clinical practice, and their precision and accuracy must be known. The objective was to evaluate the precision and accuracy of a "pocket" pulse oximeter at an altitude of 2,240 m above sea level. We tested miniature pulse oximeters (Onyx 9500, Nonin Finger Pulse Oximeter) in 96 patients sent to the pulmonary laboratory for an arterial blood sample. Patients were tested with 5 pulse oximeters, one placed on each finger of the hand opposite to that used for the arterial puncture. The gold standard was the oxygen saturation of the arterial blood sample. Blood samples had an SaO2 of 87.2 +/- 11.0% (range 42.2-97.9%). Pulse oximeters had a mean error of 0.28 +/- 3.1%. SaO2 = (1.204 x SpO2) - 17.45966 (r = 0.92, p < 0.0001). The intraclass correlation coefficient between each of the five pulse oximeters and the arterial blood standard ranged between 0.87 and 0.99. HbCO (2.4 +/- 0.6%) did not affect accuracy. The miniature Nonin oximeter is precise and accurate at 2,240 m of altitude. The observed levels of HbCO did not affect the performance of the equipment. The oximeter's good performance, small size and low cost enhance its clinical usefulness.
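
    The reported regression can be applied directly to estimate arterial saturation from a pocket-oximeter reading:

```python
def predicted_sao2(spo2):
    """Arterial saturation predicted from the oximeter reading, using
    the regression reported in the abstract:
    SaO2 = (1.204 x SpO2) - 17.45966."""
    return 1.204 * spo2 - 17.45966
```

    The fitted line crosses the identity near SpO2 = 85.6%: below that point the device tends to read slightly high relative to arterial blood, and above it slightly low, though the mean error overall is small (0.28%).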

  7. An Optimized Method to Detect BDS Satellites' Orbit Maneuvering and Anomalies in Real-Time.

    PubMed

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Wang, Xiaolei

    2018-02-28

    The orbital maneuvers of Global Navigation Satellite System (GNSS) constellations decrease the performance and accuracy of positioning, navigation, and timing (PNT). Because satellites in the Chinese BeiDou Navigation Satellite System (BDS) are in Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO), maneuvers occur more frequently. Moreover, the precise start moment of a BDS satellite's orbit maneuver cannot be obtained by common users. This paper presents an improved real-time detection method for BDS satellite orbit maneuvers and anomalies with higher timeliness and higher accuracy. The main contributions are as follows: (1) instead of the previous two-step method, a new one-step method with higher accuracy is proposed to determine both the start moment and the pseudo-random noise code (PRN) of the maneuvering satellite; (2) BDS Medium Earth Orbit (MEO) orbital maneuvers are detected for the first time, using the proposed station-selection strategy; and (3) the classified non-maneuvering anomalies are detected by a new median-based robust method using weak and strong anomaly detection factors. Data from the Multi-GNSS Experiment (MGEX) in 2017 were used for experimental analysis. The results show that the start moment of orbital maneuvers and the period of non-maneuver anomalies can be determined more accurately in real time. When orbital maneuvers and anomalies occurred, the proposed method improved data utilization by 91 and 95 min, respectively, in 2017.

  8. An Optimized Method to Detect BDS Satellites’ Orbit Maneuvering and Anomalies in Real-Time

    PubMed Central

    Huang, Guanwen; Qin, Zhiwei; Zhang, Qin; Wang, Le; Yan, Xingyuan; Wang, Xiaolei

    2018-01-01

    The orbital maneuvers of Global Navigation Satellite System (GNSS) constellations decrease the performance and accuracy of positioning, navigation, and timing (PNT). Because satellites in the Chinese BeiDou Navigation Satellite System (BDS) are in Geostationary Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO), maneuvers occur more frequently. Moreover, the precise start moment of a BDS satellite's orbit maneuver cannot be obtained by common users. This paper presents an improved real-time detection method for BDS satellite orbit maneuvers and anomalies with higher timeliness and higher accuracy. The main contributions are as follows: (1) instead of the previous two-step method, a new one-step method with higher accuracy is proposed to determine both the start moment and the pseudo-random noise code (PRN) of the maneuvering satellite; (2) BDS Medium Earth Orbit (MEO) orbital maneuvers are detected for the first time, using the proposed station-selection strategy; and (3) the classified non-maneuvering anomalies are detected by a new median-based robust method using weak and strong anomaly detection factors. Data from the Multi-GNSS Experiment (MGEX) in 2017 were used for experimental analysis. The results show that the start moment of orbital maneuvers and the period of non-maneuver anomalies can be determined more accurately in real time. When orbital maneuvers and anomalies occurred, the proposed method improved data utilization by 91 and 95 min, respectively, in 2017. PMID:29495638
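
    The abstract names weak and strong anomaly detection factors without defining them. A common median-based robust screen of orbit residuals, given here only as a hedged stand-in with illustrative thresholds, flags points whose deviation from the median exceeds a small or large multiple of the scaled median absolute deviation:

```python
import numpy as np

def mad_flags(residuals, k_weak=3.0, k_strong=6.0):
    """Median/MAD robust screening of a residual series.

    k_weak and k_strong play the role of weak/strong detection
    factors; the threshold values here are illustrative, not the
    paper's.
    """
    r = np.asarray(residuals, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    sigma = 1.4826 * mad               # MAD scaled to a Gaussian sigma
    z = np.abs(r - med) / sigma
    return z >= k_weak, z >= k_strong  # weak flags, strong flags
```

    Because the median and MAD are insensitive to the outliers themselves, the screen keeps working even when a maneuver corrupts a stretch of the series.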

  9. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.

  10. High precision during food recruitment of experienced (reactivated) foragers in the stingless bee Scaptotrigona mexicana (Apidae, Meliponini)

    NASA Astrophysics Data System (ADS)

    Sánchez, Daniel; Nieh, James C.; Hénaut, Yann; Cruz, Leopoldo; Vandame, Rémy

    Several studies have examined the existence of recruitment communication mechanisms in stingless bees. However, the spatial accuracy of location-specific recruitment has not been examined. Moreover, the location-specific recruitment of reactivated foragers, i.e., foragers that have previously experienced the same food source at a different location and time, has not been explicitly examined. However, such foragers may also play a significant role in colony foraging, particularly in small colonies. Here we report that reactivated Scaptotrigona mexicana foragers can recruit with high precision to a specific food location. The recruitment precision of reactivated foragers was evaluated by placing control feeders to the left and the right of the training feeder (direction-precision tests) and between the nest and the training feeder and beyond it (distance-precision tests). Reactivated foragers arrived at the correct location with high precision: 98.44% arrived at the training feeder in the direction trials (five-feeder fan-shaped array, accuracy of at least +/-6° of azimuth at 50 m from the nest), and 88.62% arrived at the training feeder in the distance trials (five-feeder linear array, accuracy of at least +/-5 m or +/-10% at 50 m from the nest). Thus, S. mexicana reactivated foragers can find the indicated food source at a specific distance and direction with high precision, higher than that shown by honeybees, Apis mellifera, which do not communicate food location at such close distances to the nest.

  11. Simultaneous quantification of paracetamol, acetylsalicylic acid and papaverine with a validated HPLC method.

    PubMed

    Kalmár, Eva; Gyuricza, Anett; Kunos-Tóth, Erika; Szakonyi, Gerda; Dombi, György

    2014-01-01

    Combined drug products have the advantages of better patient compliance and possible synergic effects. The simultaneous application of several active ingredients at a time is therefore frequently chosen. However, the quantitative analysis of such medicines can be challenging. The aim of this study is to provide a validated method for the investigation of a multidose packed oral powder that contained acetylsalicylic acid, paracetamol and papaverine-HCl. Reversed-phase high-pressure liquid chromatography was used. The Agilent Zorbax SB-C18 column was found to be the most suitable of the three different stationary phases tested for the separation of the components of this sample. The key parameters in the method development (apart from the nature of the column) were the pH of the aqueous phase (set to 3.4) and the ratio of the organic (acetonitrile) and the aqueous (25 mM phosphate buffer) phases, which was varied from 7:93 (v/v) to 25:75 (v/v) in a linear gradient, preceded by an initial hold. The method was validated: linearity, precision (repeatability and intermediate precision), accuracy, specificity and robustness were all tested, and the results met the ICH guidelines.

  12. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

    PubMed Central

    Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387

  13. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

    PubMed

    Neftci, Emre O; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored in high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
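
    The per-synapse cost quoted above (one addition, two comparisons) can be seen in a rate-based sketch of random backpropagation with an error-modulated update. This is a simplified, non-spiking illustration with invented layer sizes and thresholds, not the authors' event-driven implementation: the output error is fed back through a fixed random matrix instead of the transposed forward weights, and each hidden unit gates its update with a boxcar "derivative".

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback (replaces W2.T)

def erbp_step(x, h, y, target, lr=1e-3):
    """One error-modulated random-BP update (rate-based sketch).

    err supplies the single addition per synapse; the boxcar gate is
    the two comparisons. Feedback uses the fixed random matrix B, as
    in random backpropagation / feedback alignment.
    """
    err = y - target                                  # output error (one addition)
    gate = ((h > 0.1) & (h < 0.9)).astype(float)      # boxcar gate (two comparisons)
    delta_h = (B @ err) * gate                        # random feedback to hidden layer
    dW2 = -lr * np.outer(err, h)                      # output-layer update
    dW1 = -lr * np.outer(delta_h, x)                  # hidden-layer update
    return dW1, dW2
```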

  14. Quantification of neutral human milk oligosaccharides by graphitic carbon HPLC with tandem mass spectrometry

    PubMed Central

    Bao, Yuanwu; Chen, Ceng; Newburg, David S.

    2012-01-01

    Defining the biologic roles of human milk oligosaccharides (HMOS) requires an efficient, simple, reliable, and robust analytical method for simultaneous quantification of oligosaccharide profiles from multiple samples. The HMOS fraction of milk is a complex mixture of polar, highly branched, isomeric structures that contain no intrinsic facile chromophore, making their resolution and quantification challenging. A liquid chromatography-mass spectrometry (LC-MS) method was devised to resolve and quantify 11 major neutral oligosaccharides of human milk simultaneously. Crude HMOS fractions are reduced, resolved by porous graphitic carbon HPLC with a water/acetonitrile gradient, detected by mass spectrometric specific ion monitoring, and quantified. The HPLC separates isomers of identical molecular weights, allowing 11 peaks to be fully resolved and quantified by monitoring mass-to-charge (m/z) ratios of the deprotonated negative ions. The standard curve for each of the 11 oligosaccharides is linear from 0.078 or 0.156 to 20 μg/mL (R2 > 0.998). Precision (CV) ranges from 1% to 9%. Accuracy is from 86% to 104%. This analytical technique provides sensitive, precise, accurate quantification for each of the 11 milk oligosaccharides and allows measurement of differences in milk oligosaccharide patterns between individuals and at different stages of lactation. PMID:23068043

  15. An infrared optical pacing system for high-throughput screening of cardiac electrophysiology in human cardiomyocytes (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McPheeters, Matt T.; Wang, Yves T.; Laurita, Kenneth R.; Jenkins, Michael W.

    2017-02-01

    Cardiomyocytes derived from human induced pluripotent stem cells (hiPS-CM) have the potential to provide individualized therapies for patients and to test drug candidates for cardiac toxicity. In order for hiPS-CM to be useful for such applications, there is a need for high-throughput technology to rapidly assess cardiac electrophysiology parameters. Here, we designed and tested a fully contactless optical mapping (OM) and optical pacing (OP) system capable of imaging and point stimulation of hiPS-CM in small wells. OM allowed us to characterize cardiac electrophysiological parameters (conduction velocity, action potential duration, etc.) using voltage-sensitive dyes with high temporal and spatial resolution over the entire well. To improve OM signal-to-noise ratio, we tested a new voltage-sensitive dye (FluoVolt) for accuracy and phototoxicity. Stimulation is essential because most electrophysiological parameters are rate dependent; however, traditional electrical stimulation is difficult in small wells. To overcome this limitation, we utilized OP (λ = 1464 nm) to control heart rate with spatial precision, without the addition of exogenous agents. We optimized OP parameters (e.g., well size, pulse width, spot size) to achieve robust pacing and minimize the threshold radiant exposure. Finally, we tested system sensitivity using flecainide, a drug with well-described action on multiple electrophysiological properties.

  16. Development and Validation of a HPTLC Method for Simultaneous Estimation of L-Glutamic Acid and γ-Aminobutyric Acid in Mice Brain

    PubMed Central

    Sancheti, J. S.; Shaikh, M. F.; Khatwani, P. F.; Kulkarni, Savita R.; Sathaye, Sadhana

    2013-01-01

    A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acids was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation was achieved by a densitometric method at 550 nm over the concentration range of 10-100 ng/spot. This method showed good separation of amino acids in the brain homogenate, with Rf values for L-glutamic acid and γ-amino butyric acid of 21.67±0.58 and 33.67±0.58, respectively. The limit of detection and limit of quantification were 10 and 20 ng for L-glutamic acid and 4 and 10 ng for γ-amino butyric acid, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility, and shows promising applicability for studying the pathological status of disease and the therapeutic significance of drug treatment. PMID:24591747

  17. Development and Validation of a HPTLC Method for Simultaneous Estimation of L-Glutamic Acid and γ-Aminobutyric Acid in Mice Brain.

    PubMed

    Sancheti, J S; Shaikh, M F; Khatwani, P F; Kulkarni, Savita R; Sathaye, Sadhana

    2013-11-01

    A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acids was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation was achieved by a densitometric method at 550 nm over the concentration range of 10-100 ng/spot. This method showed good separation of amino acids in the brain homogenate, with Rf values for L-glutamic acid and γ-amino butyric acid of 21.67±0.58 and 33.67±0.58, respectively. The limit of detection and limit of quantification were 10 and 20 ng for L-glutamic acid and 4 and 10 ng for γ-amino butyric acid, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility, and shows promising applicability for studying the pathological status of disease and the therapeutic significance of drug treatment.

  18. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass-action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate that uses a polynomial to describe the change in a product attribute with respect to time. The approach, referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and with nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease.
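
    Under the description above, the model is a polynomial in Arrhenius-scaled time, with the activation energy as the single nonlinear parameter. A minimal sketch (the reference temperature, search range, and polynomial degree are assumed choices, not the paper's) profiles the activation energy on a grid and solves the polynomial coefficients by ordinary least squares at each candidate:

```python
import numpy as np

R = 8.314        # gas constant, J/(mol*K)
T_REF = 298.15   # assumed reference temperature (25 °C)

def arrhenius_scaled_time(t, T, Ea):
    """Equivalent time at T_REF for time t spent at temperature T (K)."""
    return t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_REF))

def fit_ats(t, T, y, deg=1, Ea_grid=None):
    """ATS least squares, sketched as a profile fit: for each candidate
    activation energy (the single nonlinear covariate), rescale time,
    fit the polynomial by ordinary least squares, and keep the
    candidate with the smallest residual sum of squares."""
    if Ea_grid is None:
        Ea_grid = np.linspace(20e3, 150e3, 261)   # assumed search range
    best = None
    for Ea in Ea_grid:
        ts = arrhenius_scaled_time(t, T, Ea)
        X = np.vander(ts, deg + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, Ea, coef)
    return best   # (sse, Ea_hat, polynomial coefficients)
```

    Because only the activation energy enters nonlinearly, each candidate reduces to a linear least-squares solve, which keeps the fit simple and numerically stable.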

  19. Intra-retinal segmentation of optical coherence tomography images using active contours with a dynamic programming initialization and an adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Gholami, Peyman; Roy, Priyanka; Kuppuswamy Parthasarathy, Mohana; Ommani, Abbas; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Retinal layer shape and thickness are among the main indicators in the diagnosis of ocular diseases. We present an active contour approach to localize the intra-retinal boundaries of eight retinal layers in OCT images. The initial locations of the active contour curves are determined using a Viterbi dynamic programming method. The main energy function is a Chan-Vese active contour model without edges. A boundary term is added to the energy function using an adaptive weighting method so that, in the final iterations, after the curves have evolved towards the boundaries, they converge to the retinal layer edges more precisely. A wavelet-based denoising method is used to remove speckle from the OCT images while preserving important details and edges. The performance of the proposed method was tested on a set of healthy and diseased eye SD-OCT images. Compared with manual segmentation by an optometrist, the proposed method obtained averages of 95.29%, 92.78%, 95.86%, 87.93%, 82.67%, and 90.25% for accuracy, sensitivity, specificity, precision, Jaccard index, and Dice similarity coefficient, respectively, over all segmented layers. These results demonstrate the robustness of the proposed method in locating the different retinal layers.
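
    The six reported scores are standard pixel-wise overlap metrics. For reference, they can be computed from binary layer masks as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise segmentation metrics from two binary masks.

    Jaccard = |A∩B| / |A∪B|; Dice = 2|A∩B| / (|A| + |B|).
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)     # true positives
    tn = np.sum(~pred & ~truth)   # true negatives
    fp = np.sum(pred & ~truth)    # false positives
    fn = np.sum(~pred & truth)    # false negatives
    return {
        "accuracy": (tp + tn) / pred.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```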

  20. Performance Equivalence and Validation of the Soleris Automated System for Quantitative Microbial Content Testing Using Pure Suspension Cultures.

    PubMed

    Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl

    2016-09-01

    Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris(®) automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated to CFU/sample (R(2) = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT, 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.

  1. Stability-indicating UPLC method for determination of Valsartan and their degradation products in active pharmaceutical ingredient and pharmaceutical dosage forms.

    PubMed

    Krishnaiah, Ch; Reddy, A Raghupathi; Kumar, Ramesh; Mukkanti, K

    2010-11-02

    A simple, precise, accurate stability-indicating gradient reverse-phase ultra-performance liquid chromatographic (RP-UPLC) method was developed for the quantitative determination of the purity of Valsartan drug substance and drug products in bulk samples and pharmaceutical dosage forms in the presence of its impurities and degradation products. The method was developed using a Waters Acquity BEH C18 (100 mm x 2.1 mm, 1.7 microm) column with a mobile phase containing a gradient mixture of solvents A and B. The eluted compounds were monitored at 225 nm; the run time was within 9.5 min, within which Valsartan and its seven impurities were well separated. Valsartan was subjected to stress conditions of oxidative, acid, base, hydrolytic, thermal and photolytic degradation. Valsartan was found to degrade significantly under acid and oxidative stress conditions and to be stable under base, hydrolytic and photolytic degradation conditions. The degradation products were well resolved from the main peak and its impurities, proving the stability-indicating power of the method. The developed method was validated as per International Conference on Harmonization (ICH) guidelines with respect to specificity, linearity, limit of detection, limit of quantification, accuracy, precision and robustness. This method was also suitable for the assay determination of Valsartan in pharmaceutical dosage forms.

  2. Development and validation of a reversed-phase HPLC method for simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet dosage form.

    PubMed

    Shaikh, K A; Patil, S D; Devkhile, A B

    2008-12-15

    A simple, precise and accurate reversed-phase liquid chromatographic method has been developed for the simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet formulations. The chromatographic separation was achieved on a Xterra RP18 (250 mm x 4.6 mm, 5 microm) analytical column. A mixture of acetonitrile-dipotassium phosphate (30 mM) (50:50, v/v) (pH 9.0) was used as the mobile phase, at a flow rate of 1.7 ml/min and a detector wavelength of 215 nm. The retention times of ambroxol and azithromycin were found to be 5.0 and 11.5 min, respectively. The validation of the proposed method was carried out for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation and robustness. The linear dynamic ranges were 30-180 and 250-1500 microg/ml for ambroxol hydrochloride and azithromycin, respectively. The percentage recoveries obtained for ambroxol hydrochloride and azithromycin were 99.40 and 99.90%, respectively. The limits of detection and quantification were 0.8 and 2.3 microg/ml for azithromycin and 0.004 and 0.01 microg/ml for ambroxol hydrochloride, respectively. The developed method can be used for routine quality control analysis of the titled drugs in combination in tablet formulation.

  3. Validation of an automated system for aliquoting of HIV-1 Env-pseudotyped virus stocks.

    PubMed

    Schultz, Anke; Germann, Anja; Fuss, Martina; Sarzotti-Kelsoe, Marcella; Ozaki, Daniel A; Montefiori, David C; Zimmermann, Heiko; von Briesen, Hagen

    2018-01-01

    The standardized assessment of HIV-specific immune responses is of central interest in the preclinical and clinical stages of HIV-1 vaccine development. In this regard, HIV-1 Env-pseudotyped viruses play a central role in the evaluation of neutralizing antibody profiles and are produced according to Good Clinical Laboratory Practice- (GCLP-) compliant manual and automated procedures. To further improve and complete the automated production cycle, an automated system for aliquoting HIV-1 pseudovirus stocks has been implemented. The automation platform consists of a modified Tecan-based system including a robot platform for handling racks containing 48 cryovials, a decapper, a tubing pump and a safety device consisting of ultrasound sensors for online liquid-level detection of each individual cryovial. To aliquot the HIV-1 pseudoviruses in an automated manner under GCLP-compliant conditions, a validation plan was developed in which the acceptance criteria (accuracy, precision, specificity and robustness) were defined and summarized. By passing the validation experiments described in this article, the automated system for aliquoting was successfully validated. This allows the standardized and operator-independent distribution of small-scale and bulk amounts of HIV-1 pseudovirus stocks with a precise and reproducible outcome to support upcoming clinical vaccine trials.

  4. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

    Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843

  5. A sensitive and rapid determination of ranitidine in human plasma by HPLC with fluorescence detection and its application for a pharmacokinetic study.

    PubMed

    Ulu, Sevgi Tatar; Tuncel, Muzaffer

    2012-04-01

    A novel precolumn derivatization reversed-phase high-performance liquid chromatography method with fluorescence detection is described for the determination of ranitidine in human plasma. The method is based on the reaction of ranitidine with 4-fluoro-7-nitrobenzo-2-oxa-1,3-diazole, forming a yellow fluorescent product. The separation was achieved on a C(18) column using a methanol-water (60:40, v/v) mobile phase. Fluorescence detection was used at excitation and emission wavelengths of 458 and 521 nm, respectively. Lisinopril was utilized as an internal standard. The flow rate was 1.2 mL/min. Ranitidine and lisinopril eluted at 3.24 and 2.25 min, respectively. The method was validated for system suitability, precision, accuracy, linearity, limit of detection, limit of quantification, recovery and robustness. Intra- and inter-day precisions of the assay were in the range of 0.01-0.44%. The assay was linear over the concentration range of 50-2000 ng/mL. The mean recovery was determined to be 96.40 ± 0.02%. This method was successfully applied to a pharmacokinetic study after oral administration of a single dose (150 mg) of ranitidine. © The Author [2012]. Published by Oxford University Press. All rights reserved.

  6. Supercritical fluid extraction of selected pharmaceuticals from water and serum.

    PubMed

    Simmons, B R; Stewart, J T

    1997-01-24

    Selected drugs from benzodiazepine, anabolic agent and non-steroidal anti-inflammatory drug (NSAID) therapeutic classes were extracted from water and serum using a supercritical CO2 mobile phase. The samples were extracted at a pump pressure of 329 MPa, an extraction chamber temperature of 45 degrees C, and a restrictor temperature of 60 degrees C. The static extraction time for all samples was 2.5 min and the dynamic extraction time ranged from 5 to 20 min. The analytes were collected in appropriate solvent traps and assayed by modified literature HPLC procedures. Analyte recoveries were calculated based on peak height measurements of extracted vs. unextracted analyte. The recovery of the benzodiazepines ranged from 80 to 98% in water and from 75 to 94% in serum. Anabolic drug recoveries from water and serum ranged from 67 to 100% and 70 to 100%, respectively. The NSAIDs were recovered from water in the 76 to 97% range and in the 76 to 100% range from serum. Accuracy, precision and endogenous peak interference, if any, were determined for blank and spiked serum extractions and compared with classical sample preparation techniques of liquid-liquid and solid-phase extraction reported in the literature. For the benzodiazepines, accuracy and precision for supercritical fluid extraction (SFE) ranged from 1.95 to 3.31 and 0.57 to 1.25%, respectively (n = 3). The SFE accuracy and precision data for the anabolic agents ranged from 4.03 to 7.84 and 0.66 to 2.78%, respectively (n = 3). The accuracy and precision data reported for the SFE of the NSAIDs ranged from 2.79 to 3.79 and 0.33 to 1.27%, respectively (n = 3). The precision of the SFE method from serum was shown to be comparable to the precision obtained with other classical preparation techniques.

  7. Preliminary Figures of Merit for Isotope Ratio Measurements: The Liquid Sampling-Atmospheric Pressure Glow Discharge Microplasma Ionization Source Coupled to an Orbitrap Mass Analyzer

    NASA Astrophysics Data System (ADS)

    Hoegg, Edward D.; Barinaga, Charles J.; Hager, George J.; Hart, Garret L.; Koppenaal, David W.; Marcus, R. Kenneth

    2016-08-01

    In order to meet a growing need for fieldable mass spectrometer systems for precise elemental and isotopic analyses, the liquid sampling-atmospheric pressure glow discharge (LS-APGD) has a number of very promising characteristics. One key set of attributes that awaits validation concerns performance relative to isotope ratio precision and accuracy. Owing to its availability and this research team's prior experience with it, the initial evaluation of isotope ratio (IR) performance was performed on a Thermo Scientific Exactive Orbitrap instrument. While the mass accuracy and resolution performance of Orbitrap analyzers are well documented, no detailed evaluations of their IR performance have been published. The efforts described here involve two variables: the inherent IR precision and accuracy delivered by the LS-APGD microplasma, and the inherent IR measurement qualities of Orbitrap analyzers. Because they bear on IR performance, the various operating parameters of the Orbitrap sampling interface, the high-energy collisional dissociation (HCD) stage, and ion injection/data acquisition have been evaluated. The IR performance for a range of other elements, including natural, depleted, and enriched uranium isotopes, was determined. In all cases, precision and accuracy are degraded when measuring low-abundance isotopes (<0.1% isotope fraction). In the best case, IR precision on the order of 0.1% RSD can be achieved, with values of 1%-3% RSD observed for low-abundance species. The results suggest that the LS-APGD is a promising candidate for field-deployable MS analysis and that the high resolving powers of the Orbitrap may be complemented with a heretofore unknown capacity to deliver high-precision IRs.

  8. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted across dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithm, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edges of the nodules and the loss of resolution at high noise levels with iterative reconstruction. The findings do not rule out a potential advantage of IR that might be evident in a study using a larger number of nodules or repeated scans.
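
    The two figures of merit in this study map onto simple statistics of repeated measurements: accuracy as systematic percent bias against the known nodule volume, and precision as repeatability. A sketch with hypothetical repeated volume measurements (not the study's data):

```python
import statistics

true_volume = 448.9  # known volume (mm^3) of a 9.5 mm synthetic spherical nodule
measured = [441.2, 452.8, 447.5, 455.1, 444.0, 450.3]  # repeated measurements

mean_vol = statistics.mean(measured)
# Accuracy: systematic percent bias of the mean against the known volume
bias_pct = 100.0 * (mean_vol - true_volume) / true_volume
# Precision: repeatability as percent coefficient of variation
cv_pct = 100.0 * statistics.stdev(measured) / mean_vol
```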

  9. Robust control of electrostatic torsional micromirrors using adaptive sliding-mode control

    NASA Astrophysics Data System (ADS)

    Sane, Harshad S.; Yazdi, Navid; Mastrangelo, Carlos H.

    2005-01-01

    This paper presents high-resolution control of torsional electrostatic micromirrors beyond their inherent pull-in instability using robust sliding-mode control (SMC). The objectives of this paper are twofold: first, to demonstrate the applicability of SMC to MEMS devices; second, to present a modified SMC algorithm that yields improved control accuracy. SMC enables compact realization of a robust controller tolerant of device characteristic variations and nonlinearities. Robustness of the control loop is demonstrated through extensive simulations and measurements on MEMS devices with widely varying characteristics. Control of two-axis gimbaled micromirrors beyond their pull-in instability with overall 10-bit pointing accuracy is confirmed experimentally. In addition, this paper presents an analysis of the sources of error in discrete-time implementation of the control algorithm. To minimize these errors, we present an adaptive version of the SMC algorithm that yields substantial performance improvement without considerably increasing implementation complexity.
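
    The sliding-mode principle in this abstract can be illustrated on a toy plant. Below is a minimal sketch for a double integrator with a bounded unknown disturbance, using a boundary-layer (saturated) switching law; the plant and gains are illustrative, not the paper's micromirror model:

```python
import math

lam, k, phi = 5.0, 4.0, 0.05  # surface slope, switching gain, boundary layer
x, v = 1.0, 0.0               # initial position error and velocity
dt = 1e-3
for i in range(20000):        # 20 s of simulated time
    t = i * dt
    d = 0.5 * math.sin(2.0 * t)          # unknown disturbance, |d| <= 0.5 < k
    s = v + lam * x                      # sliding surface s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / phi))   # saturation instead of sign() limits chattering
    u = -lam * v - k * sat               # equivalent control + switching term
    v += (u + d) * dt                    # plant: x'' = u + d
    x += v * dt
```

    Despite the disturbance, the state is driven into a small neighborhood of the origin, which is the robustness property SMC is valued for.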

  10. A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.

    PubMed

    Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian

    2018-01-19

    This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage caused by the high pressure in the headspace vial during sampling has a great impact on measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique effectively minimizes this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The precision of the present method was 10-20 times higher than that of earlier HS-GC work using the conventional single sealing technique. The double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kohen, Hamid

    1997-01-01

    This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
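
    As a sketch of the GA machinery (selection, crossover, mutation) applied to a minimization problem, the following evolves a population against a toy two-dimensional "landing error" surrogate; the objective and settings are illustrative, not the paper's entry-dynamics simulation:

```python
import random

random.seed(1)

def landing_error(p):
    # Toy surrogate: squared distance from the desired landing site (3, -1)
    x, y = p
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

POP, GENS, SIGMA = 40, 60, 0.3
pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=landing_error)
    parents = pop[: POP // 2]                    # truncation selection (elitist)
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]      # blend crossover
        child = [c + random.gauss(0, SIGMA) for c in child]  # Gaussian mutation
        children.append(child)
    pop = parents + children

best = min(pop, key=landing_error)
```

    Because the surviving parents are carried over unchanged, the best candidate improves monotonically across generations.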

  12. About the inevitable compromise between spatial resolution and accuracy of strain measurement for bone tissue: a 3D zero-strain study.

    PubMed

    Dall'Ara, E; Barber, D; Viceconti, M

    2014-09-22

    The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy, but nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for predicting local zero-strains in bovine cortical and trabecular bone samples. Accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without repositioning of the sample in the scanner, and repeated scans with repositioning of the sample. The analysis showed that both precision and accuracy errors decrease with increasing size of the analyzed region, following power laws. Among the sources investigated, the intrinsic noise of the images was found to be the main source of error. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing of 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
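
    The power-law dependence of DVC error on region size can be recovered with a straight-line fit in log-log space. A sketch with hypothetical data points (not the study's measurements):

```python
import math

# Hypothetical precision error (microstrain) versus nodal spacing (voxels)
spacing = [10, 20, 30, 40, 50]
error = [2400.0, 1100.0, 700.0, 510.0, 400.0]

# Fit err = a * spacing**(-b) by least squares on log-transformed data
lx = [math.log(s) for s in spacing]
ly = [math.log(e) for e in error]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum(
    (x - mx) ** 2 for x in lx
)
b = -slope                    # power-law exponent
a = math.exp(my - slope * mx)  # power-law prefactor
```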

  13. Effects of precision demands and mental pressure on muscle activation and hand forces in computer mouse tasks.

    PubMed

    Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap

    2004-02-05

    The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. 
The possible role of precision demands as an indirect mental stressor in working conditions is discussed.

  14. Beyond H₀ and q₀: Cosmology is no longer just two numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neben, Abraham R.; Turner, Michael S., E-mail: abrahamn@mit.edu

    2013-06-01

    For decades, H₀ and q₀ were the quest of cosmology, as they promised to characterize our 'world model' without reference to a specific cosmological framework. Using Monte Carlo simulations, we show that q₀ cannot be directly measured using distance indicators with both accuracy (without offset away from its true value) and precision (small error bar). While H₀ can be measured with accuracy and precision, to avoid a small bias (of order 5%) in its direct measurement we demonstrate that the pair H₀ and Ω_M (assuming flatness and w = -1) is a better choice of two parameters, even if our world model is not precisely ΛCDM. We illustrate this with an analysis of the Constitution set of supernovae and indirectly infer q₀ = -0.57 ± 0.04. Finally, we show that it may be possible to directly determine q₀ with both accuracy and precision using the time dependence of redshifts ('redshift drift').
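
    The asymmetry between H₀ and q₀ traces back to the low-redshift expansion of the luminosity distance, in which q₀ first enters only at the next order in z (standard FRW kinematics, quoted here for context rather than taken from this paper):

```latex
d_L(z) = \frac{cz}{H_0}\left[ 1 + \frac{1}{2}\left(1 - q_0\right) z + \mathcal{O}(z^2) \right]
```

    Because q₀ multiplies a small correction term, gaining leverage on it requires higher-redshift data, which in turn brings in higher-order, model-dependent terms; this is the accuracy/precision tension the simulations quantify.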

  15. Spacecraft Attitude Tracking and Maneuver Using Combined Magnetic Actuators

    NASA Technical Reports Server (NTRS)

    Zhou, Zhiqiang

    2012-01-01

    A paper describes attitude-control algorithms using the combination of magnetic actuators with reaction wheel assemblies (RWAs) or other types of actuators such as thrusters. The combination of magnetic actuators with one or two RWAs aligned with different body axes expands the two-dimensional control torque to three-dimensional. The algorithms guarantee that the spacecraft attitude and rates track the commanded attitude precisely. A design example is presented for nadir-pointing, pitch, and yaw maneuvers. The results show that precise attitude tracking can be achieved and that the attitude-control accuracy is comparable with RWA-based attitude control. When only one or two RWAs are workable due to RWA failures, the attitude-control system can switch to the control algorithms for the combined magnetic actuators and RWAs without going to safe mode, and the control accuracy can be maintained.
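
    The reason magnetic actuation alone spans only two dimensions can be shown in a few lines: the torque m × B always lies in the plane perpendicular to the local field B, so the component of a desired torque along B is unachievable. A sketch with illustrative field and torque values (not flight data):

```python
import numpy as np

B = np.array([20e-6, -5e-6, 40e-6])      # local geomagnetic field (T), hypothetical
tau_des = np.array([1e-3, 2e-3, -1e-3])  # desired control torque (N*m), hypothetical

m = np.cross(B, tau_des) / np.dot(B, B)  # standard dipole-moment mapping
tau = np.cross(m, B)                     # torque actually produced

# tau is tau_des with its component along B removed; that lost component
# must be supplied by another actuator, e.g. a reaction wheel.
```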

  16. Implementation and results of an integrated data quality assurance protocol in a randomized controlled trial in Uttar Pradesh, India.

    PubMed

    Gass, Jonathon D; Misra, Anamika; Yadav, Mahendra Nath Singh; Sana, Fatima; Singh, Chetna; Mankar, Anup; Neal, Brandon J; Fisher-Bowman, Jennifer; Maisonneuve, Jenny; Delaney, Megan Marx; Kumar, Krishan; Singh, Vinay Pratap; Sharma, Narender; Gawande, Atul; Semrau, Katherine; Hirschhorn, Lisa R

    2017-09-07

    There are few published standards or methodological guidelines for integrating Data Quality Assurance (DQA) protocols into large-scale health systems research trials, especially in resource-limited settings. The BetterBirth Trial is a matched-pair, cluster-randomized controlled trial (RCT) of the BetterBirth Program, which seeks to improve quality of facility-based deliveries and reduce 7-day maternal and neonatal mortality and maternal morbidity in Uttar Pradesh, India. In the trial, over 6300 deliveries were observed and over 153,000 mother-baby pairs across 120 study sites were followed to assess health outcomes. We designed and implemented a robust and integrated DQA system to sustain high-quality data throughout the trial. We designed the Data Quality Monitoring and Improvement System (DQMIS) to reinforce six dimensions of data quality: accuracy, reliability, timeliness, completeness, precision, and integrity. The DQMIS was comprised of five functional components: 1) a monitoring and evaluation team to support the system; 2) a DQA protocol, including data collection audits and targets, rapid data feedback, and supportive supervision; 3) training; 4) standard operating procedures for data collection; and 5) an electronic data collection and reporting system. Routine audits by supervisors included double data entry, simultaneous delivery observations, and review of recorded calls to patients. Data feedback reports identified errors automatically, facilitating supportive supervision through a continuous quality improvement model. The five functional components of the DQMIS successfully reinforced data reliability, timeliness, completeness, precision, and integrity. The DQMIS also resulted in 98.33% accuracy across all data collection activities in the trial. All data collection activities demonstrated improvement in accuracy throughout implementation. 
Data collectors demonstrated a statistically significant (p = 0.0004) increase in accuracy throughout consecutive audits. The DQMIS was successful, despite an increase from 20 to 130 data collectors. In the absence of widely disseminated data quality methods and standards for large RCT interventions in limited-resource settings, we developed an integrated DQA system, combining auditing, rapid data feedback, and supportive supervision, which ensured high-quality data and could serve as a model for future health systems research trials. Future efforts should focus on standardization of DQA processes for health systems research. ClinicalTrials.gov identifier, NCT02148952 . Registered on 13 February 2014.
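
    A double-data-entry audit of the kind described reduces to a field-by-field comparison of two independent entry passes. A minimal sketch with hypothetical records and field names:

```python
# Two independent entry passes of the same paper forms (hypothetical data)
pass1 = [{"id": "001", "weight": "2800", "apgar": "9"},
         {"id": "002", "weight": "3100", "apgar": "8"}]
pass2 = [{"id": "001", "weight": "2800", "apgar": "9"},
         {"id": "002", "weight": "3100", "apgar": "7"}]

fields = ["id", "weight", "apgar"]
total = len(pass1) * len(fields)
matches = sum(
    r1[f] == r2[f] for r1, r2 in zip(pass1, pass2) for f in fields
)
accuracy_pct = 100.0 * matches / total  # percent field-level agreement
```

    Disagreeing fields (here, one Apgar score) would be flagged for resolution against the source document.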

  17. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    PubMed

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
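
    Once the digital models are superimposed, trueness and precision reduce to statistics over surface deviations: trueness compares each test model against the reference, while precision compares the test models within a group against each other. A sketch with hypothetical deviation values in micrometers:

```python
import statistics

# Signed surface deviations (µm) of three models from one impression group,
# already expressed relative to the reference model; values are illustrative.
models = [
    [12.0, -8.5, 15.3, -10.2, 9.8],
    [11.1, -9.4, 14.0, -12.3, 10.5],
    [13.2, -7.9, 16.1, -9.8, 8.9],
]

# Trueness: mean absolute deviation of each model against the reference
trueness = statistics.mean(
    statistics.mean(abs(d) for d in model) for model in models
)

# Precision: mean absolute pairwise difference within the group
pair_devs = []
for i in range(len(models)):
    for j in range(i + 1, len(models)):
        pair_devs.extend(abs(p - q) for p, q in zip(models[i], models[j]))
precision = statistics.mean(pair_devs)
```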

  18. Water vapor δ17O measurements using an off-axis integrated cavity output spectrometer and seasonal variation in 17O-excess of precipitation in the east-central United States

    NASA Astrophysics Data System (ADS)

    Tian, C.; Wang, L.; Novick, K. A.

    2016-12-01

    High-precision triple oxygen isotope analysis can be used to improve our understanding of multiple hydrological and meteorological processes. Recent studies have focused on understanding 17O-excess variation in tropical storms, high-latitude snow and ice cores, as well as the spatial distribution of meteoric water (tap water). Data on the temporal variation of 17O-excess in middle-latitude precipitation are needed to better understand which processes control 17O-excess variations. This study focused on assessing how the accuracy and precision of vapor δ17O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging time. In addition, we present 17O-excess data from two years of event-based precipitation sampling in the east-central United States. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ2H, δ18O and δ17O measurements. GISP and SLAP2 from the IAEA and four working standards were used to evaluate the sensitivity to the three factors. Overall, the accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed at moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. Precision was also sensitive to the range of delta values, though the effect was not as large as the sensitivity to concentration. Precision was much less sensitive to averaging time than to concentration and delta range. The preliminary results showed that 17O-excess variation was lower in summer (23±17 per meg) than in winter (34±16 per meg), whereas spring values (30±21 per meg) were similar to fall values (29±13 per meg). This suggests that kinetic fractionation influences the isotopic composition and 17O-excess differently across seasons.
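
    The 17O-excess values quoted in per meg follow the conventional logarithmic definition with a reference slope of 0.528; the sample delta values below are hypothetical:

```python
import math

def o17_excess(d17o_permil, d18o_permil):
    """17O-excess in per meg: 10^6 * [ln(1 + d17O) - 0.528 * ln(1 + d18O)],
    with delta values supplied in per mil."""
    return 1e6 * (
        math.log(1.0 + d17o_permil / 1000.0)
        - 0.528 * math.log(1.0 + d18o_permil / 1000.0)
    )

excess = o17_excess(-5.26, -10.0)  # hypothetical winter precipitation sample
```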

  19. Study on the position accuracy of a mechanical alignment system

    NASA Astrophysics Data System (ADS)

    Cai, Yimin

    In this thesis, we investigated the precision level achieved by a mechanical alignment system using datums and reference surfaces and established its baseline. The factors that affect the accuracy of a mechanical alignment system were studied, and methodology was developed to suppress these factors so that the system can reach its full potential precision. In order to characterize the mechanical alignment system quantitatively, a new optical position monitoring system using quadrant detectors was developed in this thesis; it can monitor multiple degrees of freedom of mechanical workpieces in real time with high precision. We studied the noise factors inside the system and optimized the optical system. Because one of the major limiting noise factors is shifting of the laser beam, a noise cancellation technique was developed to suppress this noise, and the feasibility of ultra-high resolution (<20 Å) displacement monitoring was demonstrated. Using the optical position monitoring system, repeatability experiments on the mechanical alignment system were conducted on different kinds of samples, including steel, aluminum, glass and plastics, all of the same size (100 mm x 130 mm). The alignment accuracy was thus studied quantitatively rather than, as before, qualitatively. In a controlled environment, the alignment precision can be improved fivefold simply by securing the datum. The alignment accuracy of an aluminum workpiece with a reference surface produced by milling is about 3 times better than one produced by shearing. We also found that the sample material can have a fairly significant effect on the alignment precision of the system. Contamination trapped between the datum and reference surfaces in a mechanical alignment system can cause registration errors or reduce the level of manufacturing precision. In the thesis, artificial and natural dust particles were used to simulate real situations, and their effects on system precision were investigated; two effective cleaning processes were discovered in these experiments.
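
    The quadrant-detector position readout used for this kind of monitoring is conventionally formed from sum and difference signals of the four quadrant powers. Quadrant labeling conventions vary, so the sketch below is illustrative:

```python
def spot_position(a, b, c, d):
    """Normalized spot displacement from quadrant powers: a = top-left,
    b = top-right, c = bottom-right, d = bottom-left (one common convention)."""
    total = a + b + c + d
    x = ((b + c) - (a + d)) / total  # right half minus left half
    y = ((a + b) - (d + c)) / total  # top half minus bottom half
    return x, y

x, y = spot_position(1.0, 1.2, 1.2, 1.0)  # spot shifted slightly to the right
```

    Normalizing by the total power makes the readout insensitive to overall laser intensity fluctuations, though not to beam-pointing drift, which is why the noise cancellation described above is still needed.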

  20. A sharp, robust, and quantitative method by liquid chromatography tandem mass spectrometry for the measurement of EAD for acute radiation syndrome and its application.

    PubMed

    Zhang, Yiwei; Li, Jian; Meng, Zhiyun; Zhu, Xiaoxia; Gan, Hui; Gu, Ruolan; Wu, Zhuona; Zheng, Ying; Wei, Jinbin; Dou, Guifang

    2017-06-15

    17-Ethinyl-3,17-dihydroxyandrost-5-ene (EAD) is an agent designed for the treatment of acute radiation syndrome (ARS). Given its vital role in the prevention and mitigation of ARS, the development of a sharp, sensitive and robust liquid chromatography tandem mass spectrometry (LC-MS/MS) method to monitor the metabolism of EAD in vivo was crucial. A new method was constructed and validated for the determination of EAD, with androst-5-ene-3β,17β-diol (5-AED) as the internal standard. The blood samples were precipitated with methanol and centrifuged, and the supernatant was separated by UPLC on a C18 column, eluted in gradient with acetonitrile and Milli-Q water, both containing 0.1% formic acid (FA). Quantification was performed on a triple quadrupole mass spectrometer with electrospray ionization (ESI) in multiple reaction monitoring (MRM) positive mode. Good linearity was obtained, with R > 0.99 for EAD within its calibration range of 5 to 1000 ng/mL and a lower limit of quantification (LLOQ) of 5 ng/mL. Inter- and intra-day accuracy and precision of the three levels of quality control (QC) samples were within 15%, and within 20% at the LLOQ. Samples were stable under the experimental conditions. The method is simple, accurate and robust, and was applied to determine the concentration of EAD in Wistar rats after a single oral administration of EAD at a dose of 100 mg/kg. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Real-time robustness evaluation of regression based myoelectric control against arm position change and donning/doffing.

    PubMed

    Hwang, Han-Jeong; Hahne, Janne Mathias; Müller, Klaus-Robert

    2017-01-01

    Practical factors such as arm position change and donning/doffing hinder robust myoelectric control. The objective of this study is to precisely characterize the impacts of these two representative factors on myoelectric controllability in practical control situations, thereby providing useful references that can potentially be used to find better solutions for clinically reliable myoelectric control. To this end, a real-time target acquisition task was performed by fourteen subjects, including one individual with congenital upper-limb deficiency, and the impacts of arm position change, donning/doffing, and a combination of both factors on control performance were systematically evaluated. The changes in online performance were examined with seven different performance metrics to comprehensively evaluate various aspects of myoelectric controllability. As a result, arm position change significantly affects offline prediction accuracy, but not online control performance due to real-time feedback, thereby showing no significant correlation between offline and online performance. Donning/doffing remained problematic in online control conditions. It was further observed that no benefit was attained from a control model trained with multiple-position data with respect to arm position change, and the degree of electrode shift caused by donning/doffing was not severely associated with the degree of performance loss under practical conditions (around 1 cm electrode shift). Since this study is the first to concurrently investigate the impacts of arm position change and donning/doffing in practical myoelectric control situations, its findings provide new insights into robust myoelectric control with respect to arm position change and donning/doffing.
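
    Regression-based myoelectric control of the kind evaluated here maps EMG features to continuous control variables (e.g., 2-D cursor velocities). A minimal sketch using synthetic features and a closed-form ridge-regression estimate (illustrative, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 8                         # training frames, EMG channels
W_true = rng.normal(size=(d, 2))      # unknown "true" feature-to-velocity map
X = np.abs(rng.normal(size=(n, d)))             # rectified EMG amplitude features
Y = X @ W_true + 0.05 * rng.normal(size=(n, 2)) # target velocities + noise

lam = 1e-2                            # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

v = np.array([0.5] * d) @ W           # predicted 2-D velocity for a new frame
```

    Factors like arm position change or donning/doffing act here as shifts in the distribution of X between training and use, which is why they degrade offline prediction even when closed-loop users can partly compensate.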

  2. Real-time robustness evaluation of regression based myoelectric control against arm position change and donning/doffing

    PubMed Central

    Hahne, Janne Mathias; Müller, Klaus-Robert

    2017-01-01

    There are some practical factors, such as arm position change and donning/doffing, which prevent robust myoelectric control. The objective of this study is to precisely characterize the impacts of the two representative factors on myoelectric controllability in practical control situations, thereby providing useful references that can potentially be used to find better solutions for clinically reliable myoelectric control. To this end, a real-time target acquisition task was performed by fourteen subjects including one individual with congenital upper-limb deficiency, where the impacts of arm position change, donning/doffing and a combination of both factors on control performance were systematically evaluated. The changes in online performance were examined with seven different performance metrics to comprehensively evaluate various aspects of myoelectric controllability. As a result, arm position change significantly affects offline prediction accuracy, but not online control performance due to real-time feedback, thereby showing no significant correlation between offline and online performance. Donning/doffing was still problematic in online control conditions. It was further observed that no benefit was attained with respect to arm position change when using a control model trained with multiple position data, and the degree of electrode shift caused by donning/doffing was not severely associated with the degree of performance loss under practical conditions (around 1 cm electrode shift). Since this study is the first to concurrently investigate the impacts of arm position change and donning/doffing in practical myoelectric control situations, all findings of this study provide new insights into robust myoelectric control with respect to arm position change and donning/doffing. PMID:29095846

  3. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice [Precision size separation of biological particles in small-volume samples by an acoustic microfluidic system]

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  4. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  5. Understanding error generation in fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
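
    The accuracy/precision distinction quantified in this record can be sketched numerically. The following is a minimal illustration with invented measurements, not data from the study: accuracy is taken as the systematic offset of the mean from the nominal position, and precision as the repeatability (sample standard deviation).

    ```python
    import statistics

    # Hypothetical measured x-positions (mm) of a feature printed repeatedly
    # at a nominal position of 50.0 mm; the values are illustrative only.
    nominal = 50.0
    measured_x = [50.12, 50.25, 49.95, 50.30, 50.08]

    # Accuracy: systematic offset of the mean from the nominal position.
    accuracy = abs(statistics.mean(measured_x) - nominal)

    # Precision: repeatability, here the sample standard deviation.
    precision = statistics.stdev(measured_x)

    print(f"accuracy (mean offset): {accuracy:.3f} mm")  # 0.140 mm
    print(f"precision (std dev):    {precision:.3f} mm")  # 0.139 mm
    ```

    An error budget such as the one in this record repeats this kind of computation per axis and per error source.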

  6. Determination of Monensin in Bovine Tissues: A Bridging Study Comparing the Bioautographic Method (FSIS CLG-MON) with a Liquid Chromatography-Tandem Mass Spectrometry Method (OMA 2011.24).

    PubMed

    Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R

    2018-05-01

    The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis℠ (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.

  7. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify many procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus-variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full-arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full-arch changes. PMID:24836007
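
    The trueness/precision computation used in this kind of scanner study can be sketched in miniature. The following uses invented toy "scans" (lists of surface heights sampled at the same points) rather than real scan data: trueness averages each test scan's deviation from the reference scan, while precision averages the pairwise deviations among the test scans themselves.

    ```python
    import itertools
    import statistics

    # Toy "scans": z-heights (µm) sampled at the same 5 surface points.
    # All values are invented for illustration.
    reference = [100.0, 102.0, 98.0, 101.0, 99.0]
    scans = [
        [100.02, 102.01, 98.03, 101.00, 99.02],
        [ 99.99, 102.03, 97.98, 101.02, 99.01],
        [100.01, 101.98, 98.02, 100.99, 99.03],
    ]

    def mean_abs_dev(a, b):
        """Mean absolute point-to-point deviation between two aligned scans."""
        return statistics.mean(abs(x - y) for x, y in zip(a, b))

    # Trueness: average deviation of each test scan from the reference scan.
    trueness = statistics.mean(mean_abs_dev(s, reference) for s in scans)

    # Precision: average deviation between pairs of test scans.
    precision = statistics.mean(
        mean_abs_dev(a, b) for a, b in itertools.combinations(scans, 2)
    )
    ```

    A real evaluation would superimpose full 3D meshes (e.g. by best-fit alignment) before differencing, but the aggregation into trueness and precision follows the same pattern.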

  8. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    PubMed Central

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. 
Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954

  9. Accuracy and precision of 3D cephalometric landmarks from biorthogonal plain-film x rays

    NASA Astrophysics Data System (ADS)

    Dean, David; Palomo, Martin; Subramanyan, Krishna; Hans, Mark G.; Broadbent, B. H., Jr.; Moullas, Alexander; Macaraeg, Omar

    1998-06-01

    Three-dimensional (3D) plain-film radiographic cephalometric analysis of bony skull landmarks may be used for patient diagnosis, treatment planning, prosthetic design, intraoperative guidance, and outcome assessment. To test the accuracy and reliability of 50 cephalometric landmarks, three dry human skulls, with and without metallic markers affixed to the landmarks, were digitized in our 3dCEPH software by 4 operators. The average inter-operator variability about mean landmark position, across all operators, for all 3 skull image pairs, was 3.33 mm. The ten landmarks exhibiting the least variability were on average 1.15 mm from the mean, including: B point 0.69 mm, lower incisal edge 0.85 mm, and anterior nasal spine 0.90 mm. The average rms error from the metallic fiducials for these 4 operators across all 50 landmarks and 3 skulls was 5.03 mm. The 10 landmarks with the least variability were on average 2.01 mm from the fiducial, including: B point 1.69 mm, upper incisal edge 1.71 mm, lower incisal edge 1.78 mm. Additional studies are needed to test the robustness of the hypothesis of homologous anatomy. Homology of landmarks is important for cephalometric comparisons between image pairs representing patient and 'normative' anatomy, pre- and post-surgical alteration, and different ages of the same patient.

  10. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    Vision navigation, which determines position and attitude through real-time processing of imaging-sensor data, can operate without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss of up to 5 min and within 1500 m.
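
    The linear road-segment index described in this record can be sketched as a sorted-chainage lookup: images are keyed by their distance along the road, so candidates near a query position are found by binary search rather than a full scan. The identifiers and distances below are hypothetical, not from the paper.

    ```python
    import bisect

    # Hypothetical geo-referenced image database: images keyed by their
    # linear position (chainage, in metres) along one road segment.
    chainages = [0.0, 25.0, 50.0, 75.0, 100.0]   # sorted linear index
    image_ids = ["img000", "img025", "img050", "img075", "img100"]

    def candidate_images(query_chainage, window=30.0):
        """Return image ids within +/- window metres of the query position,
        located in O(log n) via the sorted linear index."""
        lo = bisect.bisect_left(chainages, query_chainage - window)
        hi = bisect.bisect_right(chainages, query_chainage + window)
        return image_ids[lo:hi]

    print(candidate_images(60.0))  # ['img050', 'img075']
    ```

    The retrieved candidates would then be passed to the image-matching stage, which picks the best match against the real-time camera frame.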

  11. A new liquid chromatography-mass spectrometry-based method to quantitate exogenous recombinant transferrin in cerebrospinal fluid: a potential approach for pharmacokinetic studies of transferrin-based therapeutics in the central nervous system.

    PubMed

    Wang, Shunhai; Bobst, Cedric E; Kaltashov, Igor A

    2015-01-01

    Transferrin (Tf) is an 80 kDa iron-binding protein that is viewed as a promising drug carrier to target the central nervous system as a result of its ability to penetrate the blood-brain barrier. Among the many challenges during the development of Tf-based therapeutics, the sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult because of the presence of abundant endogenous Tf. Herein, we describe the development of a new liquid chromatography-mass spectrometry-based method for the sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous human serum Tf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed (18)O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision, and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation.

  12. Crater Morphometry and Crater Degradation on Mercury: Mercury Laser Altimeter (MLA) Measurements and Comparison to Stereo-DTM Derived Results

    NASA Technical Reports Server (NTRS)

    Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.

    2017-01-01

    Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D=2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and assess whether the faster crater degradation rate proposed for Mercury is robust.

  13. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

    Vision navigation, which determines position and attitude through real-time processing of imaging-sensor data, can operate without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel imaging sensor-aided vision navigation approach that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss of up to 5 min and within 1500 m. PMID:26828496

  14. Optimal sensor placement for time-domain identification using a wavelet-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahdavi, Seyed Hossein; Razak, Hashim Abdul

    2016-06-01

    This paper presents a wavelet-based genetic algorithm strategy for optimal sensor placement (OSP) effective for time-domain structural identification. Initially, the GA-based fitness evaluation is significantly improved by using adaptive wavelet functions. Later, a multi-species decimal GA coding system is modified to be suitable for an efficient search around local optima. In this regard, a local mutation operation is introduced along with regeneration and reintroduction operators. It is concluded that different characteristics of the applied force influence the features of the structural responses, and therefore the accuracy of time-domain structural identification is directly affected. Thus, a reliable OSP strategy prior to time-domain identification is achieved by methods that minimize the distance between the simulated responses of the entire system and of the condensed system, taking the force effects into account. Numerical and experimental verification demonstrates the considerably high computational performance of the proposed OSP strategy, in terms of computational cost and identification accuracy. It is deduced that the robustness of the proposed OSP algorithm lies in precise and fast fitness evaluation at larger sampling rates, which results in optimal evaluation of the GA-based exploration and exploitation phases toward the global optimum solution.

  15. Cross-Sectional HIV Incidence Surveillance: A Benchmarking of Approaches for Estimating the 'Mean Duration of Recent Infection'.

    PubMed

    Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex

    2017-03-01

    The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure the accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs the best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in the scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. The results presented show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
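
    The MDRI is commonly defined as the integral, up to a time cutoff, of the probability of testing 'recent' as a function of time since infection. The simple-interpolation idea mentioned in this record can be sketched as follows, with invented visit times and recency fractions and trapezoidal integration of the interpolated curve:

    ```python
    # Times since infection (days) at scheduled visits, and the fraction of
    # subjects whose biomarker still reads 'recent' at each visit (toy data).
    times = [0, 60, 120, 180, 240, 300, 360]
    frac_recent = [1.00, 0.95, 0.70, 0.40, 0.20, 0.08, 0.03]

    # MDRI = integral of P(recent | time since infection) up to the cutoff,
    # approximated here by trapezoidal integration between visit times.
    def mdri_trapezoid(t, p):
        return sum(
            (t[i + 1] - t[i]) * (p[i] + p[i + 1]) / 2.0
            for i in range(len(t) - 1)
        )

    print(f"MDRI ≈ {mdri_trapezoid(times, frac_recent):.0f} days")
    ```

    As the record notes, such interpolation is only reliable when subjects are tested at regular intervals; irregular or missed visits motivate the regression-based approaches.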

  16. Model-based RSA of a femoral hip stem using surface and geometrical shape models.

    PubMed

    Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M

    2006-07-01

    Roentgen stereophotogrammetric analysis (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to avoid the need for specially marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as a reanalysis of patient RSA radiographs. The data from the phantom experiment indicated that the accuracy and precision of the EGS model-based RSA method are equal to those of marker-based RSA. For model-based RSA using surface models, the accuracy is equal to that of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative to marker-based RSA.
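
    Marker-based RSA quantifies implant micromotion as the rigid-body transform that best maps the implant's marker positions from one examination onto the next. A minimal sketch of that fit, using the standard Kabsch (SVD) algorithm and invented marker coordinates rather than data from the study:

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rigid transform (R, t) mapping marker set P onto Q
        (Kabsch algorithm) - the core of marker-based micromotion analysis."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)              # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])             # guard against reflections
        R = Vt.T @ D @ U.T
        t = cQ - R @ cP
        return R, t

    # Toy example: markers rotated 1 degree about z and shifted 0.1 mm in x.
    theta = np.radians(1.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    P = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
    Q = P @ R_true.T + np.array([0.1, 0.0, 0.0])

    R, t = rigid_transform(P, Q)   # recovers R_true and the 0.1 mm shift
    ```

    Model-based RSA replaces the physical markers with points derived from a surface or elementary-shape model, but the pose estimation rests on the same rigid-body fitting idea.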

  17. Quantitative optical metrology with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.

    2004-08-01

    Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high-accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full-field-of-view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the sensitivity, accuracy, and precision required for a particular application. In this paper, sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.

  18. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  19. Development and Testing of a High-Precision Position and Attitude Measuring System for a Space Mechanism

    NASA Technical Reports Server (NTRS)

    Khanenya, Nikolay; Paciotti, Gabriel; Forzani, Eugenio; Blecha, Luc

    2016-01-01

    This paper describes a high-precision optical metrology system - a unique piece of ground test equipment designed and implemented for simultaneous, precise, contactless measurement of 6 degrees of freedom (3 translational + 3 rotational) of a space mechanism end-effector [1] in a thermally controlled ISO 5 clean environment. The contactless method reconstructs both position and attitude of the specimen from three cross-sections measured by 2D distance sensors [2]. Cleanliness is preserved by a hermetic test chamber filled with high-purity nitrogen. The specimen's temperature is controlled by a thermostat [7]. The method excludes errors caused by thermal deformations and manufacturing inaccuracies of the test jig. Tests and simulations show that the measurement accuracy of an object's absolute position is 20 µm in-plane (XY) and about 50 µm out-of-plane (Z). The absolute attitude is typically determined with an accuracy better than 3 arcmin in rotation around X and Y and better than 10 arcmin around Z. The metrology system is able to determine relative position and movement with an accuracy one order of magnitude better than the absolute accuracy: typical relative displacement measurement accuracies are better than 1 µm in X and Y and about 2 µm in Z. Finally, relative rotation can be measured with an accuracy better than 20 arcsec in any direction.

  20. Hybrid fs/ps rotational CARS temperature and oxygen measurements in the product gases of canonical flat flames

    DOE PAGES

    Kearney, Sean Patrick

    2014-12-31

    A hybrid fs/ps pure-rotational coherent anti-Stokes Raman scattering (CARS) scheme is systematically evaluated over a wide range of flame conditions in the product gases of two canonical flat-flame burners. Near-transform-limited, broadband femtosecond pump and Stokes pulses impulsively prepare a rotational Raman coherence, which is later probed using a high-energy, frequency-narrow picosecond beam generated by the second-harmonic bandwidth compression scheme that has recently been demonstrated for rotational CARS generation in H2/air flat flames. The measured spectra are free of collision effects and nonresonant background and can be obtained on a single-shot basis at 1 kHz. The technique is evaluated for temperature/oxygen measurements in near-adiabatic H2/air flames stabilized on the Hencken burner for equivalence ratios of φ = 0.20–1.20. Thermometry is demonstrated in hydrocarbon/air products for φ = 0.75–3.14 in premixed C2H4/air flat flames on the McKenna burner. Reliable spectral fitting is demonstrated for both shot-averaged and single-laser-shot data using a simple phenomenological model. Measurement accuracy is benchmarked by comparison to adiabatic-equilibrium calculations for the H2/air flames, and by comparison with nanosecond CARS measurements for the C2H4/air flames. Quantitative accuracy comparable to nanosecond rotational CARS measurements is observed, while the observed precision in both the temperature and oxygen data is extraordinarily high, exceeding nanosecond CARS and on par with the best published thermometric precision by femtosecond vibrational CARS in flames and by rotational femtosecond CARS at low temperature. Threshold levels of signal-to-noise ratio to achieve 1–2% precision in temperature and O2/N2 ratio are identified. Our results show that pure-rotational fs/ps CARS is a robust and quantitative tool when applied across a wide range of flame conditions spanning lean H2/air combustion to fuel-rich sooting hydrocarbon flames.
