Recent developments in heterodyne laser interferometry at Harbin Institute of Technology
NASA Astrophysics Data System (ADS)
Hu, P. C.; Tan, J. B.; Yang, H. X.; Fu, H. J.; Wang, Q.
2013-01-01
In order to fulfill the requirements of high-resolution and high-precision heterodyne interferometric technologies and instruments, the laser interferometry group at HIT has developed several novel techniques for high-resolution and high-precision heterodyne interferometers, including high-accuracy laser frequency stabilization, dynamic sub-nanometer phase interpolation, and dynamic nonlinearity measurement. Based on a novel lock-point correction method and an asymmetric thermal structure, the frequency-stabilized laser achieves a long-term stability of 1.2×10^-8 and remains stably locked even in air flowing at up to 1 m/s. To achieve dynamic sub-nanometer resolution in laser heterodyne interferometers, a novel phase interpolation method based on a digital delay line is proposed. Experimental results show that the proposed 0.62 nm phase interpolator, built with a 64-multiple PLL and an 8-tap digital delay line, achieves a static accuracy better than 0.31 nm and a dynamic accuracy better than 0.62 nm over velocities ranging from -2 m/s to 2 m/s. Meanwhile, a high-accuracy beam polarization measuring setup is proposed to verify the polarization state of the dual-frequency laser head, and a dynamic optical nonlinearity measuring setup is built to measure the optical nonlinearity of the heterodyne system accurately and quickly. Analysis and experimental results show that the beam polarization measuring setup achieves an accuracy of 0.03° in the ellipticity angles and 0.04° in the non-orthogonality angle, and the optical nonlinearity measuring setup achieves an accuracy of 0.13°.
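As a quick, hedged consistency check on the quoted resolution (assuming a 633 nm He-Ne source and the usual lambda/2 of displacement per 2*pi of heterodyne phase), dividing one fringe period by the 64 × 8 = 512 interpolation steps gives (632.8 nm / 2) / 512 ≈ 0.62 nm, which matches the stated interpolator resolution.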
Liew, Jeffrey; Chen, Qi; Hughes, Jan N.
2009-01-01
The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children. PMID:20161421
Interactional Effects of Instructional Quality and Teacher Judgement Accuracy on Achievement.
ERIC Educational Resources Information Center
Helmke, Andreas; Schrader, Friedrich-Wilhelm
1987-01-01
Analysis of predictions of 32 teachers regarding 690 fifth-graders' scores on a mathematics achievement test found that the combination of high judgement accuracy with varied instructional techniques was particularly favorable to students in contrast to a combination of high diagnostic sensitivity with a low frequency of cues or individual…
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1983-01-01
The improvement in orbit accuracy achievable if high-accuracy tracking data from a substantially larger number of ground stations were available was investigated. Observations from 20 ground stations indicate that an accuracy of 20 cm or better can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines with GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605.
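As a rough, hedged illustration of how the quoted orbit error maps into the baseline error budget (using the common rule of thumb that relative-positioning error scales roughly as the baseline-to-satellite-altitude ratio, with GPS altitude taken as about 20,200 km): (1000 km / 20,200 km) × 20 cm ≈ 1 cm, consistent with the stated contribution.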
ERIC Educational Resources Information Center
Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.
2012-01-01
A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender, interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points: Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6% decline) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots, and hit fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and players' achievement goal indicators. PMID:24149809
A Novel Energy-Efficient Approach for Human Activity Recognition.
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru
2017-09-08
In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.
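The abstract does not give the exact HSVMCC structure, so the following is only a minimal, hypothetical sketch of the general idea of a hierarchical SVM combined with context-based smoothing: one SVM separates coarse activity groups (for example static versus dynamic), per-group SVMs refine the label, and a majority vote over recent windows supplies temporal context. The class names, the feature matrix X (a NumPy array of window features), and the context length are illustrative assumptions, not the authors' design.

from collections import Counter, deque
from sklearn.svm import SVC

class HierarchicalContextClassifier:
    def __init__(self, context_len=5):
        self.coarse = SVC()                          # assumed coarse split: static vs. dynamic
        self.fine = {"static": SVC(), "dynamic": SVC()}
        self.history = deque(maxlen=context_len)     # recent labels used as temporal context

    def fit(self, X, y_coarse, y_fine):
        self.coarse.fit(X, y_coarse)
        for group, clf in self.fine.items():
            idx = [i for i, g in enumerate(y_coarse) if g == group]
            clf.fit(X[idx], [y_fine[i] for i in idx])

    def predict_one(self, x):
        group = self.coarse.predict([x])[0]          # level 1: coarse activity group
        label = self.fine[group].predict([x])[0]     # level 2: activity within the group
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]   # context-based smoothing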
Relative Navigation of Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, J. Russell; Gramling, Cheryl
2002-01-01
This paper compares autonomous relative navigation performance for formations in eccentric, medium and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS), crosslink, and celestial object measurements. For close formations, the relative navigation accuracy is highly dependent on the magnitude of the uncorrelated measurement errors. A relative navigation position accuracy of better than 10 centimeters root-mean-square (RMS) can be achieved for medium-altitude formations that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 15 meters RMS can be achieved for high-altitude formations that have sparse tracking of the GPS signals. The addition of crosslink measurements can significantly improve relative navigation accuracy for formations that use sparse GPS tracking or celestial object measurements for absolute navigation.
Current Status of Astrometry Satellite missions in Japan: JASMINE project series
NASA Astrophysics Data System (ADS)
Yano, T.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Hatsutori, Y.; Murooka, J.; Niwa, Y.; Yamada, Y.
Astrometry satellites share common technological issues. (A) They must measure the positions of stars with high accuracy from the huge amount of data collected during the observational period. (B) High stabilization of the thermal environment in the telescope is required. (C) Attitude-pointing stability with sub-pixel accuracy is also required. Measurement of the positions of stars from a huge amount of data is the essence of astrometry. Systematic errors must be adequately excluded for each stellar image in order to obtain accurate positions. We have carried out a centroiding experiment to determine the positions of stars from about 10,000 images. The following two points are important for the JASMINE mission system in order to achieve our aim. For Small-JASMINE, we require thermal stabilization of the telescope in order to obtain a high astrometric accuracy of about 10 micro-arcsec. In order to measure the positions of stars with high accuracy, we must model the distortion of the image on the focal plane with an accuracy of better than 0.1 nm. We have shown numerically that this requirement is achieved if the thermal variation is within about 1 K / 0.75 h. We also require an attitude-pointing stability of about 200 mas / 7 s. The use of a tip-tilt mirror will make it possible to achieve such stable pointing.
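As a hedged illustration of the centroiding step only (the window handling and background model below are generic assumptions, not the authors' algorithm), a star's sub-pixel position is commonly estimated as the intensity-weighted mean of a small pixel window after background subtraction:

import numpy as np

def centroid(window, background=None):
    """Intensity-weighted centroid of a small 2-D pixel window around a star image."""
    img = window.astype(float)
    img -= np.median(img) if background is None else background  # crude background removal
    img = np.clip(img, 0.0, None)
    total = img.sum() + 1e-12
    ys, xs = np.indices(img.shape)                                # row (y) and column (x) indices
    return (xs * img).sum() / total, (ys * img).sum() / total    # sub-pixel (x, y) estimate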
Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services
ERIC Educational Resources Information Center
Wang, Guoquan
2013-01-01
High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds-number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and with the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite-volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third-order accuracy for low and high-Reynolds-number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature-based strand shortening strategy in order to qualitatively improve strand grid mesh quality.
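As a hedged, generic illustration of how such a verification claim is usually quantified (this is not the thesis code), the observed order of accuracy p can be estimated from solution errors on two systematically refined grids:

import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy from errors on two grids: p = log(e_c/e_f) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# halving the mesh spacing and seeing the error drop by ~8x indicates roughly third order
print(observed_order(1.0e-3, 1.25e-4))   # -> 3.0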
Ranging performance of satellite laser altimeters
NASA Technical Reports Server (NTRS)
Gardner, Chester S.
1992-01-01
Topographic mapping of the Earth, Moon, and planets can be accomplished with high resolution and accuracy using satellite laser altimeters. These systems employ nanosecond laser pulses and microradian beam divergences to achieve submeter vertical range resolution from orbital altitudes of several hundred kilometers. Here, we develop detailed expressions for the range and pulse-width measurement accuracies and use the results to evaluate the ranging performance of several satellite laser altimeters currently under development by NASA for launch during the next decade. Our analysis includes the effects of the target surface characteristics, spacecraft pointing jitter, and waveform digitizer characteristics. The results show that ranging accuracy is critically dependent on the pointing accuracy and stability of the altimeter, especially over high-relief terrain where surface slopes are large. At typical orbital altitudes of several hundred kilometers, single-shot accuracies of a few centimeters can be achieved only when the pointing jitter is on the order of 10 μrad or less.
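As a hedged order-of-magnitude illustration of the pointing sensitivity (the 600 km altitude and 1° slope are illustrative values, not figures from the paper): a pointing error θ displaces the laser footprint horizontally by roughly h·θ = 600 km × 10 μrad = 6 m, and over a surface slope s this maps into a range error of about h·θ·tan(s) = 6 m × tan(1°) ≈ 10 cm, so jitter much above 10 μrad quickly dominates a few-centimeter ranging budget on sloped terrain.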
NASA Astrophysics Data System (ADS)
Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.
2010-07-01
The European Extremely Large Telescope (E-ELT) primary mirror is based on 984 segments; to achieve the required optical performance, each segment must be positioned relative to its adjacent segments with nanometer accuracy. CESA designed the M1 Position Actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACT are located under each segment, controlling the three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series. The first stage, based on a voice coil actuator (VCA), achieves high accuracy over very short travel ranges, while the second stage, based on a brushless DC (BLDC) motor, provides large stroke ranges and positions the first stage close to the demanded position. A BLDC motor is used to achieve continuous, smooth movement compared with the sudden jumps of a stepper motor. A gearbox attached to the motor allows a large reduction in power consumption and poses a significant sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance for PACT, sensors have been included in both stages. A rotary encoder is included in the BLDC stage to close the position/velocity control loop. An incremental optical encoder measures the PACT travel range with relative nanometer accuracy and is used to close the position loop of the whole actuator movement. For this purpose, four different optical sensors with different gratings will be evaluated. The control strategy comprises different internal closed loops that work together to achieve the required performance.
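A hedged, schematic sketch of the two-stage idea only (the hand-off threshold and gain are illustrative assumptions, not CESA's controller): the long-stroke BLDC stage removes most of the commanded motion, after which the short-stroke VCA stage servos out the residual using the fine incremental encoder reading.

def two_stage_step(target, encoder_pos, coarse_deadband=1e-6, fine_gain=0.5):
    """One control update for a coarse (BLDC) + fine (VCA) positioning actuator.

    Returns (coarse_command, fine_command) in metres; purely illustrative.
    """
    error = target - encoder_pos
    if abs(error) > coarse_deadband:
        return error, 0.0            # large error: move the long-stroke BLDC stage
    return 0.0, fine_gain * error    # small residual: correct with the short-stroke VCA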
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy and, hence, demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and, thus, to an additional error source. The aim of this contribution is to examine whether an accuracy needed for demanding interventions can be achieved by such a system. Feasible accuracy of the robot-assisted process depends on each work-flow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is researched by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling experiments, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
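A hedged sketch of the admittance-feed idea in generic terms (the gain, dead-band, and velocity limit are illustrative assumptions, not the authors' parameters): the component of the user's force along the planned drill axis is mapped through an admittance gain to a feed velocity, so pushing harder feeds faster and releasing the structure stops the feed.

import numpy as np

def admittance_feed_velocity(force_vec, axis_unit, admittance=0.002,
                             deadband=1.0, v_max=0.003):
    """Map user force (N) on the robot structure to feed velocity (m/s) along the drill axis."""
    f_axial = float(np.dot(force_vec, axis_unit))   # force component along the planned path
    if abs(f_axial) < deadband:                     # ignore sensor noise and light touches
        return 0.0
    return float(np.clip(admittance * f_axial, -v_max, v_max))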
Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration
Deng, Mingjun; Li, Jiansong
2017-01-01
The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
NASA Astrophysics Data System (ADS)
Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a dedicated single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
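A hedged illustration of the ground-plane scale idea in its simplest form (not the paper's full cue-combination machinery): if monocular SFM places the camera at an estimated height h_est above a fitted ground plane while the true mounting height h_true is known, the trajectory scale can be corrected by their ratio.

def correct_scale(translation, h_estimated, h_true):
    """Rescale a monocular SFM translation using the known camera height above ground."""
    s = h_true / h_estimated          # e.g. a known 1.7 m mount height vs. the SFM estimate
    return [s * t for t in translation]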
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
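As a hedged sketch of the general idea behind k-mer counting distances used for fast, alignment-free distance estimation (the exact MUSCLE formula and word length are not reproduced here; the shared-fraction measure below is an assumption):

from collections import Counter

def kmer_distance(seq_a, seq_b, k=3):
    """Crude distance in [0, 1]: one minus the fraction of shared k-mers (alignment-free)."""
    ka, kb = (Counter(s[i:i + k] for i in range(len(s) - k + 1)) for s in (seq_a, seq_b))
    shared = sum((ka & kb).values())
    return 1.0 - shared / max(1, min(sum(ka.values()), sum(kb.values())))

print(kmer_distance("MKVLAAGIVLLLSAV", "MKVLAAGLVLLVSAV"))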
Experimental studies of high-accuracy RFID localization with channel impairments
NASA Astrophysics Data System (ADS)
Pauls, Eric; Zhang, Yimin D.
2015-05-01
Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
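A hedged, minimal sketch of textbook RSSI-based multilateration (the log-distance path-loss exponent, 1 m reference power, tag layout, and least-squares solver are generic assumptions, not the calibrated channel model the paper advocates):

import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert a log-distance path-loss model to estimate range in metres."""
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exp))

def multilaterate(tag_xy, distances, guess=(0.0, 0.0)):
    """Least-squares reader position from distances to known reference-tag positions."""
    residual = lambda p: np.linalg.norm(np.asarray(tag_xy) - p, axis=1) - distances
    return least_squares(residual, guess).x

tags = [(0, 0), (4, 0), (0, 4), (4, 4)]                      # assumed reference-tag layout (m)
d = np.array([rssi_to_distance(r) for r in (-52, -55, -55, -58)])
print(multilaterate(tags, d))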
COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy
NASA Astrophysics Data System (ADS)
Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun
2014-09-01
In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which includes several geostationary (GEO) satellites, time synchronization can be highly accurate via microwave links between ground stations and the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites within the system, such as the inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites. It is shown that, because of the accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of GPS. In addition, the COMPASS system can provide its entire positioning, navigation, and time-dissemination services even without the ground link, making it much more robust and secure. We further show that time dissemination using the COMPASS GEO satellites to earth-fixed stations can achieve very high accuracy, reaching 100 ps in time dissemination and 3 cm in positioning accuracy, respectively. In this paper, we also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clock frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.
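The two quoted figures are mutually consistent under the simple conversion that a timing error maps into a ranging error through the speed of light: c·Δt = (3 × 10^8 m/s) × (100 ps) = 3 cm.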
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
NASA Astrophysics Data System (ADS)
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information about the sun's position, obtained either by direct sensing or from GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating the sensor-based shadow-tracking method into a two-stage-concentration hybrid solar parabolic trough allows the system to maintain high concentration with acute accuracy.
ERIC Educational Resources Information Center
Farrokhi, Farahman; Sattarpour, Simin
2012-01-01
The present article reports the findings of a study that explored (1) whether direct written corrective feedback (CF) can help high-proficiency L2 learners, who have already achieved a rather high level of accuracy in English, improve in the accurate use of two functions of English articles (the use of "a" for first mention and…
An angle encoder for super-high resolution and super-high accuracy using SelfA
NASA Astrophysics Data System (ADS)
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-06-01
Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al 1998 Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high-accuracy angular verification. We apply these technologies to the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period after interpolation into 2^12 (= 4096) divisions.
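As a quick consistency check of the quoted signal period: one rotation is 360° × 3600 = 1,296,000 arcseconds, and 1,296,000″ / 2^21 = 1,296,000″ / 2,097,152 ≈ 0.62″, matching the stated value.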
Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie
2017-10-01
By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, the stand indexes (including diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating multi-scale image segmentation in the OBIA technique with CART, which connected the image objects at various scales, with a good producer's accuracy of 89.1%. The stand indexes estimated by the regression tree model constructed from features extracted from the image objects reached moderate or better accuracy, with the crown closure model achieving the best estimation accuracy of 67.9%. The estimation accuracy of diameter at breast height and tree height was relatively low, which is consistent with the conclusion that estimating diameter at breast height and tree height from optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, and the accuracy in high-value regions was above 80%.
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
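For readers unfamiliar with the representation being improved upon, here is a hedged, minimal VLAD encoder in its standard form (the paper's proposed pooling strategy and transformations are not reproduced):

import numpy as np

def vlad_encode(descriptors, centers):
    """Standard VLAD: sum of residuals to the nearest codebook center, then normalize."""
    k, d = centers.shape
    assign = np.argmin(((descriptors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for c in range(k):
        if np.any(assign == c):
            v[c] = (descriptors[assign == c] - centers[c]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))                 # power (signed square-root) normalization
    return (v / (np.linalg.norm(v) + 1e-12)).ravel()    # L2 normalization, flattened to k*d dims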
Evaluation of Relative Navigation Algorithms for Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Kelbel, David; Lee, Taesul; Long, Anne; Carpenter, J. Russell; Gramling, Cheryl
2001-01-01
Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for formations in eccentric, medium, and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS) and intersatellite range measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that the relative navigation accuracy is primarily a function of the frequency of acquisition and tracking of the GPS signals. A relative navigation position accuracy of 0.5 meters root-mean-square (RMS) can be achieved for formations in medium-altitude eccentric orbits that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 75 meters RMS can be achieved for formations in high-altitude eccentric orbits that have sparse tracking of the GPS signals. The addition of round-trip intersatellite range measurements can significantly improve relative navigation accuracy for formations with sparse tracking of the GPS signals.
NASA Astrophysics Data System (ADS)
Dubroca, Guilhem; Richert, Michaël.; Loiseaux, Didier; Caron, Jérôme; Bézy, Jean-Loup
2015-09-01
To increase the accuracy of Earth-observation spectro-imagers, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred device in space instruments is the so-called polarization scrambler, made of birefringent crystal wedges arranged in a single or dual Babinet configuration. Today, with required radiometric accuracies on the order of 0.1%, it is necessary to develop tools to find optimal and low-sensitivity solutions quickly and to measure the performance with a high level of accuracy.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device arbitrarily while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device arbitrarily while performing the predefined gestures, which substantially enhances the user experience.
Exploration of Force Myography and surface Electromyography in hand gesture classification.
Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo
2017-03-01
Whereas pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high-quality, commercially available sEMG system at both wrist and forearm positions; specifically, using only 8 force-sensitive resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). Using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong
2017-05-01
Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because the normal of the receiver always deviates from that of the transmitter in application, the positioning systems which require that the normal of the receiver be aligned with that of the transmitter have large positioning errors. Some algorithms take the angular vibrations into account; nevertheless, these positioning algorithms cannot meet the requirement of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. The angle information from the accelerometer or other angle acquisition devices is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high-precision optical registration requirement of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. A system-integration optical registration and registration-accuracy scheme for a spaceborne low-light remote sensing camera with short focal depth and wide field of view is proposed in this paper. It also includes analysis of the parallel misalignment of the CCD and of the accuracy of optical registration. Actual registration results show that the imaging is clear and that the MTF and optical registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
The effects of text display, specifically within-word spacing, on children's reading at different developmental levels have barely been investigated. This study explored the influence of manipulating inter-letter spacing on the reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers, and of low-achieving readers compared with age-matched high-achieving readers. A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted from the spaced condition. This effect was found only in long words but not in short words. Among third graders, only low-achieving readers gained in accuracy from the spaced condition. The theoretical and clinical implications of the findings are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Luthcke, Scott; Rowlands, David; Lemoine, Frank; Zelensky, Nikita; Beckley, Brian; Klosko, Steve; Chinn, Doug
2006-01-01
Although satellite altimetry has been around for thirty years, the last fifteen years, beginning with the launch of TOPEX/Poseidon (TP), have yielded an abundance of significant results including monitoring of ENSO events, detection of internal tides, determination of accurate global tides, unambiguous delineation of Rossby waves and their propagation characteristics, accurate determination of geostrophic currents, and a multi-decadal time series of mean sea level trend and dynamic ocean topography variability. While the high level of accuracy being achieved is a result of both instrument maturity and the quality of models and correction algorithms applied to the data, improving the quality of the Climate Data Records produced from altimetry is highly dependent on concurrent progress being made in fields such as orbit determination. The precision orbits form the reference frame from which the radar altimeter observations are made. Therefore, the accuracy of the altimetric mapping is limited to a great extent by the accuracy to which a satellite orbit can be computed. The TP mission represents the first time that the radial component of an altimeter orbit was routinely computed with an accuracy of 2 cm. Recently it has been demonstrated that it is possible to compute the radial component of Jason orbits with an accuracy of better than 1 cm. Additionally, still further improvements in TP orbits are being achieved with new techniques and algorithms largely developed from combined Jason and TP data analysis. While these recent POD achievements are impressive, the new accuracies are now revealing subtle systematic orbit errors that manifest as both intra- and inter-annual ocean topography errors. Additionally, the construction of inter-decadal time series of climate data records requires the removal of systematic differences across multiple missions. Current and future efforts must focus on the understanding and reduction of these errors in order to generate a complete and consistent time series of improved orbits across multiple missions and decades, as required for the most stringent climate-related research. This presentation discusses the POD progress and achievements made over nearly three decades, and presents the future challenges, goals, and their impact on altimetry-derived ocean sciences.
Demonstration Of Ultra HI-FI (UHF) Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2004-01-01
Computational aero-acoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high order accuracy is considered too difficult or expensive to achieve with finite volume or finite element methods. However, a novel finite volume approach (Ultra HI-FI or UHF) which utilizes Hermite fluxes is presented which can achieve both arbitrary accuracy and fidelity in space and time. The technique can be applied to unstructured grids with some loss of fidelity or with multi-block structured grids for maximum efficiency and resolution. In either paradigm, it is possible to resolve ultra-short waves (less than 2 PPW). This is demonstrated here by solving the 4th CAA workshop Category 1 Problem 1.
High resolution microendoscopy for classification of colorectal polyps.
Chang, S S; Shukla, R; Polydorides, A D; Vila, P M; Lee, M; Han, H; Kedia, P; Lewis, J; Gonzalez, S; Kim, M K; Harpaz, N; Godbold, J; Richards-Kortum, R; Anandasabapathy, S
2013-07-01
It can be difficult to distinguish adenomas from benign polyps during routine colonoscopy. High resolution microendoscopy (HRME) is a novel method for imaging colorectal mucosa with subcellular detail. HRME criteria for the classification of colorectal neoplasia have not been previously described. Study goals were to develop criteria to characterize HRME images of colorectal mucosa (normal, hyperplastic polyps, adenomas, cancer) and to determine the accuracy and interobserver variability for the discrimination of neoplastic from non-neoplastic polyps when these criteria were applied by novice and expert microendoscopists. Two expert pathologists created consensus HRME image criteria using images from 68 patients with polyps who had undergone colonoscopy plus HRME. Using these criteria, HRME expert and novice microendoscopists were shown a set of training images and then tested to determine accuracy and interobserver variability. Expert microendoscopists identified neoplasia with sensitivity, specificity, and accuracy of 67 % (95 % confidence interval [CI] 58 % - 75 %), 97 % (94 % - 100 %), and 87 %, respectively. Nonexperts achieved sensitivity, specificity, and accuracy of 73 % (66 % - 80 %), 91 % (80 % - 100 %), and 85 %, respectively. Overall, neoplasia were identified with sensitivity 70 % (65 % - 76 %), specificity 94 % (87 % - 100 %), and accuracy 85 %. Kappa values were: experts 0.86; nonexperts 0.72; and overall 0.78. Using the new criteria, observers achieved high specificity and substantial interobserver agreement for distinguishing benign polyps from neoplasia. Increased expertise in HRME imaging improves accuracy. This low-cost microendoscopic platform may be an alternative to confocal microendoscopy in lower-resource or community-based settings.
NASA Astrophysics Data System (ADS)
Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu
2017-05-01
In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband source with a Gaussian spectral distribution, the corresponding ISE has a symmetric shape, so the accurate ZOPD position can be found easily. However, if the spectral distribution of the source is irregular, the ISE becomes asymmetric or exhibits a complex multi-peak shape, and WLSI based on the ZOPD position locating method cannot work well. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but can also achieve accurate profile measurement in cases where the ISE shape is irregular and the ZOPD position locating method fails. That is to say, the proposed method can effectively eliminate the influence of the source spectrum.
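A hedged, simplified sketch of time-delay estimation between the white-light interference signals of two pixels (a generic cross-correlation estimator with the scan step converting lag to height; the paper's actual estimator is not reproduced):

import numpy as np

def relative_height(sig_a, sig_b, scan_step_nm):
    """Estimate the height difference between two pixels from the lag that maximizes
    the cross-correlation of their interference signals along the scan axis."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)      # samples by which signal a lags signal b
    return lag * scan_step_nm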
Critical thinking and accuracy of nurses' diagnoses.
Lunney, Margaret
2003-01-01
Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.
Adaptive hybrid brain-computer interaction: ask a trainer for assistance!
Müller-Putz, Gernot R; Steyrl, David; Faller, Josef
2014-01-01
In applying mental imagery brain-computer interfaces (BCIs) to end users, training is a key part of enabling novice users to gain control. In general learning situations, it is an established concept that a trainer assists a trainee to improve his/her aptitude in certain skills. In this work, we want to evaluate whether we can apply this concept in the context of event-related desynchronization (ERD) based, adaptive, hybrid BCIs. Hence, in a first session we merged the features of a high-aptitude BCI user, the trainer, and a novice user, the trainee, in a closed-loop BCI feedback task and automatically adapted the classifier over time. In a second session the trainees operated the system unassisted. Twelve healthy participants ran through this protocol. Along with the trainer, the trainees achieved a very high overall peak accuracy of 95.3%. In the second session, where users operated the BCI unassisted, they still achieved a high overall peak accuracy of 83.6%. Ten of twelve first-time BCI users achieved significantly better-than-chance accuracy. In conclusion, this trainer-trainee approach is very promising. Future research should investigate whether this approach is superior to conventional training approaches. This trainer-trainee concept could have potential for future application of BCIs to end users.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Li, Chuanhao; Peng, Gaoliang; Chen, Yuanhang; Zhang, Zhujun
2018-02-01
In recent years, intelligent fault diagnosis algorithms using machine learning techniques have achieved much success. However, because in real-world industrial applications the working load changes all the time and noise from the working environment is inevitable, degradation of the performance of intelligent fault diagnosis methods can be severe. In this paper, a new model based on deep learning is proposed to address this problem. Our contributions include the following. First, we propose an end-to-end method that takes raw temporal signals as inputs and thus does not need any time-consuming denoising preprocessing; the model can achieve high accuracy in noisy environments. Second, the model does not rely on any domain adaptation algorithm or require information about the target domain, and it can achieve high accuracy when the working load changes. To understand the proposed model, we visualize the learned features and analyze the reasons behind its high performance.
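A hedged, minimal sketch of the end-to-end idea, raw one-dimensional vibration window in, fault class out; the layer sizes, window length, and class count are illustrative assumptions rather than the authors' architecture:

import torch
import torch.nn as nn

class RawSignalCNN(nn.Module):
    """Tiny 1-D CNN that classifies fixed-length raw vibration windows."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),  # wide first kernel
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * 4, n_classes)

    def forward(self, x):                    # x: (batch, 1, window) raw samples
        return self.classifier(self.features(x).flatten(1))

logits = RawSignalCNN()(torch.randn(8, 1, 2048))   # -> shape (8, 10)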
Training and quality assurance with the Structured Clinical Interview for DSM-IV (SCID-I/P).
Ventura, J; Liberman, R P; Green, M F; Shaner, A; Mintz, J
1998-06-15
Accuracy in psychiatric diagnosis is critical for evaluating the suitability of the subjects for entry into research protocols and for establishing comparability of findings across study sites. However, training programs in the use of diagnostic instruments for research projects are not well systematized. Furthermore, little information has been published on the maintenance of interrater reliability of diagnostic assessments. At the UCLA Research Center for Major Mental Illnesses, a Training and Quality Assurance Program for SCID interviewers was used to evaluate interrater reliability and diagnostic accuracy. Although clinically experienced interviewers achieved better interrater reliability and overall diagnostic accuracy than neophyte interviewers, both groups were able to achieve and maintain high levels of interrater reliability, diagnostic accuracy, and interviewer skill. At the first quality assurance check after training, there were no significant differences between experienced and neophyte interviewers in interrater reliability or diagnostic accuracy. Standardization of training and quality assurance procedures within and across research projects may make research findings from study sites more comparable.
NASA Astrophysics Data System (ADS)
Yasui, Takeshi
2017-08-01
Optical frequency combs are innovative tools for broadband spectroscopy because a series of comb modes can serve as frequency markers that are traceable to a microwave frequency standard. However, a mode distribution that is too discrete limits the spectral sampling interval to the mode frequency spacing even though individual mode linewidth is sufficiently narrow. Here, using a combination of a spectral interleaving and dual-comb spectroscopy in the terahertz (THz) region, we achieved a spectral sampling interval equal to the mode linewidth rather than the mode spacing. The spectrally interleaved THz comb was realized by sweeping the laser repetition frequency and interleaving additional frequency marks. In low-pressure gas spectroscopy, we achieved an improved spectral sampling density of 2.5 MHz and enhanced spectral accuracy of 8.39 × 10^-7 in the THz region. The proposed method is a powerful tool for simultaneously achieving high resolution, high accuracy, and broad spectral coverage in THz spectroscopy.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, the accuracy of the prediction of genome-wide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave the highest accuracy overall. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and lower bias than for health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.
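As a rough illustration of how cross-validated prediction accuracy is computed in genomic selection, the following sketch fits a ridge-regression (SNP-BLUP-style) model, which is closely related to G-BLUP, on simulated genotypes; the marker count, heritability, and shrinkage parameter are assumptions, and the authors' G-BLUP/BayesB/MIXTURE implementations are not reproduced here:

```python
# Sketch of cross-validated genomic prediction accuracy with a ridge (SNP-BLUP-style)
# model on simulated genotypes; illustrates the accuracy metric (correlation between
# predicted and observed phenotype), not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_snps = 500, 2000
X = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 genotypes
true_effects = rng.normal(0, 0.05, n_snps)
h2 = 0.3                                                            # assumed heritability
g = X @ true_effects
y = g + rng.normal(0, np.sqrt(np.var(g) * (1 - h2) / h2), n_animals)

def ridge_fit(Xtr, ytr, lam):
    # beta = (X'X + lam I)^-1 X' y : SNP-BLUP with one common shrinkage parameter
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    return np.linalg.solve(A, Xtr.T @ ytr)

lam = n_snps * (1 - h2) / h2          # classical SNP-BLUP shrinkage (assumption)
folds = np.array_split(rng.permutation(n_animals), 5)
accs = []
for test in folds:
    train = np.setdiff1d(np.arange(n_animals), test)
    beta = ridge_fit(X[train] - X[train].mean(0), y[train] - y[train].mean(), lam)
    pred = (X[test] - X[train].mean(0)) @ beta
    accs.append(np.corrcoef(pred, y[test])[0, 1])
print("cross-validated accuracy (corr):", np.round(np.mean(accs), 3))
```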
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme, because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated from the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
NASA Astrophysics Data System (ADS)
Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao
2017-03-01
Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease, and it can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs) is very important, since it affects the classification accuracy. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous work, training ROI selection by MILBoost and multiple-ROI classification based on the product rule were proposed to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Evaluation experiments were performed to assess the effect of self-training, using both manually selected and automatically selected ROIs. Experimental results show that self-training on manually selected ROIs achieved higher classification performance than the other approaches, including our conventional methods. Manual ROI definition and sample selection are therefore important for improving classification accuracy in cirrhosis detection using ultrasound images.
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
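The per-pass bias modeling described above can be illustrated with a toy least-squares example: one bias parameter is added to the state vector for each station pass so that systematic calibration and media errors are absorbed by the bias estimates rather than corrupting the geometric solution. The one-dimensional "geometry", noise levels, and bias values below are purely illustrative assumptions, not the Galileo filter:

```python
# Toy sketch of per-pass range-bias modeling: augment the least-squares state with one
# bias per station pass so systematic errors are estimated instead of aliasing into the
# geometric solution. The scalar "position" and geometry factors are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
true_pos = 1000.0                       # unknown "spacecraft" coordinate (illustrative)
pass_bias = {0: 3.0, 1: -2.0}           # unknown systematic bias per station pass (metres)

rows, obs = [], []
for pass_id, bias in pass_bias.items():
    for k in range(20):
        geom = 1.0 + 0.05 * k + 0.3 * pass_id   # time-varying observation partial (illustrative)
        meas = geom * true_pos + bias + rng.normal(0, 0.5)   # submetre random noise
        row = np.zeros(1 + len(pass_bias))
        row[0] = geom                   # partial w.r.t. the position parameter
        row[1 + pass_id] = 1.0          # partial w.r.t. this pass's bias
        rows.append(row)
        obs.append(meas)

A, y = np.asarray(rows), np.asarray(obs)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated position:", round(x_hat[0], 2),
      "estimated pass biases:", np.round(x_hat[1:], 2))
```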
ECG Sensor Card with Evolving RBP Algorithms for Human Verification.
Tseng, Kuo-Kun; Huang, Huang-Nan; Zeng, Fufu; Tu, Shu-Yi
2015-08-21
It is known that cardiac and respiratory rhythms in electrocardiograms (ECGs) are highly nonlinear and non-stationary. As a result, most traditional time-domain algorithms are inadequate for characterizing the complex dynamics of the ECG. This paper proposes a new ECG sensor card and a statistical ECG algorithm based on a reduced binary pattern (RBP), with the aim of achieving faster ECG human identity recognition with high accuracy. The proposed algorithm has one advantage that previous ECG algorithms lack: the waveform complex information and de-noising preprocessing can be bypassed; therefore, it is more suitable for non-stationary ECG signals. Experimental results on two public MIT-BIH ECG databases confirm that the proposed scheme is feasible, with excellent accuracy, low complexity, and speedy processing. To be more specific, the advanced RBP algorithm achieves high accuracy in human identity recognition and is executed at least nine times faster than previous algorithms. Moreover, based on the test results from a long-term ECG database, the evolving RBP algorithm also demonstrates superior capability in handling long-term and non-stationary ECG signals.
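One plausible reading of a reduced-binary-pattern feature (the exact definition below is an assumption for illustration, not the authors' algorithm) is to compare each sample with its successor to form a bit stream, pack the bits into short codes, and compare recordings by the distance between code histograms:

```python
# Illustrative sketch (not the authors' exact RBP definition): turn a raw ECG segment
# into a histogram of short binary patterns formed from sample-to-sample comparisons,
# then compare two recordings by histogram distance for identity verification.
import numpy as np

def rbp_histogram(signal, code_bits=8):
    bits = (np.diff(signal) > 0).astype(int)              # 1 if the signal rises, else 0
    n_codes = len(bits) // code_bits
    bits = bits[:n_codes * code_bits].reshape(n_codes, code_bits)
    codes = bits @ (1 << np.arange(code_bits)[::-1])      # pack each group into an integer code
    hist = np.bincount(codes, minlength=2 ** code_bits).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
ecg_a = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.05 * rng.normal(size=4000)
ecg_b = ecg_a + 0.05 * rng.normal(size=4000)              # same "person", new noise
dist = np.abs(rbp_histogram(ecg_a) - rbp_histogram(ecg_b)).sum()
print("L1 distance between RBP histograms:", round(dist, 4))
```

No filtering or beat segmentation is applied before the comparison, which is the property the abstract emphasizes.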
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHSTs) and a fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included, as are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation and land vehicle location, require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is reformed based on the two EWL combinations for positioning. A theoretical and empirical analysis of the ambiguity fixing rate and the positioning accuracy of the presented method is given. The results indicate that the ambiguity fixing rate can exceed 98% when using BDS medium-baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal-interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundreds of seconds to obtain high-precision results. We conclude that a relatively high accuracy and high fixing rate can be achieved for the triple-frequency WL method with single-epoch observations, a significant advantage compared with the traditional carrier-smoothed code differential positioning method.
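To illustrate why extra-wide-lane (EWL) ambiguities can be fixed reliably from single-epoch data, the sketch below forms a Melbourne-Wübbena-style combination of simulated BDS B2/B3 observables and rounds the float ambiguity. The nominal B2I/B3I frequencies, the noise levels, and the omission of ionospheric and other error terms are assumptions, and the paper's full EWL/WL reformulation is not reproduced:

```python
# Sketch of single-epoch EWL ambiguity fixing with a Melbourne-Wubbena-style combination
# of simulated BDS B2/B3 observables; purely illustrative, not real BDS data.
import numpy as np

C = 299_792_458.0
F_B2, F_B3 = 1207.140e6, 1268.520e6          # assumed nominal BDS B2I / B3I frequencies (Hz)
LAM_EWL = C / (F_B3 - F_B2)                  # ~4.9 m EWL wavelength

def ewl_float_ambiguity(L2, L3, P2, P3):
    """Float EWL ambiguity (cycles) from carrier phases L (m) and code ranges P (m)."""
    phase_wl = (F_B3 * L3 - F_B2 * L2) / (F_B3 - F_B2)   # wide-lane phase combination (m)
    code_nl = (F_B3 * P3 + F_B2 * P2) / (F_B3 + F_B2)    # narrow-lane code combination (m)
    return (phase_wl - code_nl) / LAM_EWL

rng = np.random.default_rng(2)
rho = 21_000_000.0                            # geometric range (m), illustrative
lam2, lam3 = C / F_B2, C / F_B3
N2, N3 = 11, 18                               # integer carrier ambiguities (cycles)
L2 = rho + lam2 * N2 + rng.normal(0, 0.003)   # ~3 mm phase noise
L3 = rho + lam3 * N3 + rng.normal(0, 0.003)
P2 = rho + rng.normal(0, 0.3)                 # ~0.3 m code noise
P3 = rho + rng.normal(0, 0.3)

N_float = ewl_float_ambiguity(L2, L3, P2, P3)
print("float EWL ambiguity:", round(N_float, 2), "-> fixed by rounding:", round(N_float))
# Expected value is N3 - N2 = 7; the ~4.9 m EWL wavelength makes rounding robust
# even with decimetre-level code noise.
```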
Neurocognitive and Behavioral Predictors of Math Performance in Children with and without ADHD
Antonini, Tanya N.; O’Brien, Kathleen M.; Narad, Megan E.; Langberg, Joshua M.; Tamm, Leanne; Epstein, Jeff N.
2014-01-01
Objective: This study examined neurocognitive and behavioral predictors of math performance in children with and without attention-deficit/hyperactivity disorder (ADHD). Method: Neurocognitive and behavioral variables were examined as predictors of 1) standardized mathematics achievement scores, 2) productivity on an analog math task, and 3) accuracy on an analog math task. Results: Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the Attentional Network Task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Conclusion: Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. PMID:24071774
Neurocognitive and Behavioral Predictors of Math Performance in Children With and Without ADHD.
Antonini, Tanya N; Kingery, Kathleen M; Narad, Megan E; Langberg, Joshua M; Tamm, Leanne; Epstein, Jeffery N
2016-02-01
This study examined neurocognitive and behavioral predictors of math performance in children with and without ADHD. Neurocognitive and behavioral variables were examined as predictors of (a) standardized mathematics achievement scores, (b) productivity on an analog math task, and (c) accuracy on an analog math task. Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the attentional network task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. © The Author(s) 2013.
Music Identification System Using MPEG-7 Audio Signature Descriptors
You, Shingchern D.; Chen, Wei-Hwa; Chen, Woei-Kae
2013-01-01
This paper describes a multiresolution system based on MPEG-7 audio signature descriptors for music identification. Such an identification system may be used to detect illegally copied music circulated over the Internet. In the proposed system, low-resolution descriptors are used to search likely candidates, and then full-resolution descriptors are used to identify the unknown (query) audio. With this arrangement, the proposed system achieves both high speed and high accuracy. To deal with the problem that a piece of query audio may not be inside the system's database, we suggest two different methods to find the decision threshold. Simulation results show that the proposed method II can achieve an accuracy of 99.4% for query inputs both inside and outside the database. Overall, it is highly possible to use the proposed system for copyright control. PMID:23533359
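The coarse-to-fine search strategy can be sketched as follows; the random vectors stand in for real MPEG-7 audio signature descriptors, and the candidate count and rejection threshold are assumptions rather than the paper's settings:

```python
# Sketch of the multiresolution search strategy: low-resolution fingerprints prune the
# database quickly, full-resolution fingerprints identify the query, and a distance
# threshold rejects queries that are not in the database at all.
import numpy as np

rng = np.random.default_rng(3)
n_tracks, full_dim, coarse_factor = 1000, 512, 8
db_full = rng.normal(size=(n_tracks, full_dim))
db_coarse = db_full.reshape(n_tracks, -1, coarse_factor).mean(axis=2)   # decimated descriptors

def identify(query_full, n_candidates=20, threshold=10.0):
    query_coarse = query_full.reshape(-1, coarse_factor).mean(axis=1)
    coarse_dist = np.linalg.norm(db_coarse - query_coarse, axis=1)
    candidates = np.argsort(coarse_dist)[:n_candidates]                 # cheap pruning pass
    fine_dist = np.linalg.norm(db_full[candidates] - query_full, axis=1)
    best = candidates[np.argmin(fine_dist)]
    if fine_dist.min() < threshold:
        return int(best), float(fine_dist.min())
    return None, float(fine_dist.min())      # query judged to be outside the database

query = db_full[42] + 0.01 * rng.normal(size=full_dim)   # slightly distorted copy of track 42
print(identify(query))                                    # -> (42, small distance)
print(identify(rng.normal(size=full_dim)))                # unknown query -> (None, large distance)
```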
NASA Astrophysics Data System (ADS)
Shimura, Kazuo; Nakajima, Nobuyoshi; Tanaka, Hiroshi; Ishida, Masamitsu; Kato, Hisatoyo
1993-09-01
Dual-energy X-ray absorptiometry (DXA) is one of the bone densitometry techniques used to diagnose osteoporosis, and it has gradually become popular due to its high degree of precision. However, DXA involves a time-consuming examination because of its pencil-beam scan, and the equipment is expensive. In this study, we examined a new bone densitometry technique (CR-DXA) utilizing an X-ray imaging system and Computed Radiography (CR) used for medical X-ray image diagnosis. High levels of measurement precision and accuracy could be achieved by X-ray tube voltage/filter optimization and various nonuniformity corrections based on simulation and experiment. The phantom study using a bone mineral block showed a precision of 0.83% c.v. (coefficient of variation) and an accuracy of 0.01 g/cm2, suggesting that a degree of measurement precision and accuracy practically equivalent to that of the DXA approach is achieved. CR-DXA is thus considered to provide simple, quick and precise bone mineral density measurement.
A Survey of the Isentropic Euler Vortex Problem Using High-Order Methods
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
The flux reconstruction (FR) method is simple, efficient, and easy to implement, and it has been shown to equate to a differential approach to discontinuous Galerkin (DG) methods. The FR method is also accurate to an arbitrary order, and the isentropic Euler vortex problem is used here to empirically verify this claim. This problem is widely used in computational fluid dynamics (CFD) to verify the accuracy of a given numerical method due to its simplicity and known exact solution at any given time. While verifying our FR solver, multiple obstacles emerged that prevented us from achieving the expected order of accuracy over both short and long simulation times. It was found that these complications stemmed from a few overlooked details in the original problem definition combined with the FR and DG methods achieving high accuracy with minimal dissipation. This paper is intended to consolidate the many versions of the vortex problem found in the literature and to highlight some of the consequences if these overlooked details remain neglected.
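For reference, one commonly used form of the vortex (the parameter choices below are illustrative assumptions; as noted above, several variants circulate in the literature) defines Gaussian velocity and temperature perturbations around an advected centre, so the exact solution at any time is just the initial vortex translated by the free stream:

```python
# One common form of the isentropic Euler vortex (free-stream state 1, vortex strength beta);
# the exact solution at time t is the initial vortex advected by the free stream, which is why
# the problem is used to measure a scheme's order of accuracy.
import numpy as np

GAMMA, BETA = 1.4, 5.0

def vortex_state(x, y, t=0.0, u_inf=1.0, v_inf=1.0, x0=0.0, y0=0.0, L=10.0):
    """Primitive variables (rho, u, v, p) of the vortex on a periodic box of size L."""
    # advect the vortex centre with the free stream (periodic wrap)
    xc = np.mod(x - x0 - u_inf * t + L / 2, L) - L / 2
    yc = np.mod(y - y0 - v_inf * t + L / 2, L) - L / 2
    r2 = xc**2 + yc**2
    f = BETA / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    u = u_inf - yc * f
    v = v_inf + xc * f
    T = 1.0 - (GAMMA - 1.0) * BETA**2 / (8.0 * GAMMA * np.pi**2) * np.exp(1.0 - r2)
    rho = T ** (1.0 / (GAMMA - 1.0))
    return rho, u, v, rho * T             # ideal gas with R = 1: p = rho * T

# Example: compare the exact field after one advection period with the initial field.
xs = np.linspace(-5, 5, 64, endpoint=False)
X, Y = np.meshgrid(xs, xs)
rho0, *_ = vortex_state(X, Y, t=0.0)
rho1, *_ = vortex_state(X, Y, t=10.0)     # one flow-through of the L = 10 box
print("L2(rho(t=T) - rho(0)) =", np.sqrt(np.mean((rho1 - rho0) ** 2)))   # ~0 for the exact solution
```

In a convergence study, the numerical field at t = T would replace the second call, and the L2 error versus grid spacing gives the observed order of accuracy.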
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
High-accurate optical vector analysis based on optical single-sideband modulation
NASA Astrophysics Data System (ADS)
Xue, Min; Pan, Shilong
2016-11-01
Most of the effort in optical communications has been devoted to improving optical spectral efficiency. Various innovative optical devices have thus been developed to finely manipulate the optical spectrum. Knowing the spectral responses of these devices, including the magnitude, phase and polarization responses, is of great importance for their fabrication and application. To achieve high-resolution characterization, optical vector analyzers (OVAs) based on optical single-sideband (OSSB) modulation have been proposed and developed. Benefiting from mature and high-resolution microwave technologies, the OSSB-based OVA can potentially achieve sub-Hz resolution. However, the accuracy is restricted by the measurement errors induced by the unwanted first-order sideband and the high-order sidebands in the OSSB signal, since electrical-to-optical and optical-to-electrical conversions are essentially required to achieve high-resolution frequency sweeping and to extract the magnitude and phase information in the electrical domain. Recently, great efforts have been devoted to improving the accuracy of the OSSB-based OVA. In this paper, the measurement errors induced by the unwanted sidebands and techniques for implementing highly accurate OSSB-based OVAs are discussed.
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. When compared against post-flyby reconstructions, the orbit solutions obtained with the precision-range filtering strategy proved markedly more accurate than solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
SVM classifier on chip for melanoma detection.
Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak
2017-07-01
Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented on a recent FPGA platform using the latest design methodology, to be embedded into the proposed device for realizing efficient online melanoma detection on a single system-on-chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 over an equivalent software implementation on an embedded processor, with 34% resource utilization and 2 W power consumption. Consequently, the implemented system meets crucial embedded-system constraints of high performance and low cost, resource utilization and power consumption, while achieving high classification accuracy.
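The software side of such an embedded classifier can be sketched as follows: train an SVM offline and then evaluate its decision function with an explicit kernel expansion, the form one would port to fixed-point hardware. The synthetic features, kernel choice, and parameters are assumptions, not the authors' melanoma feature set or FPGA design:

```python
# Sketch: train an SVM with scikit-learn, then evaluate its decision function with an
# explicit kernel expansion (the arithmetic that would be ported to hardware).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)             # toy "lesion" labels

clf = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X[:300], y[:300])

def decision_embedded(x, sv=clf.support_vectors_, alpha=clf.dual_coef_[0],
                      b=clf.intercept_[0], gamma=0.1):
    """Explicit RBF decision function: sum_i alpha_i * exp(-gamma * ||x - sv_i||^2) + b."""
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return float(alpha @ k + b)

pred = np.array([decision_embedded(x) > 0 for x in X[300:]])
print("agrees with sklearn:", np.array_equal(pred, clf.predict(X[300:]).astype(bool)))
print("accuracy on held-out set:", np.mean(pred == y[300:].astype(bool)))
```

The number of support vectors and the feature dimension directly determine the multiply-accumulate count per classification, which is what drives FPGA resource utilization.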
Highly accurate photogrammetric measurements of the Planck reflectors
NASA Astrophysics Data System (ADS)
Amiri Parian, Jafar; Gruen, Armin; Cozzani, Alessandro
2017-11-01
The Planck mission of the European Space Agency (ESA) is designed to image the anisotropies of the Cosmic Background Radiation Field over the whole sky. To achieve this aim, sophisticated reflectors are used as part of the Planck telescope receiving system. The system consists of secondary and primary reflectors which are sections of two different ellipsoids of revolution with mean diameters of 1 and 1.6 meters. Deformations of the reflectors, which influence the optical parameters and the gain of the received signals, are investigated in vacuum and at very low temperatures. For this investigation, among the various high-accuracy measurement techniques, photogrammetry was selected. With respect to the photogrammetric measurements, special considerations had to be taken into account in the design steps, measurement arrangement and data processing to achieve very high accuracies. The determinability of additional parameters of the camera under the given network configuration, datum definition, reliability and precision issues, as well as workspace limits and propagating errors from different sources, are considered. We designed an optimal photogrammetric network by heuristic simulation for the flight model of the primary and the secondary reflectors, with relative precisions better than 1:1,000,000 and 1:400,000, to achieve the requested accuracies. A least-squares best-fit ellipsoid method was developed to determine the optical parameters of the reflectors. In this paper we report on the procedures, the network design and the results of real measurements.
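A minimal sketch of the least-squares fitting idea is given below. It is an unconstrained algebraic quadric fit to 3-D target points; the authors' method additionally constrains the surface to an ellipsoid of revolution and derives the optical parameters, which is not reproduced here, and the target coordinates are simulated:

```python
# Sketch of an algebraic least-squares quadric fit to 3-D photogrammetric target points:
# solve for the coefficients of Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz = 1.
import numpy as np

def fit_quadric(pts):
    x, y, z = pts.T
    A = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)
    return coeffs

# Synthetic "measured" targets on an ellipsoid with semi-axes (0.8, 0.8, 0.5) m plus 10 um noise.
rng = np.random.default_rng(5)
u, v = rng.uniform(0, 2*np.pi, 2000), rng.uniform(0, np.pi, 2000)
pts = np.column_stack([0.8*np.cos(u)*np.sin(v), 0.8*np.sin(u)*np.sin(v), 0.5*np.cos(v)])
pts += rng.normal(0, 10e-6, pts.shape)

c = fit_quadric(pts)
# For this centred, axis-aligned ellipsoid the first three coefficients are 1/a^2, 1/b^2, 1/c^2:
print(np.round(1 / np.sqrt(c[:3]), 4))   # ~ [0.8, 0.8, 0.5]
```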
NASA Astrophysics Data System (ADS)
Weisz, Elisabeth; Smith, William L.; Smith, Nadia
2013-06-01
The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency, high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the main possible error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD and during a period of quiet ionospheric activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of the satellite velocity error, both of which increase linearly with DLTTD. The cycle slip of the asynchronous double-differenced carrier phase is detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a specific case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports the mobile phone network data link transfer mode for the reference receiver data. It avoids the data synchronization process, apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves a 20 Hz or even higher output rate in real time. The ARTK positioning accuracy is better and more robust than that of the combination of phase difference over time (PDOT) and SRTK at a high rate. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.
High accuracy position response calibration method for a micro-channel plate ion detector
NASA Astrophysics Data System (ADS)
Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.
2016-11-01
We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2015-01-01
To realize road traffic flow surveillance under various environments, including poor visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition on the windshields and their surroundings to detect vehicles. However, the first method loses vehicle detection accuracy in the winter season. To maintain high vehicle detection accuracy in all seasons, we developed the second method, which uses the tires' thermal energy reflection areas on the road as the detection targets. The second method did not achieve high detection accuracy for vehicles on the left-hand and right-hand lanes, only for the two center lanes. Therefore, we have developed a new method based on the second method to increase the vehicle detection accuracy. This paper proposes the new method and shows that the detection accuracy for vehicles on all lanes is 92.1%. Therefore, by combining the first method and the new method, high vehicle detection accuracies are maintained under various environments, and road traffic flow surveillance can be realized. PMID:25763384
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
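A toy version of the calibration idea, assuming a simple sine-drive dispersion model lambda(counts) = A*sin(k*counts + phi) rather than the paper's full multi-parameter, multi-spectrum model, fits reference lamp lines of known wavelength to recover the drive parameters and then predicts wavelengths at uncalibrated positions:

```python
# Sketch of calibrating a sine-drive scanning spectrometer: the wavelength is modeled as a
# sinusoidal function of the motor/encoder count and fit to known reference-lamp lines.
# The model form, parameter values, and noise level are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sine_drive(counts, A, k, phi):
    return A * np.sin(k * counts + phi)

A_true, k_true, phi_true = 9000.0, 2.0e-5, 0.05          # Angstrom, rad/count, rad (assumed)
rng = np.random.default_rng(6)
counts = np.array([10_000, 14_000, 18_000, 22_000, 26_000, 30_000], dtype=float)
lines = sine_drive(counts, A_true, k_true, phi_true) + rng.normal(0, 0.05, counts.size)

popt, _ = curve_fit(sine_drive, counts, lines, p0=[9000.0, 2.0e-5, 0.0])
test_count = 20_000.0
err = sine_drive(test_count, *popt) - sine_drive(test_count, A_true, k_true, phi_true)
print("predicted-wavelength error at an uncalibrated position: %.3f Angstrom" % err)
```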
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee that the desired accuracy is achieved. The numerical results for several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China, grants No. 41030746 and 41172206.
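A minimal sketch of the adaptive-sampling loop is given below; it uses a simple distance-times-gradient score and an RBF surrogate in place of the authors' Taylor-expansion metric, which is not reproduced, and the test function, candidate pool size, and tolerance are assumptions:

```python
# Minimal sketch of adaptive surrogate construction: score candidate points by combining
# distance to existing samples (exploration) with the surrogate's local variability
# (refinement), and stop when the error at a newly added point drops below a tolerance.
import numpy as np
from scipy.interpolate import RBFInterpolator

def model(x):                                      # expensive "true" model (illustrative)
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(10, 2))               # small initial design
y = model(X)
tol, err = 1e-2, np.inf

for it in range(60):
    surrogate = RBFInterpolator(X, y)
    cand = rng.uniform(-1, 1, size=(500, 2))       # cheap candidate pool
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2), axis=1)
    eps = 1e-3                                      # finite-difference gradient of the surrogate
    gx = (surrogate(cand + [eps, 0]) - surrogate(cand - [eps, 0])) / (2 * eps)
    gy = (surrogate(cand + [0, eps]) - surrogate(cand - [0, eps])) / (2 * eps)
    score = dist * (1.0 + np.hypot(gx, gy))         # exploration x refinement
    x_new = cand[np.argmax(score)][None, :]
    err = abs(surrogate(x_new)[0] - model(x_new)[0])   # accuracy check at the new point
    X, y = np.vstack([X, x_new]), np.append(y, model(x_new))
    if err < tol:
        break

print(f"samples used: {len(X)}, error at last added point: {err:.4f}")
```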
Guinan, Taryn M; Gustafsson, Ove J R; McPhee, Gordon; Kobus, Hilton; Voelcker, Nicolas H
2015-11-17
Nanostructure imaging mass spectrometry (NIMS) using porous silicon (pSi) is a key technique for molecular imaging of exogenous and endogenous low molecular weight compounds from fingerprints. However, high-mass-accuracy NIMS can be difficult to achieve as time-of-flight (ToF) mass analyzers, which dominate the field, cannot sufficiently compensate for shifts in measured m/z values. Here, we show internal recalibration using a thin layer of silver (Ag) sputter-coated onto functionalized pSi substrates. NIMS peaks for several previously reported fingerprint components were selected and mass accuracy was compared to theoretical values. Mass accuracy was improved by more than an order of magnitude in several cases. This straightforward method should form part of the standard guidelines for NIMS studies for spatial characterization of small molecules.
Precise Orbit Determination for ALOS
NASA Technical Reports Server (NTRS)
Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji
2007-01-01
The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy with SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.
Figure correction of a metallic ellipsoidal neutron focusing mirror
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya
2015-06-15
An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth-order accurate finite difference schemes, CWENO5 and MP5, for uniform Cartesian grids. The high-order schemes have now been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second-order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real-time code execution.
Development of CFRP mirrors for space telescopes
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2013-09-01
CFRP (carbon fiber reinforced plastics) have superior properties of high specific elasticity and low thermal expansion for satellite telescope structures. However, difficulties in achieving the required surface accuracy and in ensuring stability in orbit have discouraged the application of CFRP for main mirrors. We have developed ultra-lightweight, high-precision CFRP mirrors of sandwich structures composed of CFRP skins and CFRP cores using a replica technique. The shape accuracy of the demonstrated mirrors of 150 mm in diameter was 0.8 μm RMS (root mean square) and the surface roughness was 5 nm RMS as fabricated. Further optimization of the fabrication process conditions to improve surface accuracy was studied using flat sandwich panels. The surface accuracy of the flat CFRP sandwich panels of 150 mm square was then improved to a flatness of 0.2 μm RMS with a surface roughness of 6 nm RMS. The surface accuracy versus size of the trial models indicated a high possibility of fabricating mirrors over 1 m in size with a surface accuracy of 1 μm. The feasibility of CFRP mirrors for low-temperature applications was examined for the JASMINE project as an example. The stability of the surface accuracy of CFRP mirrors against temperature and moisture is discussed.
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The RF method based on grid-search parameter optimization could achieve a classification accuracy of 88.16% in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method performs better than SVM and ANN and is more capable of handling multidimensional feature variables. The RF method combined with an object-based analysis approach could improve the classification accuracy further. The multiresolution segmentation approach, on the basis of ESP scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached the highest value of 89.58%. The classification accuracy of object-based RF classification was 1.42% higher than that of pixel-based classification (88.16%), so the classification accuracy was further improved. Therefore, the RF classification method combined with an object-based analysis approach could achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method could provide technical support and a theoretical reference for remotely sensed monitoring of land reclamation.
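A hedged sketch of the grid-search-optimized random forest step is shown below; synthetic feature vectors stand in for the study's spectral and object-based variables, and the parameter grid is an assumption, not the study's settings:

```python
# Sketch of grid-search-optimized random-forest classification on synthetic feature vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20],
                "max_features": ["sqrt", 0.5]},
    cv=5, n_jobs=-1)
grid.fit(X_tr, y_tr)

print("best parameters:", grid.best_params_)
print("overall accuracy on held-out samples:", round(grid.score(X_te, y_te), 4))
```

In an object-based workflow, the same classifier would be applied to per-segment feature vectors instead of per-pixel vectors.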
Are friends electric?: A review of the electric handpiece in clinical dental practice.
Campbell, Stuart C
2013-04-01
Contemporary restorative procedures demand precise detail in tooth preparation to achieve optimal results. Inadequate tooth preparation is a frequent cause of failure. This review considers the electric high-speed, high-torque handpiece and how it may assist clinicians in achieving greater accuracy in tooth preparation. The electric handpiece provides a satisfactory alternative to the air-turbine and may be considered by clinicians who wish greater control with operative procedures.
A novel redundant INS based on triple rotary inertial measurement units
NASA Astrophysics Data System (ADS)
Chen, Gang; Li, Kui; Wang, Wei; Li, Peng
2016-10-01
Accuracy and reliability are two key performance measures of an inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for an INS to achieve higher navigation accuracy with lower-class sensors; therefore, the conflict between the accuracy and cost of an INS can be eased. Traditional system redundancy and recently researched sensor redundancy are the two primary means to improve the reliability of an INS. However, how to make the best use of the redundant information from redundant sensors has not been studied adequately, especially in rotational INS. This paper proposes a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide two sets of measurements along all three axes of the body frame to constitute a pair of INSs, which makes TRUSINS redundant. Experiments and simulations based on a prototype, which is made up of six fiber-optic gyros with a drift stability of 0.05° h^-1, show that TRUSINS can achieve a positioning accuracy of about 0.256 n mile h^-1, which is ten times better than that of a normal non-rotational INS with the same class of inertial sensors. The theoretical analysis and the experimental results show that, due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate up to six sensor faults, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for applications where high accuracy and high reliability are required.
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First, we show the upper and lower bounds of the prediction accuracies (i.e., the best and worst possible prediction accuracies) of ensemble methods. Next, we show that an ensemble method can achieve a prediction accuracy greater than 0.5 while the individual classifiers each have prediction accuracies below 0.5. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify these results and show that it is hard to reach the upper- and lower-bound accuracies with random individual classifiers, so better algorithms need to be developed. PMID:21853162
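A quick simulation illustrates how a majority vote relates to the individual accuracies; note that independence is assumed here, whereas the paper's bounds concern the best and worst possible dependence structures (including cases where an ensemble of below-0.5 classifiers exceeds 0.5), which this sketch does not reproduce:

```python
# Simulation of majority-vote accuracy for independent binary classifiers with a common
# individual accuracy p; independence is an assumption of this sketch only.
import numpy as np

rng = np.random.default_rng(8)

def majority_vote_accuracy(p_individual, n_classifiers=25, n_trials=100_000):
    # each classifier is independently correct with probability p_individual
    correct = rng.random((n_trials, n_classifiers)) < p_individual
    return np.mean(correct.sum(axis=1) > n_classifiers / 2)

for p in (0.45, 0.55, 0.65):
    print(f"individual accuracy {p:.2f} -> majority-vote accuracy "
          f"{majority_vote_accuracy(p):.3f}")
# Typical output: 0.45 -> ~0.31, 0.55 -> ~0.69, 0.65 -> ~0.94 (Condorcet-style amplification).
```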
Spectrally interleaved, comb-mode-resolved spectroscopy using swept dual terahertz combs
Hsieh, Yi-Da; Iyonaga, Yuki; Sakaguchi, Yoshiyuki; Yokoyama, Shuko; Inaba, Hajime; Minoshima, Kaoru; Hindle, Francis; Araki, Tsutomu; Yasui, Takeshi
2014-01-01
Optical frequency combs are innovative tools for broadband spectroscopy because a series of comb modes can serve as frequency markers that are traceable to a microwave frequency standard. However, a mode distribution that is too discrete limits the spectral sampling interval to the mode frequency spacing even though individual mode linewidth is sufficiently narrow. Here, using a combination of a spectral interleaving and dual-comb spectroscopy in the terahertz (THz) region, we achieved a spectral sampling interval equal to the mode linewidth rather than the mode spacing. The spectrally interleaved THz comb was realized by sweeping the laser repetition frequency and interleaving additional frequency marks. In low-pressure gas spectroscopy, we achieved an improved spectral sampling density of 2.5 MHz and enhanced spectral accuracy of 8.39 × 10^-7 in the THz region. The proposed method is a powerful tool for simultaneously achieving high resolution, high accuracy, and broad spectral coverage in THz spectroscopy. PMID:24448604
PHASE QUANTIZATION STUDY OF SPATIAL LIGHT MODULATOR FOR EXTREME HIGH-CONTRAST IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Jiangpei; Ren, Deqing, E-mail: jpdou@niaot.ac.cn, E-mail: jiangpeidou@gmail.com
2016-11-20
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problems that is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10^-10. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10^-10 in comparison to that by using a deformable mirror.
Phase Quantization Study of Spatial Light Modulator for Extreme High-contrast Imaging
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing
2016-11-01
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problems that is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10^-10. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10^-10 in comparison to that by using a deformable mirror.
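The basic quantization effect can be sketched by rounding an ideal wave-front correction to the SLM's controllable phase step and reporting the RMS residual; the phase-map statistics below are assumptions, and the mapping from residual phase to achievable contrast depends on the coronagraph model and is not reproduced here:

```python
# Sketch of the phase-quantization effect: quantize an ideal wave-front correction to the
# SLM's controllable phase step and report the RMS residual in waves.
import numpy as np

rng = np.random.default_rng(9)
ideal_phase = rng.normal(0, 5e-3, size=(64, 64))        # ideal correction in waves (illustrative)

for step_waves in (1 / 1000, 0.4 / 1000):               # phase accuracies discussed in the paper
    quantized = np.round(ideal_phase / step_waves) * step_waves
    residual_rms = np.sqrt(np.mean((ideal_phase - quantized) ** 2))
    print(f"step {step_waves:.4f} waves -> quantization residual {residual_rms:.2e} waves RMS")
# The residual scales with the step (~step/sqrt(12) for a uniform quantization error).
```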
ERIC Educational Resources Information Center
Jeffery, Daniel; Yankulov, Krassimir; Crerar, Alison; Ritchie, Kerry
2016-01-01
The psychometric measures of accuracy, reliability and validity of peer assessment are critical qualities for its use as a supplement to instructor grading. In this study, we seek to determine which factors related to peer review are the most influential on these psychometric measures, with a primary focus on the accuracy of peer assessment or how…
Wallace, Jonathan; Wang, Martha O; Thompson, Paul; Busso, Mallory; Belle, Vaijayantee; Mammoser, Nicole; Kim, Kyobum; Fisher, John P; Siblani, Ali; Xu, Yueshuo; Welter, Jean F; Lennon, Donald P; Sun, Jiayang; Caplan, Arnold I; Dean, David
2014-03-01
This study tested the accuracy of tissue engineering scaffold rendering via the continuous digital light processing (cDLP) light-based additive manufacturing technology. High accuracy (i.e., <50 µm) allows features relevant to three scale spaces (cell-scaffold, scaffold-tissue, and tissue-organ interactions) to perform as designed. The biodegradable polymer poly(propylene fumarate) was used to render highly accurate scaffolds through the use of a dye-initiator package, TiO2 and bis(2,4,6-trimethylbenzoyl)phenylphosphine oxide. This dye-initiator package facilitates high accuracy in the Z dimension. Linear, round, and right-angle features were measured to gauge accuracy. Most features showed accuracies within 5.4-15% of the design. However, one feature, an 800 µm diameter circular pore, exhibited a 35.7% average reduction of patency. Light scattered in the x and y directions by the dye may have reduced this feature's accuracy. Our new fine-grained understanding of accuracy could be used to make further improvements by including corrections in the scaffold design software. Successful cell attachment occurred with both canine and human mesenchymal stem cells (MSCs). Highly accurate cDLP scaffold rendering is critical to the design of scaffolds that both guide bone regeneration and fully resorb. Scaffold resorption must occur for regenerated bone to be remodeled and, thereby, achieve optimal strength.
Belgiu, Mariana; Dr Guţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be ruled out, so that a high level of classification accuracy can still be achieved.
Micro-assembly of three-dimensional rotary MEMS mirrors
NASA Astrophysics Data System (ADS)
Wang, Lidai; Mills, James K.; Cleghorn, William L.
2009-02-01
We present a novel approach to construct three-dimensional rotary micro-mirrors, which are fundamental components to build 1×N or N×M optical switching systems. A rotary micro-mirror consists of two microparts: a rotary micro-motor and a micro-mirror. Both microparts are fabricated with PolyMUMPs, a surface micromachining process. A sequential robotic microassembly process is developed to join the two microparts together to construct a three-dimensional device. In order to achieve high positioning accuracy and a strong mechanical connection, the micro-mirror is joined to the micro-motor using an adhesive mechanical fastener. The mechanical fastener has self-alignment ability and provides a temporary joint between the two microparts. The adhesive bonding can create a strong permanent connection, which does not require extra supporting plates for the micro-mirror. A hybrid manipulation strategy, which includes pick-and-place and pushing-based manipulations, is utilized to manipulate the micro-mirror. The pick-and-place manipulation has the ability to globally position the micro-mirror in six degrees of freedom. The pushing-based manipulation can achieve high positioning accuracy. This microassembly approach has great flexibility and high accuracy; furthermore, it does not require extra supporting plates, which greatly simplifies the assembly process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Run; Su, Peng; Burge, James H.
The Software Configurable Optical Test System (SCOTS) uses deflectometry to measure surface slopes of general optical shapes without the need for additional null optics. Careful alignment of test geometry and calibration of inherent system error improve the accuracy of SCOTS to a level where it competes with interferometry. We report a SCOTS surface measurement of an off-axis superpolished elliptical x-ray mirror that achieves <1 nm root-mean-square accuracy for the surface measurement with low-order terms included.
On what it means to know someone: a matter of pragmatics.
Gill, Michael J; Swann, William B
2004-03-01
Two studies provide support for W. B. Swann's (1984) argument that perceivers achieve substantial pragmatic accuracy--accuracy that facilitates the achievement of relationship-specific interaction goals--in their social relationships. Study 1 assessed the extent to which group members reached consensus regarding the behavior of a member in familiar (as compared with unfamiliar) contexts and found that groups do indeed achieve this form of pragmatic accuracy. Study 2 assessed the degree of insight romantic partners had into the self-views of their partners on relationship-relevant (as compared with less relevant) traits and found that couples do indeed achieve this form of pragmatic accuracy. Furthermore, pragmatic accuracy was uniquely associated with relationship harmony. Implications for a functional approach to person perception are discussed.
Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping
NASA Astrophysics Data System (ADS)
Mostafa, M. M. R.
2017-08-01
Recently, sophisticated multi-sensor systems have been implemented on-board modern Unmanned Aerial Systems. This allows for producing a variety of mapping products for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. There are a number of parameters that either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. An assessment of the final ground-object positioning accuracy is presented for eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
Analog Correlator Based on One Bit Digital Correlator
NASA Technical Reports Server (NTRS)
Prokop, Norman (Inventor); Krasowski, Michael (Inventor)
2017-01-01
A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.
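As an illustration of the one-bit correlation idea described above, the following minimal NumPy sketch hard-limits two sampled streams with an adaptive (median) threshold and scans the normalized Hamming distance over candidate lags; the lag with the smallest distance approximates the delay. The function name, the median threshold, and the lag range are illustrative assumptions, not details of the patented design.

```python
import numpy as np

def one_bit_delay_estimate(x, y, max_lag):
    """Estimate the delay between two equal-length streams via one-bit correlation."""
    # Hard-limit both streams with an adaptive (median) threshold -> binary bit streams.
    bx = (np.asarray(x) > np.median(x)).astype(np.uint8)
    by = (np.asarray(y) > np.median(y)).astype(np.uint8)
    lags = np.arange(-max_lag, max_lag + 1)
    dists = []
    for lag in lags:
        if lag >= 0:
            a, b = bx[lag:], by[:len(by) - lag]
        else:
            a, b = bx[:lag], by[-lag:]
        # Normalized Hamming distance between the overlapping portions of the bit streams.
        dists.append(np.count_nonzero(a != b) / len(a))
    return int(lags[int(np.argmin(dists))])  # lag of best agreement approximates the delay

# Toy example: y is x delayed by 7 samples plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 7) + 0.3 * rng.normal(size=2000)
print(one_bit_delay_estimate(x, y, max_lag=20))
```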
A fuzzy pattern matching method based on graph kernel for lithography hotspot detection
NASA Astrophysics Data System (ADS)
Nitta, Izumi; Kanazawa, Yuzi; Ishida, Tsutomu; Banno, Koji
2017-03-01
In advanced technology nodes, lithography hotspot detection has become one of the most significant issues in design for manufacturability. Recently, machine learning based lithography hotspot detection has been widely investigated, but it involves a trade-off between detection accuracy and false alarms. To apply machine learning based techniques in the physical verification phase, designers require minimizing undetected hotspots to avoid yield degradation. They also need a ranking of known patterns similar to a detected hotspot in order to prioritize layout patterns to be corrected. To achieve high detection accuracy and to prioritize detected hotspots, we propose a novel lithography hotspot detection method using Delaunay triangulation and graph kernel based machine learning. Delaunay triangulation extracts features of hotspot patterns in which polygons are located irregularly and close to one another, and the graph kernel expresses the inner structure of the graphs. Additionally, our method provides the similarity between two patterns and creates a list of training patterns similar to a detected hotspot. Experimental results on the ICCAD 2012 benchmarks show that our method achieves high accuracy with an allowable range of false alarms. We also show the ranking of the known patterns similar to a detected hotspot.
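As a rough illustration of the layout-feature step, the sketch below builds a Delaunay triangulation over the polygon centroids of a layout clip and summarizes edge-length statistics. The actual method feeds the triangulation into a graph kernel, which is not reproduced here; the feature choice and function names are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import Delaunay

def layout_triangulation_features(centroids):
    """Delaunay triangulation over polygon centroids; return simple edge-length
    statistics as a stand-in for the graph-kernel feature representation."""
    tri = Delaunay(np.asarray(centroids, float))
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            edges.add((a, b))
    lengths = np.array([np.linalg.norm(tri.points[a] - tri.points[b]) for a, b in edges])
    return np.array([lengths.mean(), lengths.std(), lengths.min(), lengths.max()])

# Toy clip: centroids of a few irregularly placed polygons (arbitrary units).
print(layout_triangulation_features([(0, 0), (1.2, 0.1), (0.3, 1.0), (1.5, 1.4), (2.4, 0.6)]))
```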
Statistical algorithms improve accuracy of gene fusion detection
Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro
2017-01-01
Abstract Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
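One common way to reach millisecond-level timing in a high-level language on a non-real-time operating system is to combine a coarse sleep with a short busy-wait on a high-resolution clock; the sketch below illustrates that general pattern in Python. It is a generic illustration of the principle rather than the toolkit described in the paper, and the 2 ms spin margin is an assumed value.

```python
import time

def wait_until(t_target, spin_margin=0.002):
    """Sleep until ~spin_margin seconds before t_target, then busy-wait.

    Sleeping releases the CPU; the final busy-wait avoids scheduler wake-up
    jitter, which usually dominates the timing error of a plain sleep().
    """
    while True:
        remaining = t_target - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_margin:
            time.sleep(remaining - spin_margin)

# Example: schedule an event 10 ms from now and measure the overshoot.
t0 = time.perf_counter()
wait_until(t0 + 0.010)
print((time.perf_counter() - t0 - 0.010) * 1e3, "ms overshoot")
```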
Multi-Autonomous Ground-robotic International Challenge (MAGIC) 2010
2010-12-14
SLAM technique since this setup, having a LIDAR with long-range high-accuracy measurement capability, allows accurate localization and mapping more...achieve the accuracy of 25 cm due to the use of multi-dimensional information. OGM is, similarly to SLAM, carried out by using LIDAR data. The OGM...a result of the development and implementation of the hybrid feature-based/scan-matching Simultaneous Localization and Mapping (SLAM) technique, the
Photon caliper to achieve submillimeter positioning accuracy
NASA Astrophysics Data System (ADS)
Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan
2017-09-01
The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.
Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-03-01
Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analyzing system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations.
Overlay accuracy on a flexible web with a roll printing process based on a roll-to-roll system.
Chang, Jaehyuk; Lee, Sunggun; Lee, Ki Beom; Lee, Seungjun; Cho, Young Tae; Seo, Jungwoo; Lee, Sukwon; Jo, Gugrae; Lee, Ki-yong; Kong, Hyang-Shik; Kwon, Sin
2015-05-01
For high-quality flexible devices from printing processes based on Roll-to-Roll (R2R) systems, overlay alignment during the patterning of each functional layer poses a major challenge. The reason is because flexible substrates have a relatively low stiffness compared with rigid substrates, and they are easily deformed during web handling in the R2R system. To achieve a high overlay accuracy for a flexible substrate, it is important not only to develop web handling modules (such as web guiding, tension control, winding, and unwinding) and a precise printing tool but also to control the synchronization of each unit in the total system. A R2R web handling system and reverse offset printing process were developed in this work, and an overlay between the 1st and 2nd layers of ±5μm on a 500 mm-wide film was achieved at a σ level of 2.4 and 2.8 (x and y directions, respectively) in a continuous R2R printing process. This paper presents the components and mechanisms used in reverse offset printing based on a R2R system and the printing results including positioning accuracy and overlay alignment accuracy.
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis
Myburgh, Hermanus C.; van Zijl, Willemien H.; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-01-01
Background Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analyzing system that could assist in making accurate otitis media diagnoses anywhere. Methods A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. Findings An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low cost custom-made video-otoscope. Interpretation The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations. PMID:27077122
NASA Astrophysics Data System (ADS)
Blaser, S.; Nebiker, S.; Cavegn, S.
2017-05-01
Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide a 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward and backward looking stereo camera systems, we achieve a full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with a superior accuracy compared to that of previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.
Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa
2018-06-05
Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events accurately and efficiently, two major challenges must be addressed. One challenge is how to achieve high accuracy despite a poor signal-to-noise ratio (SNR). The other concerns real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is designed, and this model is trained by using previously obtained data. Once the model is fully trained, it is sent to the edge components for event detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data are delivered to the data center. Based on experimental results, a high detection accuracy (over 96%) was achieved with a reduction of about 90% in the transmitted data by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.
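A minimal Keras sketch of a CNN+LSTM event detector of the kind described above is shown below; the window length, layer sizes, and the binary event/noise output are assumptions made for illustration and do not reproduce the authors' architecture or the edge-side probabilistic inference.

```python
import tensorflow as tf
from tensorflow.keras import layers

# One windowed, single-channel microseismic trace in; probability of "event" out.
model = tf.keras.Sequential([
    layers.Input(shape=(2048, 1)),           # assumed window length of 2048 samples
    layers.Conv1D(16, 7, activation="relu"), # local waveform features
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.LSTM(64),                         # temporal context across the window
    layers.Dense(1, activation="sigmoid"),   # 1 = event, 0 = noise
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```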
Parkinson's disease detection based on dysphonia measurements
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Assessing dysphonic symptoms is a noninvasive and effective approach to detect Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88 % ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94 % ± 0.03 accuracy. A refinement of the original patterns space by removing dysphonia measurements with similar variation across healthy and PD subjects allows achieving 97.03 % ± 0.03 accuracy. The latter performance is larger than what is reported in the literature on the same dataset with ten-fold cross-validation technique. Finally, it was found that measures of ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms to detect PD subjects as they achieve 99.64 % ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.
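A minimal scikit-learn sketch of the evaluation protocol described above (an RBF-kernel SVM scored with ten-fold cross-validation) is shown below. The random placeholder matrix merely stands in for the dysphonia measurements, and the hyperparameters are library defaults rather than the values used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the dysphonia measurements (n_recordings x n_features).
rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))        # e.g., 195 voice recordings, 22 measures (assumed)
y = rng.integers(0, 2, size=195)      # 1 = Parkinson's disease, 0 = healthy (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```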
Detection of eardrum abnormalities using ensemble deep learning approaches
NASA Astrophysics Data System (ADS)
Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.
2018-02-01
In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using additional 673 unlabeled eardrum samples. The individual classification accuracies of the Network 1 and Network 2 were calculated as 84.4%(+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
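The ensembling step itself can be as simple as combining the two networks' class probabilities; the sketch below averages per-class probabilities with optional weights and takes the argmax. The equal weighting is an assumption for illustration; the paper does not state how the two outputs are combined.

```python
import numpy as np

def ensemble_predict(p1, p2, w1=0.5, w2=0.5):
    """Combine per-class probabilities from two classifiers.

    p1, p2: arrays of shape (n_samples, n_classes), e.g. softmax outputs of
    a transfer-learned network and an autoencoder-based network.
    """
    p = w1 * np.asarray(p1, float) + w2 * np.asarray(p2, float)
    return p.argmax(axis=1)

# Toy example with two samples and two classes (0 = normal, 1 = abnormal).
print(ensemble_predict([[0.7, 0.3], [0.4, 0.6]],
                       [[0.6, 0.4], [0.2, 0.8]]))
```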
Multi-look fusion identification: a paradigm shift from quality to quantity in data samples
NASA Astrophysics Data System (ADS)
Wong, S.
2009-05-01
A multi-look identification method known as score-level fusion is found to be capable of achieving very high identification accuracy, even when low quality target signatures are used. Analysis using measured ground vehicle radar signatures has shown that a 97% correct identification rate can be achieved using this multi-look fusion method; in contrast, only a 37% accuracy rate is obtained when a single target signature is used as input. The results suggest that quantity can be used to replace quality of the target data in improving identification accuracy. With the advent of sensor technology, a large amount of target signatures of marginal quality can be captured routinely. This quantity over quality approach allows maximum exploitation of the available data to improve the target identification performance and this could have the potential of being developed into a disruptive technology.
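Score-level fusion across looks can be sketched as accumulating per-class evidence over the available signatures and picking the class with the highest total; a minimal version using summed log-scores is shown below. Treating the per-look scores as independent likelihoods is an assumption made for illustration, not a detail of the reported method.

```python
import numpy as np

def fuse_looks(look_scores):
    """look_scores: (n_looks, n_classes) classifier scores (e.g. posteriors) for one
    target, one row per look/aspect angle. Returns the fused class index."""
    s = np.clip(np.asarray(look_scores, float), 1e-12, None)
    return int(np.log(s).sum(axis=0).argmax())

# Three noisy looks at a target; each row is a score over four vehicle classes.
looks = [[0.5, 0.3, 0.1, 0.1],
         [0.4, 0.4, 0.1, 0.1],
         [0.5, 0.2, 0.2, 0.1]]
print(fuse_looks(looks))  # -> 0, the class with the highest accumulated evidence
```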
A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.
2016-01-01
The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…
Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.
Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G
2012-11-13
The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.
Improving IMES Localization Accuracy by Integrating Dead Reckoning Information
Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki
2016-01-01
Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
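For the trilateration component mentioned above, a standard linearized least-squares fix from anchor positions and measured ranges can be sketched as follows; this is a generic formulation rather than the paper's exact estimator, and the transmitter layout and ranges in the example are made up.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linearized least-squares 2D position fix from >=3 anchors (x, y) and ranges."""
    anchors = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], r[0]
    # Subtracting the first range equation from the others removes the quadratic terms.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - r[1:]**2
         + anchors[1:, 0]**2 - x0**2
         + anchors[1:, 1]**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]   # hypothetical IMES transmitter positions (m)
print(trilaterate(anchors, [2.9, 3.7, 3.4]))      # ranges measured to each transmitter (m)
```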
On-chip magnetically actuated robot with ultrasonic vibration for single cell manipulations.
Hagiwara, Masaya; Kawahara, Tomohiro; Yamanishi, Yoko; Masuda, Taisuke; Feng, Lin; Arai, Fumihito
2011-06-21
This paper presents an innovative driving method for an on-chip robot actuated by permanent magnets in a microfluidic chip. A piezoelectric ceramic is applied to induce ultrasonic vibration to the microfluidic chip and the high-frequency vibration reduces the effective friction on the MMT significantly. As a result, we achieved 1.1 micrometre positioning accuracy of the microrobot, which is 100 times higher than without vibration. The response speed is also improved and the microrobot can be actuated with a speed of 5.5 mm s(-1) in 3 degrees of freedom. The novelty of the ultrasonic vibration appears in the output force as well. In contrast to the reduction of friction on the microrobot, the output force was doubled by the ultrasonic vibration. Using this high accuracy, high speed, and high power microrobot, swine oocyte manipulations are presented in a microfluidic chip.
Relative Navigation Strategies for the Magnetospheric Multiscale Mission
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, Russell; Lee, Taesul; Long, Anne
2004-01-01
This paper evaluates several navigation approaches for the Magnetospheric Multiscale (MMS) mission, which consists of a tetrahedral formation of satellites flying in highly eccentric Earth orbits. For this investigation, inter-satellite separations of approximately 10 kilometers near apogee are used for the first two phases of the MMS mission. Navigation approaches were studied using ground station two-way Doppler measurements, Global Positioning System (GPS) pseudorange measurements, and cross-link range measurements between the members of the formation. An absolute position accuracy of 15 kilometers or better can be achieved with most of the approaches studied, and a relative position accuracy of 100 meters or better can be achieved at apogee in several cases.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
NASA Astrophysics Data System (ADS)
Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore
2017-10-01
This study evaluates the impact of three Feature Selection (FS) algorithms in an Object Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracies of the SVM and KNN classifiers are the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed the best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance using each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and easing model interpretation.
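A compact scikit-learn version of the RF-based recursive feature elimination step can be sketched as below; the cross-validation scheme, forest size, and the random placeholder matrix standing in for the per-object OBIA features are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Placeholder object features (spectral/textural/geometric) and LULC class labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 5, size=300)

# Recursively drop the least important feature, scoring each subset by cross-validation.
selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=1,
    cv=StratifiedKFold(5),
    scoring="accuracy",
)
selector.fit(X, y)
print("features kept:", selector.n_features_)
print("selection mask:", selector.support_)
```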
[Study on high accuracy detection of multi-component gas in oil-immerse power transformer].
Fan, Jie; Chen, Xiao; Huang, Qi-Feng; Zhou, Yu; Chen, Gang
2013-12-01
In order to solve the problem of low accuracy and mutual interference in multi-component gas detection, a kind of multi-component gas detection network with high accuracy was designed. A semiconductor laser with narrow bandwidth was utilized as light source and a novel long-path gas cell was also used in this system. By taking the single sine signal to modulate the spectrum of laser and using space division multiplexing (SDM) and time division multiplexing (TDM) technique, the detection of multi-component gas was achieved. The experiments indicate that the linearity relevance coefficient is 0. 99 and the measurement relative error is less than 4%. The system dynamic response time is less than 15 s, by filling a volume of multi-component gas into the gas cell gradually. The system has advantages of high accuracy and quick response, which can be used in the fault gas on-line monitoring for power transformers in real time.
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant sub-steps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
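The cost saving from identical substeps comes from factorizing the step matrix once and reusing it for every solve; the sketch below illustrates that idea with implicit-Euler substeps as a simple stand-in for CRAM's pole-shifted solves (the actual CRAM coefficients are omitted). The matrix, time step, and substep count are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

def advance_substeps(A, n0, dt, n_sub):
    """Advance dn/dt = A n over dt using n_sub identical implicit-Euler substeps.

    The factorization of (I - h A) is formed once and reused for every substep,
    mirroring how identical CRAM substeps reuse their LU decompositions.
    """
    h = dt / n_sub
    A = csc_matrix(A)
    lu = splu((identity(A.shape[0], format="csc") - h * A).tocsc())
    n = np.asarray(n0, float)
    for _ in range(n_sub):
        n = lu.solve(n)
    return n

# Toy two-nuclide chain: A -> B with decay constant 0.1 per unit time, B stable.
A = [[-0.1, 0.0],
     [ 0.1, 0.0]]
print(advance_substeps(A, [1.0, 0.0], dt=10.0, n_sub=8))
```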
High accuracy in short ISS missions
NASA Astrophysics Data System (ADS)
Rüeger, J. M.
1986-06-01
Traditionally, Inertial Surveying Systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emanating from an increased need for survey control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5 minute ZUPT intervals, mean offset distances 9 metres) accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; Reshef, Yakir A.; Finucane, Hilary K.; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R.; Durbin, Richard; Price, Alkes L.
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
Preliminary results from the portable standard satellite laser ranging intercomparison with MOBLAS-7
NASA Technical Reports Server (NTRS)
Selden, Michael; Varghese, Thomas K.; Heinick, Michael; Oldham, Thomas
1993-01-01
Conventional Satellite Laser Ranging (SLR) instrumentation has been configured and successfully used to provide high-accuracy laboratory measurements on the LAGEOS-2 and TOPEX cube-corner arrays. The instrumentation, referred to as the Portable Standard, has also been used for field measurements of satellite ranges in tandem with MOBLAS-7. Preliminary results of the SLR measurements suggest that improved range accuracy can be achieved using this system. Results are discussed.
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
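As a schematic counterpart to the four-layer network described above, the sketch below trains a small multilayer perceptron on per-region feature vectors (e.g., texture and brightness statistics) using only 20% of the data for training; the layer sizes and the synthetic placeholder features are assumptions and do not reproduce the study's architecture or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Placeholder per-region texture/brightness features and cloud-class labels
# (0 = clear, 1 = cumulus, 2 = stratocumulus, 3 = cirrus) for illustration only.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 4, size=500)
# Train on 20% of the samples, echoing the study's small-training-set finding.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)

# Two hidden layers -> a four-layer network when counting input and output layers.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```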
Tense Marking in the English Narrative Retells of Dual Language Preschoolers.
Gusewski, Svenja; Rojas, Raúl
2017-07-26
This longitudinal study investigated the emergence of English tense marking in young (Spanish-English) dual language learners (DLLs) over 4 consecutive academic semesters, addressing the need for longitudinal data on typical acquisition trajectories of English in DLL preschoolers. Language sample analysis was conducted on 139 English narrative retells elicited from 39 preschool-age (Spanish-English) DLLs (range = 39-65 months). Growth curve models captured within- and between-individual change in tense-marking accuracy over time. Tense-marking accuracy was indexed by the finite verb morphology composite and by 2 specifically developed adaptations. Individual tense markers were systematically described in terms of overall accuracy and specific error patterns. Tense-marking accuracy exhibited significant growth over time for each composite. Initially, irregular past-tense accuracy was higher than regular past-tense accuracy; over time, however, regular past-tense marking outpaced accuracy on irregular verbs. These findings suggest that young DLLs can achieve high tense-marking accuracy assuming 2 years of immersive exposure to English. Monitoring the growth in tense-marking accuracy over time and considering productive tense-marking errors as partially correct more precisely captured the emergence of English tense marking in this population with highly variable expressive language skills. https://doi.org/10.23641/asha.5176942.
Supervised segmentation of microelectrode recording artifacts using power spectral density.
Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert
2015-08-01
Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification, based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved accuracy of about 77% on both training and testing data.
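A minimal sketch of a PSD-based supervised classifier of MER segments is shown below: Welch power spectra are reduced to band-averaged log-power features and fed to an SVM. The sampling rate, band count, and the synthetic segments are assumptions; the published feature set and classifier settings may differ.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def psd_features(segment, fs=24000, n_bands=20):
    """Band-averaged log-power of one MER segment (sampling rate fs is assumed)."""
    _, pxx = welch(segment, fs=fs, nperseg=1024)
    bands = np.array_split(np.log10(pxx + 1e-20), n_bands)
    return np.array([b.mean() for b in bands])

# Synthetic stand-ins: 'clean' white-noise segments vs. 'artifact' segments with 50 Hz hum.
rng = np.random.default_rng(3)
t = np.arange(24000) / 24000
clean = [rng.normal(size=24000) for _ in range(40)]
artifact = [rng.normal(size=24000) + 5 * np.sin(2 * np.pi * 50 * t) for _ in range(40)]

X = np.vstack([psd_features(s) for s in clean + artifact])
y = np.array([0] * 40 + [1] * 40)        # 0 = clean, 1 = artifact
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```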
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages from the Internet within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic. Using these techniques, higher levels of accuracy cannot be achieved. To achieve these goals, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of the ids of the keywords in the keyword list. Also, stemming of keywords and tag text is done to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy for different categories of web pages.
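A much-simplified sketch of such a pipeline (HTML text extraction followed by an SVM) is given below, using TF-IDF features in place of the paper's DOM-based keyword weighting; the tiny inline pages and category labels are made-up placeholders.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def strip_html(page):
    """Crude tag removal; the original scheme walks the HTML DOM and weights tag text."""
    return re.sub(r"<[^>]+>", " ", page)

# Placeholder training pages and domain categories.
pages = [
    "<html><title>Team wins cup</title><body>football match goal score</body></html>",
    "<html><title>New GPU launched</title><body>benchmark chip hardware driver</body></html>",
    "<html><title>Transfer rumours</title><body>club player league season</body></html>",
    "<html><title>Laptop review</title><body>battery display processor memory</body></html>",
]
labels = ["sports", "technology", "sports", "technology"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit([strip_html(p) for p in pages], labels)
print(clf.predict([strip_html("<p>second half goal by the striker</p>")]))
```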
Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.
Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm
2018-05-16
This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy rate in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.
An Improved BLE Indoor Localization with Kalman-Based Fusion: An Experimental Study
Röbesaat, Jenny; Zhang, Peilin; Abdelaal, Mohamed; Theel, Oliver
2017-01-01
Indoor positioning has attracted great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in all situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as the position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter. PMID:28445421
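The fusion step can be sketched as a constant-position Kalman filter in which dead-reckoning displacements drive the prediction and BLE trilateration fixes drive the update; the noise values below are placeholders, and the class is a generic illustration rather than the authors' filter.

```python
import numpy as np

class PositionKalman:
    """Constant-position Kalman filter: dead reckoning drives the prediction,
    BLE trilateration fixes drive the update (2D, assumed noise levels)."""

    def __init__(self, p0, P0=1.0, q=0.05, r=1.0):
        self.x = np.asarray(p0, float)   # 2D position estimate (m)
        self.P = np.eye(2) * P0          # estimate covariance
        self.Q = np.eye(2) * q           # dead-reckoning (process) noise
        self.R = np.eye(2) * r           # trilateration (measurement) noise

    def predict(self, step_vector):
        self.x = self.x + np.asarray(step_vector, float)  # dead-reckoned displacement
        self.P = self.P + self.Q

    def update(self, z):
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)                     # Kalman gain
        self.x = self.x + K @ (np.asarray(z, float) - self.x)
        self.P = (np.eye(2) - K) @ self.P

kf = PositionKalman(p0=[0.0, 0.0])
kf.predict(step_vector=[0.6, 0.1])   # one dead-reckoned step (m)
kf.update(z=[0.5, 0.2])              # trilateration fix (m)
print(kf.x)
```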
Niki, Yasuo; Takeda, Yuki; Harato, Kengo; Suda, Yasunori
2015-11-01
Achievement of very deep knee flexion after total knee arthroplasty (TKA) can play a critical role in the satisfaction of patients who demand a floor-sitting lifestyle and engage in high-flexion daily activities (e.g., seiza-sitting). Seiza-sitting is characterized by the knees flexed >145° and feet turned sole upwards underneath the buttocks with the tibia internally rotated. The present study investigated factors affecting the achievement of seiza-sitting after TKA using posterior-stabilized total knee prosthesis with high-flex knee design. Subjects comprised 32 patients who underwent TKA with high-flex knee prosthesis and achieved seiza-sitting (knee flexion >145°) postoperatively. Another 32 patients served as controls who were capable of knee flexion >145° preoperatively, but failed to achieve seiza-sitting postoperatively. Accuracy of femoral and tibial component positions was assessed in terms of deviation from the ideal position using a two-dimensional to three-dimensional matching technique. Accuracies of the component position, posterior condylar offset ratio and intraoperative gap length were compared between the two groups. The proportion of patients with >3° internally rotated tibial component was significantly higher in patients who failed at seiza-sitting (41 %) than among patients who achieved it (13 %, p = 0.021). Comparison of intraoperative gap length between patient groups revealed that gap length at 135° flexion was significantly larger in patients who achieved seiza-sitting (4.2 ± 0.4 mm) than in patients who failed at it (2.7 ± 0.4 mm, p = 0.007). Conversely, no significant differences in gap inclination were seen between the groups. From the perspective of surgical factors, accurate implant positioning, particularly rotational alignment of the tibial component, and maintenance of a sufficient joint gap at 135° flexion appear to represent critical factors for achieving >145° of deep knee flexion after TKA.
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses a low data rate and an extensive frame-averaging technique to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper will introduce a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system could significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data could be extracted from the full field. This new metrology capability will further improve data collection speed to support the need for large volumes of metrology data for OPC model calibration of next generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current flow from metrology data collection and processing to model calibration, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume of data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate a large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.
Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian
2017-01-01
The underwater navigation system, consisting mainly of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high accuracy navigation and positioning over long periods of time. However, navigation errors accumulate over time because of the inherent errors of the inertial sensors, especially for the MEMS grade IMU (Inertial Measurement Unit) generally used in gliders. A dead reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate and fixed error model for the inertial sensors. Therefore, it is very hard to improve the accuracy of the navigation information calculated from the sensors. To solve this problem, a more suitable filter that integrates the multi-model method with an EKF approach can be designed according to the different error models to give an optimal estimate of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high prediction precision, is simultaneously fused into the above filter to achieve a much greater improvement in attitude estimation accuracy. The proposed algorithm has been proved through theoretical analyses and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in terms of attitude estimation compared with the other methods mentioned in the paper for inertial navigation applied to underwater gliders.
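The Adams explicit (Adams-Bashforth) predictor referred to above can be illustrated with its two-step form; the sketch below shows the generic integrator only, not the authors' full multi-model EKF, and the toy dynamics are made up.

```python
import numpy as np

def adams_bashforth2(f, t, x, f_prev, h):
    """One explicit two-step Adams-Bashforth step for dx/dt = f(t, x).

    f_prev is f evaluated at the previous step; returns (x_next, f_now) so the
    caller can pass f_now back in as f_prev on the next call.
    """
    f_now = f(t, x)
    x_next = x + h * (1.5 * f_now - 0.5 * f_prev)
    return x_next, f_now

# Toy attitude-like dynamics: a damped scalar rate, integrated with AB2.
f = lambda t, x: -0.5 * x
h, x, t = 0.1, np.array([1.0]), 0.0
f_prev = f(t - h, x)              # bootstrap with an approximate previous slope
for _ in range(10):
    x, f_prev = adams_bashforth2(f, t, x, f_prev, h)
    t += h
print(x)
```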
A kinematic/kinetic hybrid airplane simulator model : draft.
DOT National Transportation Integrated Search
2008-01-01
A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...
A kinematic/kinetic hybrid airplane simulator model.
DOT National Transportation Integrated Search
2008-01-01
A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
Very high resolution aerial films
NASA Astrophysics Data System (ADS)
Becker, Rolf
1986-11-01
The use of very high resolution aerial films in aerial photography is evaluated. Commonly used panchromatic, color, and CIR films and their high resolution equivalents are compared. Based on practical experience and systematic investigations, the very high image quality and improved height accuracy that can be achieved using these films are demonstrated. Advantages to be gained from this improvement and operational restrictions encountered when using high resolution film are discussed.
"Application of Tunable Diode Laser Spectrometry to Isotopic Studies for Exobiology"
NASA Technical Reports Server (NTRS)
Sauke, Todd B.
1999-01-01
Computer-controlled electrically-activated valves for rapid gas-handling have been incorporated into the Stable Isotope Laser Spectrometer (SILS), which now permits rapid filling and evacuating of the sample and reference gas cells. Experimental protocols have been developed to take advantage of the fast gas handling capabilities of the instrument and to achieve increased accuracy which results from reduced instrumental drift during rapid isotopic ratio measurements. Using these protocols, accuracies of 0.5 del (0.05%) have been achieved in measurements of 13C/12C in carbon dioxide. Using the small stable isotope laser spectrometer developed in a related PIDDP project of the Co-I, protocols for acquisition of rapid sequential calibration spectra were developed which resulted in 0.5 del accuracy also being achieved in this less complex instrument. An initial version of software for automatic characterization of tunable diode lasers has been developed and diodes have been characterized in order to establish their spectral output properties. A new state-of-the-art high operating temperature (200 K) mid infrared diode laser was purchased (through NASA procurement) and characterized. A thermo-electrically cooled mid infrared tunable diode laser system for use with high temperature operation lasers was developed. In addition to isotopic ratio measurements of carbon and oxygen, measurements of a third biologically important element (15N/14N in N2O gas) have been achieved to a preliminary accuracy of about 0.2%. Transfer of the basic SILS technology to the commercial sector is proceeding under an unfunded Space Act Agreement between NASA and SpiraMed, a medical diagnostic instrument company. Two patents have been issued. Foreign patents based on these two US patents have been applied for and are expected to be issued. A preliminary design was developed for a thermo-electrically cooled SILS instrument for application to planetary space flight exploration missions.
Design and Development of the Terrain Information Extraction System
1990-09-04
system successfully demonstrated relief measurement and orthophoto production, automated feature extraction has remained "the major problem of today's...the hierarchical relaxation correlation method developed by Helava Associates, Inc. and digital orthophoto production. To achieve this high accuracy...image memory transfer rates will be achieved by using data blocks or "image tiles." Further, an image fringe loading module will be implemented which
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are associated with noise such as shadows that reduce the accuracy of feature extraction. Feature extraction heavily relies on the reflectance purity of objects, which is difficult to perfect in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability image generation ensures that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
NASA Astrophysics Data System (ADS)
Hasözbek, Altug; Mathew, Kattathu; Wegener, Michael
2013-04-01
Total evaporation (TE) is a well-established analytical method for safeguards measurements of uranium and plutonium isotope-amount ratios using thermal ionization mass spectrometry (TIMS). High-accuracy and high-precision isotopic measurements find many applications in nuclear safeguards, e.g., assay measurements using isotope dilution mass spectrometry. To achieve high accuracy and precision in TIMS measurements, mass-dependent fractionation effects are minimized either by the measurement technique or by changes in the hardware components that control sample heating and the evaporation process. At NBL, the direct total evaporation (DTE) method on the modified MAT261 instrument uses the data system to read the ion signal intensity and its difference from a pre-determined target intensity, and thereby controls the incremental step at which the evaporation filament is heated. The feedback and control are achieved by proprietary hardware from SPECTROMAT that uses an analog regulator in the filament power supply with direct feedback of the detector intensity. Compared to the traditional TE method on this instrument, DTE provides better precision (relative standard deviation, expressed as a percent) and accuracy (relative difference, expressed as a percent), of 0.05 to 0.08%, for low-enriched and high-enriched NBL uranium certified reference materials.
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, the complexity of the network and high churn lead to a high message loss rate. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed and, on this basis, an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
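As an illustration of the adaptive idea behind such detectors (not the B-AFD algorithm itself, whose QoS model is specific to the baseline retransmission strategy), the following minimal Python sketch keeps running estimates of the heartbeat inter-arrival time and its deviation and derives the suspicion timeout from both, so the detection time adapts to changing network conditions. All parameter values are illustrative assumptions.

```python
import time

class AdaptiveFailureDetector:
    """Illustrative heartbeat-timeout estimator (not the B-AFD algorithm itself).

    Keeps an exponential moving average of heartbeat inter-arrival times and
    their deviation, and derives the next timeout from both, so the detection
    time adapts to current network conditions.
    """

    def __init__(self, alpha=0.125, beta=0.25, safety=4.0):
        self.alpha = alpha        # smoothing factor for the mean interval
        self.beta = beta          # smoothing factor for the deviation
        self.safety = safety      # how many deviations of slack to allow
        self.mean = None
        self.dev = 0.0
        self.last_arrival = None

    def heartbeat(self, now=None):
        """Record a heartbeat arrival and update the interval statistics."""
        now = time.monotonic() if now is None else now
        if self.last_arrival is not None:
            sample = now - self.last_arrival
            if self.mean is None:
                self.mean = sample
            else:
                err = sample - self.mean
                self.mean += self.alpha * err
                self.dev += self.beta * (abs(err) - self.dev)
        self.last_arrival = now

    def suspect(self, now=None):
        """Return True if the monitored peer should currently be suspected."""
        now = time.monotonic() if now is None else now
        if self.last_arrival is None or self.mean is None:
            return False
        timeout = self.mean + self.safety * self.dev
        return (now - self.last_arrival) > timeout
```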
Fully Integrated, Miniature, High-Frequency Flow Probe Utilizing MEMS Leadless SOI Technology
NASA Technical Reports Server (NTRS)
Ned, Alex; Kurtz, Anthony; Shang, Tonghuo; Goodman, Scott; Giemette, Gerald
2013-01-01
This work focused on developing, fabricating, and fully calibrating a flow-angle probe for aeronautics research by utilizing the latest microelectromechanical systems (MEMS), leadless silicon-on-insulator (SOI) sensor technology. While the concept of angle probes is not new, traditional devices had been relatively large due to fabrication constraints; often too large to resolve flow structures necessary for modern aeropropulsion measurements such as inlet flow distortions and vortices, secondary flows, etc. Measurements of this kind demanded a new approach to probe design to achieve sizes on the order of 0.1 in. (2.5 mm) diameter or smaller, and capable of meeting demanding requirements for accuracy and ruggedness. This approach invoked the use of state-of-the-art processing techniques to install SOI sensor chips directly onto the probe body, thus eliminating redundancy in sensor packaging and probe installation that has historically forced larger probe size. This also facilitated a better thermal match between the chip and its mount, improving stability and accuracy. Further, the leadless sensor technology with which the SOI sensing element is fabricated allows direct mounting and electrical interconnecting of the sensor to the probe body. This leadless technology allowed a rugged wire-out approach that is performed at the sensor length scale, thus achieving substantial sensor size reductions. The technology is inherently capable of high-frequency and high-accuracy performance at high temperatures and in harsh environments.
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
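As a rough illustration of how a ConvNet can be extended to higher-dimensional multispectral input, the sketch below (PyTorch, assumed available) builds a small fully convolutional per-pixel classifier taking 3 RGB plus 25 VIS-NIR channels and producing 8 class scores per pixel. The layer sizes are illustrative assumptions, not the architecture evaluated in the study.

```python
import torch
import torch.nn as nn

class MultispectralSceneLabeler(nn.Module):
    """Minimal fully convolutional per-pixel classifier for fused RGB + VIS-NIR input.

    Assumes 3 RGB channels plus 25 snapshot-sensor channels (28 in total) and the
    8 scene classes mentioned in the study; the layer sizes are illustrative only.
    """

    def __init__(self, in_channels=28, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of two 128x128 multispectral frames -> per-pixel class scores
logits = MultispectralSceneLabeler()(torch.randn(2, 28, 128, 128))
print(logits.shape)  # torch.Size([2, 8, 128, 128])
```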
Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.
Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R
2009-07-01
The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallstrom, Jason O.; Ni, Zheng Richard
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or "gateway") to direct digital control services was also explored in an early form. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
Raabe, E.A.; Stumpf, R.P.; Marth, N.J.; Shrestha, R.L.
1996-01-01
Elevation differences on the order of 10 cm within Florida's marsh system influence major variations in tidal flooding and in the associated plant communities. This low elevation gradient combined with sea level fluctuations of 5-to-10 cm over decadal and longer periods can generate significant alteration and erosion of marsh habitats along the Gulf Coast. Knowledge of precise and accurate elevations in the marsh is critical to the efficient monitoring and management of these habitats. Global Positioning System (GPS) technology was employed to establish six new orthometric heights along the Gulf Coast from which kinematic surveys into the marsh interior are conducted. The vertical accuracy achieved using GPS technology was evaluated using two networks with 16 vertical and nine horizontal NGS published high-accuracy positions. New positions were occupied near St. Marks National Wildlife Refuge and along the coastline of Levy County and Citrus County. Static surveys were conducted using four Ashtech dual-frequency P-code receivers for 45-minute sessions and a data logging rate of 10 seconds. Network vector lengths ranged from 4 to 64 km and, including redundant baselines, totaled over 100 vectors. Analysis includes use of the GEOID93 model with a least squares network adjustment and reference to the National Geodetic Reference System (NGRS). The static surveys show high internal consistency and the desired centimeter-level accuracy is achieved for the local network. Uncertainties for the newly established vertical positions range from 0.8 cm to 1.8 cm at the 95% confidence level. These new positions provide sufficient vertical accuracy to achieve the project objectives of tying marsh surface elevations to long-term water level gauges recording sea level fluctuations along the coast.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS
2010-01-01
Background Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time. PMID:20406504
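RefMed couples the RankSVM tightly with the RDBMS, which is not reproduced here; the following standalone sketch only illustrates the pairwise reduction behind RankSVM, training a linear SVM (scikit-learn's LinearSVC) on feature differences of article pairs with different feedback levels. The feature and feedback values in the toy usage are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_ranksvm(X, y, C=1.0):
    """Fit a linear ranking function from multi-level relevance feedback.

    X: (n_docs, n_features) article features; y: integer feedback levels
    (higher = more relevant). Trains a linear SVM on pairwise feature
    differences, the standard reduction behind RankSVM.
    """
    diffs, labels = [], []
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:                      # i should rank above j
                diffs.append(X[i] - X[j])
                labels.append(1)
                diffs.append(X[j] - X[i])        # mirrored pair keeps classes balanced
                labels.append(-1)
    clf = LinearSVC(C=C, fit_intercept=False)
    clf.fit(np.asarray(diffs), np.asarray(labels))
    return clf.coef_.ravel()                     # weight vector w; score an article as X @ w

# Toy usage: rank unseen articles by their learned relevance score
rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 5)), rng.integers(0, 3, size=30)
w = fit_ranksvm(X, y)
ranking = np.argsort(-(X @ w))
```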
Modelling for Prediction vs. Modelling for Understanding: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Edelsbrunner, Peter; Schneider, Michael
2013-01-01
Musso et al. (2013) predict students' academic achievement with high accuracy one year in advance from cognitive and demographic variables, using artificial neural networks (ANNs). They conclude that ANNs have high potential for theoretical and practical improvements in learning sciences. ANNs are powerful statistical modelling tools but they can…
Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.
Du, Xinxin; Tan, Kok Kiong
2016-05-01
Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experiment results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.
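The paper's particle filter incorporates fuller vehicle dynamics; the minimal one-dimensional sketch below only illustrates the idea of filtering noisy per-frame stereo estimates of the lateral distance to a lane line. The noise levels and the lateral-velocity input are assumptions made for the example.

```python
import numpy as np

def lane_offset_particle_filter(measurements, lateral_rate, dt=0.1,
                                n_particles=500, meas_std=0.16, proc_std=0.05):
    """Minimal 1-D particle filter for the lateral distance to a lane line.

    `measurements` are noisy per-frame stereo estimates of the offset (m),
    `lateral_rate` the vehicle's lateral velocity (m/s) from odometry. The real
    system models fuller vehicle dynamics; this sketch keeps only one state.
    """
    rng = np.random.default_rng(0)
    particles = rng.normal(measurements[0], meas_std, n_particles)
    estimates = []
    for z, v in zip(measurements, lateral_rate):
        # Predict: propagate with the (assumed) lateral velocity plus process noise
        particles += v * dt + rng.normal(0.0, proc_std, n_particles)
        # Update: weight by the likelihood of the stereo measurement
        w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return estimates
```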
A robust omnifont open-vocabulary Arabic OCR system using pseudo-2D-HMM
NASA Astrophysics Data System (ADS)
Rashwan, Abdullah M.; Rashwan, Mohsen A.; Abdel-Hameed, Ahmed; Abdou, Sherif; Khalil, A. H.
2012-01-01
Recognizing old documents is highly desirable since the demand for quickly searching millions of archived documents has recently increased. Using Hidden Markov Models (HMMs) has been proven to be a good solution to tackle the main problems of recognizing typewritten Arabic characters. Although these attempts achieved remarkable success for omnifont OCR under very favorable conditions, they did not achieve the same performance in practical conditions, i.e., on noisy documents. In this paper we present an omnifont, large-vocabulary Arabic OCR system using a Pseudo Two-Dimensional Hidden Markov Model (P2DHMM), which is a generalization of the HMM. The P2DHMM offers a more efficient way to model Arabic characters; such a model offers both minimal dependency on font size/style (omnifont) and a high level of robustness against noise. The evaluation results of this system are very promising compared to a baseline HMM system and the best OCRs available in the market (Sakhr and NovoDynamics). The recognition accuracy of the P2DHMM classifier is measured against the classic HMM classifier; the average word accuracy rates for the P2DHMM and HMM classifiers are 79% and 66%, respectively. The overall system accuracy is measured against the Sakhr and NovoDynamics OCR systems; the average word accuracy rates for P2DHMM, NovoDynamics, and Sakhr are 74%, 71%, and 61%, respectively.
A design of optical modulation system with pixel-level modulation accuracy
NASA Astrophysics Data System (ADS)
Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu
2018-01-01
Vision measurement has been widely used in the field of dimensional measurement and surface metrology. However, traditional methods of vision measurement have many limits, such as low dynamic range and poor reconfigurability. Optical modulation before image formation has the advantages of high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter which determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of a digital micromirror device, a CCD camera and a lens. First, accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels was achieved using moiré fringes and image processing based on sampling and interpolation. Then three coordinate systems were built and the mathematical relationship between the digital micromirror coordinates and the CCD pixels was calculated using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.
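The paper relates three coordinate systems via a checkerboard calibration; as a simplified sketch, assuming the DMD-to-CCD mapping can be modeled by a single planar homography, the standard direct linear transform (DLT) below estimates that mapping from checkerboard correspondences and applies it to arbitrary mirror coordinates.

```python
import numpy as np

def fit_homography(dmd_pts, ccd_pts):
    """Estimate a 3x3 planar homography mapping DMD mirror coordinates to CCD
    pixel coordinates from checkerboard correspondences (direct linear transform).
    Assumes the DMD-to-sensor mapping is well approximated by one homography,
    which is a simplification of the multi-coordinate-system model in the paper.
    """
    A = []
    for (x, y), (u, v) in zip(dmd_pts, ccd_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)          # null-space vector reshaped to H

def map_points(H, pts):
    """Apply the homography to DMD coordinates and return CCD pixel positions."""
    p = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```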
ALARA: The next link in a chain of activation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, P.P.H.; Henderson, D.L.
1996-12-31
The Adaptive Laplace and Analytic Radioactivity Analysis [ALARA] code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses 'straightened-loop, linear chains' to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.
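ALARA's adaptive Laplace-inversion machinery is not reproduced here; as a minimal sketch of the "straightened-loop, linear chain" idea, the snippet below solves a loop-free chain dN/dt = A N with a matrix exponential (SciPy). The rates and initial densities are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def solve_linear_chain(rates, n0, t):
    """Solve dN/dt = A N for a straightened, loop-free chain.

    `rates[i]` is the total transmutation/decay rate (1/s) moving nuclide i to
    nuclide i+1; `n0` holds the initial number densities. This matrix-exponential
    solution only illustrates the linear-chain idea, not ALARA's adaptive
    Laplace-inversion scheme.
    """
    n = len(n0)
    A = np.zeros((n, n))
    for i, lam in enumerate(rates):
        A[i, i] -= lam            # loss from nuclide i
        if i + 1 < n:
            A[i + 1, i] += lam    # production of the daughter i+1
    return expm(A * t) @ np.asarray(n0, dtype=float)

# Example: a 3-member chain evolving for 1 hour (rates in 1/s are made up)
print(solve_linear_chain([1e-3, 5e-4, 0.0], [1.0, 0.0, 0.0], 3600.0))
```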
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa in which multiple age cohorts coexist sympatrically. Where applicable, this method of aging individual fish is relatively quick to implement and can avoid the ager interpretation bias common in scale-based aging.
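A minimal sketch of the mixture-model thresholding idea, using scikit-learn's GaussianMixture: fit a finite Gaussian mixture to fork lengths and take the lengths at which the most probable component changes as candidate age-discriminating thresholds. The cohort means and sample sizes in the toy example are assumptions, not the study's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def age_threshold_from_lengths(fork_lengths_mm, n_ages=2):
    """Fit a finite Gaussian mixture to fork lengths and return the length
    threshold(s) separating age cohorts, i.e. where the most probable
    component changes along the length axis. Illustrative sketch only; the
    study's model-selection details are not reproduced here.
    """
    x = np.asarray(fork_lengths_mm, float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_ages, random_state=0).fit(x)
    grid = np.linspace(x.min(), x.max(), 2000).reshape(-1, 1)
    labels = gmm.predict(grid)
    # Re-order component labels so they increase with mean length
    order = np.argsort(gmm.means_.ravel())
    labels = np.argsort(order)[labels]
    change = np.nonzero(np.diff(labels))[0]
    return grid.ravel()[change]          # candidate age-discriminating lengths

# Toy usage: two overlapping cohorts around 55 mm (age 0) and 90 mm (age 1)
rng = np.random.default_rng(1)
lengths = np.concatenate([rng.normal(55, 6, 300), rng.normal(90, 8, 200)])
print(age_threshold_from_lengths(lengths))
```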
A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.
Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang
2009-01-01
This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.
NASA Astrophysics Data System (ADS)
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high-density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation that maps the characteristic points from the optical measurement onto their counterparts from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
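The virtual-marker construction itself is not reproduced here; assuming corresponding marker pairs are already available, the sketch below fits a least-squares rigid transform (Kabsch algorithm) from the optical markers to their contact-measured counterparts and applies it to the whole optical point cloud.

```python
import numpy as np

def fit_rigid_transform(optical_markers, cmm_markers):
    """Least-squares rigid transform (Kabsch) that maps the optical (HDLA)
    marker positions onto their contact-measured (LDHA) counterparts."""
    P = np.asarray(optical_markers, float)
    Q = np.asarray(cmm_markers, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def correct_point_cloud(cloud, R, t):
    """Apply the marker-derived transform to the whole optical point cloud."""
    return np.asarray(cloud, float) @ R.T + t
```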
Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A
2017-09-01
The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy
2018-03-31
In simultaneous EEG-fMRI, identification of the period of the cardioballistic artifact (BCG) in the EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each estimated cycle. The algorithm is shown to achieve a high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The achieved high cycle detection accuracy of our algorithm without using the ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
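A minimal SciPy sketch of the described pipeline, assuming the ICA-extracted BCG component is already available: estimate the fundamental frequency from the power spectral density, band-pass around it, and pick one peak per estimated cycle. The heart-rate search range and filter settings are assumptions; the actual multiple-scale algorithm and its figure of merit are not reproduced here.

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt, find_peaks

def bcg_peaks(bcg_component, fs, band_halfwidth=0.4):
    """Locate one BCG artifact occurrence per cardiac cycle from the
    ICA-extracted BCG component (sketch of the described pipeline only).
    """
    # 1) Fundamental frequency from the spectral density, restricted to a
    #    plausible heart-rate range (0.7-2.5 Hz, an assumed setting).
    f, pxx = welch(bcg_component, fs=fs, nperseg=min(len(bcg_component), 8 * int(fs)))
    mask = (f > 0.7) & (f < 2.5)
    f0 = f[mask][np.argmax(pxx[mask])]
    # 2) Band-pass the component around the fundamental to estimate the cycle.
    b, a = butter(3, [(f0 - band_halfwidth) / (fs / 2), (f0 + band_halfwidth) / (fs / 2)],
                  btype="bandpass")
    cycle = filtfilt(b, a, bcg_component)
    # 3) One peak per estimated cycle: enforce a minimum spacing of ~80% of it.
    peaks, _ = find_peaks(cycle, distance=int(0.8 * fs / f0))
    return peaks, f0
```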
Relative Navigation Algorithms for Phase 1 of the MMS Formation
NASA Technical Reports Server (NTRS)
Kelbel, David; Lee, Taesul; Long, Anne; Carpenter, Russell; Gramling, Cheryl
2003-01-01
This paper evaluates several navigation approaches for the first phase of the Magnetospheric Multiscale (MMS) mission, which consists of a tetrahedral formation of four satellites in highly eccentric Earth orbits of approximately 1.2 by 12 Earth radii at an inclination of 10 degrees. The inter-satellite separation is approximately 10 kilometers near apogee. Navigation approaches were studied using ground station two-way Doppler measurements, Global Positioning System (GPS) pseudorange measurements, crosslink range measurements among the members flying in formation, and various combinations of these measurement types. An absolute position accuracy of 10 kilometers or better can be achieved with most of the approaches studied, and a relative position accuracy of 100 meters or better can be achieved at apogee in some cases. Among the various approaches studied, the approaches that use a combination of GPS and crosslink measurements were found to be more reliable in terms of absolute and relative navigation accuracies and operational flexibility.
Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures
NASA Astrophysics Data System (ADS)
Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar
2017-02-01
A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image since one is blind to true position accuracy, i.e. deviation in position measurement from true positions. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end in a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscope methods STED and STORM.
NASA Astrophysics Data System (ADS)
Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong
2016-05-01
The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require the spectral peak counterpart, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Oliver; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e. visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
Making High Accuracy Null Depth Measurements for the LBTI ExoZodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthew; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of approximately 12 zodis per star for a representative ensemble of approximately 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e. visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
Atropos: specific, sensitive, and speedy trimming of sequencing reads
Collins, Francis S.
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos. PMID:28875074
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Multiscale high-order/low-order (HOLO) algorithms and applications
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.
2017-02-01
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
High spatial resolution restoration of IRAS images
NASA Technical Reports Server (NTRS)
Grasdalen, Gary L.; Inguva, R.; Dyck, H. Melvin; Canterna, R.; Hackwell, John A.
1990-01-01
A general technique to improve the spatial resolution of the IRAS AO data was developed at The Aerospace Corporation using the Maximum Entropy algorithm of Skilling and Gull. The technique has been applied to a variety of fields and several individual AO MACROS. With this general technique, resolutions of 15 arcsec were achieved in 12 and 25 micron images and 30 arcsec in 60 and 100 micron images. Results on galactic plane fields show that both photometric and positional accuracy achieved in the general IRAS survey are also achieved in the reconstructed images.
Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.
Gao, Lei; Bourke, A K; Nelson, John
2014-06-01
Physical activity has a positive impact on people's well-being and has been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial number of research studies exist which focus on activity recognition using inertial sensors. Many of these studies adopt a single-sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceeds the computing ability of most current wearable sensor platforms. This paper proposes a method to adopt multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded, resulting in 864 separate annotated activity states. The methods for designing such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated and the most appropriate approach is selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracies on a multi-sensor system. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
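A minimal sketch of the light-weight processing chain described above: per-window mean and variance features fed to a Decision Tree classifier (scikit-learn). The window length, axis count, and synthetic data are assumptions; the study's segmentation and multi-node fusion details are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(acc_windows):
    """Compute the light-weight features used per sensor node: the mean and
    variance of each accelerometer axis over a fixed-length window.
    acc_windows has shape (n_windows, n_samples, n_axes)."""
    means = acc_windows.mean(axis=1)
    variances = acc_windows.var(axis=1)
    return np.hstack([means, variances])

# Toy usage with synthetic windows from an (assumed) 3-axis node; in the study
# the features of several body-worn nodes would be concatenated before classification.
rng = np.random.default_rng(0)
X_windows = rng.normal(size=(200, 128, 3))
y = rng.integers(0, 8, size=200)              # eight scripted activities
clf = DecisionTreeClassifier(max_depth=6, random_state=0)
clf.fit(window_features(X_windows), y)
print(clf.score(window_features(X_windows), y))
```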
Investigation of Portevin-Le Chatelier effect in 5456 Al-based alloy using digital image correlation
NASA Astrophysics Data System (ADS)
Cheng, Teng; Xu, Xiaohai; Cai, Yulong; Fu, Shihua; Gao, Yue; Su, Yong; Zhang, Yong; Zhang, Qingchuan
2015-02-01
A variety of experimental methods have been proposed for studying the Portevin-Le Chatelier (PLC) effect. They have mainly focused on in-plane deformation. In order to achieve high-accuracy measurement, three-dimensional digital image correlation (3D-DIC) was employed in this work to investigate the PLC effect in a 5456 Al-based alloy. The temporal and spatial evolutions of deformation over the full field of the specimen surface were observed. The large deformation of localized necking was determined experimentally. The distributions of out-of-plane displacement over the loading procedure were also obtained. Furthermore, a comparison of measurement accuracy between two-dimensional digital image correlation (2D-DIC) and 3D-DIC was performed. Due to the theoretical restriction, the measurement accuracy of 2D-DIC decreases with increasing deformation. A maximum discrepancy of about 20% with respect to 3D-DIC was observed in this work. Therefore, 3D-DIC is more essential for high-accuracy investigation of the PLC effect.
Ion beam figuring of silicon aspheres
NASA Astrophysics Data System (ADS)
Demmler, Marcel; Zeuner, Michael; Luca, Alfonz; Dunger, Thoralf; Rost, Dirk; Kiontke, Sven; Krüger, Marcus
2011-03-01
Silicon lenses are widely used for infrared applications. Especially for portable devices, the size and weight of the optical system are very important factors. The use of aspherical silicon lenses instead of spherical silicon lenses results in a significant reduction of weight and size. The manufacture of silicon lenses is more challenging than the manufacture of standard glass lenses. Typically, conventional methods like diamond turning, grinding and polishing are used. However, due to the high hardness of silicon, diamond turning is very difficult and requires a lot of experience. To achieve surfaces of high quality, a polishing step is mandatory within the manufacturing process. Nevertheless, the required surface form accuracy cannot be achieved through the use of conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. To overcome these disadvantages, a method called ion beam figuring can be used to manufacture silicon lenses with high surface form accuracies. The general advantage of the ion beam figuring technology is a contactless polishing process without any aging effects of the tool. Due to this, an excellent stability of the removal rate without any mechanical surface damage is achieved. The underlying physical process, called sputtering, can be applied to any material and is therefore also applicable to materials of high hardness such as silicon, SiC and WC. The process is realized with the commercially available ion beam figuring system IonScan 3D. During the process, the substrate is moved in front of a focused broad ion beam. The local milling rate is controlled via a modulated velocity profile, which is calculated specifically for each surface topology in order to mill the material at the associated positions down to the target geometry. The authors present aspherical silicon lenses with very high surface form accuracies compared to conventionally manufactured lenses.
Zou, Weiwen; He, Zuyuan; Hotate, Kazuo
2011-01-31
This paper presents a novel scheme to generate and detect a Brillouin dynamic grating in a polarization-maintaining optical fiber based on one laser source. Precise measurement of the Brillouin dynamic grating spectrum is achieved because the pump, probe and readout waves are coherently derived from the same laser source. Distributed discrimination of strain and temperature is also achieved with high accuracy.
NASA Astrophysics Data System (ADS)
Peng, F.; Cai, X.; Tan, W.
2017-09-01
Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to optimize the results, but they are extracted from auxiliary data (e.g. LIDAR data, DSM). Moreover, the auxiliary data must be acquired around the same time as the image acquisition; otherwise, built-up area detection accuracy is affected. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g. Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can be used as a data source for identifying built-up areas. A new method of identifying high-accuracy built-up areas from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Then, height values of above-ground objects (e.g. buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor features, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g. LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary in finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on a transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for the SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersion analysis of simulated seismograms from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into a single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state (Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson) are then incorporated into the lattice Boltzmann model. Numerical simulations are compared on the basis of achievable density ratios and spurious currents to present the details of phase separation in these non-ideal single-component systems. The paper demonstrates that both the scheme for the inter-particle interaction force term and the method of incorporating the force term matter for achieving accurate and stable results. Among the force-incorporation methods considered, the velocity shifting method is shown to give accurate and stable results. The Kupershtokh scheme also makes it possible to achieve large density ratios (up to 10^4) and to reproduce the coexistence curve with high accuracy. A significant reduction of the spurious currents at the vapor-liquid interface is another observation. The Redlich-Kwong-Soave and Peng-Robinson EOSs yielded high density ratios and reduced spurious currents, in closer agreement with the Maxwell construction results.
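As a point of reference for one of the EOSs discussed, here is a minimal sketch of the Peng-Robinson pressure written as a function of density and temperature; the constants are the standard textbook ones, not the lattice-unit parameterisation used in the paper:

```python
import math

def peng_robinson_pressure(rho, T, Tc, pc, omega, R=8.314):
    """Peng-Robinson EOS expressed in terms of density rho = 1/v.
    This is the p(rho, T) form typically substituted into the
    pseudopotential interaction force."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return (rho * R * T / (1.0 - b * rho)
            - a * alpha * rho**2 / (1.0 + 2.0 * b * rho - (b * rho)**2))

# Example: water-like critical constants (illustrative values only).
print(peng_robinson_pressure(rho=100.0, T=550.0, Tc=647.1, pc=22.06e6, omega=0.344))
```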
Measuring the arterial-induced skin vibration by geometrical moiré fringe
NASA Astrophysics Data System (ADS)
Chiu, Shih-Yung; Wang, Chun-Hsiung; Lee, Shu-Sheng; Wu, Wen-Jong; Hsu, Yu-Hsiang; Lee, Chih-Kung
2018-02-01
The demand for self-administered blood pressure monitoring devices has increased greatly because cardiovascular diseases have become a leading cause of death in the aging population. Currently, the primary non-invasive blood pressure monitoring method is cuff-based; it is well developed and accurate. However, the measuring process is not comfortable, and it cannot provide continuous measurement. To overcome this problem, methods such as tonometry, the volume clamp method, photoplethysmography, pulse wave velocity, and pulse transit time have been reported. However, their limited accuracy has hindered their application in diagnostics. To perform sequential blood pressure measurement with high accuracy over long-term examination, we apply moiré interferometry to measure the wrist skin vibration induced by the radial artery. To achieve this goal, we developed a miniaturized device that can perform moiré interferometry around the wrist region. A 0.4-mm-pitch binary grating and a tattoo sticker with a 0.46-mm-pitch stripe pattern are used to generate the geometric moiré. We demonstrated that the sensitivity and accuracy of this integrated system were sufficient to monitor arterial-induced skin vibration non-invasively. Our developed system was validated against ECG signals collected by a commercial system. Our measurements show that the wrist pulsation measurement was repeatable and that heart rate was obtained with an accuracy of 99.1%. Both simulations and experiments are presented, demonstrating that the geometrical moiré method is a suitable technique for monitoring arterial-induced skin vibration.
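For context, the standard geometric-moiré relation for two parallel gratings of slightly different pitch gives the fringe period; with the reported pitches, small skin displacements are magnified into millimetre-scale fringe motion. This relation is textbook moiré theory and is stated here only as background, not as the paper's derivation:

```latex
P \;=\; \frac{p_1\,p_2}{\lvert p_1 - p_2 \rvert}
  \;=\; \frac{0.40 \times 0.46}{\lvert 0.40 - 0.46 \rvert}\ \text{mm}
  \;\approx\; 3.1\ \text{mm}
```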
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.; Walker, R. E.; Gokhman, B.
1985-01-01
Performance requirements regarding geometric accuracy have been defined in terms of end product goals, but until recently no precise details have been given concerning the conditions under which that accuracy is to be achieved. In order to achieve higher spatial and spectral resolutions, the Thematic Mapper (TM) sensor was designed to image in both forward and reverse mirror sweeps in two separate focal planes. Both hardware and software have been augmented and changed during the course of the Landsat TM developments to achieve improved geometric accuracy. An investigation has been conducted to determine if the TM meets the National Map Accuracy Standards for geometric accuracy at larger scales. It was found that TM imagery, in terms of geometry, has come close to, and in some cases exceeded, its stringent specifications.
ERIC Educational Resources Information Center
Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.
2014-01-01
Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…
Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias
2016-03-01
Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to optimize the configuration of the mechanism patient-specifically with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimens. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimens.
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
Matías-Guiu, Jordi A; Valles-Salgado, María; Rognoni, Teresa; Hamre-Gil, Frank; Moreno-Ramos, Teresa; Matías-Guiu, Jorge
2017-01-01
Our aim was to evaluate and compare the diagnostic properties of 5 screening tests for the diagnosis of mild Alzheimer disease (AD). We conducted a prospective and cross-sectional study of 92 patients with mild AD and of 68 healthy controls from our Department of Neurology. The diagnostic properties of the following tests were compared: Mini-Mental State Examination (MMSE), Addenbrooke's Cognitive Examination III (ACE-III), Memory Impairment Screen (MIS), Montreal Cognitive Assessment (MoCA), and Rowland Universal Dementia Assessment Scale (RUDAS). All tests yielded high diagnostic accuracy, with the ACE-III achieving the best diagnostic properties. The area under the curve was 0.897 for the ACE-III, 0.889 for the RUDAS, 0.874 for the MMSE, 0.866 for the MIS, and 0.856 for the MoCA. The Mini-ACE score from the ACE-III showed the highest diagnostic capacity (area under the curve 0.939). Memory scores of the ACE-III and of the RUDAS showed a better diagnostic accuracy than those of the MMSE and of the MoCA. All tests, especially the ACE-III, conveyed a higher diagnostic accuracy in patients with full primary education than in the less educated group. Implementing normative data improved the diagnostic accuracy of the ACE-III but not that of the other tests. The ACE-III achieved the highest diagnostic accuracy. This better discrimination was more evident in the more educated group. © 2017 S. Karger AG, Basel.
Global Lunar Topography from the Deep Space Gateway for Science and Exploration
NASA Astrophysics Data System (ADS)
Archinal, B.; Gaddis, L.; Kirk, R.; Edmundson, K.; Stone, T.; Portree, D.; Keszthelyi, L.
2018-02-01
The Deep Space Gateway, in low lunar orbit, could be used to achieve a long-standing goal of lunar science: collecting stereo images over two months to produce a complete, uniform, high-resolution global topographic model of the Moon with known accuracy.
Aircraft Update Programmes. The Economical Alternative
2000-04-01
will drive the desired level of integration, but cost will determine the achieved level. Paper #15 by Christian Dedieu-Eric Loffler (SAGEM SA) presented...requirements. The SAGEM SA upgrade concept allows one to match specifications ranging from basic performance enhancement, such as high accuracy navigation for
NASA Astrophysics Data System (ADS)
Chang, Yu Min; Lu, Nien Hua; Wu, Tsung Chiang
2005-06-01
This study applies 3D laser scanning technology to develop a high-precision measuring system for the digital survey of historical buildings. It outperformed other methods in obtaining abundant high-precision measuring points and computing the data instantly. In this study, the Pei-tien Temple, a Chinese Taoist temple in southern Taiwan famous for its highly intricate architecture and more than 300-year history, was adopted as the target to prove the high accuracy and efficiency of this system. By using the French-made MENSI GS-100 laser scanner, numerous measuring points were precisely plotted to present the plane map, vertical map, and 3D map of the property. Accuracies of 0.1-1 mm in the digital data have consistently been achieved for the historical heritage measurement.
Kaiju, Taro; Doi, Keiichi; Yokota, Masashi; Watanabe, Kei; Inoue, Masato; Ando, Hiroshi; Takahashi, Kazutaka; Yoshida, Fumiaki; Hirata, Masayuki; Suzuki, Takafumi
2017-01-01
Electrocorticogram (ECoG) has great potential as a source signal, especially for clinical brain-machine interfaces (BMI). Until recently, ECoG electrodes were commonly used for identifying epileptogenic foci in clinical situations, and such electrodes were low-density and large. Increasing the number and density of recording channels could enable the collection of richer motor/sensory information, and may enhance the precision of decoding and increase opportunities for controlling external devices. Several reports have aimed to increase the number and density of channels. However, few studies have discussed the actual validity of high-density ECoG arrays. In this study, we developed novel high-density flexible ECoG arrays and conducted decoding analyses with monkey somatosensory evoked potentials (SEPs). Using MEMS technology, we made 96-channel Parylene electrode arrays with an inter-electrode distance of 700 μm and a recording site area of 350 μm². The arrays were mainly placed onto the finger representation area in the somatosensory cortex of the macaque, and partially inserted into the central sulcus. With electrical finger stimulation, we successfully recorded and visualized finger SEPs with a high spatiotemporal resolution. We conducted offline analyses in which the stimulated fingers and intensity were predicted from recorded SEPs using a support vector machine. We obtained the following results: (1) Very high accuracy (~98%) was achieved with just a short segment of data (~15 ms from stimulus onset). (2) High accuracy (~96%) was achieved even when only a single channel was used, which indicated the optimality of the placement for decoding. (3) Higher channel counts generally improved prediction accuracy, but the efficacy was small for predictions with feature vectors that included time-series information. These results suggest that ECoG signals with high spatiotemporal resolution could enable greater decoding precision or external device control.
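A minimal sketch of this kind of offline decoding analysis (a linear SVM over flattened multichannel SEP segments, scored with cross-validation) is shown below; the epoch counts, feature layout, and random data are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 200 epochs x (96 channels * 15 samples) features,
# labels = stimulated finger (0-4).  Real use would fill X with the
# ~15 ms post-stimulus SEP segment from each channel, flattened.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 96 * 15))
y = rng.integers(0, 5, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean())
```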
A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows
Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...
2015-03-11
High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
A Brief Description of the Kokkos implementation of the SNAP potential in ExaMiniMD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan P.; Trott, Christian Robert
2017-11-01
Within the EXAALT project, the SNAP [1] approach is being used to develop high-accuracy potentials for use in large-scale, long-time molecular dynamics simulations of materials behavior. In particular, we have developed a new SNAP potential that is suitable for describing the interplay between helium atoms and vacancies in high-temperature tungsten [2]. This model is now being used to study plasma-surface interactions in nuclear fusion reactors for energy production. The high accuracy of SNAP potentials comes at the price of increased computational cost per atom and increased computational complexity. The increased cost is mitigated by improvements in strong scaling that can be achieved using advanced algorithms [3].
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
2013-07-01
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers have a higher failure rate than traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and losses from the outage. This paper proposes an online calibration system which can calibrate electronic current transformers without powering off the line. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system achieves a high accuracy of up to class 0.05.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discussed the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances research on digital image analysis and the applications of high resolution remote sensing images.
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since its launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods is employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that the new parameters achieve the desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
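One plausible combination of the kind evaluated (PCA for dimensionality reduction followed by a linear discriminant classifier, scored with cross-validation) is sketched below with synthetic data; the feature sizes and the choice of reducer and classifier are illustrative assumptions rather than the paper's reported best configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 25 subjects x 40 epochs each, 14 channels x 100 samples
# per 800 ms ERP epoch (flattened).  Real use would load the recorded ERPs.
rng = np.random.default_rng(1)
X = rng.normal(size=(25 * 40, 14 * 100))
y = np.repeat(np.arange(25), 40)          # subject identity labels

pipeline = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
print("identification accuracy:", cross_val_score(pipeline, X, y, cv=5).mean())
```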
Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.
Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil
2005-03-01
We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
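A common baseline for combining matcher outputs is min-max score normalization followed by a weighted sum; the sketch below is a generic illustration of score-level fusion, not the new normalization and fusion methods introduced in the paper:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to the [0, 1] range."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(face_scores, finger_scores, w_face=0.5):
    """Weighted-sum fusion of two normalized score sets."""
    return (w_face * min_max_normalize(face_scores)
            + (1.0 - w_face) * min_max_normalize(finger_scores))

# Example: fused scores for three trials with differently scaled matchers.
print(fuse_scores([0.2, 0.9, 0.4], [35.0, 80.0, 50.0]))
```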
Lee, Jong Hoon; Jang, Hong Seok; Kim, Jun-Gi; Lee, Myung Ah; Kim, Dae Yong; Kim, Tae Hyun; Oh, Jae Hwan; Park, Sung Chan; Kim, Sun Young; Baek, Ji Yeon; Park, Hee Chul; Kim, Hee Cheol; Nam, Taek-Keun; Chie, Eui Kyu; Jung, Ji-Han; Oh, Seong Taek
2014-10-01
The reported overall accuracy of MRI in predicting the pathologic stage of nonirradiated rectal cancer is high. However, the role of MRI in restaging rectal tumors after neoadjuvant CRT is contentious. Thus, we evaluated the accuracy of restaging magnetic resonance imaging (MRI) in rectal cancer patients who received preoperative chemoradiotherapy (CRT). We analyzed 150 patients with locally advanced rectal cancer (T3-4N0-2) who had received preoperative CRT. Pre-CRT MRI was performed for local tumor and nodal staging. All patients underwent restaging MRI followed by total mesorectal excision after the end of radiotherapy. The primary endpoint of the present study was to estimate the accuracy of post-CRT MRI as compared with pathologic staging. Pathologic T classification matched the post-CRT MRI findings in 97 (64.7%) of 150 patients. Thirty-six (24.0%) of 150 patients were overstaged in T classification, and the degree of concordance was moderate (k=0.33, p<0.01). Pathologic N classification matched the post-CRT MRI findings in 85 (56.6%) of 150 patients. Fifty-four (36.0%) of 150 patients were overstaged in N classification. Twenty-six patients achieved downstaging (ycT0-2N0) on restaging MRI after CRT. Twenty-three (88.5%) of these 26 patients who had been downstaged on MRI after CRT were confirmed on pathological staging, and the degree of concordance was good (k=0.72, p<0.01). Restaging MRI has low accuracy for the prediction of the pathologic T and N classifications in rectal cancer patients who received preoperative CRT. The diagnostic accuracy of restaging MRI is relatively high in rectal cancer patients who achieved clinical downstaging after CRT. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Marsh, Herbert W.; And Others
Particularly in the 1960s and 1970s it was frequently argued that coeducational (Coed) high schools provided a more natural social environment to prepare adolescents for adulthood than did single sex (SS) schools. Based on the assumed accuracy of this belief, SS schools are becoming infrequent or even nonexistent in most western societies. This…
High-precision GNSS ocean positioning with BeiDou short-message communication
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhiteng; Zang, Nan; Wang, Siyao
2018-04-01
The currently popular GNSS RTK technique is not applicable on the ocean because of the limited communication access for transmitting differential corrections. A new technique is proposed for high-precision ocean RTK, referred to as ORTK, in which the corrections are transmitted by means of BeiDou satellite short-message communication (SMC). To overcome the narrow bandwidth of BeiDou SMC, a new strategy of simplifying and encoding the corrections is proposed in place of standard differential corrections, reducing the single-epoch correction message from more than 1000 bytes to fewer than 300 bytes. To solve the problems of correction delays, cycle slips, blunders, and abnormal epochs over ultra-long-baseline ORTK, a series of algorithms was designed in the user-end software to achieve stable and precise kinematic solutions in far-ocean applications. The results from two long baselines of 240 and 420 km and from real ocean experiments reveal that kinematic solutions with a horizontal accuracy of 5 cm and a vertical accuracy of better than 15 cm are achievable with a convergence time of 3-10 min. Compared with commercial ocean PPP using satellite telecommunication, ORTK is much cheaper, more accurate, and faster to converge. It is very promising for many location-based ocean services.
IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.
Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho
2016-02-05
Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
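One standard way to obtain a fractional (sub-sample) arrival time is to fit a parabola through the correlation peak and its two neighbours; the sketch below illustrates that idea only and is not necessarily the estimator used in the paper:

```python
import numpy as np

def fractional_toa(corr, fs):
    """Estimate the time of arrival with sub-sample resolution by parabolic
    interpolation of the correlation peak (peak assumed away from the ends)."""
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # peak offset in samples
    return (k + delta) / fs

# Example: a correlation peak whose true maximum lies between two samples
# of a 2 MHz-sampled sequence (values are illustrative).
corr = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9.0, 9.5, 8.5, 7, 6, 5])
print(fractional_toa(corr, fs=2e6))
```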
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacón, L., E-mail: chacon@lanl.gov; Chen, G.; Knoll, D.A.
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system-scale multiscale simulations leveraging exascale computing.
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP heavily depends on the real-time availability and quality of correction products for satellite orbits and satellite clocks. The satellite-based augmentation system (SBAS) provides such correction products in real time, but they are intended for wide-area differential positioning at the 1-meter precision level. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that makes full use of SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach the decimeter level and achieves an improvement of at least 30% when compared with the traditional SBAS method. Positioning convergence of the proposed PPP method can be achieved within 636 epochs at most in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method is improved by at least 20 cm relative to the SBAS method. Furthermore, the results reveal that the proposed PPP method can achieve decimeter-level convergence within 500 s in kinematic mode. PMID:27517930
Li, Fangmin; Liu, Guo; Liu, Jian; Chen, Xiaochuang; Ma, Xiaolin
2016-10-28
Most location-based services are based on the global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has created more buzz in recent years, as people spend most of their time indoors, working in offices, shopping in malls, and so on. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful because of the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing stairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.
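A minimal sketch of the short-time energy idea used for gait extraction is shown below; the frame length, hop size, and threshold are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def short_time_energy(accel, frame_len=50, hop=25):
    """Short-time energy of an acceleration magnitude signal; frames whose
    energy exceeds a threshold are treated as candidate steps."""
    energies = []
    for start in range(0, len(accel) - frame_len + 1, hop):
        frame = accel[start:start + frame_len]
        energies.append(np.sum(frame ** 2))
    return np.array(energies)

# Example with a synthetic signal: quiet baseline plus two bursts
# standing in for heel strikes.
sig = np.zeros(400)
sig[100:130] = 2.0
sig[250:280] = 2.5
energy = short_time_energy(sig)
steps = np.flatnonzero(energy > 10.0)      # hypothetical threshold
print("frames flagged as steps:", steps)
```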
Processing of high-precision ceramic balls with a spiral V-groove plate
NASA Astrophysics Data System (ADS)
Feng, Ming; Wu, Yongbo; Yuan, Julong; Ping, Zhao
2017-03-01
As the demand for high-performance bearings gradually increases, ceramic balls with excellent properties, such as high accuracy, high reliability, and high chemical durability, are extensively used in such bearings. In this study, a spiral V-groove plate method is employed in processing high-precision ceramic balls. After a kinematic analysis of the ball-spin angle and the enveloped lapping trajectories, an experimental rig is constructed and experiments are conducted to confirm the feasibility of this method. The kinematic analysis results indicate that the method not only allows for control of the ball-spin angle but also distributes the enveloped lapping trajectories uniformly over the entire ball surface. Experimental results demonstrate that the novel spiral V-groove plate method performs better than the conventional concentric V-groove plate method in terms of roundness, surface roughness, diameter difference, and diameter decrease rate. Ceramic balls with G3-level accuracy are achieved, and their typical roundness, minimum surface roughness, and diameter difference are 0.05, 0.0045, and 0.105 μm, respectively. These findings confirm that the proposed method can be applied to high-accuracy and high-consistency ceramic ball processing.
High-Precision Distribution of Highly Stable Optical Pulse Trains with 8.8 × 10⁻¹⁹ instability
Ning, B.; Zhang, S. Y.; Hou, D.; Wu, J. T.; Li, Z. B.; Zhao, J. Y.
2014-01-01
The high-precision distribution of optical pulse trains via fibre links has had a considerable impact in many fields. In most published work, the accuracy is still fundamentally limited by unavoidable noise sources, such as thermal and shot noise from conventional photodiodes and thermal noise from mixers. Here, we demonstrate a new high-precision timing distribution system that uses a highly precise phase detector to markedly reduce the effect of these limitations. Instead of using photodiodes and microwave mixers, we use several fibre Sagnac-loop-based optical-microwave phase detectors (OM-PDs) to achieve optical-electrical conversion and phase measurements, thereby suppressing these noise sources and achieving ultra-high accuracy. The results of a distribution experiment using a 10-km fibre link indicate that our system exhibits a residual instability of 2.0 × 10⁻¹⁵ at 1 s and 8.8 × 10⁻¹⁹ at 40,000 s, and an integrated timing jitter as low as 3.8 fs in a bandwidth of 1 Hz to 100 kHz. This low instability and timing jitter make it possible for our system to be used in the distribution of optical-clock signals or in applications that require extremely accurate frequency/time synchronisation. PMID:24870442
An evaluation of a UAV guidance system with consumer grade GPS receivers
NASA Astrophysics Data System (ADS)
Rosenberg, Abigail Stella
Remote sensing has been demonstrated to be an important tool in agricultural and natural resource management and research applications; however, limitations exist with traditional platforms (i.e., hand-held sensors, linear moves, vehicle-mounted sensors, airplanes, remotely piloted vehicles (RPVs), unmanned aerial vehicles (UAVs), and satellites). Rapid technological advances in electronics, computers, software applications, and the aerospace industry have dramatically reduced the cost and increased the availability of remote sensing technologies. Remote sensing imagery varies in spectral, spatial, and temporal resolution and is available from numerous providers. Appendix A presented the results of a test project that acquired high-resolution aerial photography with an RPV to map the boundary of a 0.42 km² fire area. The project mapped the boundaries of the fire area from a mosaic of the collected aerial images and compared this with ground-based measurements. The project achieved a 92.4% correlation between the aerial assessment and the ground truth data. Appendix B used multi-objective analysis to quantitatively assess the tradeoffs between different sensor platform attributes to identify the best overall technology. Experts were surveyed to identify the best overall technology at three different pixel sizes. Appendix C evaluated the positional accuracy of a relatively low-cost UAV designed for high-resolution remote sensing of small areas in order to determine the positional accuracy of its sensor readings. The study evaluated the accuracy and uncertainty of a UAV flight route with respect to the programmed waypoints and of the UAV's GPS position, respectively. In addition, the potential displacement of sensor data was evaluated based on (1) GPS measurements on board the aircraft and (2) the autopilot's circuit board with 3-axis gyros and accelerometers (i.e., roll, pitch, and yaw). The accuracies were estimated based on a 95% confidence interval or similar methods. The accuracy achieved in the second and third manuscripts demonstrates that reasonably priced, high-resolution remote sensing via RPVs and UAVs is practical for agriculture and natural resource professionals.
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
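A minimal sketch of the NDCG computation is given below; the gain formulation (2^rel - 1 with a log2 discount) is one common variant and may differ in detail from the measure as implemented in the paper:

```python
import numpy as np

def dcg(relevances, k=None):
    """Discounted cumulative gain of a ranked list of relevance values."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2.0 ** rel - 1.0) / discounts)

def ndcg(y_true, y_score, k=None):
    """NDCG of a predicted ranking: DCG of the true values ordered by the
    predictions, normalised by the DCG of the ideal (true) ordering."""
    order = np.argsort(y_score)[::-1]
    ideal = np.sort(y_true)[::-1]
    return dcg(np.asarray(y_true)[order], k) / dcg(ideal, k)

# Example: breeding values (truth) vs. model scores for five individuals.
print(ndcg(y_true=[3.0, 1.0, 2.5, 0.5, 2.0],
           y_score=[2.8, 0.9, 1.0, 0.2, 2.1]))
```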
ERIC Educational Resources Information Center
Gabbard, Ryan
2010-01-01
Understanding the syntactic structure of a sentence is a necessary preliminary to understanding its semantics and therefore for many practical applications. The field of natural language processing has achieved a high degree of accuracy in parsing, at least in English. However, the syntactic structures produced by the most commonly used parsers…
Gao, Kai; Huang, Lianjie
2017-08-31
The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods because of its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor-series-expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer from drift error and large bias due to low-cost inertial sensors and the random motions of human beings, as well as the unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integration approach and can provide the best localization performance. However, since PF uses a large number of particles to attain this performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF, called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study provides enhanced positioning accuracy by fusing the positional data, with their uncertainties, obtained from both DR and fingerprinting. The SKPF algorithm achieves better positioning accuracy than the KF and UKF and performance comparable to the PF, while providing higher computational efficiency than the PF. iBeacon is used in our positioning system for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
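For orientation, a plain Kalman-filter fusion step for this kind of DR-plus-fingerprint setup might look as follows; this is a generic baseline sketch, not the SKPF itself, and all numbers are illustrative:

```python
import numpy as np

def kf_update(x, P, step_vec, z, Q, R):
    """One cycle of a minimal 2-D Kalman filter for pedestrian positioning:
    predict with a dead-reckoning step vector, then correct with an
    RSS-fingerprint position fix z (all quantities in metres)."""
    # Prediction: position advances by the DR step; uncertainty grows.
    x_pred = x + step_vec
    P_pred = P + Q
    # Correction: blend prediction and fingerprint fix via the Kalman gain.
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

# Illustrative numbers: a 0.7 m step heading east, fingerprint fix nearby.
x, P = np.array([0.0, 0.0]), np.eye(2) * 1.0
Q, R = np.eye(2) * 0.04, np.eye(2) * 4.0
x, P = kf_update(x, P, np.array([0.7, 0.0]), np.array([0.9, 0.2]), Q, R)
print(x)
```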
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...
2016-02-11
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
CA-125 AUC as a predictor for epithelial ovarian cancer relapse.
Mano, António; Falcão, Amílcar; Godinho, Isabel; Santos, Jorge; Leitão, Fátima; de Oliveira, Carlos; Caramona, Margarida
2008-01-01
The aim of the present work was to evaluate the usefulness of the CA-125 time-normalized area under the curve (CA-125 AUC) for signalling epithelial ovarian cancer relapse. Data from one hundred and eleven patients were submitted to two different approaches based on increases in the CA-125 AUC value to predict patient relapse. In Criterion A, the total time-normalized CA-125 AUC value (AUC(i)) was compared with the immediately previous one (AUC(i-1)) using the formula AUC(i) ≥ F * AUC(i-1) (several F values were tested) to find the increment most closely associated with patient relapse. In Criterion B, the total time-normalized CA-125 AUC was calculated and several cut-off values were correlated with the capacity to predict patient relapse. In Criterion A the best accuracy was achieved with a factor (F) of 1.25 (an increment of 25% from the previous status), while in Criterion B the best accuracies were achieved with cut-offs of 25, 50, 75 and 100 IU/mL. The mean lead time to relapse achieved with Criterion A was 181 days, while with Criterion B the lead times were, respectively, 131, 111, 63 and 11 days. Based on our results, we believe that the conjugated and sequential application of both criteria in patient relapse detection is highly advisable. A rapid rise in CA-125 AUC in asymptomatic patients should first be evaluated using Criterion A, which has a high accuracy (0.85) and a substantial mean lead time to relapse (181 days). If a negative answer is obtained, then Criterion B should be performed to confirm the absence of relapse.
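A minimal sketch of the two decision rules described above, applied to a series of time-normalized AUC values, follows; the series and thresholds are illustrative, and in the clinical workflow Criterion A is applied first with Criterion B used for confirmation:

```python
def criterion_a(auc_values, factor=1.25):
    """Flag a relapse when the current time-normalized CA-125 AUC is at
    least `factor` times the immediately previous value (Criterion A)."""
    return any(curr >= factor * prev
               for prev, curr in zip(auc_values, auc_values[1:]))

def criterion_b(auc_values, cutoff=25.0):
    """Flag a relapse when the time-normalized CA-125 AUC exceeds a fixed
    cut-off in IU/mL (Criterion B)."""
    return any(v >= cutoff for v in auc_values)

# Hypothetical follow-up series of time-normalized AUC values (IU/mL).
series = [12.0, 13.5, 18.0, 24.0]
print(criterion_a(series), criterion_b(series))
```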
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-06-01
Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations with that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.
A new head phantom with realistic shape and spatially varying skull resistivity distribution.
Li, Jian-Bo; Tang, Chi; Dai, Meng; Liu, Geng; Shi, Xue-Tao; Yang, Bin; Xu, Can-Hua; Fu, Feng; You, Fu-Sheng; Tang, Meng-Xing; Dong, Xiu-Zhen
2014-02-01
Brain electrical impedance tomography (EIT) is an emerging method for monitoring brain injuries. To effectively evaluate brain EIT systems and reconstruction algorithms, we have developed a novel head phantom that features realistic anatomy and spatially varying skull resistivity. The head phantom was created with three layers, representing scalp, skull, and brain tissues. The fabrication process entailed 3-D printing of the anatomical geometry for mold creation followed by casting to ensure high geometrical precision and accuracy of the resistivity distribution. We evaluated the accuracy and stability of the phantom. Results showed that the head phantom achieved high geometric accuracy, accurate skull resistivity values, and good stability over time and in the frequency domain. Experimental impedance reconstructions performed using the head phantom and computer simulations were found to be consistent for the same perturbation object. In conclusion, this new phantom could provide a more accurate test platform for brain EIT research.
Genome sequencing in microfabricated high-density picolitre reactors.
Margulies, Marcel; Egholm, Michael; Altman, William E; Attiya, Said; Bader, Joel S; Bemben, Lisa A; Berka, Jan; Braverman, Michael S; Chen, Yi-Ju; Chen, Zhoutao; Dewell, Scott B; Du, Lei; Fierro, Joseph M; Gomes, Xavier V; Godwin, Brian C; He, Wen; Helgesen, Scott; Ho, Chun Heen; Ho, Chun He; Irzyk, Gerard P; Jando, Szilveszter C; Alenquer, Maria L I; Jarvie, Thomas P; Jirage, Kshama B; Kim, Jong-Bum; Knight, James R; Lanza, Janna R; Leamon, John H; Lefkowitz, Steven M; Lei, Ming; Li, Jing; Lohman, Kenton L; Lu, Hong; Makhijani, Vinod B; McDade, Keith E; McKenna, Michael P; Myers, Eugene W; Nickerson, Elizabeth; Nobile, John R; Plant, Ramona; Puc, Bernard P; Ronan, Michael T; Roth, George T; Sarkis, Gary J; Simons, Jan Fredrik; Simpson, John W; Srinivasan, Maithreyan; Tartaro, Karrie R; Tomasz, Alexander; Vogt, Kari A; Volkmer, Greg A; Wang, Shally H; Wang, Yong; Weiner, Michael P; Yu, Pengguang; Begley, Richard F; Rothberg, Jonathan M
2005-09-15
The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.
Discretizing singular point sources in hyperbolic wave propagation problems
Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...
2016-06-01
Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
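The moment conditions mentioned above can be illustrated with a small worked example: the sketch below solves for discrete weights that place a point source between grid points so that the first few moments of a Dirac delta are reproduced. The stencil size and source location are arbitrary choices, and the paper's additional smoothness conditions are not imposed here.

```python
# Hedged sketch: build a discrete point source on a uniform grid by imposing
# moment conditions, i.e. the discrete weights reproduce the moments of a
# Dirac delta at x_star up to a chosen order.
import numpy as np

h = 0.1                         # grid spacing
x_star = 0.234                  # source location (not on a grid point)
n_moments = 4                   # number of moment conditions to impose
j0 = int(np.floor(x_star / h)) - n_moments // 2 + 1
stencil = np.arange(j0, j0 + n_moments)          # grid indices carrying the source
xj = stencil * h

# Vandermonde system: sum_j w_j * (x_j - x_star)^k = delta_{k,0} / h, k = 0..n_moments-1
A = np.vander(xj - x_star, N=n_moments, increasing=True).T
b = np.zeros(n_moments)
b[0] = 1.0 / h
w = np.linalg.solve(A, b)

for k in range(n_moments):                       # verify the discrete moments
    print(f"moment {k}: {h * np.sum(w * (xj - x_star) ** k):.3e}")
```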
Small arms mini-fire control system: fiber-optic barrel deflection sensor
NASA Astrophysics Data System (ADS)
Rajic, S.; Datskos, P.; Lawrence, W.; Marlar, T.; Quinton, B.
2012-06-01
Traditionally, methods to increase firearms accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement, but it reduced the problem enough to allow a noticeable accuracy enhancement. This process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could ultimately be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter weight, weapons. We discuss our development of a barrel reference sensor system that is designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
a Gsa-Svm Hybrid System for Classification of Binary Problems
NASA Astrophysics Data System (ADS)
Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan
2011-06-01
This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to make a novel GSA-SVM hybrid system that improves classification accuracy in binary problems. GSA is an optimization heuristic tool used to optimize the value of the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach can achieve high classification accuracy and is comparable to or better than the particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, which are two hybrid systems for classification.
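A hedged sketch of the underlying idea, tuning RBF-SVM hyper-parameters by maximizing cross-validated accuracy. The gravitational search algorithm itself is not reproduced; scipy's differential evolution stands in as the global optimizer, and the benchmark dataset is an arbitrary choice.

```python
# Hedged sketch: tuning RBF-kernel SVM hyper-parameters by maximizing
# cross-validated accuracy with a generic global optimizer (a stand-in for GSA).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)          # a binary benchmark problem

def neg_cv_accuracy(params):
    log_c, log_gamma = params                        # search in log10 space
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return -cross_val_score(clf, X, y, cv=5).mean()

result = differential_evolution(neg_cv_accuracy, bounds=[(-2, 3), (-6, 0)],
                                maxiter=10, seed=0, polish=False)
print("best log10(C), log10(gamma):", result.x)
print("cross-validated accuracy:", -result.fun)
```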
Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei
2017-06-01
To evaluate the diagnostic accuracy of routine blood examinations and the Cerebrospinal Fluid (CSF) lactate level for Post-neurosurgical Bacterial Meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and the CSF lactate level to distinguish between post-neurosurgical aseptic meningitis (PAM) and PBM were evaluated with the values of the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC) by retrospectively analyzing the datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low (AUC-ROC < 0.7). The CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC = 0.891; CI 95%, 0.852-0.922). The variables of patient age, operation duration, surgical diagnosis and postoperative days (the interval in days between the neurosurgery and the examinations) were shown to affect the diagnostic accuracy of these examinations. The variables were integrated with routine blood examinations and the CSF lactate level by Fisher discriminant analysis to improve their diagnostic accuracy. As a result, the diagnostic accuracy of blood examinations and the CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (CI 95%, 0.737-0.782) and 0.921 (CI 95%, 0.887-0.948), respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of the CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking into account the effects of these variables significantly improves the diagnostic accuracies of routine blood examinations and the CSF lactate level.
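The discriminant-analysis step can be sketched as follows: CSF lactate is combined with clinical covariates through a Fisher/linear discriminant and the combination is scored with the AUC-ROC. All data below are simulated, and the covariates and effect sizes are assumptions for illustration.

```python
# Hedged sketch: combining CSF lactate with clinical covariates via linear
# discriminant analysis and scoring the combination with the AUC. Data simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 600
y = rng.integers(0, 2, n)                              # 1 = bacterial meningitis (PBM)
lactate = rng.normal(3.0 + 2.0 * y, 1.0)               # CSF lactate, higher in PBM
age = rng.normal(45, 15, n)
op_minutes = rng.normal(240 + 30 * y, 60, n)
postop_day = rng.integers(1, 15, n)
X = np.column_stack([lactate, age, op_minutes, postop_day])

print("lactate alone   AUC:", round(roc_auc_score(y, lactate), 3))
scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5,
                           method="decision_function")
print("LDA combination AUC:", round(roc_auc_score(y, scores), 3))
```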
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
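The BIC-based model selection mentioned at the end can be illustrated with a toy comparison: two models of a simulated rotation-velocity sample are scored with BIC = k ln n - 2 ln L. The models, noise level, and "spiral" signature below are placeholders, not the paper's actual potentials.

```python
# Hedged sketch: Bayesian information criterion for comparing two toy models of
# simulated maser velocities; the lower BIC is preferred.
import numpy as np

def bic(n_params, n_data, max_log_likelihood):
    return n_params * np.log(n_data) - 2.0 * max_log_likelihood

rng = np.random.default_rng(3)
n = 500                                        # number of maser sources
v_true = 240.0 + 15.0 * np.sin(rng.uniform(0, 2 * np.pi, n))   # toy "spiral" signature
v_obs = v_true + rng.normal(0, 5.0, n)

# Model A: flat rotation only (1 parameter); Model B: flat + sinusoidal term (3 params).
resid_a = v_obs - v_obs.mean()
loglike_a = -0.5 * np.sum((resid_a / 5.0) ** 2) - n * np.log(5.0 * np.sqrt(2 * np.pi))
resid_b = v_obs - v_true                        # pretend model B recovers the true curve
loglike_b = -0.5 * np.sum((resid_b / 5.0) ** 2) - n * np.log(5.0 * np.sqrt(2 * np.pi))

print("BIC flat model   :", round(bic(1, n, loglike_a), 1))
print("BIC spiral model :", round(bic(3, n, loglike_b), 1))   # lower BIC wins
```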
On-the-fly Locata/inertial navigation system integration for precise maritime application
NASA Astrophysics Data System (ADS)
Jiang, Wei; Li, Yong; Rizos, Chris
2013-10-01
The application of Global Navigation Satellite System (GNSS) technology has meant that marine navigators have greater access to a more consistent and accurate positioning capability than ever before. However, GNSS may not be able to meet all emerging navigation performance requirements for maritime applications with respect to service robustness, accuracy, integrity and availability. In particular, applications in port areas (for example automated docking) and in constricted waterways, have very stringent performance requirements. Even when an integrated inertial navigation system (INS)/GNSS device is used there may still be performance gaps. GNSS signals are easily blocked or interfered with, and sometimes the satellite geometry may not be good enough for high accuracy and high reliability applications. Furthermore, the INS accuracy degrades rapidly during GNSS outages. This paper investigates the use of a portable ground-based positioning system, known as ‘Locata’, which was integrated with an INS, to provide accurate navigation in a marine environment without reliance on GNSS signals. An ‘on-the-fly’ Locata resolution algorithm that takes advantage of geometry change via an extended Kalman filter is proposed in this paper. Single-differenced Locata carrier phase measurements are utilized to achieve accurate and reliable solutions. A ‘loosely coupled’ decentralized Locata/INS integration architecture based on the Kalman filter is used for data processing. In order to evaluate the system performance, a field trial was conducted on Sydney Harbour. A Locata network consisting of eight Locata transmitters was set up near the Sydney Harbour Bridge. The experiment demonstrated that the Locata on-the-fly (OTF) algorithm is effective and can improve the system accuracy in comparison with the conventional ‘known point initialization’ (KPI) method. After the OTF and KPI comparison, the OTF Locata/INS integration is then assessed further and its performance improvement on both stand-alone OTF Locata and INS is shown. The Locata/INS integration can achieve centimetre-level accuracy for position solutions, and centimetre-per-second accuracy for velocity determination.
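The loosely coupled integration can be sketched as a single Kalman measurement update, where an external position fix (standing in here for a Locata solution) corrects an INS-predicted position/velocity state. All matrices and noise levels below are illustrative assumptions, not the filter actually used in the trial.

```python
# Hedged sketch: one measurement update of a loosely coupled integration filter.
import numpy as np

# State: [pos_x, pos_y, vel_x, vel_y]; the INS has drifted from the true position.
x = np.array([100.40, 50.25, 1.0, 0.5])        # INS-predicted state
P = np.diag([0.5, 0.5, 0.1, 0.1]) ** 2          # predicted covariance

z = np.array([100.02, 50.03])                   # external position fix (metres)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])            # measure position only
R = np.diag([0.02, 0.02]) ** 2                  # centimetre-level fix noise

innovation = z - H @ x
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
x = x + K @ innovation
P = (np.eye(4) - K @ H) @ P

print("corrected position:", x[:2])
print("position std dev  :", np.sqrt(np.diag(P)[:2]))
```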
NASA Astrophysics Data System (ADS)
Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George
2018-07-01
Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods in recent years. Regardless of the sensor carried by the platform, be it laser scanners or cameras, high-resolution data postings are offset by poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimations are propagated by IMUs, which are furthermore prone to drift effects. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to costs and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation by an adjustment solution. Unlike MM as such, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified by employing well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario, and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content vary strongly between the two image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Even though the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment will be presented. It will be shown that decimetre accuracy is well achievable in a real data test scenario.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms were implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
Parallelism measurement for base plate of standard artifact with multiple tactile approaches
NASA Astrophysics Data System (ADS)
Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie
2018-01-01
Nowadays, as workpieces become more precise and more specialized, resulting in more sophisticated structures and tighter accuracy requirements for artifacts, higher demands are placed on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. In the course of calibrating a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used for fixing the standard artifact was found to be an important factor affecting the measurement accuracy. To measure the parallelism of the base plate, three tactile methods for parallelism measurement are employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and the measurement results are compared. The experimental results show that all three methods reach micron-level accuracy and meet the measurement requirements. Moreover, the three approaches are suitable for different measurement conditions, providing a basis for rapid and high-precision measurement under different equipment conditions.
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
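A rough sketch of one of the listed methods, large periodic high-frequency dithering: a mismatched multi-level quantizer stands in for the DAC, a triangle-wave dither is added, and a moving average removes the dither band, trading bandwidth for improved effective linearity. Level count, mismatch magnitude, and frequencies are assumed values for illustration.

```python
# Hedged sketch: large periodic high-frequency dithering applied to a coarse
# DAC with simulated element mismatch, followed by averaging over the dither period.
import numpy as np

rng = np.random.default_rng(4)
ideal = np.arange(-8, 9) * 0.125                  # ideal 17-level DAC, 1 LSB = 0.125
actual = ideal + rng.normal(0, 0.01, ideal.size)  # element mismatch on each level

def dac(v):
    """Quantize to the nearest ideal code, then output that code's mismatched level."""
    codes = np.argmin(np.abs(ideal[:, None] - v), axis=0)
    return actual[codes]

fs, f_sig, f_dith = 100_000, 50, 10_000           # sample, signal and dither rates (Hz)
t = np.arange(0, 0.02, 1 / fs)
signal = 0.4 * np.sin(2 * np.pi * f_sig * t)
# Large periodic high-frequency dither: a triangle wave spanning 2 LSB peak-to-peak * 2.
dither = 0.25 * (2 * np.abs(2 * ((f_dith * t) % 1.0) - 1) - 1)

plain = dac(signal)
dithered = dac(signal + dither)
period = fs // f_dith                             # samples per dither period
recovered = np.convolve(dithered, np.ones(period) / period, mode="same")

for name, out in [("undithered", plain), ("dithered + filtered", recovered)]:
    print(f"{name:<20} rms error = {np.sqrt(np.mean((out - signal) ** 2)):.4f}")
```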
Čársky, Petr; Čurík, Roman; Varga, Štefan
2012-03-21
The objective of this paper is to show that density fitting (the resolution of the identity approximation) can also be applied to Coulomb integrals of the type (k₁(1)k₂(1)|g₁(2)g₂(2)), where the k and g symbols refer to plane-wave functions and Gaussians, respectively. We have shown how to achieve the accuracy of these integrals that is needed in wave-function MO and density functional theory-type calculations using mixed Gaussian and plane-wave basis sets. The crucial issues for achieving such high accuracy are the application of constraints for conservation of the number of electrons and the components of the dipole moment, optimization of the auxiliary basis set, and elimination of round-off errors in the matrix inversion.
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael; Loubère, Raphaël; Maire, Pierre-Henri
2018-04-01
In this paper we develop a conservative cell-centered Lagrangian finite volume scheme for the solution of the hydrodynamics equations on unstructured multidimensional grids. The method is derived from the Eucclhyd scheme discussed in [47,43,45]. It is second-order accurate in space and is combined with the a posteriori Multidimensional Optimal Order Detection (MOOD) limiting strategy to ensure robustness and stability at shock waves. Second-order of accuracy in time is achieved via the ADER (Arbitrary high order schemes using DERivatives) approach. A large set of numerical test cases is proposed to assess the ability of the method to achieve effective second order of accuracy on smooth flows, maintaining an essentially non-oscillatory behavior on discontinuous profiles, general robustness ensuring physical admissibility of the numerical solution, and precision where appropriate.
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
Continuous decoding of human grasp kinematics using epidural and subdural signals
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-01-01
Objective Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces (BMIs). Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are: accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials. Approach We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as epidural field potentials (EFPs), with both standard- and high-resolution electrode arrays. Main results In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7–20 Hz and 70–115 Hz spectral bands contained the most information about grasp kinematics, with the 70–115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface. PMID:27900947
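A minimal sketch of the decoding pipeline implied above: band-limited power features (for example in the 70-115 Hz band) are extracted from multichannel field potentials and mapped to a continuous grasp-aperture trace with a linear ridge decoder, scored as variance accounted for. The surrogate signals, band edges, and decoder choice are assumptions, not the authors' exact methods.

```python
# Hedged sketch: band-power features from simulated field potentials, decoded to
# a continuous grasp aperture with ridge regression; scored as VAF.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
fs, n_sec, n_chan = 500, 120, 16
t = np.arange(0, n_sec, 1 / fs)
aperture = np.abs(np.sin(2 * np.pi * 0.3 * t))            # slow grasping movements

def band_power(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, sig)))            # analytic-signal envelope

# Channels whose 70-115 Hz power is modulated by the aperture, plus noise.
lfp = rng.normal(0, 1, (n_chan, t.size))
carrier = np.sin(2 * np.pi * 90 * t)
lfp += 0.8 * aperture * carrier * rng.uniform(0.2, 1.0, (n_chan, 1))

features = np.column_stack([band_power(ch, 70, 115) for ch in lfp])
X_tr, X_te, y_tr, y_te = train_test_split(features, aperture, test_size=0.3,
                                          shuffle=False)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
vaf = 1 - np.var(y_te - pred) / np.var(y_te)
print(f"grasp aperture VAF on held-out data: {vaf:.2f}")
```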
Compensatory neurofuzzy model for discrete data classification in biomedical
NASA Astrophysics Data System (ADS)
Ceylan, Rahime
2015-03-01
Biomedical data fall into two main categories: signals and discrete data. Accordingly, studies in this area address either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. Likewise, many models in the literature classify discrete data consisting of sample values, such as the results of blood analysis or biopsy in the medical process. Not every algorithm achieves a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model is applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset and a 69.08% accuracy rate on the diabetes dataset, with only 10 iterations.
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChips arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied on the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental design with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per conditions available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
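A hedged sketch of the PL-LM idea: for each probe set, a probe-level linear model yields a treatment-effect coefficient, and a two-component Gaussian mixture over those coefficients separates unchanged from differentially expressed genes. The simulation, and the simplification of the linear model to a probe-wise mean difference, are assumptions for illustration.

```python
# Hedged sketch: probe-level treatment effects followed by a Gaussian mixture
# to call differentially expressed genes. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
n_genes, n_probes = 1000, 11                        # one array per condition
is_de = rng.random(n_genes) < 0.1                   # 10% truly differential
effect = np.where(is_de, rng.normal(2.0, 0.5, n_genes), 0.0)

coeffs = np.empty(n_genes)
treatment = np.array([0.0, 1.0])                    # condition indicator per array
for g in range(n_genes):
    affinity = rng.normal(8.0, 1.0, n_probes)       # probe-specific baseline
    y = affinity[:, None] + treatment[None, :] * effect[g] \
        + rng.normal(0, 0.3, (n_probes, 2))         # log2 probe intensities
    # Per-probe-set least squares: centering over probes removes the affinity term,
    # so the treatment effect reduces to the mean probe-wise difference between arrays.
    coeffs[g] = np.mean(y[:, 1] - y[:, 0])

gmm = GaussianMixture(n_components=2, random_state=0).fit(coeffs.reshape(-1, 1))
de_comp = np.argmax(gmm.means_.ravel())             # component with the larger mean
called_de = gmm.predict(coeffs.reshape(-1, 1)) == de_comp
print(f"recovered {np.sum(called_de & is_de)} of {is_de.sum()} DE genes; "
      f"{np.sum(called_de & ~is_de)} false positives")
```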
Predicting drug-induced liver injury using ensemble learning methods and molecular fingerprints.
Ai, Haixin; Chen, Wen; Zhang, Li; Huang, Liangchao; Yin, Zimo; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng
2018-05-21
Drug-induced liver injury (DILI) is a major safety concern in the drug-development process, and various methods have been proposed to predict the hepatotoxicity of compounds during the early stages of drug trials. In this study, we developed an ensemble model using three machine learning algorithms and 12 molecular fingerprints from a dataset containing 1,241 diverse compounds. The ensemble model achieved an average accuracy of 71.1±2.6%, sensitivity of 79.9±3.6%, specificity of 60.3±4.8%, and area under the receiver operating characteristic curve (AUC) of 0.764±0.026 in five-fold cross-validation and an accuracy of 84.3%, sensitivity of 86.9%, specificity of 75.4%, and AUC of 0.904 in an external validation dataset of 286 compounds collected from the Liver Toxicity Knowledge Base (LTKB). Compared with previous methods, the ensemble model achieved relatively high accuracy and sensitivity. We also identified several substructures related to DILI. In addition, we provide a web server offering access to our models (http://ccsipb.lnu.edu.cn/toxicity/HepatoPred-EL/).
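The ensemble construction and the reported metrics can be sketched as follows, with a soft-voting ensemble of three generic learners; synthetic features stand in for the molecular fingerprints, and the learners and data split are arbitrary choices rather than the authors' configuration.

```python
# Hedged sketch: soft-voting ensemble scored with accuracy, sensitivity,
# specificity and AUC on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1241, n_features=166, n_informative=30,
                           random_state=0)                  # stand-in "fingerprints"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")                                           # average class probabilities
ensemble.fit(X_tr, y_tr)

prob = ensemble.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"accuracy    {(tp + tn) / len(y_te):.3f}")
print(f"sensitivity {tp / (tp + fn):.3f}")
print(f"specificity {tn / (tn + fp):.3f}")
print(f"AUC         {roc_auc_score(y_te, prob):.3f}")
```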
USDA-ARS?s Scientific Manuscript database
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...
Multiscale high-order/low-order (HOLO) algorithms and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; Chen, Guangye; Knoll, Dana Alan
Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Multiscale high-order/low-order (HOLO) algorithms and applications
Chacon, Luis; Chen, Guangye; Knoll, Dana Alan; ...
2016-11-11
Here, we review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
A new ultra-high-accuracy angle generator: current status and future direction
NASA Astrophysics Data System (ADS)
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.
Han, Xuesong; Zhu, Haihong; Nie, Xiaojia; Wang, Guoqing; Zeng, Xiaoyan
2018-01-01
AlSi10Mg inclined struts with an angle of 45° were fabricated by selective laser melting (SLM) using different scanning speeds and hatch spacings to gain insight into the evolution of the molten pool morphology, surface roughness, and dimensional accuracy. The results show that the average width and depth of the molten pool, the lower surface roughness and the dimensional deviation decrease with increasing scanning speed and hatch spacing. The upper surface roughness is found to be almost constant under different processing parameters. The width and depth of the molten pool in the powder-supported zone are larger than those in the solid-supported zone, while the width changes more significantly than the depth. However, if the scanning speed is high enough, the width and depth of the molten pool and the lower surface roughness remain almost constant while the density is still high. Therefore, high dimensional accuracy and density as well as good surface quality can be achieved simultaneously by using a high scanning speed for SLM cellular lattice struts. PMID:29518900
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
Celaya-Padilla, José; Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Galvan-Tejada, Jorge; Treviño, Victor; Tamez-Peña, José
2015-01-01
Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. This work presents a computer-aided diagnosis (CADx) method aimed to automatically triage mammogram sets. The method coregisters the left and right mammograms, extracts image features, and classifies the subjects into risk of having malignant calcifications (CS), risk of having malignant masses (MS), and healthy subjects (HS). In this study, 449 subjects (197 CS, 207 MS, and 45 HS) from a public database were used to train and evaluate the CADx. Percentile-rank (p-rank) and z-normalizations were used. For the p-rank, the CS versus HS model achieved a cross-validation accuracy of 0.797 with an area under the receiver operating characteristic curve (AUC) of 0.882; the MS versus HS model obtained an accuracy of 0.772 and an AUC of 0.842. For the z-normalization, the CS versus HS model achieved an accuracy of 0.825 with an AUC of 0.882 and the MS versus HS model obtained an accuracy of 0.698 and an AUC of 0.807. The proposed method has the potential to rank cases with a high probability of malignant findings, aiding in the prioritization of radiologists' work lists. PMID:26240818
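The two feature normalizations compared above can be sketched in a few lines; the feature distribution is simulated and the p-rank convention (mid-rank scaled into the open interval (0, 1)) is an assumption.

```python
# Hedged sketch: percentile-rank (p-rank) and z-score normalization of one
# image feature over a cohort of subjects.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(7)
feature = rng.lognormal(mean=0.0, sigma=1.0, size=449)   # one feature over 449 subjects

p_rank = (rankdata(feature) - 0.5) / feature.size        # maps values into (0, 1)
z_norm = (feature - feature.mean()) / feature.std()

print("p-rank  range   :", p_rank.min().round(3), "-", p_rank.max().round(3))
print("z-score mean/std:", z_norm.mean().round(3), z_norm.std().round(3))
```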
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc
2010-02-01
This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can be employed to obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing various surface heights. Using the commonly applied 2-D Hanning filter, the measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for many industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% are achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaces the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
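A small sketch of the elliptical band-pass filtering step in FTP: the carrier lobe of a simulated fringe pattern is isolated with an elliptical mask in the Fourier domain, the result is inverted, and the carrier is removed to recover the wrapped height phase. The fringe pattern, carrier frequency, and ellipse semi-axes are illustrative assumptions, not the paper's optimized values.

```python
# Hedged sketch: Fourier transform profilometry with an elliptical band-pass mask.
import numpy as np

N = 256
x, y = np.meshgrid(np.arange(N), np.arange(N))
f0 = 16 / N                                          # carrier frequency (cycles/pixel)
height_phase = 1.5 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 40.0 ** 2))
fringes = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + height_phase)

spectrum = np.fft.fftshift(np.fft.fft2(fringes))
u, v = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2)
a, b = 12.0, 6.0                                     # elliptical semi-axes (FFT bins)
mask = ((u - f0 * N) / a) ** 2 + (v / b) ** 2 <= 1.0  # keep only the +f0 carrier lobe

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
demodulated = np.angle(filtered * np.exp(-2j * np.pi * f0 * x))  # remove the carrier
err = np.abs(demodulated - height_phase)
print(f"mean phase error: {err.mean():.3f} rad, max: {err.max():.3f} rad")
```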
Cryo-EM Image Alignment Based on Nonuniform Fast Fourier Transform
Yang, Zhengfan; Penczek, Pawel A.
2008-01-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform Fast Fourier Transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis. PMID:18499351
Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair
NASA Astrophysics Data System (ADS)
Sasou, Akira; Kojima, Hiroaki
2009-12-01
Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is also well known that the speech recognition accuracy drastically degrades when the microphone is placed far from the user. In this paper, we develop a noise robust speech recognition system for a voice-driven wheelchair. This system can achieve almost the same recognition accuracy as the headset microphone without wearing sensors. We verified the effectiveness of our system in experiments in different environments, and confirmed that our system can achieve almost the same recognition accuracy as the headset microphone without wearing sensors.
Measurement configuration optimization for dynamic metrology using Stokes polarimetry
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan
2018-05-01
As dynamic loading experiments such as shock compression tests are usually characterized by short duration, unrepeatability and high costs, high temporal resolution and precise accuracy of the measurements are required. To capture such instantaneous changes in optical properties, a Stokes polarimeter with six parallel channels and a temporal resolution of up to the ten-nanosecond scale has been developed in this paper. Since the measurement accuracy heavily depends on the configuration of the probing beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error propagation-based measurement configuration optimization method corresponding to the Stokes polarimeter was proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate has been achieved by minimizing the merit function. Simulation and experimental results show a good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental result shows that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.
Models in animal collective decision-making: information uncertainty and conflicting preferences
Conradt, Larissa
2012-01-01
Collective decision-making plays a central part in the lives of many social animals. Two important factors that influence collective decision-making are information uncertainty and conflicting preferences. Here, I bring together, and briefly review, basic models relating to animal collective decision-making in situations with information uncertainty and in situations with conflicting preferences between group members. The intention is to give an overview about the different types of modelling approaches that have been employed and the questions that they address and raise. Despite the use of a wide range of different modelling techniques, results show a coherent picture, as follows. Relatively simple cognitive mechanisms can lead to effective information pooling. Groups often face a trade-off between decision accuracy and speed, but appropriate fine-tuning of behavioural parameters could achieve high accuracy while maintaining reasonable speed. The right balance of interdependence and independence between animals is crucial for maintaining group cohesion and achieving high decision accuracy. In conflict situations, a high degree of decision-sharing between individuals is predicted, as well as transient leadership and leadership according to needs and physiological status. Animals often face crucial trade-offs between maintaining group cohesion and influencing the decision outcome in their own favour. Despite the great progress that has been made, there remains one big gap in our knowledge: how do animals make collective decisions in situations when information uncertainty and conflict of interest operate simultaneously? PMID:23565335
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category MDCS based on tilt-shift photography to improve ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for TSC of new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSC, including digital distortion model (DDM) approach and two-step calibrated strategy. The characteristics of TSC are analyzed in detail via a calibration experiment; for example, the edge distortion of TSC. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of flight experiments illustrate that geo-position accuracy of prototype system achieves 0.3 m at a flight height of 800 m, and spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also take the attitude that using higher accuracy TSC in the new MDCS should further improve the accuracy of the photogrammetry senior product. PMID:25835187
A novel multi-digital camera system based on tilt-shift photography technology.
Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi
2015-03-31
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category MDCS based on tilt-shift photography to improve ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for TSC of new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSC, including digital distortion model (DDM) approach and two-step calibrated strategy. The characteristics of TSC are analyzed in detail via a calibration experiment; for example, the edge distortion of TSC. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of flight experiments illustrate that geo-position accuracy of prototype system achieves 0.3 m at a flight height of 800 m, and spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also take the attitude that using higher accuracy TSC in the new MDCS should further improve the accuracy of the photogrammetry senior product.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
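One plausible reading of the WAD measure is a weighted harmonic mean of ensemble accuracy and pairwise diversity, as sketched below; the diversity definition (mean pairwise disagreement) and the weight values are assumptions for illustration, not necessarily those of the paper.

```python
# Hedged sketch: a weighted harmonic mean of ensemble accuracy and diversity,
# illustrating why a correlated ensemble scores lower than an independent one.
import numpy as np

def wad_score(member_preds, y_true, w_acc=0.6, w_div=0.4):
    """member_preds: (n_members, n_samples) array of class predictions."""
    majority = (member_preds.mean(axis=0) >= 0.5).astype(int)
    accuracy = np.mean(majority == y_true)
    m = member_preds.shape[0]
    pair_disagree = [np.mean(member_preds[i] != member_preds[j])
                     for i in range(m) for j in range(i + 1, m)]
    diversity = np.mean(pair_disagree)
    # Weighted harmonic mean of accuracy and diversity.
    return (w_acc + w_div) / (w_acc / accuracy + w_div / max(diversity, 1e-9))

rng = np.random.default_rng(8)
y = rng.integers(0, 2, 200)
# Three highly correlated members vs. three independent members of similar accuracy.
base_noise = rng.random(200) < 0.2
correlated = np.array([y ^ (base_noise | (rng.random(200) < 0.02)) for _ in range(3)])
independent = np.array([y ^ (rng.random(200) < 0.2) for _ in range(3)])

print("correlated ensemble  WAD:", round(wad_score(correlated, y), 3))
print("independent ensemble WAD:", round(wad_score(independent, y), 3))
```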
Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure
Chao, Lidia S.
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases. PMID:24672402
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Starting from June 2015, Sentinel-2A has been delivering high-resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high-resolution multi-temporal information is required. These include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometrical model. Results demonstrate that sub-pixel accuracy was achieved between 10 m resolution Sentinel-2 bands (band 3) and 15 m resolution panchromatic Landsat images (band 8).
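The digital correlation step can be sketched with phase correlation between two co-located image chips with a known offset; in practice many such offsets over a scene would feed the affine registration model. The simulated chips and the integer-pixel peak search below are simplifying assumptions.

```python
# Hedged sketch: estimating the translational offset between two image chips by
# phase correlation; chips are simulated with a known circular shift.
import numpy as np

rng = np.random.default_rng(9)
scene = rng.normal(size=(256, 256))
shift = (3, -5)                                   # true offset in pixels (row, col)
moved = np.roll(scene, shift, axis=(0, 1))

F1, F2 = np.fft.fft2(scene), np.fft.fft2(moved)
cross_power = F2 * np.conj(F1)
cross_power /= np.abs(cross_power)                # keep only the phase difference
corr = np.fft.ifft2(cross_power).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
est = tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))
print("true shift:", shift, " estimated shift:", est)
```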
Kermani, Bahram G
2016-07-01
Crystal Genetics, Inc. is an early-stage genetic test company, focused on achieving the highest possible clinical-grade accuracy and comprehensiveness for detecting germline (e.g., in hereditary cancer) and somatic (e.g., in early cancer detection) mutations. Crystal's mission is to significantly improve the health status of the population, by providing high accuracy, comprehensive, flexible and affordable genetic tests, primarily in cancer. Crystal's philosophy is that when it comes to detecting mutations that are strongly correlated with life-threatening diseases, the detection accuracy of every single mutation counts: a single false-positive error could cause severe anxiety for the patient. And, more importantly, a single false-negative error could potentially cost the patient's life. Crystal's objective is to eliminate both of these error types.
Direct Geolocation of TerraSAR-X Spotlight Mode Image and Error Correction
NASA Astrophysics Data System (ADS)
Zhou, Xiao; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa; Gong, Lixia
2013-01-01
The German TerraSAR-X mission was launched in June 2007, operating a versatile new-generation SAR sensor in X-band. Its Spotlight mode provides SAR images at a very high resolution of about 1 m. According to the official technical report, the specified 3-D geolocation accuracy of the product is as tight as 1 m. Achieving this accuracy, however, relies not only on the robust mathematical basis of SAR geolocation but also on detailed knowledge of the error sources and their correction. This research focuses on the geolocation of TerraSAR-X Spotlight images. The mathematical model and solution algorithms are analyzed, and several error sources are investigated and corrected. The effectiveness and accuracy of the approach are verified by experimental results.
Hu, Leland S; Ning, Shuluo; Eschbacher, Jennifer M; Gaw, Nathan; Dueck, Amylou C; Smith, Kris A; Nakaji, Peter; Plasencia, Jonathan; Ranjbar, Sara; Price, Stephen J; Tran, Nhan; Loftus, Joseph; Jenkins, Robert; O'Neill, Brian P; Elmquist, William; Baxter, Leslie C; Gao, Fei; Frakes, David; Karis, John P; Zwart, Christine; Swanson, Kristin R; Sarkaria, Jann; Wu, Teresa; Mitchell, J Ross; Li, Jing
2015-01-01
Genetic profiling represents the future of neuro-oncology but suffers from inadequate biopsies in heterogeneous tumors like Glioblastoma (GBM). Contrast-enhanced MRI (CE-MRI) targets enhancing core (ENH) but yields adequate tumor in only ~60% of cases. Further, CE-MRI poorly localizes infiltrative tumor within surrounding non-enhancing parenchyma, or brain-around-tumor (BAT), despite the importance of characterizing this tumor segment, which universally recurs. In this study, we use multiple texture analysis and machine learning (ML) algorithms to analyze multi-parametric MRI, and produce new images indicating tumor-rich targets in GBM. We recruited primary GBM patients undergoing image-guided biopsies and acquired pre-operative MRI: CE-MRI, Dynamic-Susceptibility-weighted-Contrast-enhanced-MRI, and Diffusion Tensor Imaging. Following image coregistration and region of interest placement at biopsy locations, we compared MRI metrics and regional texture with histologic diagnoses of high- vs low-tumor content (≥80% vs <80% tumor nuclei) for corresponding samples. In a training set, we used three texture analysis algorithms and three ML methods to identify MRI-texture features that optimized model accuracy to distinguish tumor content. We confirmed model accuracy in a separate validation set. We collected 82 biopsies from 18 GBMs throughout ENH and BAT. The MRI-based model achieved 85% cross-validated accuracy to diagnose high- vs low-tumor in the training set (60 biopsies, 11 patients). The model achieved 81.8% accuracy in the validation set (22 biopsies, 7 patients). Multi-parametric MRI and texture analysis can help characterize and visualize GBM's spatial histologic heterogeneity to identify regional tumor-rich biopsy targets.
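As an illustration of the training/validation workflow described above, the sketch below cross-validates a classifier on texture features extracted at biopsy locations and then scores it on a held-out validation set. The feature matrix, the chosen classifier and the 80% tumor-nuclei label encoding are assumptions made for the example; the study compared several texture and ML algorithms not reproduced here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score

# X_train: (n_biopsies, n_texture_features) MRI texture features (hypothetical)
# y_train: 1 for high tumor content (>= 80% tumor nuclei), 0 otherwise
def train_and_validate(X_train, y_train, X_val, y_val):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)

    # Cross-validated accuracy on the training biopsies.
    cv_acc = cross_val_score(clf, X_train, y_train, cv=5,
                             scoring="accuracy").mean()

    # Fit on all training biopsies, then score the independent validation set.
    clf.fit(X_train, y_train)
    val_acc = accuracy_score(y_val, clf.predict(X_val))
    return cv_acc, val_acc
```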
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.
SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
2012-07-15
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), for which millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers, so the calibration period needs to be shortened. Traditional calibration methods require that the power of the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without cutting the power. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on a clamp-shape iron-core coil and a clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.
Brillouin zone grid refinement for highly resolved ab initio THz optical properties of graphene
NASA Astrophysics Data System (ADS)
Warmbier, Robert; Quandt, Alexander
2018-07-01
Optical spectra of materials can in principle be calculated within numerical frameworks based on Density Functional Theory. The huge numerical effort involved in these methods severely constrains the accuracy achievable in practice. In the case of the THz spectrum of graphene, the primary limitation lies in the density of the reciprocal space sampling. In this letter we develop a non-uniform sampling using grid refinement to achieve a high local sampling density with only moderate numerical effort. The resulting THz electron energy loss spectrum shows a plasmon signal below 50 meV with a ω(q) ∝ √q dispersion relation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhou; Tu, Juan; Cheng, Jianchun
An acoustic focusing lens incorporated with periodically aligned subwavelength grooves corrugated on its spherical surface has been developed. It is demonstrated theoretically and experimentally that acoustic focusing achieved by using the lens can suppress the relative side-lobe amplitudes, enhance the focal gain, and minimize the shifting of the focus. Use of the lens coupled with a planar ultrasound transducer can generate an ultrasound beam with enhanced acoustic transmission and collimation effect, which offers the capability of improving the safety, efficiency, and accuracy of targeted surgery implemented by high intensity focused ultrasound.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and the track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and the radar measurement model are built. Secondly, the fused result of each radar's estimation is fed to an extended Kalman filter (EKF) to complete the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
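For reference, the first filtering stage mentioned above is a standard extended Kalman filter cycle; a generic predict/update step is sketched below. The motion and measurement functions, their Jacobians and the noise covariances are placeholders, since the paper's specific models and the SePDAF association logic are not given in the abstract.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         : prior state estimate and covariance
    z            : new measurement (e.g., fused range/angle estimate)
    f, h         : nonlinear motion and measurement functions
    F_jac, H_jac : functions returning their Jacobians at a state
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```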
Teachers' Judgements of Students' Foreign-Language Achievement
ERIC Educational Resources Information Center
Zhu, Mingjing; Urhahne, Detlef
2015-01-01
Numerous studies have been conducted on the accuracy of teacher judgement in different educational areas such as mathematics, language arts and reading. Teacher judgement of students' foreign-language achievement, however, has been rarely investigated. The study aimed to examine the accuracy of teacher judgement of students' foreign-language…
High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range
de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander
2016-01-01
Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. Significant improvement of the accuracy is achieved by rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, and by meticulous evaluation of critical experimental parameters, including the spatial and temporal pulse shape of the excitation, the laser power and the sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
Alshami, Iyad Husni; Sahibuddin, Shamsul; Firdaus, Firdaus
2017-01-01
The Global Positioning System demonstrates the significance of Location Based Services but it cannot be used indoors due to the lack of line of sight between satellites and receivers. Indoor Positioning Systems are needed to provide indoor Location Based Services. Wireless LAN fingerprints are one of the best choices for Indoor Positioning Systems because of their low cost, and high accuracy, however they have many drawbacks: creating radio maps is time consuming, the radio maps will become outdated with any environmental change, different mobile devices read the received signal strength (RSS) differently, and peoples’ presence in LOS between access points and mobile device affects the RSS. This research proposes a new Adaptive Indoor Positioning System model (called DIPS) based on: a dynamic radio map generator, RSS certainty technique and peoples’ presence effect integration for dynamic and multi-floor environments. Dynamic in our context refers to the effects of people and device heterogeneity. DIPS can achieve 98% and 92% positioning accuracy for floor and room positioning, and it achieves 1.2 m for point positioning error. RSS certainty enhanced the positioning accuracy for floor and room for different mobile devices by 11% and 9%. Then by considering the peoples’ presence effect, the error is reduced by 0.2 m. In comparison with other works, DIPS achieves better positioning without extra devices. PMID:28783047
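For context on how fingerprint-based positioning typically works, the sketch below estimates a position by k-nearest-neighbour matching of an observed RSS vector against a radio map. The radio-map format and the Euclidean signal-space metric are illustrative assumptions; DIPS adds a dynamic radio-map generator, RSS certainty handling and a people-presence model on top of this basic idea.

```python
import numpy as np

def knn_position(rss_observed, radio_map_rss, radio_map_xy, k=3):
    """Estimate position from a WLAN RSS fingerprint.

    rss_observed  : (n_aps,) RSS vector measured by the mobile device
    radio_map_rss : (n_refs, n_aps) RSS vectors of the reference points
    radio_map_xy  : (n_refs, 2) coordinates of the reference points
    """
    # Euclidean distance in signal space between the observation
    # and every reference fingerprint.
    d = np.linalg.norm(radio_map_rss - rss_observed, axis=1)

    # Average the coordinates of the k closest reference points,
    # weighted by inverse signal-space distance.
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-9)
    return np.average(radio_map_xy[nearest], axis=0, weights=weights)
```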
NASA Astrophysics Data System (ADS)
Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun
2018-03-01
Most high order computational fluid dynamics (CFD) methods for compressible flows are based on Riemann solver for the flux evaluation and Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is the easy implementation and stability enhancement by introducing more middle stages. However, the nth-order time accuracy needs no less than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can be used to achieve the same order of time accuracy using less middle stages with the use of the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents its direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, the 5th-order schemes with 2 and 3 stages will be developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods will be evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and present accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness, for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. The current high order MSMD GKS is a multi-dimensional scheme with incorporation of both normal and tangential spatial derivatives of flow variables at a cell interface in the flux evaluation. The scheme can be extended straightforwardly to viscous flow computation in unstructured mesh. It provides a promising direction for the development of high-order CFD methods for the computation of complex flows, such as turbulence and acoustics with shock interactions.
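To make the multi-stage multi-derivative idea concrete, the sketch below shows the two-stage fourth-order update underlying the previous 2-stage 4th-order GKS mentioned above, written in generic notation for w_t = L(w): a time derivative of the flux operator, supplied by the GKS flux function, substitutes for additional Runge-Kutta stages. The paper's 5th-order two- and three-stage variants extend this pattern with further derivative terms not reproduced here.

```latex
% Two-stage fourth-order MSMD update for w_t = L(w),
% using the flux time derivative provided by the gas kinetic flux function.
\begin{aligned}
 w^{n+1/2} &= w^{n} + \tfrac{\Delta t}{2}\,L(w^{n})
             + \tfrac{\Delta t^{2}}{8}\,\partial_t L(w^{n}),\\
 w^{n+1}   &= w^{n} + \Delta t\,L(w^{n})
             + \tfrac{\Delta t^{2}}{6}\,\bigl[\partial_t L(w^{n})
             + 2\,\partial_t L(w^{n+1/2})\bigr].
\end{aligned}
```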
Zhang, Xu; Li, Yun; Chen, Xiang; Li, Guanglin; Rymer, William Zev; Zhou, Ping
2013-01-01
This study investigates the effect of involuntary motor activity of paretic-spastic muscles on classification of surface electromyography (EMG) signals. Two data collection sessions were designed for 8 stroke subjects to voluntarily perform 11 functional movements using their affected forearm and hand at a relatively slow and fast speed. For each stroke subject, the degree of involuntary motor activity present in voluntary surface EMG recordings was qualitatively described from such slow and fast experimental protocols. Myoelectric pattern recognition analysis was performed using different combinations of voluntary surface EMG data recorded from slow and fast sessions. Across all tested stroke subjects, our results revealed that when involuntary surface EMG was absent or present in both training and testing datasets, high accuracies (> 96%, > 98%, respectively, averaged over all the subjects) can be achieved in classification of different movements using surface EMG signals from paretic muscles. When involuntary surface EMG was solely involved in either training or testing datasets, the classification accuracies were dramatically reduced (< 89%, < 85%, respectively). However, if both training and testing datasets contained EMG signals with presence and absence of involuntary EMG interference, high accuracies were still achieved (> 97%). The findings of this study can be used to guide appropriate design and implementation of myoelectric pattern recognition based systems or devices toward promoting robot-aided therapy for stroke rehabilitation. PMID:23860192
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow the INS/GNSS-based trajectory to be validated and updated with independently estimated positions in cases of prolonged GNSS signal outages, in order to raise the georeferencing accuracy to the project requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Dong, E-mail: d.qiu@uq.edu.au; Zhang, Mingxing
2014-08-15
A simple and inclusive method is proposed for accurate determination of the habit plane between bicrystals in the transmission electron microscope. Whilst this method can be regarded as a variant of surface trace analysis, the major innovation lies in the improved accuracy and efficiency of foil thickness measurement, which involves a simple tilt of the thin foil about a permanent tilting axis of the specimen holder, rather than a cumbersome tilt about the surface trace of the habit plane. An experimental study has been done to validate the proposed method in determining the habit plane between lamellar α2 plates and the γ matrix in a Ti–Al–Nb alloy. Both high accuracy (± 1°) and high precision (± 1°) have been achieved by using the new method. The source of the experimental errors as well as the applicability of this method is discussed. Some tips to minimise the experimental errors are also suggested. Highlights: • An improved algorithm is formulated to measure the foil thickness. • Habit plane can be determined with a single tilt holder based on the new algorithm. • Better accuracy and precision within ± 1° are achievable using the proposed method. • The data for multi-facet determination can be collected simultaneously.
Rafii, Mahvash
2004-11-01
MR imaging of the shoulder without contrast is frequently used for evaluation of glenohumeral instability in spite of the popularity of MR arthrography. With proper imaging technique, familiarity with normal anatomy and variants, and knowledge of the expected pathologic findings, high diagnostic accuracy may be achieved.
Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications
Zhou, Zhongmei; Wang, Weiping
2014-01-01
The Internet of things (IOT) has been a hot issue in recent years. It accumulates large amounts of data from IOT users, and mining useful knowledge from these data is a great challenge. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets A and B. Every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible, but also achieves high accuracy. PMID:24511304
NASA Astrophysics Data System (ADS)
Agrawal, Ritu; Sharma, Manisha; Singh, Bikesh Kumar
2018-04-01
Manual segmentation and analysis of lesions in medical images is time consuming and subject to human error. Automated segmentation has thus gained significant attention in recent years. This article presents a hybrid approach for brain lesion segmentation in different imaging modalities by combining median filtering, k-means clustering, Sobel edge detection and morphological operations. Median filtering is an essential pre-processing step used to remove impulsive noise from the acquired brain images, followed by k-means segmentation, Sobel edge detection and morphological processing. The performance of the proposed automated system is tested on standard datasets using performance measures such as segmentation accuracy and execution time. The proposed method achieves a high accuracy of 94% when compared with manual delineation performed by an expert radiologist. Furthermore, statistical significance tests between lesions segmented by the automated approach and by expert delineation, using ANOVA and the correlation coefficient, achieved high values of 0.986 and 1 respectively. The experimental results obtained are discussed in light of some recently reported studies.
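A compact sketch of the pipeline described above is given below, using scikit-image and scikit-learn. The kernel sizes, the number of clusters and the rule of picking the brightest cluster are assumptions made for illustration, not the parameters of the published system.

```python
import numpy as np
from skimage import filters, morphology
from sklearn.cluster import KMeans

def segment_lesion(image, n_clusters=3):
    """Hybrid lesion segmentation: median filter -> k-means -> Sobel -> morphology."""
    # 1. Median filter to suppress impulsive noise.
    denoised = filters.median(image, morphology.disk(3))

    # 2. K-means clustering of intensities; take the brightest cluster
    #    as the candidate lesion (an illustrative assumption).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(denoised.reshape(-1, 1)).reshape(image.shape)
    mask = labels == np.argmax(km.cluster_centers_.ravel())

    # 3. Sobel edge map of the denoised image (available for border refinement).
    edges = filters.sobel(denoised)

    # 4. Morphological opening/closing to remove specks and fill small gaps.
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.binary_closing(mask, morphology.disk(2))
    return mask, edges
```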
Li, Shasha; Zhou, Zhongmei; Wang, Weiping
2014-01-01
The Internet of things (IOT) has been a hot issue in recent years. It accumulates large amounts of data from IOT users, and mining useful knowledge from these data is a great challenge. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets A and B. Every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible, but also achieves high accuracy.
Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.
Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa
2014-06-01
Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance a ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), but this was achieved by both NBM and NBG when the features used in the training set were restricted to those with high values of IG. The most informative features for the patient versus control task were low frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA) may shed further light on this little understood distinction. Copyright © 2013 Elsevier Ltd. All rights reserved.
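The classification approach described here can be sketched with scikit-learn as below. Mutual information is used as a stand-in for the information-gain ranking, and the bag-of-words vectorization and feature cut-off are assumptions; the study's exact feature extraction from the picture-description transcripts is not reproduced.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def evaluate_transcripts(transcripts, labels, k_best=50):
    """Cross-validated accuracy of two naive Bayes variants on vocabulary features."""
    densify = FunctionTransformer(lambda x: x.toarray())  # GaussianNB needs dense input
    results = {}
    for name, clf in [("NBM", MultinomialNB()), ("NBG", GaussianNB())]:
        pipe = make_pipeline(
            CountVectorizer(),                            # word-count features
            SelectKBest(mutual_info_classif, k=k_best),   # IG-like feature ranking
            densify,
            clf,
        )
        results[name] = cross_val_score(pipe, transcripts, labels,
                                        cv=5, scoring="accuracy").mean()
    return results
```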
Schizophrenia classification using functional network features
NASA Astrophysics Data System (ADS)
Rish, Irina; Cecchi, Guillermo A.; Heuton, Kyle
2012-03-01
This paper focuses on discovering statistical biomarkers (features) that are predictive of schizophrenia, with a particular focus on topological properties of fMRI functional networks. We consider several network properties, such as node (voxel) strength, clustering coefficients, local efficiency, as well as just a subset of pairwise correlations. While all types of features demonstrate highly significant statistical differences in several brain areas, and close to 80% classification accuracy, the most remarkable results of 93% accuracy are achieved by using a small subset of only a dozen of most-informative (lowest p-value) correlation features. Our results suggest that voxel-level correlations and functional network features derived from them are highly informative about schizophrenia and can be used as statistical biomarkers for the disease.
Cooperative angle-only orbit initialization via fusion of admissible areas
NASA Astrophysics Data System (ADS)
Jia, Bin; Pham, Khanh; Blasch, Erik; Chen, Genshe; Shen, Dan; Wang, Zhonghai
2017-05-01
For the short-arc, angle-only orbit initialization problem, the admissible area is often used. However, the accuracy obtained with a single sensor is often limited, and for high-value space objects more accurate results are desired. Fortunately, multiple sensors dedicated to space situational awareness are available. The work in this paper uses the information from multiple sensors to cooperatively initialize the orbit, based on the fusion of multiple admissible areas. Both centralized and decentralized fusion are discussed. Simulation results verify the expectation that the orbit initialization accuracy is improved by using information from multiple sensors.
New more accurate calculations of the ground state potential energy surface of H3+.
Pavanello, Michele; Tung, Wei-Cheng; Leonarski, Filip; Adamowicz, Ludwik
2009-02-21
Explicitly correlated Gaussian functions with floating centers have been employed to recalculate the ground state potential energy surface (PES) of the H3+ ion with much higher accuracy than before. The nonlinear parameters of the Gaussians (i.e., the exponents and the centers) have been variationally optimized with a procedure employing the analytical gradient of the energy with respect to these parameters. The basis sets for calculating new PES points were guessed from the points already calculated. This allowed us to considerably speed up the calculations and achieve very high accuracy of the results.
A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs.
Wang, Siying; Pohl, Antje; Jaeschke, Timo; Czaplik, Michael; Köny, Marcus; Leonhardt, Steffen; Pohl, Nils
2015-01-01
In this paper an ultra-wideband 80 GHz FMCW radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO2 measurements as reference. The novel FMCW radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate-frequency and phase evaluation. The concept is validated with a radar system simulation, and experimental measurements are performed with different radar sensor positions and orientations.
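The phase-based displacement evaluation mentioned above follows a simple relation: a target motion of Δd changes the round-trip phase by Δφ = 4πΔd/λ at the carrier wavelength λ, so Δd = λΔφ/(4π). The snippet below is a hedged illustration of converting an unwrapped phase track to skin displacement; the 80 GHz carrier value and the sampling details are assumptions for the example.

```python
import numpy as np

C = 3.0e8            # speed of light, m/s
F_CARRIER = 80.0e9   # assumed carrier frequency, Hz
WAVELENGTH = C / F_CARRIER

def displacement_from_phase(phase_rad):
    """Convert a phase track (radians) to relative displacement (metres).

    A round-trip path change of delta_d shifts the phase by
    delta_phi = 4 * pi * delta_d / lambda.
    """
    unwrapped = np.unwrap(phase_rad)
    return WAVELENGTH * (unwrapped - unwrapped[0]) / (4.0 * np.pi)

# Example: a 1 rad phase swing at 80 GHz corresponds to roughly 0.3 mm of motion.
# print(displacement_from_phase(np.array([0.0, 1.0]))[-1])
```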
Autonomous Relative Navigation for Formation-Flying Satellites Using GPS
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, J. Russell; Long, Anne; Kelbel, David; Lee, Taesul
2000-01-01
The Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for a formation of four eccentric, medium-altitude Earth-orbiting satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) and "GPS-like " intersatellite measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that an autonomous relative navigation position accuracy of 1meter root-mean-square can be achieved by differencing high-accuracy filtered solutions if only measurements from common GPS space vehicles are used in the independently estimated solutions.
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation, intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment where GNSS is not available; therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, together with push broom imaging, is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain both a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
Autonomous Navigation Improvements for High-Earth Orbiters Using GPS
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Garrison, James; Carpenter, J. Russell; Bauer, F. (Technical Monitor)
2000-01-01
The Goddard Space Flight Center is currently developing autonomous navigation systems for satellites in high-Earth orbits where acquisition of the GPS signals is severely limited. This paper discusses autonomous navigation improvements for high-Earth orbiters and assesses projected navigation performance for these satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) measurements. Navigation performance is evaluated as a function of signal acquisition threshold, measurement errors, and dynamic modeling errors using realistic GPS signal strength and user antenna models. These analyses indicate that an autonomous navigation position accuracy of better than 30 meters root-mean-square (RMS) can be achieved for high-Earth orbiting satellites using a GPS receiver with a very stable oscillator. This accuracy improves to better than 15 meters RMS if the GPS receiver's signal acquisition threshold can be reduced by 5 dB-Hertz to track weaker signals.
A Low-Visibility Force Multiplier: Assessing China’s Cruise Missile Ambitions
2014-04-01
terminal sensor to achieve 10–15 meter (m) accuracy.
• The second-generation DH-10 has a GPS/inertial guidance system but may also use terrain...contour mapping for redundant midcourse guidance and a digital scene-matching sensor to permit an accuracy of 10 m.
• Development of the Chinese Beidou...pictures of the target as seen from different perspectives. DSMAC permits LACMs to achieve accuracies of about 1 m. Other (for example, thermal) sensors
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Comparing with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
Local staging and assessment of colon cancer with 1.5-T magnetic resonance imaging
Blake, Helena; Jeyadevan, Nelesh; Abulafi, Muti; Swift, Ian; Toomey, Paul; Brown, Gina
2016-01-01
Objective: The aim of this study was to assess the accuracy of 1.5-T MRI in the pre-operative local T and N staging of colon cancer and identification of extramural vascular invasion (EMVI). Methods: Between 2010 and 2012, 60 patients with adenocarcinoma of the colon were prospectively recruited at 2 centres. 55 patients were included for final analysis. Patients received pre-operative 1.5-T MRI with high-resolution T2 weighted, gadolinium-enhanced T1 weighted and diffusion-weighted images. These were blindly assessed by two expert radiologists. Accuracy of the T-stage, N-stage and EMVI assessment was evaluated using post-operative histology as the gold standard. Results: Results are reported for two readers. Identification of T3 disease demonstrated an accuracy of 71% and 51%, sensitivity of 74% and 42% and specificity of 74% and 83%. Identification of N1 disease demonstrated an accuracy of 57% for both readers, sensitivity of 26% and 35% and specificity of 81% and 74%. Identification of EMVI demonstrated an accuracy of 74% and 69%, sensitivity 63% and 26% and specificity 80% and 91%. Conclusion: 1.5-T MRI achieved a moderate accuracy in the local evaluation of colon cancer, but cannot be recommended to replace CT on the basis of this study. Advances in knowledge: This study confirms that MRI is a viable alternative to CT for the local assessment of colon cancer, but this study does not reproduce the very high accuracy reported in the only other study to assess the accuracy of MRI in colon cancer staging. PMID:27226219
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
Development and Performance of an Atomic Interferometer Gravity Gradiometer for Earth Science
NASA Astrophysics Data System (ADS)
Luthcke, S. B.; Saif, B.; Sugarbaker, A.; Rowlands, D. D.; Loomis, B.
2016-12-01
The wealth of multi-disciplinary science achieved from the GRACE mission, the commitment to GRACE Follow On (GRACE-FO), and Resolution 2 from the International Union of Geodesy and Geophysics (IUGG, 2015), highlight the importance of implementing a long-term satellite gravity observational constellation. Such a constellation would measure time variable gravity (TVG) with accuracies 50 times better than the first generation missions, at spatial and temporal resolutions to support regional and sub-basin scale multi-disciplinary science. Improved TVG measurements would achieve significant societal benefits including: forecasting of floods and droughts, improved estimates of climate impacts on water cycle and ice sheets, coastal vulnerability, land management, risk assessment of natural hazards, and water management. To meet the accuracy and resolution challenge of the next generation gravity observational system, NASA GSFC and AOSense are currently developing an Atomic Interferometer Gravity Gradiometer (AIGG). This technology is capable of achieving the desired accuracy and resolution with a single instrument, exploiting the advantages of the microgravity environment. The AIGG development is funded under NASA's Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP), and includes the design, build, and testing of a high-performance, single-tensor-component gravity gradiometer for TVG recovery from a satellite in low Earth orbit. The sensitivity per shot is 10^-5 Eötvös (E) with a flat spectral bandwidth from 0.3 mHz to 0.03 Hz. Numerical simulations show that a single space-based AIGG in a 326 km altitude polar orbit is capable of exceeding the IUGG target requirement for monthly TVG accuracy of 1 cm equivalent water height at 200 km resolution. We discuss the current status of the AIGG IIP development and estimated instrument performance, and we present results of simulated Earth TVG recovery of the space-based AIGG. We explore the accuracy, and spatial and temporal resolution of surface mass change observations from several space-based implementations of the AIGG instrument, including various orbit configurations and multi-satellite/multi-orbit configurations.
Manufacture of ultra high precision aerostatic bearings based on glass guide
NASA Astrophysics Data System (ADS)
Guo, Meng; Dai, Yifan; Peng, Xiaoqiang; Tie, Guipeng; Lai, Tao
2017-10-01
The aerostatic guides in traditional three-coordinate measuring machines and profilometers generally use metal or ceramic materials. Limited by the achievable guide processing precision, the measurement accuracy of these traditional instruments is at around the micrometer level. By selecting an optical material as the guide material, optical processing methods and laser interference measurement can be introduced into the traditional aerostatic bearing manufacturing field. Using large-aperture wavefront interference measuring equipment, the shape and position errors of the glass guide can be obtained with high accuracy, and the guide can then be processed to 0.1 μm or better with the aid of Magnetorheological Finishing (MRF), Computer Controlled Optical Surfacing (CCOS) and other modern optical processing methods, so the accuracy of aerostatic bearings can be fundamentally improved and ultra-high-precision coordinate measuring can be achieved. This paper introduces the fabrication and measurement process of a K9 glass guide with a 300 mm measuring range. Its working surface accuracy reaches 0.1 μm PV, the perpendicularity and parallelism errors between the two guide rail faces are better than 2 μm, and the straightness of the aerostatic bearings based on this K9 glass guide reaches 40 nm after error compensation.
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.
2015-12-01
Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy, with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.
3D Modelling with the Samsung Gear 360
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-02-01
The Samsung Gear 360 is a consumer grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 ActionDirector (the desktop software for post-processing) has a relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the two-lens camera, as well as their relative orientation, allowed new equirectangular projections to be generated, from which a significant improvement in geometric accuracy was achieved.
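To illustrate the kind of reprojection involved, the sketch below resamples one fisheye image into half of an equirectangular panorama under an ideal equidistant fisheye model. This is a simplified stand-in for the calibrated procedure developed in the paper: the equidistant model, the centred principal point and the nearest-neighbour resampling are assumptions, and a real Gear 360 workflow would use the calibrated intrinsics and blend the two lenses.

```python
import numpy as np

def equirectangular_from_fisheye(fisheye, f_px, out_w=2048, out_h=1024):
    """Resample one fisheye image into (half of) an equirectangular panorama.

    Assumes an ideal equidistant fisheye (r = f * theta) with the optical axis
    along +z and the principal point at the image centre.
    """
    h_in, w_in = fisheye.shape[:2]
    cx, cy = w_in / 2.0, h_in / 2.0

    # Longitude/latitude of every output pixel.
    lon = (np.arange(out_w) + 0.5) / out_w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit viewing direction, then angle from the optical axis and azimuth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)

    # Equidistant mapping into fisheye pixel coordinates (nearest neighbour).
    r = f_px * theta
    px = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w_in - 1)
    py = np.clip(np.round(cy - r * np.sin(phi)).astype(int), 0, h_in - 1)

    out = fisheye[py, px]
    out[theta > np.pi / 2.0] = 0   # rear hemisphere comes from the other lens
    return out
```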
High-Reflectivity Multi-Layer Coatings for the CLASP Sounding Rocket Project
NASA Technical Reports Server (NTRS)
Narukage, Noriyuki; Kano, Ryohei; Bando, Takamasa; Ishikawa, Ryoko; Kubo, Masahito; Katsukawa, Yukio; Ishikawa, Shin-nosuke; Kobiki, Toshihiko; Giono, Gabriel; Auchere, Frederic;
2015-01-01
We are planning an international rocket experiment, the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP, planned for 2015), to perform polarization spectroscopy of the solar Lyman-alpha line (Ly alpha line). The purpose of this experiment is to measure the magnetic field of the chromosphere-transition region directly, by detecting the linear polarization of the Ly alpha line with an accuracy of 0.1% and exploiting the Hanle effect. To achieve the polarimetric accuracy of approximately 0.1% required for CLASP, the instrument must have a high throughput. On the other hand, the Ly alpha line lies in the vacuum ultraviolet, where light is readily absorbed by materials. We therefore adopted a reflective optical design (only the waveplate is transmissive) and applied high-efficiency multilayer coatings to each mirror according to its role. The primary mirror of CLASP is about 30 cm in diameter, and about 30,000 J of heat, mainly as visible light, reaches the telescope during the roughly 5-minute observation. In addition, the total solar flux in visible light is overwhelmingly large, about 200,000 times that in the Ly alpha band, so rejection of visible light is essential both for thermal management and for achieving the 0.1% photometric accuracy. We therefore applied to the primary mirror a cold-mirror multilayer coating with high reflectivity (more than 50%) at the Ly alpha line and low reflectivity (less than 5%) in the visible. Meanwhile, the efficiency (throughput) of the reflective polarization analyzer required for the chromospheric magnetic field measurement was increased to about 2.5 times that of a conventional magnesium fluoride (MgF2) vacuum-ultraviolet polarizer (Rs = 22%). This element follows the design proposed by Bridou et al. (2011), a fused-silica substrate coated with thin films of MgF2 and SiO2. Measurements gave Rs = 54.5% and Rp = 0.3%, i.e. high efficiency and the ability to extract essentially only the s-polarized light. The other reflective optical elements (the secondary mirror, the diffraction grating and the camera mirror) received Al + MgF2 high-reflection coatings (reflectivity of about 80%). With these coatings, a total throughput of about 5% was achieved for the entire optical system (excluding the CCD quantum efficiency), which is high for a vacuum-ultraviolet instrument.
High-accuracy calculations of the rotation-vibration spectrum of H3+
NASA Astrophysics Data System (ADS)
Tennyson, Jonathan; Polyansky, Oleg L.; Zobov, Nikolai F.; Alijah, Alexander; Császár, Attila G.
2017-12-01
Calculation of the rotation-vibration spectrum of H3+, as well as of its deuterated isotopologues, with near-spectroscopic accuracy requires the development of sophisticated theoretical models, methods, and codes. The present paper reviews the state-of-the-art in these fields. Computation of rovibrational states on a given potential energy surface (PES) has now become standard for triatomic molecules, at least up to intermediate energies, due to developments achieved by the present authors and others. However, highly accurate Born-Oppenheimer energies leading to highly accurate PESs are not accessible even for this two-electron system using conventional electronic structure procedures (e.g. configuration-interaction or coupled-cluster techniques with extrapolation to the complete (atom-centered Gaussian) basis set limit). For this purpose, highly specialized techniques must be used, e.g. those employing explicitly correlated Gaussians and nonlinear parameter optimizations. It has also become evident that a very dense grid of ab initio points is required to obtain reliable representations of the computed points extending from the minimum to the asymptotic limits. Furthermore, adiabatic, relativistic, and quantum electrodynamic correction terms need to be considered to achieve near-spectroscopic accuracy during calculation of the rotation-vibration spectrum of H3+. The remaining and most intractable problem is then the treatment of the effects of non-adiabatic coupling on the rovibrational energies, which, in the worst cases, may lead to corrections on the order of several cm-1. A promising way of handling this difficulty is the further development of effective, motion- or even coordinate-dependent, masses and mass surfaces. Finally, the unresolved challenge of how to describe and elucidate the experimental pre-dissociation spectra of H3+ and its isotopologues is discussed.
Achieving Climate Change Absolute Accuracy in Orbit
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.;
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
Improving transmembrane protein consensus topology prediction using inter-helical interaction.
Wang, Han; Zhang, Chao; Shi, Xiaohu; Zhang, Li; Zhou, You
2012-11-01
Alpha helix transmembrane proteins (αTMPs) represent roughly 30% of all open reading frames (ORFs) in a typical genome and are involved in many critical biological processes. Due to their special physicochemical properties, it is hard to crystallize them and obtain high resolution structures experimentally; thus, sequence-based topology prediction is highly desirable for the study of transmembrane proteins (TMPs), both in structure prediction and in function prediction. Various model-based topology prediction methods have been developed, but the accuracy of these individual predictors remains poor due to the limitations of the methods or of the features they use. Consensus topology prediction therefore becomes practical for high accuracy applications by combining the advantages of the individual predictors. Here, based on the observation that inter-helical interactions are commonly found among the transmembrane helices (TMHs) and strongly indicate their existence, we present a novel consensus topology prediction method for αTMPs, CNTOP, which incorporates four top leading individual topology predictors and further improves the prediction accuracy by using predicted inter-helical interactions. The method achieved 87% prediction accuracy on a benchmark dataset and 78% accuracy on a non-redundant dataset composed of polytopic αTMPs. Our method achieves higher topology accuracy than any of the individual predictors and other consensus predictors, and at the same time the TMHs are predicted more accurately in their length and locations, with both false positives (FPs) and false negatives (FNs) decreasing dramatically. CNTOP is available at: http://ccst.jlu.edu.cn/JCSB/cntop/CNTOP.html. Copyright © 2012 Elsevier B.V. All rights reserved.
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make selection more cost-effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and to model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend building GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models, as long as the total effective population size does not exceed 50 individuals, in order to obtain prediction accuracy as high as that obtained in the present study. Limiting the number of markers to a few hundred would not negatively impact prediction accuracies, but these accuracies could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
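A hedged sketch of the ridge-regression (RR) prediction step with cross-validated accuracy reported as the correlation r between predicted and observed phenotypes is given below; the simulated genotype matrix, phenotypes, and regularization value are placeholders, not the study's data or pipeline.

```python
# Hedged sketch: ridge-regression genomic prediction with cross-validation.
# X: SNP genotype matrix (individuals x markers, coded 0/1/2), y: phenotypes.
# The simulated data and alpha value below are placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_ind, n_snp = 500, 2000
X = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
true_effects = rng.normal(0, 0.05, n_snp)
y = X @ true_effects + rng.normal(0, 1.0, n_ind)          # additive signal + noise

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=100.0).fit(X[train], y[train])
    r = np.corrcoef(model.predict(X[test]), y[test])[0, 1]  # prediction accuracy r
    accs.append(r)

print("mean CV prediction accuracy r =", np.mean(accs))
```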
Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope
NASA Astrophysics Data System (ADS)
Miyamura, Norihide
2017-09-01
For small satellite remote sensing missions, a large-aperture telescope of more than 400 mm is required to realize observations with a ground sample distance (GSD) of less than 1 m. However, it is difficult or expensive to realize such a large-aperture telescope using a monolithic primary mirror with high surface accuracy. A segmented mirror telescope should therefore be studied, especially for small satellite missions. Generally, not only a highly accurate optical surface but also highly accurate optical alignment is required for large-aperture telescopes. For segmented mirror telescopes, the alignment is more difficult and more important. In conventional systems, the optical alignment is adjusted before launch to achieve the desired imaging performance. However, it is difficult to adjust the alignment of large optics to high accuracy. Furthermore, the thermal environment in orbit and vibration in the launch vehicle cause misalignment of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an Earth-observing remote sensing sensor. An image-based adaptive optics system compensates for the misalignments and wavefront aberrations of the optical elements using the deformable mirror, through feedback of observed images. We propose a control algorithm for the deformable mirror of a segmented mirror telescope based on observed images. The numerical simulation results and experimental results show that the misalignment and wavefront aberration of the segmented mirror telescope are corrected and the image quality is improved.
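To illustrate the general idea of image-based correction (not the author's control algorithm), a toy sketch is shown below in which deformable-mirror actuator commands are adjusted by coordinate-wise hill climbing to maximize an image-quality metric; the "imaging system", the metric, and all numbers are stand-in assumptions.

```python
# Toy sketch of image-based correction: adjust deformable-mirror actuator
# commands to maximize an image-quality metric. The "imaging system" here is a
# stand-in function; in a real system each evaluation would be a new camera frame.
import numpy as np

rng = np.random.default_rng(1)
true_aberration = rng.normal(0, 1, 12)          # unknown misalignment (arbitrary units)

def image_metric(actuators):
    # Placeholder metric: higher when actuator commands cancel the aberration.
    residual = actuators + true_aberration
    return -np.sum(residual ** 2)

def hill_climb(n_act=12, step=0.5, iters=200):
    cmd = np.zeros(n_act)
    best = image_metric(cmd)
    for _ in range(iters):
        i = rng.integers(n_act)
        for delta in (+step, -step):
            trial = cmd.copy()
            trial[i] += delta
            score = image_metric(trial)
            if score > best:            # keep the change only if the image improves
                cmd, best = trial, score
    return cmd

cmd = hill_climb()
print("residual RMS after correction:", np.sqrt(np.mean((cmd + true_aberration) ** 2)))
```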
An experimental apparatus for diffraction-limited soft x-ray nano-focusing
NASA Astrophysics Data System (ADS)
Merthe, Daniel J.; Goldberg, Kenneth A.; Yashchuk, Valeriy V.; Yuan, Sheng; McKinney, Wayne R.; Celestre, Richard; Mochi, Iacopo; Macdougall, James; Morrison, Gregory Y.; Rakawa, Senajith B.; Anderson, Erik; Smith, Brian V.; Domning, Edward E.; Warwick, Tony; Padmore, Howard
2011-09-01
Realizing the experimental potential of high-brightness, next generation synchrotron and free-electron laser light sources requires the development of reflecting x-ray optics capable of wavefront preservation and high-resolution nano-focusing. At the Advanced Light Source (ALS) beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for diffraction-limited Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of wavefront-sensing tests with increasing accuracy and sensitivity, including scanning-slit Hartmann tests, grating-based lateral shearing interferometry, and quantitative knife-edge testing. We describe the original experimental techniques and alignment methodology that have enabled us to optimally set a bendable KB mirror to achieve a focused, FWHM spot size of 150 nm, with 1 nm (1.24 keV) photons at 3.7 mrad numerical aperture. The predictions of wavefront measurement are confirmed by the knife-edge testing. The side-profiled elliptically bent mirror used in these one-dimensional focusing experiments was originally designed for a much different glancing angle and conjugate distances. Visible-light long-trace profilometry was used to pre-align the mirror before installation at the beamline. This work demonstrates that high-accuracy, at-wavelength wavefront-slope feedback can be used to optimize the pitch, roll, and mirror-bending forces in situ, using procedures that are deterministic and repeatable.
Force monitoring transducers with more than 100,000 scale intervals
NASA Astrophysics Data System (ADS)
Stavrov, Vladimir; Shulev, Assen; Chakarov, Dimiter; Stavreva, Galina
2015-05-01
This paper presents the results obtained in the characterization of novel, high-performing force transducers to be employed in monitoring systems with very high accuracy. Each force transducer comprises a coherently designed mechanical transducer and a position microsensor with very high accuracy. The range of operation of the mechanical transducer has been optimized to fit the 500 μm travel range of the position microsensor. Correspondingly, the flexures' stiffness is chosen so that the maximum displacement is reached at a load force of 70 N. The position microsensor is a MEMS device comprising two rigid elements, an anchored one and an actuated one, connected via a monolithic micro-flexure. Additionally, the micro-flexure comprises two strain-detecting cantilevers with four sidewall-embedded piezoresistors connected in a Wheatstone bridge. The sensor provides a voltage signal with a sensitivity of about 240 μV/μm at a 1 V DC supply. The experimental set-up for measurement of the load curve of the force transducer has demonstrated an overall force resolution of about 0.6 mN. As a result, more than 100,000 scale intervals have been experimentally assessed. The present work contributes to the development of a common approach for accurate measurement of various physical quantities when they are transduced into a multi-dimensional displacement. Due to the demonstrated high accuracy, force transducers with piezoresistive MEMS sensors remove most of the constraints in force monitoring with ppm-accuracy.
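A short worked conversion using only the figures quoted in the abstract (240 μV/μm sensitivity, 500 μm range, 70 N full scale) is sketched below; the linear stiffness model and the 1 μV readout resolution are illustrative assumptions, not the authors' calibration, though the resulting number is consistent with the ~0.6 mN resolution quoted above.

```python
# Illustrative arithmetic only: convert bridge output voltage to displacement and
# force using the nominal figures quoted above. A linear load curve and a 1 uV
# readout resolution are assumptions.
SENSITIVITY_V_PER_M = 240e-6 / 1e-6      # 240 uV per um -> V/m (at 1 V supply)
FULL_SCALE_DISP_M   = 500e-6             # 500 um travel range
FULL_SCALE_FORCE_N  = 70.0               # 70 N at full-scale displacement

stiffness = FULL_SCALE_FORCE_N / FULL_SCALE_DISP_M        # N/m, assuming linearity

def voltage_to_force(v_bridge):
    displacement = v_bridge / SENSITIVITY_V_PER_M          # m
    return stiffness * displacement                        # N

# Force change corresponding to a 1 uV change in bridge output (~0.58 mN):
print("force per 1 uV of bridge output: %.3f mN" % (voltage_to_force(1e-6) * 1e3))
```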
NASA Astrophysics Data System (ADS)
Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard
2017-10-01
In the electrical and medical industries the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a multitude of small, narrow cold-rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for highly dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory which can describe the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. The rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison with the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing time of those two theories reveals that Alexander's theory cannot be computed within the 1 ms cycle time corresponding to the 1 kHz sample rate of the real-time computer.
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing, which has limited measurement performance and a comparatively large error, cannot meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold earlier by repeatedly amplifying the received signal. Then, the timing information is sampled and the timing points are fitted using algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received signal from the lidar is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by repeated amplification of the received signal and parameter fitting, achieving a timing accuracy of 4.63 ps.
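The basic leading-edge discrimination step can be illustrated with a short sketch: on a sampled return pulse, the threshold-crossing time is found by linear interpolation between samples, and amplifying the same pulse makes the crossing "walk" earlier; the Gaussian pulse shape, sampling rate, and threshold below are illustrative assumptions, not the paper's hardware values.

```python
# Illustration of leading-edge timing discrimination on a sampled return pulse:
# the threshold-crossing time is found by linear interpolation between samples.
# The Gaussian pulse shape and all numbers are illustrative assumptions.
import numpy as np

FS = 5e9                                  # 5 GS/s sampling rate (assumed)
t = np.arange(0, 40e-9, 1 / FS)
pulse = np.exp(-((t - 20e-9) ** 2) / (2 * (3e-9) ** 2))   # unit-amplitude return

def leading_edge_time(signal, threshold):
    idx = np.argmax(signal >= threshold)                   # first sample above threshold
    t0, t1 = t[idx - 1], t[idx]
    s0, s1 = signal[idx - 1], signal[idx]
    return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)   # linear interpolation

threshold = 0.2
for gain in (1.0, 2.0, 4.0):
    tc = leading_edge_time(gain * pulse, threshold)
    print(f"gain {gain:>3}: crossing at {tc * 1e9:.3f} ns")   # crossing walks earlier
```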
Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1991-01-01
A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.
Research of autonomous celestial navigation based on new measurement model of stellar refraction
NASA Astrophysics Data System (ADS)
Yu, Cong; Tian, Hong; Zhang, Hui; Xu, Bo
2014-09-01
Autonomous celestial navigation based on stellar refraction has attracted widespread attention for its high accuracy and full autonomy. In this navigation method, establishing an accurate stellar refraction measurement model is fundamental and key to achieving high-accuracy navigation. However, the existing measurement models are limited by the uncertainty of atmospheric parameters. Temperature, pressure, and other factors that affect stellar refraction within the Earth's stratosphere are studied, and a model of atmospheric variation with altitude is derived on the basis of standard atmospheric data. Furthermore, a novel measurement model of stellar refraction over a continuous range of altitudes from 20 km to 50 km is produced by modifying the fixed-altitude (25 km) measurement model, a state equation including orbit perturbations is established, and a simulation is performed using an improved Extended Kalman Filter. The results show that the new model improves the navigation accuracy and has practical application value.
Motion direction estimation based on active RFID with changing environment
NASA Astrophysics Data System (ADS)
Jie, Wu; Minghua, Zhu; Wei, He
2018-05-01
A gate system is used to estimate the motion direction of RFID tag carriers as they pass through the gate. Normally, it is difficult to achieve and maintain high accuracy in estimating the motion direction of RFID tags because the received signal strength of a tag changes sharply with the changing electromagnetic environment. In this paper, a method of motion direction estimation for RFID tags is presented. To improve estimation accuracy, a machine learning algorithm is used to fit the data received by readers deployed inside and outside the gate, respectively. The fitted data are then sampled to obtain a standard vector, which is compared with template vectors to obtain the motion direction estimate. The corresponding template vector is then updated according to the surrounding environment. We conducted simulations and an implementation of the proposed method, and the results show that it can achieve and maintain high accuracy under constantly changing environmental conditions.
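A simplified sketch of the template-matching and template-update steps is shown below: a sampled RSSI vector is compared to stored direction templates by Euclidean distance, and the matched template is adapted with an exponential moving average; the vector lengths, template values, and update rule are assumptions, not the paper's implementation.

```python
# Sketch of template matching for direction estimation from sampled RSSI vectors.
# Template shapes, the distance metric, and the moving-average update are assumptions.
import numpy as np

templates = {
    "entering": np.array([-60, -55, -50, -45, -40], dtype=float),   # rising at inside reader
    "leaving":  np.array([-40, -45, -50, -55, -60], dtype=float),   # falling at inside reader
}

def estimate_direction(sample, alpha=0.1):
    """sample: fitted-and-resampled RSSI vector for one gate passage."""
    direction = min(templates, key=lambda k: np.linalg.norm(sample - templates[k]))
    # Adapt the matched template to the current environment (exponential update).
    templates[direction] = (1 - alpha) * templates[direction] + alpha * sample
    return direction

print(estimate_direction(np.array([-58, -54, -49, -44, -41], dtype=float)))  # -> entering
```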
Ma, Xin; Guo, Jing; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
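The incremental feature selection (IFS) loop can be sketched as follows: given a ranked feature list, train a classifier on the top-k features for increasing k and keep the k with the best cross-validated accuracy. A true mRMR ranking is assumed to be supplied; a simple mutual-information ranking stands in here so the example runs end to end, and the synthetic data are not protein features.

```python
# Sketch of incremental feature selection (IFS) over a ranked feature list with a
# random forest. A real mRMR ranking is assumed to be available; mutual information
# is used here only as a stand-in ranking.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]   # stand-in for mRMR

best_k, best_acc = 0, 0.0
for k in range(1, len(ranking) + 1):
    cols = ranking[:k]
    acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                          X[:, cols], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best feature subset size: {best_k}, CV accuracy: {best_acc:.3f}")
```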
Small arms mini-fire control system: fiber-optic barrel deflection sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajic, Slobodan; Datskos, Panos G
Traditionally the methods to increase firearms accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement, but the problem was reduced enough to allow a noticeable accuracy enhancement. This process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could ultimately be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter, weapons. We discuss our development of a barrel reference sensor system that is designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Osher, Stanley; Jerome, Joseph
1991-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou
2015-09-21
Freeform surfaces are promising candidates for next-generation optics; however, they need high form accuracy for excellent performance. A closed loop of fabrication, measurement, and compensation is necessary to improve the form accuracy. It is difficult to perform off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured in off-machine mode with nanometric accuracy, and the on-machine probe establishes the accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors to avoid extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
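A hedged sketch of the one-versus-all classification step is shown below; the synthetic two-dimensional features merely stand in for ODE state variables sampled at a given elapsed time, and the variability parameter v is used only to illustrate how accuracy degrades as variability grows. This is not the paper's immune-response model.

```python
# Sketch of one-versus-all logistic regression for discrete outcome prediction.
# Synthetic features stand in for ODE state variables at some elapsed time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 600
outcomes = rng.integers(0, 3, n_patients)               # 3 discrete steady states (toy)
centers = np.array([[0.2, 0.1], [0.6, 0.5], [0.9, 0.2]])
v = 0.15                                                # patient-to-patient variability
features = centers[outcomes] + rng.normal(0, v, (n_patients, 2))

X_tr, X_te, y_tr, y_te = train_test_split(features, outcomes, random_state=0)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))      # degrades as v grows
```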
OPTOTRAK: at last a system with resolution of 10 μm (Abstract Only)
NASA Astrophysics Data System (ADS)
Crouch, David G.; Kehl, L.; Krist, J. R.
1990-08-01
Northern Digital's first active marker point measurement system, the WATSMART, was begun in 1983. Development ended in 1985 with the manufacture of a highly accurate system, which achieved 0.15 to 0.25 mm accuracies in three dimensions within a 0.75-meter cube. Further improvements in accuracy were rendered meaningless, and a great obstacle to usability was presented, by a surplus-light problem somewhat incorrectly known as "the reflection problem". In 1985, development of a new system to overcome "the reflection problem" was begun. The advantages and disadvantages involved in the use of active versus passive markers were considered. The implications of using a CCD device as the imaging element in a precision measurement device were analyzed, as were device characteristics such as dynamic range, peak readout noise and charge transfer efficiency. A new type of lens was also designed. The end result, in 1988, was the first OPTOTRAK system. This system produces three-dimensional data in real time and is not at all affected by reflections. Accuracies of 30 microns have been achieved in a 1-meter volume. Each two-dimensional camera actually has two separate, one-dimensional CCD elements and two separate anamorphic lenses. It can locate a point from 1-8 meters away with a resolution of 1 part in 64,000 and an accuracy of 1 part in 20,000 over the field of view.
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-01-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective—behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user’s distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user’s authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets. PMID:28573214
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication.
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-10-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective-behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user's distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user's authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets.
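The segment-token idea can be sketched briefly: a gesture time series is partitioned into fixed-length segments, each segment is quantized into a short string token, and a new gesture is scored by the average genuine probability of its tokens estimated from enrollment data. The segmentation, quantization, and probability estimate below are simplified assumptions, not the authors' exact SegAuth pipeline.

```python
# Simplified sketch of segment-token scoring in the spirit of SegAuth. The
# segmentation, quantization, and probability estimate are assumptions.
import numpy as np
from collections import Counter

def tokenize(series, seg_len=8, n_bins=4):
    """Cut the series into segments and map each to a token like 'ABBC...'."""
    bins = np.linspace(series.min(), series.max() + 1e-9, n_bins + 1)
    tokens = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        symbols = np.digitize(seg, bins) - 1              # quantize samples to bins
        tokens.append("".join("ABCD"[s] for s in symbols))
    return tokens

def genuine_score(probe, enrolled_tokens):
    counts = Counter(enrolled_tokens)
    total = sum(counts.values())
    probs = [counts[t] / total for t in tokenize(probe)]  # unseen tokens score 0
    return float(np.mean(probs)) if probs else 0.0

rng = np.random.default_rng(0)
enrolled = [t for _ in range(10)
            for t in tokenize(np.sin(np.linspace(0, 6, 64)) + rng.normal(0, 0.05, 64))]
print("genuine probe:", genuine_score(np.sin(np.linspace(0, 6, 64)), enrolled))
print("impostor probe:", genuine_score(rng.normal(0, 1, 64), enrolled))
```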
NASA Technical Reports Server (NTRS)
Freedman, Adam; Hensley, Scott; Chapin, Elaine; Kroger, Peter; Hussain, Mushtaq; Allred, Bruce
1999-01-01
GeoSAR is an airborne, interferometric Synthetic Aperture Radar (IFSAR) system for terrain mapping, currently under development by a consortium including NASA's Jet Propulsion Laboratory (JPL), Calgis, Inc., a California mapping sciences company, and the California Department of Conservation (CalDOC), with funding provided by the U.S. Army Corps of Engineers Topographic Engineering Center (TEC) and the U.S. Defense Advanced Research Projects Agency (DARPA). IFSAR data processing requires high-accuracy platform position and attitude knowledge. On GeoSAR, these are provided by one or two Honeywell Embedded GPS Inertial Navigation Units (EGI) and an Ashtech Z12 GPS receiver. The EGIs provide real-time high-accuracy attitude and moderate-accuracy position data, while the Ashtech data, post-processed differentially with data from a nearby ground station using Ashtech PNAV software, provide high-accuracy differential GPS positions. These data are optimally combined using a Kalman filter within the GeoSAR motion measurement software, and the resultant position and orientation information are used to process the dual-frequency (X-band and P-band) radar data to generate high-accuracy, high-resolution terrain imagery and digital elevation models (DEMs). GeoSAR requirements specify sub-meter planimetric and vertical accuracies for the resultant DEMs. To achieve this, platform positioning errors well below one meter are needed. The goal of GeoSAR is to obtain 25 cm or better 3-D positions from the GPS systems on board the aircraft. By imaging a set of known point-target corner-cube reflectors, the GeoSAR system can be calibrated. This calibration process yields the true position of the aircraft with an uncertainty of 20-50 cm, and thus allows an independent assessment of the accuracy of our GPS-based positioning systems. We will present an overview of the GeoSAR motion measurement system, focusing on the use of GPS and the blending of position data from the various systems. We will present the results of our calibration studies that relate to the accuracy of the GPS positioning. We will discuss the effects these positioning errors have on the resultant DEM products and imagery.
Comparison of modal identification techniques using a hybrid-data approach
NASA Technical Reports Server (NTRS)
Pappa, Richard S.
1986-01-01
Modal identification of seemingly simple structures, such as the generic truss, is often surprisingly difficult in practice due to high modal density, nonlinearities, and other nonideal factors. Under these circumstances, different data analysis techniques can generate substantially different results. The initial application of a new hybrid-data method for studying the performance characteristics of various identification techniques with such data is summarized. This approach offers new pieces of information for the system identification researcher. First, it allows actual experimental data to be used in the studies, while maintaining the traditional advantage of using simulated data. That is, the identification technique under study is forced to cope with the complexities of real data, yet the performance can be measured unquestionably for the artificial modes because their true parameters are known. Secondly, the accuracy achieved for the true structural modes in the data can be estimated from the accuracy achieved for the artificial modes if the results show similar characteristics. This similarity occurred in the study, for example, for a weak structural mode near 56 Hz. It may even be possible--eventually--to use the error information from the artificial modes to improve the identification accuracy for the structural modes.
NASA Astrophysics Data System (ADS)
Jiang, Jiamin; Younis, Rami M.
2017-06-01
The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion, leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintaining robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to monoslope methods, in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second-order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured systems are carried out to demonstrate the efficiency and robustness of the numerical model.
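As a one-dimensional illustration of the underlying MUSCL idea (the paper's multislope scheme on unstructured triangular meshes is considerably more involved), the sketch below reconstructs limited linear slopes from cell averages to obtain second-order face values without creating new extrema near a front; the minmod limiter and the sample data are assumptions for illustration only.

```python
# One-dimensional illustration of MUSCL-type limited reconstruction: limited
# linear slopes give second-order face values without overshoot near fronts.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def face_values(u):
    """Return left states at faces i+1/2 from cell averages u (uniform grid)."""
    du_left = np.diff(u, prepend=u[0])        # u_i - u_{i-1}
    du_right = np.diff(u, append=u[-1])       # u_{i+1} - u_i
    slope = minmod(du_left, du_right)         # limited slope per cell
    return u + 0.5 * slope                    # extrapolate to the right face

u = np.array([1.0, 1.0, 1.0, 0.9, 0.2, 0.0, 0.0])   # a sharp displacement front
print(face_values(u))                                 # no overshoot above 1 or below 0
```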
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
Mohammed, Ameer; Zamani, Majid; Bayford, Richard; Demosthenous, Andreas
2017-12-01
In Parkinson's disease (PD), on-demand deep brain stimulation is required so that stimulation is regulated to reduce side effects resulting from continuous stimulation and PD exacerbation due to untimely stimulation. Also, the progressive nature of PD necessitates the use of dynamic detection schemes that can track the nonlinearities in PD. This paper proposes the use of dynamic feature extraction and dynamic pattern classification to achieve dynamic PD detection taking into account the demand for high accuracy, low computation, and real-time detection. The dynamic feature extraction and dynamic pattern classification are selected by evaluating a subset of feature extraction, dimensionality reduction, and classification algorithms that have been used in brain-machine interfaces. A novel dimensionality reduction technique, the maximum ratio method (MRM) is proposed, which provides the most efficient performance. In terms of accuracy and complexity for hardware implementation, a combination having discrete wavelet transform for feature extraction, MRM for dimensionality reduction, and dynamic k-nearest neighbor for classification was chosen as the most efficient. It achieves a classification accuracy of 99.29%, an F1-score of 97.90%, and a choice probability of 99.86%.
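A hedged sketch of the feature-extraction and classification stages (discrete wavelet transform features followed by k-nearest-neighbour classification) is given below; the authors' maximum ratio method (MRM) for dimensionality reduction is not reproduced, and the synthetic signals merely stand in for neural recordings.

```python
# Sketch of the DWT feature-extraction + k-NN classification stages. The authors'
# maximum ratio method (MRM) is not reproduced; the synthetic signals below only
# stand in for field-potential recordings.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_signal(beta_power):
    t = np.arange(0, 1.0, 1 / 512.0)
    return beta_power * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1.0, t.size)

def dwt_features(sig, wavelet="db4", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])    # sub-band energies as features

signals = [make_signal(2.0) for _ in range(100)] + [make_signal(0.2) for _ in range(100)]
labels = np.array([1] * 100 + [0] * 100)                 # 1 = "stimulation needed" (toy)
X = np.vstack([dwt_features(s) for s in signals])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```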
Chen, Hu; Yang, Xu; Chen, Litong; Wang, Yong; Sun, Yuchun
2016-01-01
The objective was to establish and evaluate a method for manufacturing custom trays for edentulous jaws using computer-aided design and fused deposition modeling (FDM) technologies. A digital method for designing custom trays for edentulous jaws was established. The tissue surface data of ten standard mandibular edentulous plaster models were obtained using a 3D scanner and used to design the digital custom trays in reverse engineering software. The designed trays were printed by a 3D FDM printing device. Another ten hand-made custom trays were produced as controls. The three-dimensional surface data of the models and custom trays were scanned to evaluate the accuracy of the reserved impression space, and the differences between digitally made trays and hand-made trays were analyzed. The digitally made custom trays achieved a good match with the mandibular models, showing higher accuracy than the hand-made ones. There was no significant difference in the reserved space between the different models and their matched digitally made trays. With 3D scanning, CAD and FDM technology, an efficient method of custom tray production was established, which achieved high reproducibility and accuracy. PMID:26763620
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex 6 ML605 evaluation board with the XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing as the near field counterpart of remote sensing offers a broad variety of applications. Imaging spectroscopy in general and translational laboratory imaging spectroscopy in particular can be utilized for a variety of different research topics. Geoscientific applications require a precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module. Each module consists of several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides highly qualitative results, offers broad applicability through its generic design and might be the first one of its kind to be published. A high radiometric accuracy is achieved by the incorporation of the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is higher than 1 μpixel. The critical spectral accuracy was relatively estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
Bhat, Somanath; Polanowski, Andrea M; Double, Mike C; Jarman, Simon N; Emslie, Kerry R
2012-01-01
Recent advances in nanofluidic technologies have enabled the use of Integrated Fluidic Circuits (IFCs) for high-throughput Single Nucleotide Polymorphism (SNP) genotyping (GT). In this study, we implemented and validated a relatively low cost nanofluidic system for SNP-GT with and without Specific Target Amplification (STA). As proof of principle, we first validated the effect of input DNA copy number on genotype call rate using well characterised, digital PCR (dPCR) quantified human genomic DNA samples and then implemented the validated method to genotype 45 SNPs in the humpback whale, Megaptera novaeangliae, nuclear genome. When STA was not incorporated, for a homozygous human DNA sample, reaction chambers containing, on average 9 to 97 copies, showed 100% call rate and accuracy. Below 9 copies, the call rate decreased, and at one copy it was 40%. For a heterozygous human DNA sample, the call rate decreased from 100% to 21% when predicted copies per reaction chamber decreased from 38 copies to one copy. The tightness of genotype clusters on a scatter plot also decreased. In contrast, when the same samples were subjected to STA prior to genotyping a call rate and a call accuracy of 100% were achieved. Our results demonstrate that low input DNA copy number affects the quality of data generated, in particular for a heterozygous sample. Similar to human genomic DNA, a call rate and a call accuracy of 100% was achieved with whale genomic DNA samples following multiplex STA using either 15 or 45 SNP-GT assays. These calls were 100% concordant with their true genotypes determined by an independent method, suggesting that the nanofluidic system is a reliable platform for executing call rates with high accuracy and concordance in genomic sequences derived from biological tissue.
Rule-driven defect detection in CT images of hardwood logs
Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt
2000-01-01
This paper deals with automated detection and identification of internal defects in hardwood logs using computed tomography (CT) images. We have developed a system that employs artificial neural networks to perform tentative classification of logs on a pixel-by-pixel basis. This approach achieves a high level of classification accuracy for several hardwood species (...
NASA Technical Reports Server (NTRS)
Narukage, Noriyuki; Kano, Ryohei; Bando, Takamasa; Ishikawa, Ryoko; Kubo, Masahito; Katsukawa, Yukio; Ishikawa, Shinnosuke; Hara, Hiroshi; Suematsu, Yoshinori; Giono, Gabriel;
2015-01-01
We are planning an international rocket experiment, the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP, planned for 2015), to perform spectro-polarimetric observations of the solar Lyman-alpha (Ly(alpha)) line. The purpose of this experiment is to measure the magnetic field of the chromosphere-transition region directly by detecting the linear polarization of the Ly(alpha) line with a high accuracy of 0.1%, using the Hanle effect. To achieve the polarimetric accuracy of approximately 0.1% required for CLASP, the instrument must have a high throughput. On the other hand, the Ly(alpha) line lies in the vacuum ultraviolet, where light is readily absorbed by materials. We therefore adopted an all-reflective optical system (only the wave plate is transmissive), with each mirror given a high-efficiency multilayer coating suited to its role. The primary mirror of CLASP is about 30 cm in diameter, and about 30,000 J of heat, mainly as visible light, reaches the telescope during the roughly 5 minutes of observation time. In addition, the total solar flux in visible light is overwhelmingly large, about 200,000 times that in the Ly(alpha) wavelength region. Therefore, both for thermal management and to achieve the 0.1% photometric accuracy of the telescope, rejection of visible light is essential. We therefore applied to the primary mirror a multilayer "cold mirror" coating that has a high reflectivity (>50%) at the Ly(alpha) line while keeping the visible-light reflectance low (<5%). Furthermore, the efficiency (throughput) of the polarization analyzer required for the chromospheric magnetic field measurement was increased to about 2.5 times that of a conventional vacuum-ultraviolet analyzer made of magnesium fluoride (MgF2), a material long known for vacuum-ultraviolet optics (Rs = 22%), yielding a high-efficiency reflective polarization analyzer. This device, proposed by Bridou et al. (2011), consists of a fused silica substrate coated with thin films of MgF2 and SiO2. As a result of our measurements, we achieved Rs = 54.5% and Rp = 0.3%, so that the element extracts essentially only the s-polarized light, and with high efficiency. The other reflective optical elements (the secondary mirror, the diffraction grating, and the collector mirror) were given Al + MgF2 high-reflection coatings (reflectance of about 80%). With these elements, the entire optical system achieves a total throughput on the order of 5% (not including the CCD quantum efficiency), which is high for a vacuum-ultraviolet instrument.
Fast-PPP assessment in European and equatorial region near the solar cycle maximum
NASA Astrophysics Data System (ADS)
Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume
2014-05-01
The Fast Precise Point Positioning (Fast-PPP) is a technique to provide quick high-accuracy navigation with ambiguity fixing capability, thanks to an accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focussed on user-domain performance. Recently, a mature evolution of the technique consists of a dual-service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of 1 Total Electron Content Unit (TECU) or better, improving on the widely accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. The Fast-PPP user-domain positioning benefits from such precise ionospheric modelling. The convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes, not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement in ionospheric modelling translates directly into the accuracy of single-frequency mass-market users, achieving 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence levels (sigmas), such high-accuracy navigation is protected with safety integrity bounds.
Evaluation of a Home Biomonitoring Autonomous Mobile Robot.
Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei
2016-01-01
Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenario. In this study we performed a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference for the activity recognition results. It was shown that the accuracy is not consistent across activities; that is, the mobile robot could achieve a high success rate for some activities but a poor success rate for others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of activity recognition, due to the variability of home living daily activities and their transitional processes. The possibility of improving recognition accuracy was also shown.
Two high accuracy digital integrators for Rogowski current transducers.
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot provide a stable and accurate output because the DC component in the original signal accumulates, which leads to output DC drift. Unknown initial conditions can also result in a DC offset of the integrator output. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to the Al-Alaoui integrator to change its DC response and obtain an ideal frequency response. Thanks to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in the steady-state response, the transient response, and under changing temperature conditions.
Two high accuracy digital integrators for Rogowski current transducers
NASA Astrophysics Data System (ADS)
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot provide a stable and accurate output because the DC component in the original signal accumulates, which leads to output DC drift. Unknown initial conditions can also result in a DC offset of the integrator output. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to the Al-Alaoui integrator to change its DC response and obtain an ideal frequency response. Thanks to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in the steady-state response, the transient response, and under changing temperature conditions.
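A minimal sketch of a drift-suppressed digital integrator in the spirit of the abstract is given below: the Al-Alaoui rule (a 3:1 blend of the rectangular and trapezoidal integration rules) with an attenuation coefficient slightly below one on the feedback term so that DC offsets do not accumulate. The coefficient value is an assumption, and the paper's additional PID feedback loop is omitted.

```python
# Sketch of an Al-Alaoui digital integrator with a leaky (attenuated) feedback
# term to suppress DC drift. The attenuation value is an assumption; the PID
# feedback loop described in the abstract is not implemented here.
import numpy as np

def al_alaoui_integrator(x, fs, attenuation=0.9995):
    T = 1.0 / fs
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        # y[n] = a*y[n-1] + (7T/8)*x[n] + (T/8)*x[n-1]
        y[n] = attenuation * y[n - 1] + (7 * T / 8) * x[n] + (T / 8) * x[n - 1]
    return y

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
# Rogowski-like signal: derivative of a 50 Hz current plus a small DC offset.
di_dt = 2 * np.pi * 50 * np.cos(2 * np.pi * 50 * t) + 0.05
i_est = al_alaoui_integrator(di_dt, fs)
print("peak of reconstructed 50 Hz current:", np.max(np.abs(i_est[len(t) // 2:])))
```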
Solving Nonlinear Euler Equations with Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2005-01-01
A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is of the order of 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier-phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented, which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE horizontal and 5.5 cm RMSE vertical. Such accuracies are sufficient to meet the requirements of the majority of airborne mapping applications.
"Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines
ERIC Educational Resources Information Center
Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken
2011-01-01
Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a…
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up the training and testing time. Due to the lack of labeled data, we transferred the convolutional layers of ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.
Detection of Dendritic Spines Using Wavelet Packet Entropy and Fuzzy Support Vector Machine.
Wang, Shuihua; Li, Yang; Shao, Ying; Cattani, Carlo; Zhang, Yudong; Du, Sidan
2017-01-01
The morphology of dendritic spines is highly correlated with neuron function, so analysis of spine morphology is valuable for research on dendritic spines. However, spine types have typically been labeled manually for statistical analysis. In this work, we propose an approach based on the combination of wavelet contour analysis for backbone detection, wavelet packet entropy, and a fuzzy support vector machine for spine classification. The experiments show that this approach is promising. The average detection accuracy reaches 97.3% for "Mushroom", 94.6% for "Stubby", and 97.2% for "Thin". Copyright © Bentham Science Publishers.
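The following minimal Python sketch illustrates the general idea of a wavelet packet entropy feature feeding a support vector machine. It uses PyWavelets and a plain SVC as a stand-in for the authors' fuzzy SVM; the signals, labels and parameter choices are hypothetical.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_packet_entropy(signal, wavelet="db4", level=3):
    """Shannon entropy of the energy distribution over the terminal
    wavelet-packet nodes of a 1-D signal."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data**2) for node in wp.get_level(level)])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical feature matrix: one entropy value per contour profile,
# classified with a plain SVM standing in for the paper's fuzzy SVM.
X = np.array([[wavelet_packet_entropy(np.random.randn(256))] for _ in range(40)])
y = np.random.randint(0, 3, size=40)   # 0 = mushroom, 1 = stubby, 2 = thin
clf = SVC(kernel="rbf").fit(X, y)
```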
A high-voltage supply used on miniaturized RLG
NASA Astrophysics Data System (ADS)
Miao, Zhifei; Fan, Mingming; Wang, Yuepeng; Yin, Yan; Wang, Dongmei
2016-01-01
A high-voltage power supply for a ring laser gyro (RLG) is proposed in this paper. The supply uses a single 15 V DC input, and a flyback topology is adopted in the main circuit. The output reaches 3.3 kV in order to ignite the RLG. A PFM control method is adopted to realize rapid switching between the high-voltage ignition state and the maintaining state. The resonant chip L6565 is used to achieve zero-voltage switching (ZVS), so losses are reduced and the power efficiency exceeds 80%. A dedicated circuit in the control section ensures symmetry of the currents in the two RLG arms. The measured current accuracy is better than 5‰ and the current symmetry of the two arms reaches 99.2%.
Kim, Eun Young; Lee, Min Young; Kim, Se Hyun; Ha, Kyooseob; Kim, Kwang Pyo; Ahn, Yong Min
2017-06-02
Major depressive disorder (MDD) is a systemic and multifactorial disorder that involves abnormalities in multiple biochemical pathways and the autonomic nervous system. This study applied a machine-learning method to classify MDD and control groups by incorporating data from serum proteomic analysis and heart rate variability (HRV) analysis for the identification of novel peripheral biomarkers. The study subjects consisted of 25 drug-free female MDD patients and 25 age- and sex-matched healthy controls. First, quantitative serum proteome profiles were analyzed by liquid chromatography-tandem mass spectrometry using pooled serum samples from 10 patients and 10 controls. Next, candidate proteins were quantified with multiple reaction monitoring (MRM) in 50 subjects. We also analyzed 22 linear and nonlinear HRV parameters in 50 subjects. Finally, we identified a combined biomarker panel consisting of proteins and HRV indexes using a support vector machine with recursive feature elimination. A separation between MDD and control groups was achieved using five parameters (apolipoprotein B, group-specific component, ceruloplasmin, RMSSD, and SampEn) at 80.1% classification accuracy. A combination of HRV and proteomic data achieved better classification accuracy. A high classification accuracy can be achieved by combining multimodal information from heart rate dynamics and serum proteomics in MDD. Our approach can be helpful for accurate clinical diagnosis of MDD. Further studies using larger, independent cohorts are needed to verify the role of these candidate biomarkers for MDD diagnosis. Copyright © 2017 Elsevier Inc. All rights reserved.
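A support vector machine with recursive feature elimination of the kind described can be sketched with scikit-learn as below; the synthetic feature matrix, the number of retained features and the cross-validation setup are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: rows are subjects, columns are MRM protein levels
# plus HRV indexes; labels are MDD (1) versus control (0).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 30))
y = np.repeat([0, 1], 25)

# Linear SVM with recursive feature elimination down to five features,
# evaluated by cross-validated classification accuracy.
model = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=5),
    SVC(kernel="linear"),
)
print(cross_val_score(model, X, y, cv=5).mean())
```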
Turbine blade tip clearance measurements using skewed dual optical beams of tip timing
NASA Astrophysics Data System (ADS)
Ye, De-chao; Duan, Fa-jie; Guo, Hao-tian; Li, Yangzong; Wang, Kai
2011-12-01
Optimization and active control of the clearance between the turbine blades and the engine case is identified, especially in the aerospace community, as a key technology to increase engine efficiency, reduce fuel consumption and emissions, and increase service life. However, the tip clearance varies under different operating conditions, so a reliable non-contact, online detection system is essential and is ultimately used to close the tip clearance control loop. This paper describes a fiber-optic clearance measuring system that applies skewed dual optical beams to detect the traverse time of passing blades. The two beams were specially designed with an outward angle of 18 degrees, and the beam spot diameters are less than 100 μm within the 0-4 mm working range to achieve a high signal-to-noise ratio and high sensitivity. Theoretical analysis shows that the measurement accuracy is not compromised by degradation of signal intensity caused by environmental conditions such as light source instability, contamination and blade tip imperfections. Experimental tests achieved a high resolution of 10 μm in the rotational speed range 2000-18000 RPM and a measurement accuracy of 15 μm, indicating that the system is capable of providing accurate and reliable data for active clearance control (ACC).
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)
2000-01-01
In this report both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).
NASA Astrophysics Data System (ADS)
Rutkowski, Lucile; Masłowski, Piotr; Johansson, Alexandra C.; Khodabakhsh, Amir; Foltynowicz, Aleksandra
2018-01-01
Broadband precision spectroscopy is indispensable for providing high fidelity molecular parameters for spectroscopic databases. We have recently shown that mechanical Fourier transform spectrometers based on optical frequency combs can measure broadband high-resolution molecular spectra undistorted by the instrumental line shape (ILS) and with a highly precise frequency scale provided by the comb. The accurate measurement of the power of the comb modes interacting with the molecular sample was achieved by acquiring single-burst interferograms with nominal resolution matched to the comb mode spacing. Here we describe in detail the experimental and numerical steps needed to achieve sub-nominal resolution and retrieve ILS-free molecular spectra, i.e. with ILS-induced distortion below the noise level. We investigate the accuracy of the transition line centers retrieved by fitting to the absorption lines measured using this method. We verify the performance by measuring an ILS-free cavity-enhanced low-pressure spectrum of the 3ν1 + ν3 band of CO2 around 1575 nm with line widths narrower than the nominal resolution. We observe and quantify collisional narrowing of absorption line shape, for the first time with a comb-based spectroscopic technique. Thus retrieval of line shape parameters with accuracy not limited by the Voigt profile is now possible for entire absorption bands acquired simultaneously.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to complete the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, thereby achieving a high-accuracy and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
Multiple-reflection time-of-flight mass spectrometry for in situ applications
NASA Astrophysics Data System (ADS)
Dickel, T.; Plaß, W. R.; Lang, J.; Ebert, J.; Geissel, H.; Haettner, E.; Jesch, C.; Lippert, W.; Petrick, M.; Scheidenberger, C.; Yavor, M. I.
2013-12-01
Multiple-reflection time-of-flight mass spectrometers (MR-TOF-MS) have recently been installed at different low-energy radioactive ion beam facilities. They are used as isobar separators with high ion capacity and as mass spectrometers with high mass resolving power and accuracy for short-lived nuclei. Furthermore, MR-TOF-MS have a huge potential for applications in other fields, such as chemistry, biology, medicine, space science, and homeland security. The development, commissioning and results of an MR-TOF-MS are presented, which serves as proof-of-principle to show that very high mass resolving powers (~10^5) can be achieved in a compact device (length ~30 cm). Based on this work, an MR-TOF-MS for in situ application has been designed. For the first time, this device combines very high mass resolving power (>10^5), mobility, and an atmospheric pressure inlet in one instrument. It will enable in situ measurements without sample preparation at very high mass accuracy. Envisaged applications of this mobile MR-TOF-MS are discussed.
NASA Astrophysics Data System (ADS)
Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing
2017-09-01
The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids. This helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electrical field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for single anisotropic abnormal bodies, anisotropic surrounding rock, and single anisotropic abnormal body embedded in an anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.
Error-proofing test system of industrial components based on image processing
NASA Astrophysics Data System (ADS)
Huang, Ying; Huang, Tao
2018-05-01
Due to the improvement of modern industrial standards and accuracy requirements, conventional manual testing fails to satisfy the test standards of enterprises, so digital image processing techniques should be utilized to gather and analyze information on the surface of industrial components in order to achieve the purpose of testing. To test the installation of automotive engine parts, this paper employs a camera to capture images of the components. After these images are preprocessed, including denoising, an image processing algorithm relying on flood fill is used to test the installation of the components. The results show that this system has very high test accuracy.
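For reference, a minimal 4-connected flood fill of the kind the system relies on might look like the following Python sketch; the example image, the seed pixel and the idea of comparing the filled pixel count against a tolerance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def flood_fill(mask, seed):
    """4-connected flood fill on a binary image; return the pixels reachable
    from the seed through foreground (non-zero) pixels."""
    h, w = mask.shape
    visited = np.zeros_like(mask, dtype=bool)
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        if r < 0 or r >= h or c < 0 or c >= w:
            continue
        if visited[r, c] or mask[r, c] == 0:
            continue
        visited[r, c] = True
        region.append((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Example: a thresholded component image; the filled area (pixel count)
# could then be compared against tolerances to flag a mis-installed part.
img = np.array([[0, 1, 1], [0, 1, 0], [1, 1, 0]])
print(len(flood_fill(img, (0, 1))))
```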
Image enhancement and advanced information extraction techniques for ERTS-1 data
NASA Technical Reports Server (NTRS)
Malila, W. A. (Principal Investigator); Nalepka, R. F.; Sarno, J. E.
1975-01-01
The author has identified the following significant results. It was demonstrated and concluded that: (1) the atmosphere has significant effects on ERTS MSS data which can seriously degrade recognition performance; (2) the application of selected signature extension techniques serves to reduce the deleterious effects of both the atmosphere and changing ground conditions on recognition performance; and (3) a proportion estimation algorithm for overcoming problems in acreage estimation accuracy resulting from the coarse spatial resolution of the ERTS MSS was able to significantly improve acreage estimation accuracy over that achievable by conventional techniques, especially for high contrast targets such as lakes and ponds.
Identification of serial number on bank card using recurrent neural network
NASA Astrophysics Data System (ADS)
Liu, Li; Huang, Linlin; Xue, Jian
2018-04-01
Identification of the serial number on a bank card has many applications. Due to the different number printing modes, complex backgrounds, distortion in shape, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes such that the RNN with direction planes as input can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.
NASA Astrophysics Data System (ADS)
Li, Haohan; Wu, Yong; Zeng, Xiaojun; Wang, Xiaohan; Zhao, Daiqing
2017-06-01
Thermophysical properties, such as density, specific heat, viscosity and thermal conductivity, vary sharply near the critical point. Evaluating these properties of hydrocarbons accurately is crucial to further research on fuel systems. Comparisons were made with a calculation program based on four widely used equations of state (EoS), and the results indicated that calculations based on the Peng-Robinson (PR) equation of state achieve the best prediction accuracy among the four. Owing to its small computational cost and high accuracy, the evaluation method proposed in this paper can be implemented in practical applications for the design of fuel systems.
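As a reminder of the form of the model favoured here, a minimal Python sketch of the Peng-Robinson pressure for a pure component is given below; the critical constants in the example are approximate values for an n-decane-like fluid and are only illustrative.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol*K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Pressure from the Peng-Robinson EoS for a pure component.
    T in K, molar volume v in m^3/mol, critical constants Tc (K) and Pc (Pa),
    acentric factor omega."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Example: approximate n-decane-like critical constants, for illustration only.
print(peng_robinson_pressure(T=600.0, v=1.0e-3, Tc=617.7, Pc=2.11e6, omega=0.49))
```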
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher dimension feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to promote the accuracy and the robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
NASA Astrophysics Data System (ADS)
Hassan, Mahmoud A.
2004-02-01
Digital elevation models (DEMs) are important tools in the planning, design and maintenance of mobile communication networks. This research paper proposes a method for generating high accuracy DEMs based on SPOT satellite 1A stereo pair images, ground control points (GCPs) and Erdas OrthoBASE Pro image processing software. DEMs with 0.2911 m mean error were achieved for the hilly and heavily populated city of Amman. The generated DEM was used to design a mobile communication network, resulting in a minimum number of radio base transceiver stations, a maximum number of covered regions and less than 2% dead zones.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which is suitable for large scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1:2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared with reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS error of elevation for bare ground areas is 0.45 m, while for buildings the accuracy almost reaches 1 m.
NASA Astrophysics Data System (ADS)
Zhang, Xu; Li, Yun; Chen, Xiang; Li, Guanglin; Zev Rymer, William; Zhou, Ping
2013-08-01
Objective. This study investigates the effect of the involuntary motor activity of paretic-spastic muscles on the classification of surface electromyography (EMG) signals. Approach. Two data collection sessions were designed for 8 stroke subjects to voluntarily perform 11 functional movements using their affected forearm and hand at relatively slow and fast speeds. For each stroke subject, the degree of involuntary motor activity present in the voluntary surface EMG recordings was qualitatively described from such slow and fast experimental protocols. Myoelectric pattern recognition analysis was performed using different combinations of voluntary surface EMG data recorded from the slow and fast sessions. Main results. Across all tested stroke subjects, our results revealed that when involuntary surface EMG is absent or present in both the training and testing datasets, high accuracies (>96%, >98%, respectively, averaged over all the subjects) can be achieved in the classification of different movements using surface EMG signals from paretic muscles. When involuntary surface EMG was solely involved in either the training or testing datasets, the classification accuracies were dramatically reduced (<89%, <85%, respectively). However, if both the training and testing datasets contained EMG signals with the presence and absence of involuntary EMG interference, high accuracies were still achieved (>97%). Significance. The findings of this study can be used to guide the appropriate design and implementation of myoelectric pattern recognition based systems or devices toward promoting robot-aided therapy for stroke rehabilitation.
Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control
NASA Astrophysics Data System (ADS)
Cledat, E.; Cucci, D. A.
2017-08-01
The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with the current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need of dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where GNSS constellation is ideal, visually tracks the second drone and provides an indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS restricted environments without the need of ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions and we perform a mapping accuracy prediction based on a simulation scenario.
Accuracy and Calibration of High Explosive Thermodynamic Equations of State
NASA Astrophysics Data System (ADS)
Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack
2010-10-01
The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.
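For context, the baseline JWL products equation of state that the JWLB form extends is commonly written as

\[ p(V, E) = A\left(1 - \frac{\omega}{R_1 V}\right) e^{-R_1 V} + B\left(1 - \frac{\omega}{R_2 V}\right) e^{-R_2 V} + \frac{\omega E}{V}, \]

where V is the relative volume, E the internal energy per unit initial volume, and A, B, R_1, R_2 and ω are calibration constants. The JWLB and extent-of-reaction generalizations referred to in this abstract add further terms and an aluminum reaction-fraction dependence; the expression above is quoted only as the standard starting point, not as the authors' full formulation.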
High Power Laser Processing Of Materials
NASA Astrophysics Data System (ADS)
Martyr, D. R.; Holt, T.
1987-09-01
The first practical demonstration of a laser device was in 1960 and in the following years, the high power carbon dioxide laser has matured as an industrial machine tool. Modern carbon dioxide gas lasers can be used for cutting, welding, heat treatment, drilling, scribing and marking. Since their invention over 25 years ago they are now becoming recognised as highly reliable devices capable of achieving huge savings in production costs in many situations. This paper introduces the basic laser processing techniques of cutting, welding and heat treatment as they apply to the most common engineering materials. Typical processing speeds achieved with a wide range of laser powers are reported. Accuracies achievable and fit-up tolerances required are presented. Methods of integrating lasers with machine tools are described and their suitability in a wide range of manufacturing industries is described by reference to recent installations. Examples from small batch manufacturing, high volume production using dedicated laser welding equipment, and high volume manufacturing using 'flexible' automated laser welding equipment are described. Future applications of laser processing are suggested by reference to current process developments.
Accuracy evaluation of 3D lidar data from small UAV
NASA Astrophysics Data System (ADS)
Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav
2015-10-01
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks
NASA Technical Reports Server (NTRS)
Kotov, Dmitry Vladimirovich; Yee, Helen M C.
2013-01-01
The performance of the filter scheme with improved dissipation control has been demonstrated for different flow types. The scheme with a locally determined dissipation control parameter is shown to obtain more accurate results than its counterparts with a global or constant parameter. At the same time, no additional tuning is needed to achieve high accuracy of the method when using the local-parameter technique. However, further improvement of the method might be needed for even more complex and/or extreme flows.
Speed, Dissipation, and Accuracy in Early T-cell Recognition
NASA Astrophysics Data System (ADS)
Cui, Wenping; Mehta, Pankaj
In the immune system, T cells can perform self-foreign discrimination with great foreign ligand sensitivity, high decision speed and low energy cost. There is significant evidence that T cells achieve such performance with a mechanism known as kinetic proofreading (KPR). KPR-based mechanisms actively consume energy to increase the specificity of T-cell recognition. An important theoretical question arises: how to understand the trade-offs and fundamental limits on accuracy, speed, and dissipation (energy consumption). Recent theoretical work suggests that it is always possible to reduce the error of KPR-based mechanisms by waiting longer and/or consuming more energy. Surprisingly, we find that this is not the case and that there actually exists an optimal point in the speed-energy-accuracy plane for KPR and its generalizations. This work was supported by NIH R35 and Simons MMLS Grant.
NASA Astrophysics Data System (ADS)
Call, Mitchell; Schulz, Kai G.; Carvalho, Matheus C.; Santos, Isaac R.; Maher, Damien T.
2017-03-01
A new approach to autonomously determine concentrations of dissolved inorganic carbon (DIC) and its carbon stable isotope ratio (δ13C-DIC) at high temporal resolution is presented. The simple method requires no customised design. Instead it uses two commercially available instruments currently used in aquatic carbon research. An inorganic carbon analyser utilising non-dispersive infrared detection (NDIR) is coupled to a Cavity Ring-down Spectrometer (CRDS) to determine DIC and δ13C-DIC based on the liberated CO2 from acidified aliquots of water. Using a small sample volume of 2 mL, the precision and accuracy of the new method was comparable to standard isotope ratio mass spectrometry (IRMS) methods. The system achieved a sampling resolution of 16 min, with a DIC precision of ±1.5 to 2 µmol kg-1 and δ13C-DIC precision of ±0.14 ‰ for concentrations spanning 1000 to 3600 µmol kg-1. Accuracy of 0.1 ± 0.06 ‰ for δ13C-DIC based on DIC concentrations ranging from 2000 to 2230 µmol kg-1 was achieved during a laboratory-based algal bloom experiment. The high precision data that can be autonomously obtained by the system should enable complex carbonate system questions to be explored in aquatic sciences using high-temporal-resolution observations.
van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed
2017-01-01
Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.
Adequacy of Using a Three-Item Questionnaire to Determine Zygosity in Chinese Young Twins.
Ho, Connie Suk-Han; Zheng, Mo; Chow, Bonnie Wing-Yin; Wong, Simpson W L; Lim, Cadmon K P; Waye, Mary M Y
2017-03-01
The present study examined the adequacy of a three-item parent questionnaire in determining the zygosity of young Chinese twins and whether there was any association between parent response accuracy and some demographic variables. The sample consisted of 334 pairs of same-sex Chinese twins aged from 3 to 11 years. Three scoring methods, namely the summed score, logistic regression, and decision tree, were employed to evaluate parent response accuracy of twin zygosity based on single nucleotide polymorphism (SNP) information. The results showed that all three methods achieved high level of accuracy ranging from 91 to 93 % which was comparable to the accuracy rates in previous Chinese twin studies. Correlation results also showed that the higher the parents' education level or the family income was, the more likely parents were able to tell correctly that their twins are identical or fraternal. The present findings confirmed the validity of using a three-item parent questionnaire to determine twin zygosity in a Chinese school-aged twin sample.
NASA Astrophysics Data System (ADS)
Dou, P.
2017-12-01
Guangzhou has experienced a rapid urbanization period described as "small change in three years and big change in five years" since the reform of China, resulting in significant land use/cover change (LUC). To overcome the disadvantages of single classifiers for remote sensing image classification, a multiple classifier system (MCS) is proposed to improve classification quality. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier. With the proposed MCS, land use/cover (LUC) on Landsat images from 1987 to 2015 was obtained, and the LUC maps were used for three watersheds (Shijing River, Chebei Stream, and Shahe Stream) to estimate the impact of urbanization on flooding. The results show that with the high accuracy LUC, the uncertainty in flood simulations is reduced effectively (for Shijing River, Chebei Stream, and Shahe Stream, the uncertainty was reduced by 15.5%, 17.3% and 19.8%, respectively).
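A multiple classifier system in the spirit described (decision trees, neural networks and support vector machines combined by voting) can be sketched with scikit-learn as follows; the synthetic features, the specific base learners and the soft-voting rule are assumptions for illustration, not the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-pixel (or per-object) spectral features and
# land use/cover labels.
X, y = make_classification(n_samples=500, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

# Combine three different learners; soft voting averages their class
# probabilities, which typically beats any single member.
mcs = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("mlp", MLPClassifier(max_iter=1000)),
                ("svm", SVC(probability=True))],
    voting="soft")
print(cross_val_score(mcs, X, y, cv=5).mean())
```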
Variability of Diabetes Alert Dog Accuracy in a Real-World Setting
Gonder-Frederick, Linda A.; Grabman, Jesse H.; Shepard, Jaclyn A.; Tripathi, Anand V.; Ducar, Dallas M.; McElgunn, Zachary R.
2017-01-01
Background: Diabetes alert dogs (DADs) are growing in popularity as an alternative method of glucose monitoring for individuals with type 1 diabetes (T1D). Only a few empirical studies have assessed DAD accuracy, with inconsistent results. The present study examined DAD accuracy and variability in performance in real-world conditions using a convenience sample of owner-report diaries. Method: Eighteen DAD owners (44.4% female; 77.8% youth) with T1D completed diaries of DAD alerts during the first year after placement. Diary entries included daily BG readings and DAD alerts. For each DAD, percentage hits (alert with BG ≤ 5.0 or ≥ 11.1 mmol/L; ≤90 or ≥200 mg/dl), percentage misses (no alert with BG out of range), and percentage false alarms (alert with BG in range) were computed. Sensitivity, specificity, positive likelihood ratio (PLR), and true positive rates were also calculated. Results: Overall comparison of DAD Hits to Misses yielded significantly more Hits for both low and high BG. Total sensitivity was 57.0%, with increased sensitivity to low BG (59.2%) compared to high BG (56.1%). Total specificity was 49.3% and PLR = 1.12. However, high variability in accuracy was observed across DADs, with low BG sensitivity ranging from 33% to 100%. Number of DADs achieving ≥ 60%, 65% and 70% true positive rates was 71%, 50% and 44%, respectively. Conclusions: DADs may be able to detect out-of-range BG, but variability across DADs is evident. Larger trials are needed to further assess DAD accuracy and to identify factors influencing the complexity of DAD accuracy in BG detection. PMID:28627305
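The accuracy statistics reported here follow directly from the hit, miss, false-alarm and correct-rejection counts; a minimal Python sketch of the computation is shown below with purely illustrative counts (not the study's data).

```python
def dad_accuracy(hits, misses, false_alarms, correct_rejections):
    """Sensitivity, specificity and positive likelihood ratio from counts of
    alerts versus out-of-range / in-range blood glucose readings."""
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    plr = sensitivity / (1.0 - specificity)
    return sensitivity, specificity, plr

# Illustrative counts only; chosen so the ratios resemble the reported values.
print(dad_accuracy(hits=57, misses=43, false_alarms=51, correct_rejections=49))
```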
The TOPEX satellite option study
NASA Technical Reports Server (NTRS)
1982-01-01
The applicability of an existing spacecraft bus and subsystems to the requirements of ocean circulation measurements are assessed. The operational meteorological satellite family TIROS and DMSP are recommended. These programs utilize a common bus to satisfy their Earth observation missions. Note that although the instrument complements were different, the pointing accuracies were different, and, initially, the boosters were different, a high degree of commonality was achieved.
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
2004-01-01
A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.
Enabling Technologies for High-accuracy Multiangle Spectropolarimetric Imaging from Space
NASA Technical Reports Server (NTRS)
Diner, David J.; Macenka, Steven A.; Seshadri, Suresh; Bruce, Carl E.; Jau, Bruno; Chipman, Russell A.; Cairns, Brian; Keller, Christoph; Foo, Leslie D.
2004-01-01
Satellite remote sensing plays a major role in measuring the optical and radiative properties, environmental impact, and spatial and temporal distribution of tropospheric aerosols. In this paper, we envision a new generation of spaceborne imager that integrates the unique strengths of multispectral, multiangle, and polarimetric approaches, thereby achieving better accuracies in aerosol optical depth and particle properties than can be achieved using any one method by itself. Design goals include spectral coverage from the near-UV to the shortwave infrared; global coverage within a few days; intensity and polarimetric imaging simultaneously at multiple view angles; kilometer to sub-kilometer spatial resolution; and measurement of the degree of linear polarization for a subset of the spectral complement with an uncertainty of 0.5% or less. The latter requirement is technically the most challenging. In particular, an approach for dealing with inter-detector gain variations is essential to avoid false polarization signals. We propose using rapid modulation of the input polarization state to overcome this problem, using a high-speed variable retarder in the camera design. Technologies for rapid retardance modulation include mechanically rotating retarders, liquid crystals, and photoelastic modulators (PEMs). We conclude that the latter are the most suitable.
Limits on the Accuracy of Linking. Research Report. ETS RR-10-22
ERIC Educational Resources Information Center
Haberman, Shelby J.
2010-01-01
Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…
The Credibility of Children's Testimony: Can Children Control the Accuracy of Their Memory Reports?
ERIC Educational Resources Information Center
Koriat, Asher; Goldsmith, Morris; Schneider, Wolfgang; Nakash-Dura, Michal
2001-01-01
Three experiments examined children's strategic regulation of memory accuracy. Found that younger (7 to 9 years) and older (10 to 12 years) children could enhance the accuracy of their testimony by screening out wrong answers under free-report conditions. Findings suggest a developmental trend in level of memory accuracy actually achieved.…
X-ray focusing with efficient high-NA multilayer Laue lenses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bajt, Sasa; Prasciolu, Mauro; Fleckenstein, Holger
Multilayer Laue lenses are volume diffraction elements for the efficient focusing of X-rays. With a new manufacturing technique that we introduced, it is possible to fabricate lenses of sufficiently high numerical aperture (NA) to achieve focal spot sizes below 10 nm. The alternating layers of the materials that form the lens must span a broad range of thicknesses on the nanometer scale to achieve the necessary range of X-ray deflection angles required to achieve a high NA. This poses a challenge to both the accuracy of the deposition process and the control of the materials properties, which often vary with layer thickness. We introduced a new pair of materials, tungsten carbide and silicon carbide, to prepare layered structures with smooth and sharp interfaces and with no material phase transitions that hampered the manufacture of previous lenses. Using a pair of multilayer Laue lenses (MLLs) fabricated from this system, we achieved a two-dimensional focus of 8.4 × 6.8 nm² at a photon energy of 16.3 keV with high diffraction efficiency and demonstrated scanning-based imaging of samples with a resolution well below 10 nm. The high NA also allowed projection holographic imaging with strong phase contrast over a large range of magnifications. Furthermore, an error analysis indicates the possibility of achieving 1 nm focusing.
X-ray focusing with efficient high-NA multilayer Laue lenses
Bajt, Sasa; Prasciolu, Mauro; Fleckenstein, Holger; ...
2018-03-23
Multilayer Laue lenses are volume diffraction elements for the efficient focusing of X-rays. With a new manufacturing technique that we introduced, it is possible to fabricate lenses of sufficiently high numerical aperture (NA) to achieve focal spot sizes below 10 nm. The alternating layers of the materials that form the lens must span a broad range of thicknesses on the nanometer scale to achieve the necessary range of X-ray deflection angles required to achieve a high NA. This poses a challenge to both the accuracy of the deposition process and the control of the materials properties, which often vary with layer thickness. We introduced a new pair of materials, tungsten carbide and silicon carbide, to prepare layered structures with smooth and sharp interfaces and with no material phase transitions that hampered the manufacture of previous lenses. Using a pair of multilayer Laue lenses (MLLs) fabricated from this system, we achieved a two-dimensional focus of 8.4 × 6.8 nm² at a photon energy of 16.3 keV with high diffraction efficiency and demonstrated scanning-based imaging of samples with a resolution well below 10 nm. The high NA also allowed projection holographic imaging with strong phase contrast over a large range of magnifications. Furthermore, an error analysis indicates the possibility of achieving 1 nm focusing.
Park, Jinhee; Javier, Rios Jesus; Moon, Taesup; Kim, Youngwook
2016-11-24
Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenge. Namely, not only is the radar cross section of a human on water small, but the micro-Doppler signatures are much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer learned DCNN, which takes a DCNN pre-trained on a RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, which is a significant performance boost.
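Transfer learning of the kind described (a network pre-trained on RGB images, fine-tuned on spectrograms) can be sketched in PyTorch as follows; the choice of ResNet-18, the input size and the dummy batch are assumptions for illustration, and the weights argument assumes torchvision 0.13 or later.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on RGB images and fine-tune it on
# micro-Doppler spectrograms rendered as 3-channel images.
num_classes = 5                      # five aquatic activities
net = models.resnet18(weights="IMAGENET1K_V1")
net.fc = nn.Linear(net.fc.in_features, num_classes)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of spectrogram tensors.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))
loss = criterion(net(x), y)
loss.backward()
optimizer.step()
```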
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the first citrus fruit producer in Europe and the fourth in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (campaign 2006-2007). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough, and allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, whose processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
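A simplified sketch of the underlying idea (cross-validation to pick the penalty, then bootstrap-based stability counts to prune unstable markers) is given below using scikit-learn; the data, the 100 resamples and the 0.8 stability threshold are illustrative assumptions, not the PROMISE algorithm as published.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LassoCV
from sklearn.utils import resample

# Synthetic high-dimensional data: 200 candidate markers, 10 truly informative.
X, y = make_regression(n_samples=100, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)

# Step 1: cross-validation picks the penalty that maximizes prediction accuracy.
alpha_cv = LassoCV(cv=5).fit(X, y).alpha_

# Step 2: stability check around that penalty, refitting on bootstrap resamples
# and keeping markers selected in most resamples to trim the false positives
# that CV alone tends to admit.
counts = np.zeros(X.shape[1])
for seed in range(100):
    Xb, yb = resample(X, y, random_state=seed)
    coef = Lasso(alpha=alpha_cv).fit(Xb, yb).coef_
    counts += (coef != 0)
stable = np.where(counts / 100 >= 0.8)[0]
print(stable)
```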
A new optical head tracing reflected light for nanoprofiler
NASA Astrophysics Data System (ADS)
Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.
2014-09-01
High accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray Free Electron Laser) sources. In order to make such high accuracy optical elements, it is necessary to measure aspherical mirrors with high accuracy, but no measurement method has yet satisfied these demands simultaneously. Therefore, we developed a nanoprofiler that can directly measure arbitrary surface figures with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of each measurement point using a laser and a quadrant photodiode (QPD) as the detector; the three-dimensional figure is then calculated from the normal vectors and their coordinates. In order to measure the figure, the nanoprofiler numerically controls its five motion axes so that the reflected light enters the center of the QPD; the control is based on the sample's design formula. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD output, and compared the results with those from a Fizeau interferometer. The profiles were consistent within the range of the system error. The deflection method, however, cannot neglect the error caused by the spatial non-uniformity of the QPD sensitivity. To improve on this, we devised a zero method, which moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.
A gimbaled low noise momentum wheel
NASA Technical Reports Server (NTRS)
Bichler, U.; Eckardt, T.
1993-01-01
The bus actuators are the heart and at the same time the Achilles' heel of accurate spacecraft stabilization systems, because both their performance and their perturbations can have a deciding influence on the achievable pointing accuracy of the mission. The main task of the attitude actuators, which are mostly wheels, is the generation of useful torques with sufficiently high bandwidth, resolution and accuracy. This is because the bandwidth of the whole attitude control loop and its disturbance rejection capability is dependent upon these factors. These useful torques shall be provided, without - as far as possible - parasitic noise like unbalance forces and torques and harmonics. This is because such variable frequency perturbations excite structural resonances which in turn disturb the operation of sensors and scientific instruments. High accuracy spacecraft will further require bus actuators for the three linear degrees of freedom (DOF) to damp structural oscillations excited by various sources. These actuators have to cover the dynamic range of these disturbances. Another interesting feature, which is not necessarily related to low noise performance, is a gimballing capability which enables, in a certain angular range, a three axis attitude control with only one wheel. The herein presented Teldix MWX, a five degree of freedom Magnetic Bearing Momentum Wheel, incorporates all the above required features. It is ideally suited to support, as a gyroscopic actuator in the attitude control system, all High Pointing Accuracy and Vibration Sensitive space missions.
NASA Astrophysics Data System (ADS)
Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when the visibility of Galileo and BeiDou satellites is substantially long within an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to a position accuracy and convergence time deviating from satisfactory geodetic thresholds. By contrast, respective multi-GNSS PPP solutions not only show improvement, but also lead to geodetic level accuracies even in 30° elevation cut-off. Finally, the GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases, which are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. Analogous improvement is obtained in respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.
Pembleton, Luke W; Inch, Courtney; Baillie, Rebecca C; Drayton, Michelle C; Thakur, Preeti; Ogaji, Yvonne O; Spangenberg, German C; Forster, John W; Daetwyler, Hans D; Cogan, Noel O I
2018-06-02
Exploitation of data from a ryegrass breeding program has enabled rapid development and implementation of genomic selection for sward-based biomass yield with a twofold-to-threefold increase in genetic gain. Genomic selection, which uses genome-wide sequence polymorphism data and quantitative genetics techniques to predict plant performance, has large potential for the improvement of pasture plants. Major factors influencing the accuracy of genomic selection include the size of reference populations, trait heritability values and the genetic diversity of breeding populations. Global diversity of the important forage species perennial ryegrass is high, and would therefore require a large reference population in order to achieve moderate accuracies of genomic selection. However, diversity of germplasm within a breeding program is likely to be lower. In addition, de novo construction and characterisation of reference populations constitute a logistically complex process. Consequently, historical phenotypic records for seasonal biomass yield and heading date over an 18-year period within a commercial perennial ryegrass breeding program have been accessed, and target populations have been characterised with a high-density transcriptome-based genotyping-by-sequencing assay. The ability to predict observed phenotypic performance in each successive year was assessed by using all synthetic populations from previous years as a reference population. Moderate and high accuracies were achieved for the two traits, respectively, consistent with broad-sense heritability values. The present study represents the first demonstration and validation of genomic selection for seasonal biomass yield within a diverse commercial breeding program across multiple years. These results, supported by previous simulation studies, demonstrate the ability to predict sward-based phenotypic performance early in the process of individual plant selection, so shortening the breeding cycle, increasing the rate of genetic gain and allowing rapid adoption in ryegrass improvement programs.
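As a hedged illustration of the genomic prediction step described above (a minimal ridge-regression sketch on synthetic marker data, not the study's actual model, software or data), performance can be predicted from genome-wide markers as follows:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_plants, n_markers = 500, 2000                              # hypothetical sizes
X = rng.integers(0, 3, size=(n_plants, n_markers)).astype(float)   # 0/1/2 allele dosages
true_effects = rng.normal(0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0, 1.0, n_plants)          # phenotype = genetic value + noise

# Reference (training) population vs. target (validation) population
X_ref, X_tgt, y_ref, y_tgt = train_test_split(X, y, test_size=0.3, random_state=0)

# Ridge regression as a simple stand-in for a GBLUP-style genomic prediction model
model = Ridge(alpha=n_markers * 0.5)                         # heavy shrinkage, markers >> individuals
model.fit(X_ref, y_ref)
gebv = model.predict(X_tgt)                                  # genomic estimated breeding values

accuracy, _ = pearsonr(gebv, y_tgt)                          # predictive ability
print(f"predictive ability r = {accuracy:.2f}")
```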
NASA Astrophysics Data System (ADS)
Li, Xiaohui; Yang, Sibo; Fan, Rongwei; Yu, Xin; Chen, Deying
2018-06-01
In this paper, discrimination of soft tissues using laser-induced breakdown spectroscopy (LIBS) in combination with multivariate statistical methods is presented. Fresh pork fat, skin, ham, loin and tenderloin muscle tissues are manually cut into slices and ablated using a 1064 nm pulsed Nd:YAG laser. Discrimination analyses between fat, skin and muscle tissues, and further between highly similar ham, loin and tenderloin muscle tissues, are performed based on the LIBS spectra in combination with multivariate statistical methods, including principal component analysis (PCA), k nearest neighbors (kNN) classification, and support vector machine (SVM) classification. Performances of the discrimination models, including accuracy, sensitivity and specificity, are evaluated using 10-fold cross validation. The classification models are optimized to achieve best discrimination performances. The fat, skin and muscle tissues can be definitely discriminated using both kNN and SVM classifiers, with accuracy of over 99.83%, sensitivity of over 0.995 and specificity of over 0.998. The highly similar ham, loin and tenderloin muscle tissues can also be discriminated with acceptable performances. The best performances are achieved with SVM classifier using Gaussian kernel function, with accuracy of 76.84%, sensitivity of over 0.742 and specificity of over 0.869. The results show that the LIBS technique assisted with multivariate statistical methods could be a powerful tool for online discrimination of soft tissues, even for tissues of high similarity, such as muscles from different parts of the animal body. This technique could be used for discrimination of tissues suffering minor clinical changes, thus may advance the diagnosis of early lesions and abnormalities.
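To make the multivariate workflow concrete, the sketch below shows one way a PCA + SVM classifier with 10-fold cross-validation could be assembled in scikit-learn; the spectra, class labels and hyperparameter values are placeholders, not those of the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((150, 1024))          # hypothetical LIBS spectra (150 shots x 1024 channels)
y = rng.integers(0, 3, 150)          # hypothetical labels: 0 = fat, 1 = skin, 2 = muscle

# PCA for dimensionality reduction, Gaussian-kernel SVM for classification
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),
                    SVC(kernel="rbf", C=10.0, gamma="scale"))

# 10-fold cross-validated accuracy, mirroring the evaluation protocol described above
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```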
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
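The distinction between the Gauss-Markov and Vasicek stochastic models mentioned above can be illustrated with a short discrete-time propagation sketch; the time constants, mean level and noise strength below are assumptions for illustration, not ODTK's actual settings.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, tau, sigma = 60.0, 86400.0, 0.01   # step [s], correlation time [s], steady-state std
theta = 1.4                            # Vasicek long-run mean (e.g. nominal SRP coefficient)
phi = np.exp(-dt / tau)                # state transition factor
q = sigma * np.sqrt(1.0 - phi**2)      # discrete process noise std

n = 1440
gm = np.empty(n); va = np.empty(n)
gm[0], va[0] = 0.0, theta
for k in range(n - 1):
    # First-order Gauss-Markov: reverts toward zero
    gm[k + 1] = phi * gm[k] + q * rng.normal()
    # Vasicek / Ornstein-Uhlenbeck: reverts toward a non-zero mean theta
    va[k + 1] = theta + phi * (va[k] - theta) + q * rng.normal()

print(f"Gauss-Markov long-run mean ~ {gm[-200:].mean():.3f}")
print(f"Vasicek long-run mean ~ {va[-200:].mean():.3f}")
```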
Border-oriented post-processing refinement on detected vehicle bounding box for ADAS
NASA Astrophysics Data System (ADS)
Chen, Xinyuan; Zhang, Zhaoning; Li, Minne; Li, Dongsheng
2018-04-01
We investigate a new approach for improving the localization accuracy of detected vehicles for object detection in advanced driver assistance systems (ADAS). Specifically, we implement a bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The bounding box refinement is achieved by individually adjusting each border of the detected bounding box to its target location using a regression method. We use HOG features, which perform well on the edge detection of vehicles, to train the regressor, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show that we can achieve up to 6% improvement over the YOLOv2 and Faster R-CNN object detectors at an IoU threshold of 0.8. Also, the proposed refinement framework is computationally light, allowing one bounding box to be processed within a few milliseconds on a CPU. Further, this refinement method can be added to any object detector, especially those with high speed but lower accuracy.
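The border-wise regression idea can be sketched as follows; the strip geometry, HOG parameters and training targets are illustrative assumptions rather than the authors' exact configuration, and only the left border is shown.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

def strip_hog(strip):
    """HOG features of an image strip cut around one bounding-box border."""
    return hog(strip, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Hypothetical training set: fixed-size strips (64x32 px) around the *left* border of
# detected boxes, paired with the signed pixel offset to the ground-truth left border.
strips = [rng.random((64, 32)) for _ in range(200)]
offsets = rng.normal(0.0, 4.0, 200)
F = np.array([strip_hog(s) for s in strips])

left_regressor = Ridge(alpha=1.0).fit(F, offsets)   # one such regressor per border

def refine_left_border(strip, x_left):
    """Shift the detector's left border by the regressed offset."""
    dx = left_regressor.predict(strip_hog(strip).reshape(1, -1))[0]
    return int(round(x_left + dx))

print(refine_left_border(rng.random((64, 32)), x_left=120))
```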
Nearest neighbor 3D segmentation with context features
NASA Astrophysics Data System (ADS)
Hristova, Evelin; Schulz, Heinrich; Brosch, Tom; Heinrich, Mattias P.; Nickisch, Hannes
2018-03-01
Automated and fast multi-label segmentation of medical images is challenging and clinically important. This paper builds upon a supervised machine learning framework that uses training data sets with dense organ annotations and vantage point trees to classify voxels in unseen images based on similarity of binary feature vectors extracted from the data. Without explicit model knowledge, the algorithm is applicable to different modalities and organs, and achieves high accuracy. The method is successfully tested on 70 abdominal CT and 42 pelvic MR images. With respect to ground truth, an average Dice overlap score of 0.76 for the CT segmentation of liver, spleen and kidneys is achieved. The mean score for the MR delineation of bladder, bones, prostate and rectum is 0.65. Additionally, we benchmark several variations of the main components of the method and reduce the computation time by up to 47% without significant loss of accuracy. The segmentation results are - for a nearest neighbor method - surprisingly accurate, robust as well as data and time efficient.
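For reference, the Dice overlap score used above to report segmentation quality can be computed as in this short sketch (synthetic masks, not the study's data):

```python
import numpy as np

def dice_score(seg, gt):
    """Dice overlap between a binary segmentation and ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: two overlapping "organ" masks on a 3-D voxel grid
gt = np.zeros((32, 32, 32), dtype=bool);  gt[8:24, 8:24, 8:24] = True
seg = np.zeros_like(gt);                  seg[10:26, 8:24, 8:24] = True
print(f"Dice = {dice_score(seg, gt):.3f}")
```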
Aldridge, R Benjamin; Glodzik, Dominik; Ballerini, Lucia; Fisher, Robert B; Rees, Jonathan L
2011-05-01
Non-analytical reasoning is thought to play a key role in dermatology diagnosis. Considering its potential importance, surprisingly little work has been done to research whether similar identification processes can be supported in non-experts. We describe here a prototype diagnostic support software, which we have used to examine the ability of medical students (at the beginning and end of a dermatology attachment) and lay volunteers, to diagnose 12 images of common skin lesions. Overall, the non-experts using the software had a diagnostic accuracy of 98% (923/936) compared with 33% for the control group (215/648) (Wilcoxon p < 0.0001). We have demonstrated, within the constraints of a simplified clinical model, that novices' diagnostic scores are significantly increased by the use of a structured image database coupled with matching of index and referent images. The novices achieve this high degree of accuracy without any use of explicit definitions of likeness or rule-based strategies.
A method of object recognition for single pixel imaging
NASA Astrophysics Data System (ADS)
Li, Boxuan; Zhang, Wenwen
2018-01-01
Computational ghost imaging (CGI), utilizing a single-pixel detector, has been extensively used in many fields. However, in order to achieve a high-quality reconstructed image, a large number of iterations is needed, which limits the flexibility of using CGI in practical situations, especially in the field of object recognition. In this paper, we propose a method utilizing feature matching to identify objects. In the given system, a recognition accuracy of approximately 90% can be achieved, which provides a new idea for the application of single-pixel imaging in the field of object recognition.
The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.
Wixted, John T; Wells, Gary L
2017-05-01
The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).
Li, Chuang; Cordovilla, Francisco; Jagdheesh, R.
2018-01-01
This paper presents a novel structural piezoresistive pressure sensor with four-grooved membrane combined with rood beam to measure low pressure. In this investigation, the design, optimization, fabrication, and measurements of the sensor are involved. By analyzing the stress distribution and deflection of sensitive elements using finite element method, a novel structure featuring high concentrated stress profile (HCSP) and locally stiffened membrane (LSM) is built. Curve fittings of the mechanical stress and deflection based on FEM simulation results are performed to establish the relationship between mechanical performance and structure dimension. A combination of FEM and curve fitting method is carried out to determine the structural dimensions. The optimized sensor chip is fabricated on a SOI wafer by traditional MEMS bulk-micromachining and anodic bonding technology. When the applied pressure is 1 psi, the sensor achieves a sensitivity of 30.9 mV/V/psi, a pressure nonlinearity of 0.21% FSS and an accuracy of 0.30%, and thereby the contradiction between sensitivity and linearity is alleviated. In terms of size, accuracy and high temperature characteristic, the proposed sensor is a proper choice for measuring pressure of less than 1 psi. PMID:29393916
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) is proposed for optical free form surface testing, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to compensate various low-order aberrations flexibly. The residual wavefront aberration is treated by the multi-configuration ray tracing (MCRT) algorithm. The MCRT algorithm is based on simultaneous ray tracing through multiple system models, each with a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted together with the correction of surface misalignment aberrations after the initial system calibration. Flexible testing of free form surfaces is thus achieved with high accuracy, without an auxiliary device for monitoring the DM deformation. Experiments proving the feasibility, repeatability and high accuracy of the ANI were carried out on a bi-conic surface and a paraboloidal surface, with a highly stable ALPAO DM88. The accuracy of the final test result of the paraboloidal surface was better than λ/20 in PV value. This is a successful attempt in the field of flexible optical free form surface metrology and has enormous potential for future applications as DM technology develops.
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu
2016-10-01
As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of the INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method needs a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. Therefore, it is relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by the laser gyro strapdown inertial navigation system (SINS) after fine alignment, it can be used as the attitude reference for the initial state measurement of the three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and the initial attitude of the three-axis turntable has been achieved.
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and the performance of remote sensors, high-resolution satellites are continuously launched by countries around the world. Due to its high efficiency, large coverage and freedom from spatial access restrictions, satellite imagery has become one of the important means of acquiring geospatial information. This paper explores geometric processing using satellite imagery without ground control points (GCPs). The outcome of spatial triangulation is introduced for geo-positioning as repeated observation. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with the repeated observation. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observation without GCPs is superior to conventional forward intersection and even approaches that of conventional block adjustment with GCPs. The conclusion is drawn that taking existing oriented imagery as repeated observation enhances the effective utilization of previous spatial triangulation results, allowing repeated observation to improve accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with the repeated observation will be carried out in follow-up research.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Different drug-drug interactions can be examined through the drug synergy score. This calls for efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques do not individually provide sufficient accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop the ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS) and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved through a biased weighted aggregation of the selected models' predictions (i.e. assigning larger weights to the models with higher prediction scores). The proposed and existing machine learning techniques have been evaluated on the drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
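The weighted-aggregation step can be sketched as follows; the component models, weighting rule and data are illustrative placeholders rather than the four models used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
X = rng.random((400, 20))                                   # hypothetical drug-pair descriptors
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 400)    # hypothetical synergy scores

X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

models = [RandomForestRegressor(n_estimators=200, random_state=0),
          Ridge(alpha=1.0),
          KNeighborsRegressor(n_neighbors=7)]

# Biased weighted aggregation: each model's weight is proportional to its validation R^2,
# so models with higher prediction scores contribute more to the ensemble.
val_scores = np.array([max(m.fit(X_tr, y_tr).score(X_val, y_val), 0.0) for m in models])
weights = val_scores / val_scores.sum()

ensemble_pred = sum(w * m.predict(X_te) for w, m in zip(weights, models))
print("component R^2 on validation:", np.round(val_scores, 3))
print(f"ensemble R^2 on test: {r2_score(y_te, ensemble_pred):.3f}")
```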
Detection of artificially ripened mango using spectrometric analysis
NASA Astrophysics Data System (ADS)
Mithun, B. S.; Mondal, Milton; Vishwakarma, Harsh; Shinde, Sujit; Kimbahune, Sanjay
2017-05-01
Hyperspectral sensing has been proven useful for determining the quality of food in general. It has also been used to distinguish naturally and artificially ripened mangoes by analyzing the spectral signature. However, the focus has been on improving classification accuracy by performing dimensionality reduction, optimal feature selection and the use of suitable learning algorithms on the complete visible and NIR spectral range, namely 350 nm to 1050 nm. In this paper we focus on (i) the use of a low-wavelength-resolution, low-cost multispectral sensor to reliably identify artificially ripened mangoes by selectively using the spectral information, so that classification accuracy is not hampered by the low resolution of the spectral data, and (ii) the use of visible-spectrum data (390 nm to 700 nm) to accurately discriminate artificially ripened mangoes. Our results show that on low-resolution spectral data, logistic regression produces an accuracy of 98.83% and significantly outperforms other methods such as classification trees and random forests. This is achieved by analyzing only 36 spectral reflectance data points instead of the complete 216 data points available in the visible and NIR range. Another interesting experimental observation is that we are able to achieve more than 98% classification accuracy by selecting only 15 irradiance values in the visible spectrum. Thus, even the number of data points that need to be collected with a hyperspectral or multispectral sensor can be reduced by a factor of 24 while still classifying with a high degree of confidence.
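The band-selection plus logistic-regression approach can be sketched as below; the synthetic spectra, the choice of univariate selector and the value k = 15 bands are assumptions for illustration only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.random((120, 216))            # hypothetical reflectance spectra, 216 bands
y = rng.integers(0, 2, 120)           # 0 = naturally ripened, 1 = artificially ripened

# Keep only the k most discriminative bands, then classify
clf = make_pipeline(StandardScaler(),
                    SelectKBest(score_func=f_classif, k=15),
                    LogisticRegression(max_iter=1000))

scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy with 15 selected bands: {scores.mean():.3f}")
```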
Fundamental Techniques for High Photon Energy Stability of a Modern Soft X-ray Beamline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senba, Yasunori; Kishimoto, Hikaru; Miura, Takanori
2007-01-19
High energy resolution and high energy stability are required for modern soft x-ray beamlines. Attempts at improving the energy stability are presented in this paper. Some measures have been adopted to avoid energy instability. It is clearly observed that the unstable temperature of the support frame of the optical elements results in photon energy instability. A photon energy stability of 10 meV for half a day is achieved by controlling the temperature with an accuracy of 0.01 deg. C.
Characterization of fiber Bragg grating-based sensor array for high resolution manometry
NASA Astrophysics Data System (ADS)
Becker, Martin; Rothhardt, Manfred; Schröder, Kerstin; Voigt, Sebastian; Mehner, Jan; Teubner, Andreas; Lüpke, Thomas; Thieroff, Christoph; Krüger, Matthias; Chojetzki, Christoph; Bartelt, Hartmut
2012-04-01
The combination of fiber Bragg grating arrays integrated in a soft plastic tube is promising for high resolution manometry (HRM), where pressure measurements are made with high spatial resolution. Application as a medical device and in vivo experiments have to be preceded by characterization with a measurement setup that simulates natural conditions. Good results are achieved with a pressure chamber which applies a well-defined pressure through a soft tubular membrane. It is shown that the proposed catheter design reaches accuracies down to 1 mbar and 1 cm.
Strategies for high-precision Global Positioning System orbit determination
NASA Technical Reports Server (NTRS)
Lichten, Stephen M.; Border, James S.
1987-01-01
Various strategies for the high-precision orbit determination of the GPS satellites are explored using data from the 1985 GPS field test. Several refinements to the orbit determination strategies were found to be crucial for achieving high levels of repeatability and accuracy. These include the fine tuning of the GPS solar radiation coefficients and the ground station zenith tropospheric delays. Multiday arcs of 3-6 days provided better orbits and baselines than the 8-hr arcs from single-day passes. Highest-quality orbits and baselines were obtained with combined carrier phase and pseudorange solutions.
Tu, Chengjian; Li, Jun; Sheng, Quanhu; Zhang, Ming; Qu, Jun
2014-04-04
Survey-scan-based label-free methods have shown no compelling benefit over fragment ion (MS2)-based approaches when low-resolution mass spectrometry (MS) was used; however, the growing prevalence of high-resolution analyzers may have changed the game. This necessitates an updated, comparative investigation of these approaches for data acquired by high-resolution MS. Here, we compared survey scan-based (ion current, IC) and MS2-based abundance features, including spectral-count (SpC) and MS2 total-ion-current (MS2-TIC), for quantitative analysis using various high-resolution LC/MS data sets. Key discoveries include: (i) a study with seven different biological data sets revealed that only IC achieved high reproducibility for lower-abundance proteins; (ii) an evaluation with 5-replicate analyses of a yeast sample showed that IC provided much higher quantitative precision and less missing data; (iii) IC, SpC, and MS2-TIC all showed good quantitative linearity (R(2) > 0.99) over a >1000-fold concentration range; (iv) both MS2-TIC and IC, but not SpC, showed a good linear response to various protein loading amounts; (v) quantification using a well-characterized CPTAC data set showed that IC exhibited markedly higher quantitative accuracy, higher sensitivity, and lower false-positives/false-negatives than both SpC and MS2-TIC. Therefore, IC achieved an overall superior performance over the MS2-based strategies in terms of reproducibility, missing data, quantitative dynamic range, quantitative accuracy, and biomarker discovery.
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.
Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J
2018-06-12
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
FPGA-Based Smart Sensor for Online Displacement Measurements Using a Heterodyne Interferometer
Vera-Salas, Luis Alberto; Moreno-Tapia, Sandra Veronica; Garcia-Perez, Arturo; de Jesus Romero-Troncoso, Rene; Osornio-Rios, Roque Alfredo; Serroukh, Ibrahim; Cabal-Yepez, Eduardo
2011-01-01
The measurement of small displacements on the nanometric scale demands metrological systems of high accuracy and precision. In this context, interferometer-based displacement measurements have become the main tools used for traceable dimensional metrology. The different industrial applications in which small displacement measurements are employed requires the use of online measurements, high speed processes, open architecture control systems, as well as good adaptability to specific process conditions. The main contribution of this work is the development of a smart sensor for large displacement measurement based on phase measurement which achieves high accuracy and resolution, designed to be used with a commercial heterodyne interferometer. The system is based on a low-cost Field Programmable Gate Array (FPGA) allowing the integration of several functions in a single portable device. This system is optimal for high speed applications where online measurement is needed and the reconfigurability feature allows the addition of different modules for error compensation, as might be required by a specific application. PMID:22164040
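As a hedged illustration of the phase-to-displacement conversion underlying such a heterodyne measurement (a sketch under assumed single-pass optics, wavelength and refractive index, not the sensor's actual firmware), the accumulated phase can be converted to displacement as follows:

```python
import numpy as np

def phase_to_displacement(phase_rad, wavelength_m=632.8e-9, n_air=1.000271, passes=1):
    """Convert accumulated interferometric phase to displacement.

    For a heterodyne displacement interferometer, a path change of d in the
    measurement arm shifts the phase by 2*pi * (2*passes*d) * n / lambda,
    so d = phase * lambda / (4*pi * n * passes).  The fold factor depends on
    the optical configuration (single- vs. double-pass).
    """
    return phase_rad * wavelength_m / (4.0 * np.pi * n_air * passes)

# Example: an unwrapped phase ramp of 100 full fringes
phase = np.linspace(0.0, 100 * 2 * np.pi, 5)
print(phase_to_displacement(phase) * 1e6, "micrometres")
```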
Further experiments for mean velocity profile of pipe flow at high Reynolds number
NASA Astrophysics Data System (ADS)
Furuichi, N.; Terao, Y.; Wada, Y.; Tsuji, Y.
2018-05-01
This paper reports further experimental results obtained in the high Reynolds number actual flow facility in Japan. The experiments were performed in a pipe flow with water, and the friction Reynolds number was varied up to Reτ = 5.3 × 10^4. This high Reynolds number was achieved by using water as the working fluid and adopting a large-diameter pipe (387 mm) while controlling the flow rate and temperature with high accuracy and precision. The streamwise velocity was measured by laser Doppler velocimetry close to the wall, and particular attention is paid to the logarithmic region of the mean velocity profile, the so-called log-law U+ = (1/κ) ln(y+) + B. After careful verification of the mean velocity profiles in terms of the flow rate accuracy and an evaluation of the consistency of the present results with those from previous measurements in a smaller pipe (100 mm), it was found that the value of κ asymptotically approaches a constant value of κ = 0.384.
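The log-law constants can be estimated from a measured profile by a straight-line fit of U+ against ln(y+), as in this sketch (synthetic profile data; the fitting range is an assumption):

```python
import numpy as np

# Hypothetical inner-scaled profile: y+ and U+ in the logarithmic region
y_plus = np.logspace(2.5, 4.0, 40)
kappa_true, B_true = 0.384, 4.27
u_plus = (1.0 / kappa_true) * np.log(y_plus) + B_true \
         + np.random.default_rng(7).normal(0, 0.05, 40)

# Fit U+ = (1/kappa) * ln(y+) + B by linear regression in ln(y+)
slope, B = np.polyfit(np.log(y_plus), u_plus, 1)
kappa = 1.0 / slope
print(f"kappa = {kappa:.3f}, B = {B:.2f}")
```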
The ship edge feature detection based on high and low threshold for remote sensing image
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Shengyang
2018-05-01
In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low detection accuracy caused by noise. The relationship between the human visual system and the target features is analysed, and the ship target is determined by detecting its edge features. Firstly, a second-order differential method is used to enhance the image quality. Secondly, to improve the edge operator, high and low thresholds are introduced to enhance the contrast between edge and non-edge points; with the edges as the foreground and the non-edges as the background, image segmentation is used to achieve edge detection and remove false edges. Finally, the edge features are described based on the edge detection result, and the ship target is determined. The experimental results show that the proposed method can effectively reduce the number of false edges in edge detection and achieves high accuracy in remote sensing ship edge detection.
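High/low (hysteresis) thresholding of an edge-strength map, in the spirit of the method above, can be sketched with scikit-image; the test image, filter choice and threshold values are placeholders, not the paper's pipeline:

```python
import numpy as np
from skimage import data, filters

image = data.camera().astype(float) / 255.0        # placeholder for a remote sensing scene

# Edge-strength map (gradient magnitude); the paper uses a second-order differential enhancement
edges = filters.sobel(image)

# Hysteresis: pixels above the high threshold are edges; pixels above the low threshold
# are kept only if connected to a high-threshold pixel, which suppresses isolated noise edges.
low, high = 0.05, 0.15
edge_mask = filters.apply_hysteresis_threshold(edges, low, high)

print(f"edge pixels kept: {edge_mask.sum()} of {edge_mask.size}")
```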
2013-12-13
[Fragmentary excerpt from a U.S. Army field artillery document; only table-of-contents and glossary fragments are recoverable: a section on "U.S. Army Field Artillery Operations" and "Geodesy"; a note that experts in this field have a full working knowledge of geodesy and of the theory that allows mensuration to surpass the level of accuracy otherwise achieved; a partial definition of precision fire, "Fire that is intended to achieve the desired result on target."; and the beginning of a definition of geodesy as "that branch of applied mathematics which determines by observation...".]
New high order schemes in BATS-R-US
NASA Astrophysics Data System (ADS)
Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.
2013-12-01
The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, and we used a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and some tricks and effort are required to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi
2015-02-01
In order to meet aerodynamic requirements, infrared domes or windows with conformal, thin-wall structures are becoming the development trend for future high-speed aircraft. However, such parts usually have low stiffness, the cutting force changes with the axial position, and it is very difficult to meet the shape accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change of stiffness. In this paper, on the basis of an ultra-precision diamond lathe, a contact measuring system with five DOFs is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, the distribution of measuring points is optimized by using a data screening method. The influence of the sampling frequency on the measuring errors is analysed, the best sampling frequency is found using a planning algorithm, the effects of environmental factors and the fitting errors are kept within a low range, and the measuring accuracy of the conformal dome is greatly improved in the on-machine measurement process. For an MgF2 conformal dome with high gradient, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than 0.8 μm PV, greatly superior to the 3 μm PV before compensating turning, which verifies the correctness of the measuring algorithm.
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. the Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and all bands together, using the same training and validation samples. Results indicated that the texture parameters of all bands together generally showed a better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. On the other hand, all three classifiers showed high accuracy when using the different texture approaches, but SVM generally showed higher accuracy (90.10%) than MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
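Texture measures of the kind listed above can be derived from a grey-level co-occurrence matrix (GLCM); the sketch below uses scikit-image (where the functions are named graycomatrix/graycoprops in recent releases) on a toy window, with entropy computed manually since it is not a built-in property:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(8)
window = rng.integers(0, 64, size=(11, 11)).astype(np.uint8)   # toy 11x11 image window

# Co-occurrence matrix for one offset (distance 1, horizontal), quantized to 64 grey levels
glcm = graycomatrix(window, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)

features = {
    "contrast":    graycoprops(glcm, "contrast")[0, 0],
    "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    "ASM":         graycoprops(glcm, "ASM")[0, 0],
}
# Entropy is not provided by graycoprops, so compute it from the normalized GLCM directly
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))

print(features)
```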
Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node
NASA Astrophysics Data System (ADS)
Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi
2015-03-01
In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed NSR-S630D with extremely accurate overlay while maintaining throughput in various conditions resembling a real production environment. In addition, NSR-S630D has been equipped with enhanced capabilities to maintain long-term overlay stability and user interface improvement all due to our newly developed application software platform. In this paper, we describe the most recent S630D performance in various conditions similar to real productions. In a production environment, superior overlay accuracy with high dose conditions and high throughput are often required; therefore, we have performed several experiments with high dose conditions to demonstrate NSR's thermal aberration capabilities in order to achieve world class overlay performance. Furthermore, we will introduce our new software that enables long term overlay performance.
A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhuang, Yu
1997-01-01
In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and is significantly faster than that of the conventional solver if a highly accurate solution is required. In addition, this newly proposed fourth order Helmholtz solver is parallel in nature. It is readily available for parallel and distributed computers. The compact scheme introduced in this study is likely extendible for sixth-order accurate algorithms and for more general elliptic equations.
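To illustrate the FFT-based direct-solution idea (not the paper's fourth-order compact scheme), the sketch below solves a 1-D screened Helmholtz problem u'' − σu = f with homogeneous Neumann boundaries by diagonalizing the standard second-order Laplacian with a discrete cosine transform; the grid, σ and forcing are assumptions:

```python
import numpy as np
from scipy.fft import dct, idct

# Solve u'' - sigma*u = f on [0, L] with homogeneous Neumann BCs (u'(0) = u'(L) = 0)
# on a cell-centered grid, using the DCT-II to diagonalize the 3-point Laplacian.
L, N, sigma = 1.0, 256, 5.0
h = L / N
x = (np.arange(N) + 0.5) * h

u_exact = np.cos(2 * np.pi * x / L)                       # satisfies the Neumann BCs
f = -(2 * np.pi / L) ** 2 * u_exact - sigma * u_exact     # corresponding right-hand side

f_hat = dct(f, type=2, norm="ortho")
j = np.arange(N)
lam = (2.0 * np.cos(np.pi * j / N) - 2.0) / h**2          # eigenvalues of the discrete Laplacian
u_hat = f_hat / (lam - sigma)                              # direct solve in transform space
u = idct(u_hat, type=2, norm="ortho")

print(f"max error vs. analytic solution: {np.abs(u - u_exact).max():.2e}")
```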
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
Demodulation algorithm for optical fiber F-P sensor.
Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan
2017-09-10
The demodulation algorithm is very important for improving the measurement accuracy of a sensing system. In this paper, the variable-step-size hill-climbing search method is used for the first time in an optical fiber Fabry-Perot (F-P) sensing demodulation algorithm. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by changing the step size of each climb, achieving nano-scale resolution, high measurement accuracy, a high demodulation rate, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on micro-electro-mechanical systems (MEMS) technology has been fabricated to carry out the experiments, and the results show that the resolution of the algorithm can reach the nano-scale level; the sensor's sensitivity is about 2.5 nm/kPa, which is close to the theoretical value, and the sensor shows great reproducibility.
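The variable-step-size hill-climbing idea, taking steps while the merit function improves and halving the step when it stops improving, can be sketched generically as follows; the merit function (correlation of a modelled fringe pattern with the measured one) and all parameter values are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

wavelengths = np.linspace(1520e-9, 1570e-9, 800)      # assumed interrogation band

def model_spectrum(cavity_len):
    """Simplified two-beam F-P reflection fringe model."""
    return 0.5 * (1 + np.cos(4 * np.pi * cavity_len / wavelengths))

true_length = 52.347e-6
measured = model_spectrum(true_length) + np.random.default_rng(9).normal(0, 0.01, wavelengths.size)

def merit(cavity_len):
    """Correlation between modelled and measured fringes (higher is better)."""
    return float(np.corrcoef(model_spectrum(cavity_len), measured)[0, 1])

def hill_climb(start, step, min_step=1e-10):
    """Variable-step-size hill climbing: halve the step whenever no neighbour improves."""
    pos, best = start, merit(start)
    while step > min_step:
        moved = False
        for cand in (pos + step, pos - step):
            val = merit(cand)
            if val > best:
                pos, best, moved = cand, val, True
                break
        if not moved:
            step *= 0.5                                # shrink the step near the optimum
    return pos

# In practice a coarse search over the unambiguous range precedes the climb so that it
# starts near the correct fringe order; here the start value is assumed to be close enough.
estimate = hill_climb(start=52.30e-6, step=0.1e-6)
print(f"estimated cavity length: {estimate*1e6:.4f} um (true {true_length*1e6:.4f} um)")
```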
Bayesian network modelling of upper gastrointestinal bleeding
NASA Astrophysics Data System (ADS)
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes Network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
A neuro-fuzzy approach in the classification of students' academic performance.
Do, Quang Hung; Chen, Jeng-Fung
2013-01-01
Classifying the student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. The purpose of this paper is to present a neuro-fuzzy approach for classifying students into different groups. The neuro-fuzzy classifier used previous exam results and other related factors as input variables and labeled students based on their expected academic performance. The results showed that the proposed approach achieved a high accuracy. The results were also compared with those obtained from other well-known classification approaches, including support vector machine, Naive Bayes, neural network, and decision tree approaches. The comparative analysis indicated that the neuro-fuzzy approach performed better than the others. It is expected that this work may be used to support student admission procedures and to strengthen the services of educational institutions.
Features and technologies of ERS-1 (ESA) and X-SAR antennas
NASA Technical Reports Server (NTRS)
Schuessler, R.; Wagner, R.
1986-01-01
Features and technologies of planar waveguide array antennas developed for spaceborne microwave sensors are described. Such antennas are made from carbon fiber reinforced plastic (CFRP) employing special manufacturing and metallization techniques to achieve satisfactory electrical properties. Mechanical design enables deployable antenna structures necessary for satellite applications (e.g., ESA ERS-1). The slotted waveguide concept provides high aperture efficiency, good beamshaping capabilities, and low losses. These CFRP waveguide antennas feature low mass, high accuracy and stiffness, and can be operated within wide temperature ranges.
Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana
2018-05-24
In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to largely improve the quality-of-life of epileptic patients. The accuracy of prediction algorithms to detect seizures in real-world applications is largely limited because the brain signals are inherently uncertain and affected by various factors, such as environment, age, drug intake, etc., in addition to the internal artefacts that occur during the process of recording the brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting which data from a large pool of ambiguous samples should be labelled by an expert is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve the accuracy of the prediction with a minimum number of labelled data. The core technique of our framework is the use of the Bernoulli-Gaussian Mixture Model (BGMM) to determine the most ambiguous feature samples to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data and also improves the prediction accuracy even under noisy conditions.
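A minimal sketch of this ambiguity-driven sample selection is given below, using scikit-learn's GaussianMixture as a stand-in for the Bernoulli-Gaussian mixture of the paper and posterior entropy as the ambiguity measure; the data, models and labelling budget are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(10)
X_pool = rng.normal(0, 1, (1000, 16))             # hypothetical unlabelled EEG feature vectors
budget = 50                                       # how many samples the expert will label

# Fit a mixture model to the unlabelled pool (stand-in for the BGMM of the paper)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_pool)
post = gmm.predict_proba(X_pool)

# Ambiguity = entropy of the posterior component probabilities; high entropy means the
# sample sits between components and is most informative to have labelled by an expert.
entropy = -np.sum(post * np.log(post + 1e-12), axis=1)
query_idx = np.argsort(entropy)[-budget:]

# The expert labels only the queried samples (labels simulated here), and a classifier
# is (re)trained on this small, maximally ambiguous subset.
y_queried = rng.integers(0, 2, budget)            # placeholder for expert annotations
clf = SVC(probability=True).fit(X_pool[query_idx], y_queried)
print(f"trained on {budget} expert-labelled samples out of {len(X_pool)}")
```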
NASA Astrophysics Data System (ADS)
Minkov, D. A.; Gavrilov, G. M.; Moreno, J. M. D.; Vazquez, C. G.; Marquez, E.
2017-03-01
The accuracy of the popular graphical method of Swanepoel (SGM) for the characterization of a thin film on a substrate specimen from its interference transmittance spectrum depends on the subjective choice of four characterization parameters: the slope of the graph, the order number for the longest wavelength extremum, and the two numbers of the extrema used for the calculation approximations of the average film thickness. Here, an error metric is introduced for estimating the accuracy of SGM characterization. An algorithm is proposed for the optimization of SGM, named the OGM algorithm, based on the minimization of this error metric. Its execution provides optimized values of the four characterization parameters, and the respective computation of the most accurate film characteristics achievable within the framework of SGM. Moreover, substrate absorption is accounted for, unlike in the classical SGM, which is beneficial when using modern UV/visible/NIR spectrophotometers due to the relatively larger amount of absorption in the commonly used glass substrates for wavelengths above 1700 nm. A significant increase in the accuracy of the film characteristics is obtained employing the OGM algorithm compared to the SGM algorithm for two model specimens. Such improvements in accuracy increase with increasing film absorption. The results of the film characterization by the OGM algorithm are presented for two specimens containing RF-magnetron-sputtered a-Si films with disparate film thicknesses. The computed average film thicknesses are within 1.1% of the respective film thicknesses measured by SEM for both films. Achieving such high film characterization accuracy is particularly significant for the film with a computed average thickness of 3934 nm, since we are not aware of any other film with such a large thickness that has been characterized by SGM.
Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.
Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-01-01
This paper presents an improvement of classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm, and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method which combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviation of the expected activation of hidden units from a fixed low level, which prevents the network from overfitting and allows it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that using the AR feature extractor and the DBN classifier, the classification performance improves, with a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN classifier (sensitivity of 80.8%, specificity of 77.8%, accuracy of 79.3%, AUROC of 0.83) and the BNN classifier (sensitivity of 84.3%, specificity of 83%, accuracy of 83.6%, AUROC of 0.87). Using the sparse-DBN classifier, the classification performance improved further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on the white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources required to reach the accuracy of 10^{-9}.
Automatic liver segmentation on Computed Tomography using random walkers for treatment planning
Moghbel, Mehrdad; Mashohor, Syamsiah; Mahmud, Rozi; Saripan, M. Iqbal Bin
2016-01-01
Segmentation of the liver from Computed Tomography (CT) volumes plays an important role in the choice of treatment strategies for liver diseases. Despite much attention, liver segmentation remains a challenging task due to the lack of visible edges on most boundaries of the liver, coupled with high variability of both intensity patterns and anatomical appearances; all of these difficulties become more prominent in pathological livers. To achieve a more accurate segmentation, a random walker based framework is proposed that can segment contrast-enhanced liver CT images with great accuracy and speed. Based on the location of the right lung lobe, the liver dome is automatically detected, thus eliminating the need for manual initialization. The computational requirements are further minimized by segmenting the rib-caged area; the liver is then extracted using the random walker method. The proposed method achieved one of the highest accuracies reported in the literature on a mixed healthy and pathological liver dataset, with an overlap error of 4.47 % and a dice similarity coefficient of 0.94, while it showed exceptional accuracy in segmenting the pathological livers, with an overlap error of 5.95 % and a dice similarity coefficient of 0.91. PMID:28096782
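The abstract does not give implementation details of the random walker step; the following sketch uses scikit-image's random_walker on a synthetic slice with hypothetical seed labels, merely illustrating how seeded random walker segmentation separates a liver-like region from background.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic 2-D "CT slice": a bright disc (liver stand-in) on a darker background
yy, xx = np.mgrid[0:128, 0:128]
image = 0.2 + 0.6 * ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2)
image += 0.05 * np.random.default_rng(1).standard_normal(image.shape)

# Seed labels: 1 = liver (would come from the automatically detected dome),
# 2 = background; 0 = unlabeled pixels to be resolved by the random walker
labels = np.zeros(image.shape, dtype=np.uint8)
labels[60:68, 60:68] = 1
labels[:4, :] = 2
labels[:, :4] = 2

segmentation = random_walker(image, labels, beta=130, mode='bf')
print(segmentation.shape, np.unique(segmentation))
```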
A Hybrid Brain-Computer Interface Based on the Fusion of P300 and SSVEP Scores.
Yin, Erwei; Zeyl, Timothy; Saab, Rami; Chau, Tom; Hu, Dewen; Zhou, Zongtan
2015-07-01
The present study proposes a hybrid brain-computer interface (BCI) with 64 selectable items based on the fusion of P300 and steady-state visually evoked potential (SSVEP) brain signals. With this approach, row/column (RC) P300 and two-step SSVEP paradigms were integrated to create two hybrid paradigms, which we denote as the double RC (DRC) and 4-D spellers. In each hybrid paradigm, the target is simultaneously detected based on both P300 and SSVEP potentials as measured by the electroencephalogram. We further proposed a maximum-probability estimation (MPE) fusion approach to combine the P300 and SSVEP at the score level and compared this approach to other approaches based on linear discriminant analysis, a naïve Bayes classifier, and support vector machines. The experimental results obtained from thirteen participants indicated that the 4-D hybrid paradigm outperformed the DRC paradigm and that the MPE fusion achieved higher accuracy compared with the other approaches. Importantly, 12 of the 13 participants using the 4-D paradigm achieved an accuracy of over 90%, and the average accuracy was 95.18%. These promising results suggest that the proposed hybrid BCI system could be used in the design of a high-performance BCI-based keyboard.
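A minimal sketch of score-level fusion under stated assumptions: the MPE rule itself is not specified in the abstract, so a softmax-normalised probability product over the candidate items is used here as an illustrative stand-in; all scores and item counts are hypothetical.

```python
import numpy as np

def fuse_scores(p300_scores, ssvep_scores):
    """Score-level fusion over the candidate items: normalise each modality's
    scores to pseudo-probabilities and pick the item maximising their product.
    Illustrative stand-in for the paper's MPE fusion rule."""
    def to_prob(s):
        s = np.asarray(s, dtype=float)
        e = np.exp(s - s.max())          # softmax for numerical stability
        return e / e.sum()
    joint = to_prob(p300_scores) * to_prob(ssvep_scores)
    return int(np.argmax(joint)), joint

# Hypothetical scores for 8 candidate targets from each modality
p300 = [0.1, 0.4, 2.1, 0.3, 0.2, 0.0, 0.5, 0.1]
ssvep = [0.2, 0.1, 1.8, 0.4, 0.1, 0.3, 0.2, 0.0]
target, joint = fuse_scores(p300, ssvep)
print(target)   # index of the selected item
```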
Next-generation pushbroom filter radiometers for remote sensing
NASA Astrophysics Data System (ADS)
Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.
2012-09-01
Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine these into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. The pushbroom architecture has inherently better radiometric sensitivity than previous-generation scanning technologies and significantly reduces payload mass, power, and volume. However, the architecture creates challenges in achieving the required radiometric accuracy performance. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the Landsat Data Continuity Mission (LDCM) next-generation Operational Land Imager (OLI) payload. Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper Plus (ETM+) whiskbroom technology to modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities allowed it to produce the innovative next-generation OLI pushbroom filter radiometer that meets challenging radiometric accuracy and calibration requirements. OLI will improve the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).
Engineering of Machine tool’s High-precision electric drives
NASA Astrophysics Data System (ADS)
Khayatov, E. S.; Korzhavin, M. E.; Naumovich, N. I.
2018-03-01
The article shows that in mechanisms with numerical program control, high process quality can be achieved only in systems that adjust the working element's position with high accuracy, and this requires an expansion of the torque regulation range. In particular, the use of synchronous reactive machines with independent excitation control makes it possible to substantially increase the moment overload in the sequential excitation circuit. Using mathematical and physical modeling methods, it is shown that in an electric drive with a synchronous reactive machine with independent excitation in a circuit with sequential excitation, the torque regulation range can be significantly expanded; this is achieved by the effect of sequential excitation, which makes it possible to compensate for the transverse reaction of the armature.
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods developed for grid refinement have suffered certain drawbacks, for example, deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National Ground Water Association.
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a ZYGO interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has great potential in modern optical shop testing.
Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun
2017-01-01
Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions. PMID:28241451
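A minimal PyTorch sketch of the wide-first-kernel idea: a large first convolution kernel operating on the raw 1-D vibration signal, followed by small kernels for multilayer nonlinear mapping. The layer sizes, strides, and class count here are illustrative assumptions, not the published WDCNN configuration, and AdaBN and data augmentation are omitted.

```python
import torch
import torch.nn as nn

class WideFirstKernelCNN(nn.Module):
    """1-D CNN with a wide first-layer kernel (feature extraction and
    high-frequency noise suppression) followed by small kernels.
    Layer sizes are illustrative, not the published WDCNN configuration."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),  # wide kernel
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),             # small kernels
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveAvgPool1d(4),
        )
        self.classifier = nn.Linear(64 * 4, n_classes)

    def forward(self, x):                 # x: (batch, 1, signal_length)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = WideFirstKernelCNN()
out = model(torch.randn(8, 1, 2048))      # raw vibration segments
print(out.shape)                          # torch.Size([8, 10])
```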
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions will become a new support vector (SV) and may cause old samples to migrate between the SV set and the non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will change between SVs and NSVs. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but decreases the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages and when to process incremental steps will greatly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
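A sketch of the triggering idea under stated assumptions: scikit-learn's SVC does not support true incremental KKT-based updates, so the example simply refits on the accumulated data whenever an incoming sample is wrongly predicted; the data stream is synthetic and the candidate-SV selection of the paper is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

class MispredictTriggeredSVM:
    """Retrain only when an incoming sample is wrongly predicted.
    Illustrates the triggering idea; SVC is refit from scratch here,
    unlike a true incremental KKT-based update."""
    def __init__(self, **svc_kwargs):
        self.model = SVC(**svc_kwargs)
        self.X, self.y = [], []
        self.fitted = False

    def update(self, x, label):
        x = np.asarray(x, dtype=float)
        correct = self.fitted and self.model.predict(x[None, :])[0] == label
        self.X.append(x)
        self.y.append(label)
        if not correct and len(set(self.y)) > 1:   # trigger retraining
            self.model.fit(np.vstack(self.X), np.array(self.y))
            self.fitted = True
        return correct

rng = np.random.default_rng(0)
clf = MispredictTriggeredSVM(kernel="rbf", gamma="scale")
for _ in range(200):
    label = int(rng.integers(0, 2))
    x = rng.normal(loc=2.0 * label, scale=1.0, size=2)
    clf.update(x, label)
print(clf.model.predict([[0.0, 0.0], [2.0, 2.0]]))
```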
Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow
NASA Astrophysics Data System (ADS)
Moran, Kenneth J.; Beran, Philip S.
1994-07-01
Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions: Mach number is varied from 1.5 to 8 and Reynolds number from 1×10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is both practical and general: general in its applicability, and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base-flow region, including the turbulent wake.
Achieving accuracy in first-principles calculations for EOS: basis completeness at high temperatures
NASA Astrophysics Data System (ADS)
Wills, John; Mattsson, Ann
2013-06-01
First-principles electronic structure calculations can provide EOS data in regimes of pressure and temperature where accurate experimental data is difficult or impossible to obtain. This lack, however, also precludes validation of calculations in those regimes. Factors that influence the accuracy of first-principles data include (1) theoretical approximations and (2) computational approximations used in implementing and solving the underlying equations. In the first category are the approximate exchange/correlation functionals and the approximate wave equations that stand in for the Dirac equation; in the second are basis completeness, series convergence, and truncation errors. We are using two rather different electronic structure methods (VASP and RSPt) to establish definitive requirements for accuracy of the second type, common to both. In this talk, we discuss requirements for converged calculations at high temperature and moderate pressure. At convergence we show that both methods give identical results. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Midbond basis functions for weakly bound complexes
NASA Astrophysics Data System (ADS)
Shaw, Robert A.; Hill, J. Grant
2018-06-01
Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high-accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving the interaction energies and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.
Deep coupling of star tracker and MEMS-gyro data under highly dynamic and long exposure conditions
NASA Astrophysics Data System (ADS)
Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin
2014-08-01
Star trackers and gyroscopes are the two most widely used attitude measurement devices in spacecraft. Among the different types of attitude measurement devices, the star tracker has the highest accuracy under stable conditions. In general, to detect faint stars and reduce the size of the star tracker, a long exposure time is usually used. Thus, under dynamic conditions, smearing of the star image may appear and result in decreased accuracy or even failed extraction of the star spot. This may cause inaccuracies in attitude measurement. Gyros have relatively good dynamic performance and are usually used in combination with star trackers. However, current combination methods focus mainly on data fusion at the output attitude data level, which is inadequate for utilizing and processing the internal blurred star image information. A deep coupling method for star tracker and MEMS-gyro data is proposed in this work. The method achieves deep fusion at the star image level. First, dynamic star image processing is performed based on the angular velocity information of the MEMS-gyro. The signal-to-noise ratio (SNR) of the star spot can be improved, and extraction is achieved more effectively. Then, a prediction model for optimal estimation of the star spot position is obtained through the MEMS-gyro, and an extended Kalman filter is introduced. Meanwhile, the MEMS-gyro drift can be estimated and compensated through the proposed method. These enable the star tracker to achieve high star centroid determination accuracy under dynamic conditions. The MEMS-gyro drift can be corrected even when attitude data of the star tracker cannot be solved and only one navigation star is captured in the field of view. Laboratory experiments were performed to verify the effectiveness of the proposed method and the whole system.
Zhang, Wei; Ma, Hong; Yang, Simon X.
2016-01-01
In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products. PMID:26999161
Zhang, Wei; Ma, Hong; Yang, Simon X
2016-03-18
In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1988-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F+H2 yields HF+H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
NASA Astrophysics Data System (ADS)
Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang
2017-12-01
The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm fully considering the image definition evaluation and the focusing search strategy was used to accomplish autofocus. For improving the alignment accuracy, various image-based algorithms for alignment detection were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment detection algorithms. Both the simulation results and experiment results indicate that the instrument can achieve a polarization alignment accuracy of better than ±0.1 deg.
Calibration Method of an Ultrasonic System for Temperature Measurement
Zhou, Chao; Wang, Yueke; Qiao, Chunjie; Dai, Weihua
2016-01-01
System calibration is fundamental to the overall accuracy of the ultrasonic temperature measurement, and it is basically involved in accurately measuring the path length and the system latency of the ultrasonic system. This paper proposes a method of high accuracy system calibration. By estimating the time delay between the transmitted signal and the received signal at several different temperatures, the calibration equations are constructed, and the calibrated results are determined with the use of the least squares algorithm. The formulas are deduced for calculating the calibration uncertainties, and the possible influential factors are analyzed. The experimental results in distilled water show that the calibrated path length and system latency can achieve uncertainties of 0.058 mm and 0.038 μs, respectively, and the temperature accuracy is significantly improved by using the calibrated results. The temperature error remains within ±0.04°C consistently, and the percentage error is less than 0.15%. PMID:27788252
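A small numerical sketch of the calibration idea, assuming the measurement model t = L/c(T) + τ with a known temperature-dependent speed of sound: the path length L and system latency τ are then obtained by least squares from time-delay estimates collected at several temperatures. The speed-of-sound polynomial and the data below are illustrative, not the paper's values.

```python
import numpy as np

def speed_of_sound_water(temp_c):
    """Approximate speed of sound in distilled water (m/s); a simplified
    polynomial fit, adequate only for illustrating the calibration idea."""
    return 1402.4 + 5.01 * temp_c - 0.055 * temp_c ** 2

# Assumed model: t_measured = L / c(T) + tau; unknowns are L and tau.
true_L, true_tau = 0.150, 25e-6                  # 150 mm path, 25 us latency (synthetic)
temps = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
c = speed_of_sound_water(temps)
t_meas = true_L / c + true_tau + 1e-8 * np.random.default_rng(2).standard_normal(temps.size)

A = np.column_stack([1.0 / c, np.ones_like(c)])  # design matrix [1/c, 1]
(L_hat, tau_hat), *_ = np.linalg.lstsq(A, t_meas, rcond=None)
print(f"L = {L_hat * 1e3:.3f} mm, tau = {tau_hat * 1e6:.3f} us")
```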
Paraskevaidi, Maria; Morais, Camilo L M; Lima, Kássio M G; Ashton, Katherine M; Stringfellow, Helen F; Martin-Hirsch, Pierre L; Martin, Francis L
2018-06-07
The current lack of an accurate, cost-effective and non-invasive test that would allow for screening and diagnosis of gynaecological carcinomas, such as endometrial and ovarian cancer, signals the necessity for alternative approaches. The potential of spectroscopic techniques in disease investigation and diagnosis has been previously demonstrated. Here, we used attenuated total reflection Fourier-transform infrared (ATR-FTIR) spectroscopy to analyse urine samples from women with endometrial (n = 10) and ovarian cancer (n = 10), as well as from healthy individuals (n = 10). After applying multivariate analysis and classification algorithms, biomarkers of disease were pointed out and high levels of accuracy were achieved for both endometrial (95% sensitivity, 100% specificity; accuracy: 95%) and ovarian cancer (100% sensitivity, 96.3% specificity; accuracy 100%). The efficacy of this approach, in combination with the non-invasive method for urine collection, suggest a potential diagnostic tool for endometrial and ovarian cancers.
Muñoz-Ruiz, Miguel Ángel; Hartikainen, Päivi; Koikkalainen, Juha; Wolz, Robin; Julkunen, Valtteri; Niskanen, Eini; Herukka, Sanna-Kaisa; Kivipelto, Miia; Vanninen, Ritva; Rueckert, Daniel; Liu, Yawu; Lötjönen, Jyrki; Soininen, Hilkka
2012-01-01
Background MRI is an important clinical tool for diagnosing dementia-like diseases such as Frontotemporal Dementia (FTD). However there is a need to develop more accurate and standardized MRI analysis methods. Objective To compare FTD with Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI) with three automatic MRI analysis methods - Hippocampal Volumetry (HV), Tensor-based Morphometry (TBM) and Voxel-based Morphometry (VBM), in specific regions of interest in order to determine the highest classification accuracy. Methods Thirty-seven patients with FTD, 46 patients with AD, 26 control subjects, 16 patients with progressive MCI (PMCI) and 48 patients with stable MCI (SMCI) were examined with HV, TBM for shape change, and VBM for gray matter density. We calculated the Correct Classification Rate (CCR), sensitivity (SS) and specificity (SP) between the study groups. Results We found unequivocal results differentiating controls from FTD with HV (left hippocampus) (CCR = 0.83; SS = 0.84; SP = 0.80), with TBM (hippocampus and amygdala) (CCR = 0.80/SS = 0.71/SP = 0.94), and with VBM (all the regions studied, especially the lateral ventricle frontal horn, central part and occipital horn) (CCR = 0.87/SS = 0.81/SP = 0.96). VBM achieved the highest accuracy in differentiating AD and FTD (CCR = 0.72/SS = 0.67/SP = 0.76), particularly in the lateral ventricle (frontal horn, central part and occipital horn) (CCR = 0.73), whereas TBM in the superior frontal gyrus also achieved a high accuracy (CCR = 0.71/SS = 0.68/SP = 0.73). TBM resulted in low accuracy (CCR = 0.62) in the differentiation of AD from FTD using all regions of interest, with similar results for HV (CCR = 0.55). Conclusion Hippocampal atrophy is present not only in AD but also in FTD. Of the methods used, VBM achieved the highest accuracy in its ability to differentiate between FTD and AD. PMID:23285078
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Msahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that the need for a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than trainees (87.8 vs 63.4%; [Formula: see text]), but similar to experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to experts (93.5 vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists and was comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
Roy, Jean-Sébastien; Braën, Caroline; Leblond, Jean; Desmeules, François; Dionne, Clermont E; MacDermid, Joy C; Bureau, Nathalie J; Frémont, Pierre
2015-01-01
Background Different diagnostic imaging modalities, such as ultrasonography (US), MRI, MR arthrography (MRA) are commonly used for the characterisation of rotator cuff (RC) disorders. Since the most recent systematic reviews on medical imaging, multiple diagnostic studies have been published, most using more advanced technological characteristics. The first objective was to perform a meta-analysis on the diagnostic accuracy of medical imaging for characterisation of RC disorders. Since US is used at the point of care in environments such as sports medicine, a secondary analysis assessed accuracy by radiologists and non-radiologists. Methods A systematic search in three databases was conducted. Two raters performed data extraction and evaluation of risk of bias independently, and agreement was achieved by consensus. Hierarchical summary receiver-operating characteristic package was used to calculate pooled estimates of included diagnostic studies. Results Diagnostic accuracy of US, MRI and MRA in the characterisation of full-thickness RC tears was high with overall estimates of sensitivity and specificity over 0.90. As for partial RC tears and tendinopathy, overall estimates of specificity were also high (>0.90), while sensitivity was lower (0.67–0.83). Diagnostic accuracy of US was similar whether a trained radiologist, sonographer or orthopaedist performed it. Conclusions Our results show the diagnostic accuracy of US, MRI and MRA in the characterisation of full-thickness RC tears. Since full thickness tear constitutes a key consideration for surgical repair, this is an important characteristic when selecting an imaging modality for RC disorder. When considering accuracy, cost, and safety, US is the best option. PMID:25677796
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
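A toy sketch of the decaying-weight idea: face-to-face contacts are aggregated into edge weights that decay exponentially with age, and a weighted common-neighbours score is used for link prediction. The half-life, contact log, and scoring rule are illustrative assumptions rather than the paper's exact formulation.

```python
import math
from collections import defaultdict

def decayed_weights(interactions, t_now, half_life):
    """interactions: list of (u, v, t) face-to-face contacts.
    Returns edge weights where older contacts count exponentially less."""
    w = defaultdict(float)
    lam = math.log(2) / half_life
    for u, v, t in interactions:
        w[frozenset((u, v))] += math.exp(-lam * (t_now - t))
    return w

def weighted_common_neighbours(w, u, v):
    """Link-prediction score for the (currently unconnected) pair (u, v)."""
    nbrs = defaultdict(dict)
    for edge, weight in w.items():
        a, b = tuple(edge)
        nbrs[a][b] = weight
        nbrs[b][a] = weight
    common = set(nbrs[u]) & set(nbrs[v])
    return sum(nbrs[u][z] + nbrs[v][z] for z in common)

# Hypothetical contact log: (person_a, person_b, timestamp in hours)
log = [("A", "B", 1), ("B", "C", 2), ("A", "C", 3), ("C", "D", 40), ("B", "D", 41)]
w = decayed_weights(log, t_now=48, half_life=24)
print(weighted_common_neighbours(w, "A", "D"))   # score for predicting link A-D
```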
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
The paper presents an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized state-space low-level simulation. The engine health is identified through the influence coefficient matrix. The influence coefficients are determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squared deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve the accuracy of identification solutions and to provide sustainability, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ-sequence was developed and tested. Analysis of the results suggests that the application of the developed algorithms allows achieving higher identification accuracy and reliability than similar models used in practice.
NASA Astrophysics Data System (ADS)
Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.
2009-02-01
The qualitative and quantitative comparison of pre- and postoperative image data is an important way to validate surgical procedures, in particular if computer assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach if high accuracy and reliability are difficult to achieve by automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi-landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings, as well as approximating GEBS on the introduced vessel segment landmarks, is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy when combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
NASA Astrophysics Data System (ADS)
Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad
2016-01-01
In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.
ERIC Educational Resources Information Center
Caldwell, Stacy Lynette
2010-01-01
Students served in juvenile correctional school settings often arrive with histories of trauma, aversive educational experiences, low achievement, and other severe risk factors that impeded psychosocial development, educational progress, and occupational outcomes. Schools serving adjudicated youth must address a higher percentage of severe…
Inelastic Transitions in Slow Collisions of Anti-Hydrogen with Hydrogen Atoms
NASA Astrophysics Data System (ADS)
Harrison, Robert; Krstic, Predrag
2007-06-01
We calculate excited adiabatic states and nonadiabatic coupling matrix elements of a quasimolecular system containing hydrogen and anti-hydrogen atoms, for a range of internuclear distances from 0.2 to 20 Bohrs. High accuracy is achieved by exact diagonalization of the molecular Hamiltonian in a large Gaussian basis. Nonadiabatic dynamics was calculated by solving MOCC equations. Positronium states are included in the consideration.
Computation of free oscillations of the earth
Buland, Raymond P.; Gilbert, F.
1984-01-01
Although free oscillations of the Earth may be computed by many different methods, numerous practical considerations have led us to use a Rayleigh-Ritz formulation with piecewise cubic Hermite spline basis functions. By treating the resulting banded matrix equation as a generalized algebraic eigenvalue problem, we are able to achieve great accuracy and generality and a high degree of automation at a reasonable cost. © 1984.
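A compact illustration of the generalized algebraic eigenvalue problem step, assuming symmetric positive-definite stiffness- and mass-like matrices in place of the actual banded Rayleigh-Ritz matrices; SciPy's generalized symmetric solver returns the squared mode frequencies of the toy system.

```python
import numpy as np
from scipy.linalg import eigh

# Toy generalized eigenvalue problem  K x = w^2 M x, standing in for the banded
# Rayleigh-Ritz system assembled from cubic Hermite spline basis functions.
rng = np.random.default_rng(3)
n = 12
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)   # stiffness-like, SPD
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)   # mass-like, SPD

eigvals, eigvecs = eigh(K, M)          # generalized symmetric-definite solver
frequencies = np.sqrt(eigvals)         # mode "frequencies" of the toy system
print(frequencies[:5])
```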
Bending stiffness of catheters and guide wires.
Wünsche, P; Werner, C; Bloss, P
2002-01-01
An important property of catheters and guide wires to assess their pushability behavior is their bending stiffness. To measure bending stiffness, a new bending module with a new clamping device was developed. This module can easily be mounted in commercially available tensile testing equipment, where bending force and deflection due to the bending force can be measured. To achieve high accuracy for the bending stiffness, the bending distance has to be measured with even higher accuracy by using a laser-scan micrometer. Measurement results of angiographic catheters and guide wires were presented and discussed. The bending stiffness shows a significant dependence on the angle of the test specimen's rotation around its length axis.
Processing techniques development, volume 3
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Anuta, P. E.; Hixson, M. M.; Swain, P. H.
1978-01-01
The author has identified the following significant results. Analysis of the geometric characteristics of the aircraft synthetic aperture radar (SAR) relative to LANDSAT indicated that relatively low order polynomials would model the distortions to subpixel accuracy and bring SAR into registration for good quality imagery. The area analyzed was also small, about 10 miles square, so this is an additional constraint. For the Air Force/ERIM data, none of the tested methods could achieve subpixel accuracy. The reasons for this are unknown; however, the noisy (high scintillation) nature of the data and the attendant unrecognizability of features contribute to this error. It is concluded that the quadratic model would adequately provide distortion modeling for small areas, i.e., 10 to 20 miles square.
The limits of direct satellite tracking with the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Bertiger, W. I.; Yunck, T. P.
1988-01-01
Recent advances in high precision differential Global Positioning System-based satellite tracking can be applied to the more conventional direct tracking of low earth satellites. To properly evaluate the limiting accuracy of direct GPS-based tracking, it is necessary to account for the correlations between the a-priori errors in GPS states, Y-bias, and solar pressure parameters. These can be obtained by careful analysis of the GPS orbit determination process. The analysis indicates that sub-meter accuracy can be readily achieved for a user above 1000 km altitude, even when the user solution is obtained with data taken 12 hours after the data used in the GPS orbit solutions.
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms in the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
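A minimal sketch of a Bhattacharyya-distance classifier operating on blob feature histograms; the histograms, bin count, and class names are hypothetical, and the rule-based stage described in the abstract is omitted.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two normalised histograms."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / (p.sum() + eps); q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))                 # Bhattacharyya coefficient
    return -np.log(bc + eps)

def classify_blob(blob_hist, class_hists):
    """Assign the blob to the class whose reference histogram is closest."""
    dists = {c: bhattacharyya_distance(blob_hist, h) for c, h in class_hists.items()}
    return min(dists, key=dists.get), dists

# Hypothetical 8-bin feature histograms for one blob and two benthic classes
blob = [5, 9, 14, 20, 18, 10, 4, 1]
refs = {"sand": [2, 4, 10, 22, 25, 15, 5, 1], "seagrass": [15, 20, 18, 10, 6, 4, 2, 1]}
print(classify_blob(blob, refs)[0])
```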
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Majid, Walid
2009-01-01
At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to an accuracy of half a fringe cycle using time variations between measured and calculated phases as the Earth rotates, using DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a high-accuracy spacecraft angular position estimate. This can be achieved using telemetry signals of at least 4-8 MSamples/sec data rate or DOR tones.
Error-proneness as a handicap signal.
De Jaegher, Kris
2003-09-21
This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to ensure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal. The total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap of sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this causes an indirect handicap of sending a high-quality signal that still stops the low-quality signaller from sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that the receiver who does not detect a signal adopts a response that is bad for the signaller. In both our models, we thus obtain the result that signal accuracy should not lie above a certain level in order for honest signalling to be possible. Moreover, we show that the maximal accuracy that can be achieved is higher the lower the degree of conflict between signaller and receiver. We also show that it is the conditions for honest signalling that may be constraining signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.
Autonomous satellite navigation using starlight refraction angle measurements
NASA Astrophysics Data System (ADS)
Ning, Xiaolin; Wang, Longhua; Bai, Xinbei; Fang, Jiancheng
2013-05-01
An on-board autonomous navigation capability is required to reduce the operation costs and enhance the navigation performance of future satellites. Autonomous navigation by stellar refraction is a type of autonomous celestial navigation method that uses high-accuracy star sensors instead of Earth sensors to provide information regarding Earth's horizon. In previous studies, the refraction apparent height has typically been used for such navigation. However, the apparent height cannot be measured directly by a star sensor and can only be calculated by the refraction angle and an atmospheric refraction model. Therefore, additional errors are introduced by the uncertainty and nonlinearity of atmospheric refraction models, which result in reduced navigation accuracy and reliability. A new navigation method based on the direct measurement of the refraction angle is proposed to solve this problem. Techniques for the determination of the refraction angle are introduced, and a measurement model for the refraction angle is established. The method is tested and validated by simulations. When the starlight refraction height ranges from 20 to 50 km, a positioning accuracy of better than 100 m can be achieved for a low-Earth-orbit (LEO) satellite using the refraction angle, while the positioning accuracy of the traditional method using the apparent height is worse than 500 m under the same conditions. Furthermore, an analysis of the factors that affect navigation accuracy, including the measurement accuracy of the refraction angle, the number of visible refracted stars per orbit and the installation azimuth of star sensor, is presented. This method is highly recommended for small satellites in particular, as no additional hardware besides two star sensors is required.
Huang, Yuansheng; Yang, Zhirong; Wang, Jing; Zhuo, Lin; Li, Zhixia; Zhan, Siyan
2016-05-06
The aim was to compare the performance of search strategies for retrieving systematic reviews of diagnostic test accuracy from The Cochrane Library. The CDSR and DARE databases in The Cochrane Library were searched for systematic reviews of diagnostic test accuracy published between 2008 and 2012 using nine search strategies. Each strategy consists of one group, or a combination of groups, of search filters about diagnostic test accuracy. Four groups of diagnostic filters were used. The strategy combining all the filters was used as the reference to determine the sensitivity, precision, and sensitivity × precision product of the other eight strategies. The reference strategy retrieved 8029 records, of which 832 were eligible. The strategy composed only of MeSH terms about "accuracy measures" achieved the highest values in both precision (69.71%) and product (52.45%) with a moderate sensitivity (75.24%). The combination of MeSH terms and free-text words about "accuracy measures" contributed little to increasing the sensitivity. Strategies composed of filters about "diagnosis" had similar sensitivity but lower precision and product than those composed of filters about "accuracy measures". The MeSH term "exp'diagnosis'" achieved the lowest precision (9.78%) and product (7.91%), while its hyponym retrieved only half the number of records at the expense of missing 53 target articles. Precision was negatively correlated with sensitivity among the nine strategies. Compared to the filters about "diagnosis", the filters about "accuracy measures" achieved similar sensitivities but higher precision. When both types of filters were combined, the sensitivity of the strategy was clearly enhanced, whereas combining MeSH terms and free-text words about the same concept added little to sensitivity. This article is protected by copyright. All rights reserved.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.
Qi, Jun; Liu, Guo-Ping
2017-11-06
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is a radio frequency (RF) module, which is only used for time synchronization between different nodes, with an accuracy of up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, together with an envelope detection filter, estimates the envelope value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot when the UIPS works on the line-of-sight (LOS) signal.
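Once per-beacon ranges are available from the TOF measurements, the target position can be recovered by linearised least-squares multilateration; the sketch below assumes known beacon coordinates and noise-free synthetic ranges, and stands apart from the paper's own positioning pipeline.

```python
import numpy as np

def multilaterate(beacons, ranges):
    """Solve for a 3-D position from beacon coordinates and measured ranges by
    linearising against the first beacon (standard least-squares formulation)."""
    P = np.asarray(beacons, float)
    d = np.asarray(ranges, float)
    p0, d0 = P[0], d[0]
    A = 2.0 * (P[1:] - p0)
    b = d0 ** 2 - d[1:] ** 2 + np.sum(P[1:] ** 2, axis=1) - np.sum(p0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical ceiling-mounted beacons (metres) and a target position
beacons = [[0, 0, 3], [5, 0, 3], [0, 5, 3], [5, 5, 3]]
target = np.array([2.0, 1.5, 0.8])
ranges = [np.linalg.norm(np.array(b) - target) for b in beacons]
print(multilaterate(beacons, ranges))   # should be close to the target position
```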
Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun; Yoo, Eun Sang
2012-05-01
Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were statistically no differences in specificity, sensitivity and accuracy between the two groups. Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI.
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
Di-codon Usage for Gene Classification
NASA Astrophysics Data System (ADS)
Nguyen, Minh N.; Ma, Jianmin; Fogel, Gary B.; Rajapakse, Jagath C.
Classification of genes into biologically related groups facilitates inference of their functions. Codon usage bias has been described previously as a potential feature for gene classification. In this paper, we demonstrate that di-codon usage can further improve classification of genes. By using both codon and di-codon features, we achieve near perfect accuracies for the classification of HLA molecules into major classes and sub-classes. The method is illustrated on 1,841 HLA sequences which are classified into two major classes, HLA-I and HLA-II. Major classes are further classified into sub-groups. A binary SVM using di-codon usage patterns achieved 99.95% accuracy in the classification of HLA genes into major HLA classes; and multi-class SVM achieved accuracy rates of 99.82% and 99.03% for sub-class classification of HLA-I and HLA-II genes, respectively. Furthermore, by combining codon and di-codon usages, the prediction accuracies reached 100%, 99.82%, and 99.84% for HLA major class classification, and for sub-class classification of HLA-I and HLA-II genes, respectively.
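A minimal sketch of the di-codon feature idea, assuming scikit-learn for the SVM: each coding sequence is mapped to a 4096-dimensional vector of di-codon frequencies and a linear SVM is trained on labeled examples. The toy sequences, labels and SVM settings are illustrative assumptions, not the paper's data or configuration.

```python
# Hedged illustration of di-codon usage features feeding an SVM classifier.
from itertools import product
import numpy as np
from sklearn.svm import SVC

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]
DICODONS = {d: i for i, d in enumerate("".join(p) for p in product(CODONS, repeat=2))}

def dicodon_usage(seq):
    """Relative frequency of each of the 4096 di-codons in a coding sequence."""
    v = np.zeros(len(DICODONS))
    for i in range(0, len(seq) - 5, 3):          # step one codon at a time
        v[DICODONS[seq[i:i + 6]]] += 1
    return v / max(v.sum(), 1)

# Hypothetical training data: sequences with binary labels (e.g. HLA-I vs HLA-II).
seqs   = ["ATGGCTGCTAAA" * 10, "ATGTTTGGGCCC" * 10, "ATGGCAGCTAAG" * 10, "ATGTTCGGACCA" * 10]
labels = [0, 1, 0, 1]
X = np.array([dicodon_usage(s) for s in seqs])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X))
```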
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezzacappa, Anthony; Endeve, Eirik; Hauck, Cory D.
We extend the positivity-preserving method of Zhang & Shu [49] to simulate the advection of neutral particles in phase space using curvilinear coordinates. The ability to utilize these coordinates is important for non-equilibrium transport problems in general relativity and also in science and engineering applications with specific geometries. The method achieves high-order accuracy using Discontinuous Galerkin (DG) discretization of phase space and strong stability-preserving Runge-Kutta (SSP-RK) time integration. Special care is taken to ensure that the method preserves strict bounds for the phase space distribution function f; i.e., f ∈ [0, 1]. The combination of suitable CFL conditions and the use of the high-order limiter proposed in [49] is sufficient to ensure positivity of the distribution function. However, to ensure that the distribution function satisfies the upper bound, the discretization must, in addition, preserve the divergence-free property of the phase space flow. Proofs that highlight the necessary conditions are presented for general curvilinear coordinates, and the details of these conditions are worked out for some commonly used coordinate systems (i.e., spherical polar spatial coordinates in spherical symmetry and cylindrical spatial coordinates in axial symmetry, both with spherical momentum coordinates). Results from numerical experiments - including one example in spherical symmetry adopting the Schwarzschild metric - demonstrate that the method achieves high-order accuracy and that the distribution function satisfies the maximum principle.
Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won
2013-09-27
Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
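A sketch, under stated assumptions, of the autoregressive (AR) feature extraction the abstract describes: an AR model is fitted by least squares to a 3 s accelerometer window sampled at 20 Hz and its coefficients become the classifier input. The model order and the synthetic signal are illustrative choices, not the paper's.

```python
# Illustrative AR-coefficient features from a low sampling rate, short data window.
import numpy as np

def ar_coefficients(x, order=4):
    """Least-squares AR(p) fit: x[t] is regressed on its p previous samples."""
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

fs, dw = 20, 3.0                        # 20 Hz sampling rate, 3 s data window (as above)
t = np.arange(int(fs * dw)) / fs
accel = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)  # toy accelerometer axis
features = ar_coefficients(accel, order=4)
print(features)                          # 4 AR coefficients -> input to a neural network
```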
NASA Astrophysics Data System (ADS)
Kocyigit, Ilker; Liu, Hongyu; Sun, Hongpeng
2013-04-01
In this paper, we consider invisibility cloaking via the transformation optics approach through a ‘blow-up’ construction. An ideal cloak makes use of singular cloaking material. ‘Blow-up-a-small-region’ and ‘truncation-of-singularity’ constructions are introduced to avoid the singular structure, however, giving only near-cloaks. Work in the literature has developed various mechanisms to achieve high-accuracy approximate near-cloaking devices and, from a practical viewpoint, to nearly cloak arbitrary contents. We study the problem from a different viewpoint. It is shown that for those regularized cloaking devices, the corresponding scattering wave fields due to an incident plane wave have regular patterns. The regular patterns are both a curse and a blessing. On the one hand, the regular wave pattern betrays the location of a cloaking device, which is an intrinsic defect of the ‘blow-up’ construction, and this is particularly the case for the construction employing a high-loss layer lining. Indeed, our numerical experiments show robust reconstructions of the location, even when implementing phaseless cross-section data. The construction employing a high-density layer lining shows a certain promising feature. On the other hand, it is shown that one can introduce an internal point source to produce a canceling scattering pattern and achieve a near-cloak of an arbitrary order of accuracy.
Wavelet data compression for archiving high-resolution icosahedral model data
NASA Astrophysics Data System (ADS)
Wang, N.; Bao, J.; Lee, J.
2011-12-01
With the increase of the resolution of global circulation models, it becomes ever more important to develop highly effective solutions for archiving the huge datasets produced by those models. While lossless data compression guarantees the accuracy of the restored data, it can only achieve limited reduction of data size. Wavelet transform based data compression offers significant potential for data size reduction, and it has been shown to be very effective in transmitting data for remote visualizations. However, for data archive purposes, a detailed study has to be conducted to evaluate its impact on the datasets that will be used in further numerical computations. In this study, we carried out two sets of experiments, for the summer and winter seasons. An icosahedral grid weather model and a highly efficient wavelet data compression software were used for this study. Initial conditions were compressed and used as input to the model, which was run for 10 days. The forecast results were then compared with those from the model run with the original uncompressed initial conditions. Several visual comparisons, as well as statistics from numerical comparisons, are presented. These results indicate that, with specified minimum accuracy losses, wavelet data compression achieves significant data size reduction and at the same time maintains minimal numerical impact on the datasets. In addition, some issues are discussed to increase the archive efficiency while retaining a complete set of metadata for each archived file.
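A minimal sketch of lossy wavelet compression with a controlled accuracy loss, assuming the PyWavelets package; the study's actual compression software, wavelet basis and error criterion are not reproduced.

```python
# Hedged sketch: keep only the largest wavelet coefficients of a model field and
# check the maximum pointwise accuracy loss after reconstruction.
import numpy as np
import pywt

def compress(field, wavelet="db4", level=4, keep=0.05):
    """Hard-threshold all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec(field, wavelet, level=level)
    flat = np.concatenate([np.abs(c).ravel() for c in coeffs])
    thresh = np.quantile(flat, 1.0 - keep)
    return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

field = np.cumsum(np.random.randn(4096))          # stand-in for a 1-D model field
restored = pywt.waverec(compress(field), "db4")[:field.size]
print(np.max(np.abs(restored - field)))           # maximum pointwise accuracy loss
```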
FogBank: a single cell segmentation across multiple cell lines and image modalities.
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary
2014-12-30
Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on the morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles, with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities, which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance over the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images. FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open-source and includes a Graphical User Interface for user-friendly execution.
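A hedged sketch of FogBank's first idea, histogram binning: pixel intensities are quantized into a small number of percentile bins so that watershed-style flooding sees fewer noise-induced local minima. The bin count and the random image are illustrative assumptions.

```python
# Illustrative histogram-binning quantization of a noisy microscopy-like image.
import numpy as np

def histogram_binning(image, n_bins=10):
    """Map each pixel to the index of its intensity percentile bin."""
    edges = np.percentile(image, np.linspace(0, 100, n_bins + 1))
    return np.digitize(image, edges[1:-1])          # values in 0 .. n_bins-1

img = np.random.rand(256, 256) + 0.1 * np.random.rand(256, 256)   # noisy toy image
quantized = histogram_binning(img, n_bins=10)
print(np.unique(quantized))
```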
Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping
NASA Astrophysics Data System (ADS)
Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.
2016-06-01
High resolution and high accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras have high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12 megapixel GoPro camera. The average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, and adjusting the speed in order to have an appropriate overlap with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which, together with the image collection, results in very fast field work. If improved accuracy is needed, it can be achieved, since the image resolution is 1/4 cm, by using a total station for the control point survey, although the field work time increases.
NASA Astrophysics Data System (ADS)
Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya
2018-04-01
In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve the spatial details for the classification of such high-resolution data. To this aim, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, considering their complementary characteristics. To better characterize complex scenes of remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantizes both the landscape composition and spatial configuration by the use of the initial classification results. In addition, a novel tri-training strategy is proposed to solve the over-smoothing effect of relearning by means of automatic selection of training samples with low classification certainties, which always distribute in or near the edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated by the probabilistic output of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework, in terms of both edge and non-edge accuracy.
Pencil-beam redefinition algorithm dose calculations for electron therapy treatment planning
NASA Astrophysics Data System (ADS)
Boyd, Robert Arthur
2001-08-01
The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident beam model, that PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of the PBRA-calculated dose accuracy in a heterogeneous medium such as one presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data was estimated to have 1% precision and 2% agreement with accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to PBRA; however, 4% or 2mm accuracy was not achieved by either PBRA version for all data points. Finally, PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show PBRA did not agree with MC to within 4% or 2mm in a small fraction (<3%) of the irradiated volume. Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given ease of beam commissioning, documentation of accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
Classification of urban features using airborne hyperspectral data
NASA Astrophysics Data System (ADS)
Ganesh Babu, Bharath
Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24th July 2006 in and near Terre Haute, Indiana. Three distinct study areas including two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods -- maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA) -- were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area with relatively homogeneous composition was classified consistently with the highest accuracy by all four of the classification methods used. The average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, with an accuracy of 96.80% and 96.10% Kappa. The average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers. The lowest accuracy was achieved by SAM at 63.90% with 59.20% Kappa. This was also the lowest accuracy in the entire analysis. This study demonstrates the potential for using the visible and near infrared (VNIR) bands from AISA+ hyperspectral data in urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored. The importance of incorporating imaging spectrometer data in high resolution urban feature mapping is emphasized.
NASA Astrophysics Data System (ADS)
Zhou, Lei; Li, Zhengying; Xiang, Na; Bao, Xiaoyi
2018-06-01
A high-speed quasi-distributed demodulation method based on microwave photonics and the chromatic dispersion effect is designed and implemented for weak fiber Bragg gratings (FBGs). Owing to the dispersion compensation fiber (DCF), an FBG wavelength shift leads to a change of the difference-frequency signal at the mixer. By crossing microwave sweep cycles, the wavelengths of all cascaded FBGs can be obtained at high speed by measuring the frequency changes. Moreover, through the introduction of the Chirp-Z and Hanning window algorithms, the difference-frequency signal is analyzed accurately. By adopting a single-peak filter as a reference, the length disturbance of the DCF caused by temperature can also be eliminated. Therefore, the accuracy of this novel method is greatly improved, and high-speed demodulation of FBGs can be easily realized. The feasibility and performance are experimentally demonstrated using 105 FBGs with 0.1% reflectivity and 1 m spatial interval. Results show that each grating can be distinguished well; the demodulation rate is as high as 40 kHz, and the accuracy is about 8 pm.
Münßinger, Jana I.; Halder, Sebastian; Kleih, Sonja C.; Furdea, Adrian; Raco, Valerio; Hösle, Adi; Kübler, Andrea
2010-01-01
Brain–computer interfaces (BCIs) enable paralyzed patients to communicate; however, up to date, no creative expression was possible. The current study investigated the accuracy and user-friendliness of P300-Brain Painting, a new BCI application developed to paint pictures using brain activity only. Two different versions of the P300-Brain Painting application were tested: A colored matrix tested by a group of ALS-patients (n = 3) and healthy participants (n = 10), and a black and white matrix tested by healthy participants (n = 10). The three ALS-patients achieved high accuracies; two of them reaching above 89% accuracy. In healthy subjects, a comparison between the P300-Brain Painting application (colored matrix) and the P300-Spelling application revealed significantly lower accuracy and P300 amplitudes for the P300-Brain Painting application. This drop in accuracy and P300 amplitudes was not found when comparing the P300-Spelling application to an adapted, black and white matrix of the P300-Brain Painting application. By employing a black and white matrix, the accuracy of the P300-Brain Painting application was significantly enhanced and reached the accuracy of the P300-Spelling application. ALS-patients greatly enjoyed P300-Brain Painting and were able to use the application with the same accuracy as healthy subjects. P300-Brain Painting enables paralyzed patients to express themselves creatively and to participate in the prolific society through exhibitions. PMID:21151375
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between the InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved with the InSAR technique when measuring building deformation. We discuss the differences in accuracy between the OLS regression and measurement-of-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
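An illustrative sketch of the two comparisons mentioned above, an ordinary least squares (OLS) regression between InSAR-derived and leveled deformation and an RMSE of their differences; the numbers are hypothetical, not the paper's measurements.

```python
# Hedged comparison of InSAR estimates against leveling "truth" for a few points.
import numpy as np

leveling = np.array([-3.2, -1.1, 0.4, 1.8, 2.9, 4.1])              # mm, leveled values
insar    = leveling + np.random.normal(0.0, 1.0, leveling.size)    # mm, PS-InSAR estimates

slope, intercept = np.polyfit(leveling, insar, 1)                  # OLS regression line
r = np.corrcoef(leveling, insar)[0, 1]
rmse = np.sqrt(np.mean((insar - leveling) ** 2))
print(f"slope={slope:.2f}, intercept={intercept:.2f} mm, r={r:.2f}, RMSE={rmse:.2f} mm")
```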
An information-theoretic approach to designing the plane spacing for multifocal plane microscopy
Tahmasbi, Amir; Ram, Sripad; Chao, Jerry; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Multifocal plane microscopy (MUM) is a 3D imaging modality which enables the localization and tracking of single molecules at high spatial and temporal resolution by simultaneously imaging distinct focal planes within the sample. MUM overcomes the depth discrimination problem of conventional microscopy and allows high accuracy localization of a single molecule in 3D along the z-axis. An important question in the design of MUM experiments concerns the appropriate number of focal planes and their spacings to achieve the best possible 3D localization accuracy along the z-axis. Ideally, it is desired to obtain a 3D localization accuracy that is uniform over a large depth and has small numerical values, which guarantee that the single molecule is continuously detectable. Here, we address this concern by developing a plane spacing design strategy based on the Fisher information. In particular, we analyze the Fisher information matrix for the 3D localization problem along the z-axis and propose spacing scenarios termed the strong coupling and the weak coupling spacings, which provide appropriate 3D localization accuracies. Using these spacing scenarios, we investigate the detectability of the single molecule along the z-axis and study the effect of changing the number of focal planes on the 3D localization accuracy. We further review a software module we recently introduced, the MUMDesignTool, that helps to design the plane spacings for a MUM setup. PMID:26113764
Space telescope scientific instruments
NASA Technical Reports Server (NTRS)
Leckrone, D. S.
1979-01-01
The paper describes the Space Telescope (ST) observatory, the design concepts of the five scientific instruments which will conduct the initial observatory observations, and summarizes their astronomical capabilities. The instruments are the wide-field and planetary camera (WFPC) which will receive the highest quality images, the faint-object camera (FOC) which will penetrate to the faintest limiting magnitudes and achieve the finest angular resolution possible, and the faint-object spectrograph (FOS), which will perform photon noise-limited spectroscopy and spectropolarimetry on objects substantially fainter than those accessible to ground-based spectrographs. In addition, the high resolution spectrograph (HRS) will provide higher spectral resolution with greater photometric accuracy than previously possible in ultraviolet astronomical spectroscopy, and the high-speed photometer will achieve precise time-resolved photometric observations of rapidly varying astronomical sources on short time scales.
Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator
NASA Astrophysics Data System (ADS)
Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin
2016-06-01
Rhombic piezoelectric actuators (RPAs), which employ a rhombic mechanism to amplify the small stroke of a PZT stack, have been widely used in many micro-positioning machines owing to remarkable properties such as high displacement resolution and compact structure. In order to achieve a large actuation range along with high accuracy, stroke maximization and compensation for hysteresis are two concerns in the use of an RPA. However, existing maximization methods based on theoretical models can hardly predict the maximum stroke of an RPA accurately because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is tedious and time-consuming, since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of the RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model can be identified from only a small set of experimental data by combining the discrete Preisach model (DPM) with a particle swarm optimization (PSO) algorithm. The proposed hybrid modeling method not only models the hysteresis with considerable accuracy but also significantly simplifies the modeling procedure. Finally, the inversion of the hysteresis is introduced to compensate for the hysteresis nonlinearity of the RPA, and consequently a pseudo-linear system can be obtained.
ERIC Educational Resources Information Center
Kaiser, Johanna; Südkamp, Anna; Möller, Jens
2017-01-01
Teachers' judgments of students' academic achievement are not only affected by the achievement themselves but also by several other characteristics such as ethnicity, gender, and minority status. In real-life classrooms, achievement and further characteristics are often confounded. We disentangled achievement, ethnicity and minority status and…
New concept for in-line OLED manufacturing
NASA Astrophysics Data System (ADS)
Hoffmann, U.; Landgraf, H.; Campo, M.; Keller, S.; Koening, M.
2011-03-01
A new concept of a vertical In-Line deposition machine for large area white OLED production has been developed. The concept targets manufacturing on large substrates (>= Gen 4, 750 x 920 mm2) using linear deposition source achieving a total material utilization of >= 50 % and tact time down to 80 seconds. The continuously improved linear evaporation sources for the organic material achieve thickness uniformity on Gen 4 substrate of better than +/- 3 % and stable deposition rates down to less than 0.1 nm m/min and up to more than 100 nm m/min. For Lithium-Fluoride but also for other high evaporation temperature materials like Magnesium or Silver a linear source with uniformity better than +/- 3 % has been developed. For Aluminum we integrated a vertical oriented point source using wire feed to achieve high (> 150 nm m/min) and stable deposition rates. The machine concept includes a new vertical vacuum handling and alignment system for Gen 4 shadow masks. A complete alignment cycle for the mask can be done in less than one minute achieving alignment accuracy in the range of several 10 μm.
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Due to the great influence of multipath effects, noise, and clock errors on the pseudorange, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is usually determined by the known point initialization (KPI) method, and the ambiguities can then be fixed with the LAMBDA method. In this paper, a new method that achieves high-precision indoor pseudolite positioning without using KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The detailed process of the method is as follows. Aiming at a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope and two-centimeter or four-centimeter search steps are used to ensure precision at the centimeter level together with high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value for LAMBDA, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
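A heavily hedged sketch of the ambiguity function method (AFM) idea: the ambiguity function value (AFV) at a candidate position is large when the observed fractional carrier phases agree, modulo one cycle, with the phases predicted from that position, so a grid search over a small scope around the coarse DPL solution locates the phase-consistent position. The geometry, wavelength and grid settings are illustrative assumptions, not the paper's configuration.

```python
# Illustrative AFM grid search over a 1 m scope with 2 cm steps (as in the abstract).
import numpy as np

wavelength = 0.19                                     # m, assumed carrier wavelength
pseudolites = np.array([[0, 0, 3], [10, 0, 3], [0, 10, 3], [10, 10, 3]], dtype=float)
true_pos = np.array([4.30, 6.70, 1.00])               # hypothetical rover position

def predicted_phase(pos):
    return np.linalg.norm(pseudolites - pos, axis=1) / wavelength   # carrier cycles

obs = predicted_phase(true_pos) % 1.0                 # observed fractional phases only

def afv(pos):
    # AFV is maximal (= 1) when every predicted-minus-observed phase is an integer cycle.
    return np.mean(np.cos(2 * np.pi * (predicted_phase(pos) - obs)))

coarse = true_pos + np.array([0.3, -0.2, 0.0])        # stand-in for the DPL coarse solution
grid = [coarse[:2] + np.array([dx, dy])
        for dx in np.arange(-0.5, 0.5, 0.02) for dy in np.arange(-0.5, 0.5, 0.02)]
best = max(grid, key=lambda xy: afv(np.array([xy[0], xy[1], true_pos[2]])))
print(best)                                           # close to the true x, y
```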
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard Sequential- and Batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which directly measure the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
Samuel, Oluwarotimi Williams; Geng, Yanjuan; Li, Xiangxin; Li, Guanglin
2017-10-28
To control multiple degrees of freedom (MDoF) upper limb prostheses, pattern recognition (PR) of electromyogram (EMG) signals has been successfully applied. This technique requires amputees to provide sufficient EMG signals to decode their limb movement intentions (LMIs). However, amputees with neuromuscular disorder/high level amputation often cannot provide sufficient EMG control signals, and thus the applicability of the EMG-PR technique is limited especially to this category of amputees. As an alternative approach, electroencephalograph (EEG) signals recorded non-invasively from the brain have been utilized to decode the LMIs of humans. However, most of the existing EEG based limb movement decoding methods primarily focus on identifying limited classes of upper limb movements. In addition, investigation on EEG feature extraction methods for the decoding of multiple classes of LMIs has rarely been considered. Therefore, 32 EEG feature extraction methods (including 12 spectral domain descriptors (SDDs) and 20 time domain descriptors (TDDs)) were used to decode multiple classes of motor imagery patterns associated with different upper limb movements based on 64-channel EEG recordings. From the obtained experimental results, the best individual TDD achieved an accuracy of 67.05 ± 3.12% as against 87.03 ± 2.26% for the best SDD. By applying a linear feature combination technique, an optimal set of combined TDDs recorded an average accuracy of 90.68% while that of the SDDs achieved an accuracy of 99.55% which were significantly higher than those of the individual TDD and SDD at p < 0.05. Our findings suggest that optimal feature set combination would yield a relatively high decoding accuracy that may improve the clinical robustness of MDoF neuroprosthesis. The study was approved by the ethics committee of Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077.
NASA Astrophysics Data System (ADS)
Mandic, M.; Stöbener, N.; Smajgl, D.
2017-12-01
For many decades, different instrumental methods involving generations of isotope ratio mass spectrometers with different peripheral units for sample preparation have provided the high precision and high sample throughput required in a range of applications, from geological and hydrological to food and forensic. With this work we introduce automated measurement of δ13C and δ18O from solid carbonate samples, DIC, and δ18O of water. We have demonstrated the use of a Thermo Scientific™ Delta Ray™ IRIS with URI Connect on certified reference materials and confirmed the high achievable accuracy and a precision better than 0.1‰ for both δ13C and δ18O, in the laboratory or in the field, with the same precision and sample throughput. With the equilibration method for determination of δ18O in water samples presented in this work, the achieved repeatability and accuracy are 0.12‰ and 0.68‰, respectively, which fulfills the requirements of regulatory methods. The preparation of samples for carbonate and DIC analysis on the Delta Ray IRIS with URI Connect is similar to the previously mentioned Gas Bench II methods. Samples are put into vials and phosphoric acid is added. The resulting sample-acid chemical reaction releases CO2 gas, which is then introduced into the Delta Ray IRIS via the Variable Volume. Three international standards of carbonate materials (NBS-18, NBS-19 and IAEA-CO-1) were analyzed. NBS-18 and NBS-19 were used as standards for calibration, and IAEA-CO-1 was treated as unknown. For water sample analysis, an equilibration method with 1% CO2 in dry air was used. Test measurements and confirmation of the precision and accuracy of the method for determining δ18O in water samples were done with three laboratory standards, namely ANST, OCEAN 2 and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to ensure the accuracy of the isotopic values. The Principle of Identical Treatment was applied in sample and standard preparation, in the measurement procedure, as well as in the evaluation of the results.
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT, presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
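A hedged sketch of the STFT branch of the pipeline: a 5 s ECG segment is turned into a 2-D time-frequency matrix with the short-term Fourier transform and fed to a small 2-D CNN. The 300 Hz sampling rate, the layer sizes and the synthetic segment are illustrative assumptions; the paper's exact architecture is not reproduced.

```python
# Illustrative STFT-to-CNN pipeline for a binary AF / non-AF decision.
import numpy as np
from scipy.signal import stft
import tensorflow as tf

fs = 300                                             # assumed ECG sampling rate (Hz)
ecg = np.random.randn(5 * fs).astype("float32")      # stand-in for a 5 s ECG segment

_, _, Z = stft(ecg, fs=fs, nperseg=128)              # 2-D time-frequency representation
x = np.abs(Z)[np.newaxis, ..., np.newaxis]           # shape (1, freq, time, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=x.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(atrial fibrillation)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.predict(x))                              # untrained output, for a shape check
```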
Wang, W; Degenhart, A D; Collinger, J L; Vinjamuri, R; Sudre, G P; Adelson, P D; Holder, D L; Leuthardt, E C; Moran, D W; Boninger, M L; Schwartz, A B; Crammond, D J; Tyler-Kabara, E C; Weber, D J
2009-01-01
In this study human motor cortical activity was recorded with a customized micro-ECoG grid during individual finger movements. The quality of the recorded neural signals was characterized in the frequency domain from three different perspectives: (1) coherence between neural signals recorded from different electrodes, (2) modulation of neural signals by finger movement, and (3) accuracy of finger movement decoding. It was found that, for the high frequency band (60-120 Hz), coherence between neighboring micro-ECoG electrodes was 0.3. In addition, the high frequency band showed significant modulation by finger movement both temporally and spatially, and a classification accuracy of 73% (chance level: 20%) was achieved for individual finger movement using neural signals recorded from the micro-ECoG grid. These results suggest that the micro-ECoG grid presented here offers sufficient spatial and temporal resolution for the development of minimally-invasive brain-computer interface applications.
Ye, Guangming; Cai, Xuejian; Wang, Biao; Zhou, Zhongxian; Yu, Xiaohua; Wang, Weibin; Zhang, Jiandong; Wang, Yuhai; Dong, Jierong; Jiang, Yunyun
2008-11-04
A simple, accurate and rapid method for simultaneous analysis of vancomycin and ceftazidime in cerebrospinal fluid (CSF), utilizing high-performance liquid chromatography (HPLC), has been developed and thoroughly validated to satisfy strict FDA guidelines for bioanalytical methods. Protein precipitation was used as the sample pretreatment method. In order to increase the accuracy, tinidazole was chosen as the internal standard. Separation was achieved on a Diamonsil C18 column (200 mm x 4.6mm I.D., 5 microm) using a mobile phase composed of acetonitrile and acetate buffer (pH 3.5) (8:92, v/v) at room temperature (25 degrees C), and the detection wavelength was 240 nm. All the validation data, such as accuracy, precision, and inter-day repeatability, were within the required limits. The method was applied to determine vancomycin and ceftazidime concentrations in CSF in five craniotomy patients.
Vegetational analysis with Skylab-3 imagery. [Perquimans County, North Carolina
NASA Technical Reports Server (NTRS)
Welby, C. W. (Principal Investigator); Holman, R. E.
1975-01-01
The author has identified the following significant results. Color infrared photography from Skylab 3 appeared to be superior to ERTS imagery in a vegetational study of northeastern North Carolina. An accuracy of 87% was achieved in delimiting species composition and zonation patterns of three coastal, vegetation classes. A vegetation map of Perquimans County, North Carolina, seemed to have a high degree of correlation with information provided by high altitude U-2 photography. Random verification sites revealed an overall interpretation accuracy above 84%. Comparison of maps drawn utilizing Skylab photography with North Carolina Dept. of Agriculture estimates of crop acreage revealed some marked discrepancies. The chief difference lies in the nonagricultural category in which there is a 30% discrepancy. This fact raised some questions as to the definition of nonagricultural land uses and methods used by the State Dept. of Agriculture to determine actual percentages of crops grown.
High accuracy LADAR scene projector calibration sensor development
NASA Astrophysics Data System (ADS)
Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.
2008-04-01
A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high-precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision to a fraction of a nanosecond, the system can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall time, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities and limitations of this new sensor system.
Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong
2011-03-01
A tandem solid-phase extraction method (SPE) of connecting two different cartridges (C(18) and MCX) in series was developed as the extraction procedure in this article, which provided better extraction yields (>86%) for all analytes and more appropriate sample purification from endogenous interference materials compared with a single cartridge. Analyte separation was achieved on a C(18) reversed-phase column at the wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89% with RSD values that were always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of citrus orchard inventory using LISS-III and LISS-IV data
NASA Astrophysics Data System (ADS)
Singh, Niti; Chaudhari, K. N.; Manjunath, K. R.
2016-04-01
In India, in terms of area under cultivation, citrus is the third most cultivated fruit crop after banana and mango. Within the citrus group, lime is one of the most important horticultural crops in India, as the demand for its consumption is very high. Hence, preparing citrus crop inventories using remote sensing techniques would help in maintaining a record of its area and production statistics. This study shows how accurately citrus orchards can be classified using both IRS Resourcesat-2 LISS-III and LISS-IV data, and identifies the optimum bio-window for procuring satellite data to achieve the high classification accuracy required for maintaining a crop inventory. Findings of the study show that classification accuracy increased from 55% (using LISS-III) to 77% (using LISS-IV). Also, according to the classified outputs and NDVI values obtained, April and May were identified as the optimum bio-window for citrus crop identification.
The rapid terrain visualization interferometric synthetic aperture radar sensor
NASA Astrophysics Data System (ADS)
Graham, Robert H.; Bickel, Douglas L.; Hensley, William H.
2003-11-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real-time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses the performance, and will discuss operational issues. In addition, we will show results from recent flight tests, including high accuracy maps taken of the San Diego area.
New high-precision drift-tube detectors for the ATLAS muon spectrometer
NASA Astrophysics Data System (ADS)
Kroha, H.; Fakhrutdinov, R.; Kozhin, A.
2017-06-01
Small-diameter muon drift tube (sMDT) detectors have been developed for upgrades of the ATLAS muon spectrometer. With a tube diameter of 15 mm, they provide about an order of magnitude higher rate capability than the present ATLAS muon tracking detectors, the MDT chambers with 30 mm tube diameter. The drift-tube design and the construction methods have been optimised for mass production and allow for the complex shapes required for maximising the acceptance. A record sense wire positioning accuracy of 5 μm has been achieved with the new design. In the serial production, the wire positioning accuracy is routinely better than 10 μm. 14 new sMDT chambers are already operational in ATLAS, and a further 16 are under construction for installation in the 2019-2020 LHC shutdown. For the upgrade of the barrel muon spectrometer for the High-Luminosity LHC, 96 sMDT chambers will be constructed between 2020 and 2024.
High-speed photogrammetry system for measuring the kinematics of insect wings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, Iain D.; Lawson, Nicholas J.; Harvey, Andrew R.
2006-06-10
We describe and characterize an experimental system to perform shape measurements on deformable objects using high-speed close-range photogrammetry. The eventual application is to extract the kinematics of several marked points on an insect wing during tethered and hovering flight. We investigate the performance of the system with a small number of views and determine an empirical relation between the mean pixel error of the optimization routine and the position error. Velocity and acceleration are calculated by numerical differencing, and their relation to the position errors is verified. For a field of view of ≈40 mm × 40 mm, an rms accuracy of 30 μm in position, 150 mm/s in velocity, and 750 m/s² in acceleration at 5000 frames/s is achieved. This accuracy is sufficient to measure the kinematics of hoverfly flight.
NASA/Cousteau ocean bathymetry experiment. Remote bathymetry using high gain LANDSAT data
NASA Technical Reports Server (NTRS)
Polcyn, F. C.
1976-01-01
Satellite remote bathymetry was verified to 22 m depths where water clarity was defined by α = 0.058 m⁻¹ and bottom reflection, r(b), was 26%. High gain band 4 and band 5 CCT data from LANDSAT 1 were used for a test site in the Bahama Islands and near Florida. Near Florida, where α = 0.11 m⁻¹ and r(b) = 20%, depths to 10 m were verified. Depth accuracies within 10% rms were achieved. Position accuracies within one LANDSAT pixel were obtained by reference to the Transit navigation satellites. The Calypso and the Beayondan, two ships, were at anchor on each of the seven days during LANDSAT 1 and 2 overpasses; LORAN C position information was used when the ships were underway making depth transects. Results are expected to be useful for updating charts showing shoals hazardous to navigation or for monitoring changes in nearshore topography.
Optimization of the scan protocols for CT-based material extraction in small animal PET/CT studies
NASA Astrophysics Data System (ADS)
Yang, Ching-Ching; Yu, Jhih-An; Yang, Bang-Hung; Wu, Tung-Hsin
2013-12-01
We investigated the effects of scan protocols on CT-based material extraction to minimize radiation dose while maintaining sufficient image information in small animal studies. The phantom simulation experiments were performed with the high dose (HD), medium dose (MD) and low dose (LD) protocols at 50, 70 and 80 kVp with varying mA s. The reconstructed CT images were segmented based on Hounsfield unit (HU)-physical density (ρ) calibration curves and the dual-energy CT-based (DECT) method. Compared to the (HU;ρ) method performed on CT images acquired with the 80 kVp HD protocol, a 2-fold improvement in segmentation accuracy and a 7.5-fold reduction in radiation dose were observed when the DECT method was performed on CT images acquired with the 50/80 kVp LD protocol, showing the possibility to reduce radiation dose while achieving high segmentation accuracy.
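A minimal sketch, under assumptions, of dual-energy CT (DECT) material segmentation: each voxel is described by its pair of HU values at the low and high tube voltages and assigned to the nearest material centroid. The centroid values and the toy voxel arrays are hypothetical, not the study's calibration data.

```python
# Illustrative nearest-centroid material assignment in (HU_low, HU_high) space.
import numpy as np

# (HU at 50 kVp, HU at 80 kVp) centroids for a few materials (illustrative values)
materials = {"air": (-1000, -1000), "soft tissue": (40, 45), "bone": (1400, 900)}
names = list(materials)
centroids = np.array([materials[n] for n in names], dtype=float)

def segment_dect(hu_low, hu_high):
    """Assign each voxel to the material with the closest (HU_low, HU_high) pair."""
    voxels = np.stack([hu_low.ravel(), hu_high.ravel()], axis=1)
    d = np.linalg.norm(voxels[:, None, :] - centroids[None, :, :], axis=2)
    return np.array(names)[d.argmin(axis=1)].reshape(hu_low.shape)

hu50 = np.array([[-995.0, 35.0], [1350.0, 60.0]])    # toy 2x2 slices at 50 kVp
hu80 = np.array([[-998.0, 44.0], [880.0, 50.0]])     # and at 80 kVp
print(segment_dect(hu50, hu80))
```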
The effect of clock, media, and station location errors on Doppler measurement accuracy
NASA Technical Reports Server (NTRS)
Miller, J. K.
1993-01-01
Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Ke; Li Yanqiu; Wang Hai
Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are almost one or two orders of magnitude higher than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light is performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI and that the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).
NASA Astrophysics Data System (ADS)
Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong
2017-10-01
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
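A hedged sketch of the first stage described above: clustering UAV image pixels into three regions (radish, bare ground, mulching film) with K-means on RGB values, assuming scikit-learn. The synthetic array stands in for a UAV orthophoto; the softmax classifier and the CNN stage are not reproduced here.

```python
# Illustrative K-means segmentation of a UAV image into three field regions.
import numpy as np
from sklearn.cluster import KMeans

rgb = np.random.rand(120, 160, 3)                    # stand-in for a low-altitude UAV image
pixels = rgb.reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
region_map = kmeans.labels_.reshape(rgb.shape[:2])   # 0/1/2 label per pixel
print(np.bincount(region_map.ravel()))               # pixel count per region
```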
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
A post-capture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect depends entirely on how the depth layers in the stack are combined. The accuracy of the extended depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers (ADMM) for depth from the focal stack and the synthetic defocus application. In addition to providing high structural accuracy, the optimization function of the proposed framework converges faster and to better solutions than state-of-the-art methods. The qualitative evaluation was carried out on 21 focal stacks, and the optimization function was compared against five other methods. A further 10 light field image sets were transformed into focal stacks for quantitative evaluation. Preliminary results indicate that the proposed framework performs better in terms of structural accuracy and optimization than the current state-of-the-art methods.
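For readers unfamiliar with the optimization machinery, the following sketch shows the generic (unpreconditioned) ADMM splitting on a toy l1-denoising problem; it illustrates the x-, z-, and dual-update structure only and is not the paper's preconditioned variant or its depth-from-focal-stack energy:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(y, lam=0.1, rho=1.0, n_iter=100):
    """Plain ADMM for min_x 0.5*||x - y||^2 + lam*||x||_1,
    split as f(x) + g(z) with the constraint x = z."""
    x = np.zeros_like(y)
    z = np.zeros_like(y)
    u = np.zeros_like(y)                        # scaled dual variable
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)    # proximal step for the l1 term
        u = u + x - z                           # dual ascent
    return z

y = np.array([0.05, 1.2, -0.8, 0.02, 2.5])
print(admm_l1_denoise(y, lam=0.1))
```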
HEp-2 cell image classification method based on very deep convolutional networks with small datasets
NASA Astrophysics Data System (ADS)
Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping
2017-07-01
Human Epithelial-2 (HEp-2) cell image staining-pattern classification is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the indirect immunofluorescence (IIF) protocol. Because the manual test is time consuming, subjective, and labor intensive, image-based computer-aided diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manually extracted features and achieve low accuracy. In addition, the available benchmark datasets are small, which makes them poorly suited to deep learning methods and directly affects classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, built on very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.
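A minimal, hypothetical sketch of a VGG-style classifier for this task (stacked 3x3 convolutions followed by a small classifier head); the channel counts, input size, and six-class output are illustrative assumptions, not the paper's improved VGGNet:

```python
import torch
import torch.nn as nn

class SmallVGG(nn.Module):
    """VGG-style stack of small 3x3 convolutions for HEp-2 staining-pattern
    classification; layer sizes here are illustrative, not the paper's."""
    def __init__(self, num_classes=6):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(1, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# One grayscale 64x64 cell image as a dummy input
logits = SmallVGG()(torch.randn(1, 1, 64, 64))
print(logits.shape)   # torch.Size([1, 6])
```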
A robust method of computing finite difference coefficients based on Vandermonde matrix
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin
2018-05-01
When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD scheme is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the system that determines the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed with MATLAB's matrix inverse operator, inaccurate results can be produced. To overcome this problem, we propose an algorithm based on the Vandermonde matrix. After a specified mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix, and the FD coefficients of the high-order FD method can then be computed by a dedicated Vandermonde algorithm, which avoids inverting the near-singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the Vandermonde-based algorithm has better accuracy than MATLAB's matrix inverse operator.
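The Vandermonde structure of the FD-coefficient system can be seen in a short sketch: requiring the stencil to reproduce the Taylor monomials exactly gives a system whose matrix is the transpose of a Vandermonde matrix in the stencil offsets. The sketch below solves it with a generic dense solver for clarity; the paper's point is to exploit a dedicated Vandermonde algorithm instead, so that near-singularity at high orders is avoided:

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, deriv_order):
    """Finite-difference coefficients c_j such that
    sum_j c_j f(x + offsets[j]*h) ~ h**deriv_order * f^(deriv_order)(x).
    The system matrix A[k, j] = offsets[j]**k is the transpose of a
    Vandermonde matrix built from the stencil offsets."""
    offsets = np.asarray(offsets, dtype=float)
    n = offsets.size
    A = np.vander(offsets, n, increasing=True).T   # A[k, j] = offsets[j]**k
    b = np.zeros(n)
    b[deriv_order] = factorial(deriv_order)
    return np.linalg.solve(A, b)

# Classic 3-point second-derivative stencil: [1, -2, 1]
print(fd_coefficients([-1, 0, 1], 2))
# Higher-order 5-point first-derivative stencil: [1/12, -2/3, 0, 2/3, -1/12]
print(fd_coefficients([-2, -1, 0, 1, 2], 1))
```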
Laser confocal measurement system for curvature radius of lenses based on grating ruler
NASA Astrophysics Data System (ADS)
Tian, Jiwei; Wang, Yun; Zhou, Nan; Zhao, Weirui; Zhao, Weiqian
2015-02-01
In modern optical measurement, the radius of curvature (ROC) is one of the fundamental parameters of an optical lens. Its measurement accuracy directly affects other optical parameters, such as focal length and aberrations, which significantly affect the overall performance of the optical system. To meet the market demand for high-accuracy ROC measurement instruments, we develop a laser confocal radius measurement system based on a grating ruler. The system uses the peak point of the confocal intensity curve to precisely identify the cat-eye and confocal positions and then measures the distance between these two positions with the grating ruler, thereby achieving high-precision measurement of the ROC. The system offers high focusing sensitivity and strong immunity to environmental disturbance. Preliminary theoretical analysis and experiments show that the measurement repeatability can reach 0.8 µm, which provides an effective way to measure the ROC accurately.
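A toy sketch of the position-identification step, assuming the axial response near each position has already been scanned against the grating-ruler reading: a three-point parabolic fit locates each intensity peak to sub-sample precision, and the ROC is the difference of the two peak positions (all numbers are illustrative):

```python
import numpy as np

def peak_position(z, intensity):
    """Locate the axial-response peak by a parabolic fit around the maximum sample."""
    i = int(np.argmax(intensity))
    z0, z1, z2 = z[i - 1], z[i], z[i + 1]
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    return z1 - 0.5 * (z2 - z1) * (y2 - y0) / (y0 - 2 * y1 + y2)

# Two simulated axial scans near the cat-eye and confocal positions;
# the abscissa is the grating-ruler reading in mm (values are illustrative)
z1 = np.linspace(9.95, 10.05, 2001)
i1 = np.sinc((z1 - 10.0000) / 0.01) ** 2
z2 = np.linspace(34.95, 35.05, 2001)
i2 = np.sinc((z2 - 35.0002) / 0.01) ** 2

roc = peak_position(z2, i2) - peak_position(z1, i1)
print(f"ROC = {roc:.4f} mm")   # ~25.0002 mm in this toy example
```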
The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry
NASA Astrophysics Data System (ADS)
Harshaw, Richard; Rowe, David; Genet, Russell
2017-01-01
Recent advances in high-speed, low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain CCD-based measurements of double stars for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image within the coherence time required for speckle (usually under 40 ms). This same approach, using speckle reduction software to measure CCD pairs with greater accuracy than is possible with lucky imaging, has been used for several years by the U.S. Naval Observatory.
Star Tracker Based ATP System Conceptual Design and Pointing Accuracy Estimation
NASA Technical Reports Server (NTRS)
Orfiz, Gerardo G.; Lee, Shinhak
2006-01-01
A star-tracker-based, beaconless (a.k.a. non-cooperative beacon) acquisition, tracking, and pointing (ATP) concept for precisely pointing an optical communication beam is presented as an innovative approach to extend the range of high-bandwidth (> 100 Mbps) deep-space optical communication links throughout the solar system and to remove the need for a ground-based high-power laser as a beacon source. The basic approach for executing the ATP functions uses stars as the reference sources from which attitude knowledge is obtained, combined with high-bandwidth gyroscopes for propagating the pointing knowledge to the beam-pointing mechanism. Details of the conceptual design are presented, including selection of an orthogonal telescope configuration and the introduction of an optical metering scheme to reduce misalignment error. Estimates are also presented demonstrating that aiming of the communications beam at the Earth-based receive terminal can be achieved with a total system pointing accuracy of better than 850 nanoradians (3 sigma) from anywhere in the solar system.
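A hedged, single-axis toy of the underlying idea (not the flight algorithm): the high-rate gyro propagates the pointing estimate between low-rate star-tracker fixes, which bound the drift; all rates, noise levels, and gains below are invented for illustration:

```python
import numpy as np

def propagate_pointing(gyro_rate, star_updates, dt, gain=0.5):
    """Toy 1-axis blend of high-rate gyro propagation with occasional absolute
    attitude fixes from a star tracker (a simple complementary filter)."""
    theta = 0.0
    history = []
    for k, omega in enumerate(gyro_rate):
        theta += omega * dt                   # propagate with the gyro measurement
        if k in star_updates:                 # occasional absolute fix from the stars
            theta += gain * (star_updates[k] - theta)
        history.append(theta)
    return np.array(history)

dt = 0.001                                                   # 1 kHz gyro
true_rate = 0.0005                                           # rad/s slew
gyro = true_rate + 1e-5 * np.random.randn(10_000) + 2e-6     # noise + small bias
fixes = {k: true_rate * k * dt for k in range(0, 10_000, 1000)}   # 1 Hz star fixes
est = propagate_pointing(gyro, fixes, dt)
print(f"final pointing error: {abs(est[-1] - true_rate * 10.0):.2e} rad")
```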
Amokrane, S; Ayadim, A; Malherbe, J G
2005-11-01
A simple modification of the reference hypernetted chain (RHNC) closure of the multicomponent Ornstein-Zernike equations, with bridge functions taken from Rosenfeld's hard-sphere bridge functional, is proposed. Its main effect is to remedy the major limitation of the RHNC closure in the case of highly asymmetric mixtures: the wide domain of packing fractions in which it has no solution. The modified closure is also much faster, while being of similar complexity. This is achieved with a limited loss of accuracy, mainly in the contact value of the big-sphere correlation functions. Comparison with simulation shows that, inside the RHNC no-solution domain, it provides a good description of the structure, while being clearly superior to all the other closures used so far to study highly asymmetric mixtures. The generic nature of this closure and its good accuracy, combined with a reduced no-solution domain, open up the possibility of studying the phase diagram of complex fluids beyond the hard-sphere model.
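For reference, the equations being closed, in standard notation (with γ_ij = h_ij − c_ij and B^ref the bridge function taken from the hard-sphere reference system); the specific modification proposed in the paper is not reproduced here:

```latex
% Multicomponent Ornstein-Zernike equation
h_{ij}(r) = c_{ij}(r) + \sum_{k} \rho_k \int c_{ik}\!\left(\lvert \mathbf{r}-\mathbf{r}'\rvert\right)\, h_{kj}(r')\, \mathrm{d}\mathbf{r}'

% RHNC closure with a reference (hard-sphere) bridge function
g_{ij}(r) = \exp\!\left[-\beta u_{ij}(r) + \gamma_{ij}(r) + B^{\mathrm{ref}}_{ij}(r)\right],
\qquad \gamma_{ij}(r) = h_{ij}(r) - c_{ij}(r)
```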
NASA Astrophysics Data System (ADS)
Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.
2017-06-01
Forecasting soil pollution is a considerable field of study in light of the general concern over environmental protection. Because of the variation in content and the spatial heterogeneity of pollutant distributions in urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide adequate interpolation accuracy. Moreover, predicting the distribution of an element with high variability in concentration across the study site is particularly difficult. This work presents two neural network models for forecasting the spatial content of an abnormally distributed soil pollutant (Cr) at a particular location in subarctic Novy Urengoy, Russia. A generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques were built, implemented, and tested using ArcGIS and MATLAB. To verify the models' performance, 150 scattered input data points (pollutant concentrations) were selected from an 8.5 km² area and then split into an independent training set (105 points) and a validation set (45 points). The training set was used for the interpolation by ordinary kriging, while the validation set was used to test the accuracies. The network structures were chosen during a computer simulation based on minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than that achieved by the geostatistical approach (kriging). It is shown that the MLP achieved better accuracy than both kriging and even the GRNN for interpolating surfaces.
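The GRNN itself is simple enough to sketch: the prediction at a query location is a Gaussian-kernel weighted average of the training concentrations (Nadaraya-Watson form). The coordinates and concentrations below are synthetic stand-ins, not the Novy Urengoy data; only the 105/45 split mirrors the text:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """Generalized regression neural network: the prediction is a
    Gaussian-kernel weighted average of the training targets."""
    d2 = np.sum((x_query[:, None, :] - x_train[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.sum(w, axis=1)

# Illustrative 2-D coordinates (km) and Cr concentrations; not the study's data
rng = np.random.default_rng(0)
xy_train = rng.uniform(0, 3, size=(105, 2))
cr_train = 40 + 25 * np.sin(xy_train[:, 0]) * np.cos(xy_train[:, 1])
xy_test = rng.uniform(0, 3, size=(45, 2))
cr_true = 40 + 25 * np.sin(xy_test[:, 0]) * np.cos(xy_test[:, 1])

pred = grnn_predict(xy_train, cr_train, xy_test, sigma=0.3)
rmse = np.sqrt(np.mean((pred - cr_true) ** 2))
print(f"RMSE on held-out points: {rmse:.2f}")
```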
3D Higher Order Modeling in the BEM/FEM Hybrid Formulation
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.
2000-01-01
Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the accuracy potential of the approach is the accurate evaluation of all matrix elements. These involve products of high-order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or a rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve nearly arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented. The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample problems.
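A self-contained sketch of the Duffy transformation on the simplest case, a 1/r singularity at a triangle vertex: the map x = u, y = uv carries the unit square onto the triangle with Jacobian u, which cancels the singularity and lets ordinary Gauss-Legendre quadrature converge rapidly. This illustrates the technique only, not the paper's self-term integrands:

```python
import numpy as np

def duffy_integrate_inv_r(n_gauss=8):
    """Integrate 1/r over the triangle (0,0)-(1,0)-(1,1), which has a 1/r
    singularity at the vertex (0,0). The Duffy map x = u, y = u*v carries the
    unit square onto the triangle with Jacobian u, cancelling the singularity."""
    nodes, weights = np.polynomial.legendre.leggauss(n_gauss)
    u = 0.5 * (nodes + 1.0)        # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    total = 0.0
    for ui, wi in zip(u, w):
        for vj, wj in zip(u, w):
            x, y = ui, ui * vj
            r = np.hypot(x, y)
            total += wi * wj * (1.0 / r) * ui   # integrand times Jacobian u
    return total

print(duffy_integrate_inv_r())          # ~0.881374
print(np.log(1.0 + np.sqrt(2.0)))       # exact value ln(1 + sqrt(2))
```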
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
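For orientation, a generic subspace pursuit iteration on a toy compressed-sensing problem is sketched below (expand the support by correlation with the residual, solve least squares, prune back to K atoms, repeat); it is a stand-in for, not a reproduction of, the SPIGH source-space algorithm:

```python
import numpy as np

def subspace_pursuit(Phi, y, K, n_iter=20):
    """Generic subspace pursuit for y ~ Phi @ x with a K-sparse x."""
    support = np.argsort(np.abs(Phi.T @ y))[-K:]
    residual = y.copy()
    for _ in range(n_iter):
        # Expand the support with the K columns most correlated with the residual
        candidates = np.union1d(support, np.argsort(np.abs(Phi.T @ residual))[-K:])
        coef, *_ = np.linalg.lstsq(Phi[:, candidates], y, rcond=None)
        # Prune back to the K largest coefficients and re-fit
        support = candidates[np.argsort(np.abs(coef))[-K:]]
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        new_residual = y - Phi[:, support] @ coef
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break
        residual = new_residual
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
Phi = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -2.0, 0.5]
x_hat = subspace_pursuit(Phi, Phi @ x_true, K=3)
print(np.flatnonzero(x_hat))   # expected to recover indices 10, 100, 200
```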
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility (HIAF) project, requires an extremely high vacuum, below 10⁻¹¹ mbar, to fulfill the requirements of radioactive beam physics and high-energy-density physics. To achieve the required pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK thus verified, the pressure profiles of the BRing are calculated for different parameters such as conductance, outgassing rates, and pumping speeds. Based on the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
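As a rough illustration of what such pressure-profile codes solve, the sketch below evaluates the textbook one-dimensional molecular-flow model of a uniform pipe pumped at both ends; the conductance, outgassing, and pump-speed values are invented and are not HIAF-BRing design parameters, and the model is far simpler than VAKTRAK or Molflow+:

```python
import numpy as np

# Textbook 1-D molecular-flow model of a uniform beam pipe pumped at both ends:
#   w * d2P/dx2 = -q,   w * P'(0) = S * P(0),   -w * P'(L) = S * P(L)
# (w: specific conductance, q: outgassing per unit length, S: pump speed),
# whose analytic solution is P(x) = q/(2w) * x*(L - x) + q*L/(2S).
# All numbers below are illustrative only.
L = 6.0          # m, pump spacing
w = 20.0         # m^3 * m / s, specific conductance of the pipe
q = 1e-9         # Pa * m^3 / (s * m), outgassing per unit length
S = 0.5          # m^3 / s, effective pump speed at each end

x = np.linspace(0.0, L, 200)
P = q / (2 * w) * x * (L - x) + q * L / (2 * S)   # pressure profile in Pa
print(f"pressure at the pumps:  {P[0]:.3e} Pa")
print(f"peak pressure mid-pipe: {P.max():.3e} Pa")   # 1 Pa = 1e-2 mbar, for scale
```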
Low-cost precision rotary index calibration
NASA Astrophysics Data System (ADS)
Ng, T. W.; Lim, T. S.
2005-08-01
The traditional method for calibrating the angular indexing repeatability of rotary axes on machine tools and measuring equipment uses a precision polygon (usually 12-sided) together with an autocollimator or an angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, a diffractive optical element, and a CCD camera. We show that high accuracy can be achieved for angular index calibration.
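The angular references in such a setup come from the grating equation: each diffraction order of the DOE leaves at a fixed, accurately known angle. A small sketch with illustrative wavelength and pitch values (not those of the actual setup):

```python
import numpy as np

# Angles of the beam fan produced by a diffractive optical element
# (grating equation sin(theta_m) = m * lambda / d); these serve as fixed
# angular references against which the rotary index is compared.
wavelength = 632.8e-9          # m, He-Ne laser
pitch = 20e-6                  # m, grating period of the DOE (illustrative)
orders = np.arange(-5, 6)
theta = np.degrees(np.arcsin(orders * wavelength / pitch))
for m, t in zip(orders, theta):
    print(f"order {m:+d}: {t:+.4f} deg")
```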
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Doelling, David R.; Young, David F.; Loeb, Norman G.; Garber, Donald P.; MacDonnell, David G.
2008-01-01
As the potential impacts of global climate change become clearer [1], the need to determine the accuracy of climate prediction over decade-to-century time scales has become an urgent and critical challenge. The most critical tests of climate model predictions will use observations of decadal changes in climate forcing, response, and feedback variables. Many of these key climate variables are observed by remotely sensing the global distribution of reflected solar spectral and broadband radiance. These "reflected solar" variables include aerosols, clouds, radiative fluxes, snow, ice, vegetation, ocean color, and land cover. Achieving sufficient satellite instrument accuracy, stability, and overlap to rigorously observe decadal change signals has proven very difficult in most cases and has not yet been achieved in others [2]. One of the earliest efforts to make climate-quality observations was for the Earth Radiation Budget: Nimbus 6/7 in the late 1970s, ERBE in the 1980s/90s, and CERES in the 2000s are examples of the most complete global records. The recent CERES data products have undergone the most extensive intercomparisons because of the need to merge data from up to 11 instruments (CERES, MODIS, geostationary imagers) on 7 spacecraft (Terra, Aqua, and 5 geostationary satellites) for any given month. In order to achieve climate calibration for cloud feedbacks, the clear-sky, all-sky, and cloud radiative effect measurements must all be made with very high stability and accuracy. For shortwave solar reflected flux, even the 1% CERES broadband absolute accuracy (1-sigma confidence bound) is not sufficient to allow gaps in the radiation record for decadal climate change. Typical absolute accuracy for the best narrowband sensors such as SeaWiFS, MISR, and MODIS ranges from 2 to 4% (1-sigma). IPCC greenhouse gas radiative forcing is approximately 0.6 W/sq m per decade, or 0.6% of the global mean shortwave reflected flux, so a 50% cloud feedback would change the global reflected flux by approximately 0.3 W/sq m, or 0.3% per decade, in broadband SW calibration terms. Recent results comparing CERES reflected flux changes with MODIS, MISR, and SeaWiFS narrowband changes concluded that only SeaWiFS and CERES were approaching sufficient calibration stability for decadal climate change [3]. Results using deep convective clouds in the optically thick limit as a stability target may prove very effective for improving past data sets such as ISCCP. Results for intercalibration of geostationary imagers to CERES, using an entire month of regional, nearly coincident data, demonstrate new approaches to constraining the calibration of current geostationary imagers. The new Decadal Survey mission CLARREO is examining future approaches to a "NIST-in-orbit" concept of very high absolute accuracy reference radiometers that cover the full solar and infrared spectrum at high spectral resolution but low spatial resolution. Sampling studies have shown that a precessing CLARREO mission could calibrate other GEO and LEO reflected solar and thermal infrared sensors.
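The calibration-requirement arithmetic in the abstract can be restated in a few lines; the only assumption beyond the quoted numbers is a round value of about 100 W/sq m for the global mean reflected shortwave flux:

```python
# Back-of-the-envelope version of the calibration-requirement argument in the
# text (numbers taken from the abstract; global mean reflected SW ~100 W/m^2
# is an assumed round value).
forcing_per_decade = 0.6          # W/m^2 per decade (IPCC greenhouse forcing)
reflected_sw = 100.0              # W/m^2, approximate global mean reflected flux
cloud_feedback_fraction = 0.5

signal = cloud_feedback_fraction * forcing_per_decade          # 0.3 W/m^2 per decade
signal_percent = 100.0 * signal / reflected_sw                 # 0.3 % per decade
print(f"decadal cloud-feedback signal: {signal:.1f} W/m^2 = {signal_percent:.1f}% "
      f"of reflected SW, vs. ~1% CERES and 2-4% imager absolute accuracy")
```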