The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments in international-level badminton players facing serves, as well as the transfer of training to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA), or control (CON) group in a pretest-training-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter than in the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated more accurate judgments and longer final fixations compared with the pretest and CON. In the high-anxiety posttest, HA maintained accuracy relative to the low-anxiety posttest, whereas LA showed lower accuracy. In the on-court posttest, the training groups demonstrated more accurate judgments compared with the pretest and CON.
Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping
NASA Astrophysics Data System (ADS)
Mostafa, M. M. R.
2017-08-01
Recently, sophisticated multi-sensor systems have been implemented on board modern Unmanned Aerial Systems, allowing a variety of mapping products to be produced for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor payloads for direct georeferencing. A number of parameters, individually or collectively, affect the quality and accuracy of the final airborne mapping product; this paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. The final ground object positioning accuracy is assessed using eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.
Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo
2010-05-01
A method for measuring time intervals with picosecond-range accuracy is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit whose capacitance is precharged. By using high-Q resonators and a final active quenching of the oscillation, it is possible to combine high time resolution with a short measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.
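The phase-to-time conversion at the heart of this method invites a small illustration. The following sketch is not the authors' implementation; the oscillator frequency, sample rate, and noise level are invented, and the fit is an ordinary linear least-squares estimate of a known-frequency sinusoid's phase, from which a picosecond-scale start-time offset is recovered.

```python
import numpy as np

def estimate_phase(samples, f_osc, f_sample):
    """Least-squares estimate of the phase of a known-frequency sinusoid."""
    t = np.arange(len(samples)) / f_sample
    # Model: a*cos(wt) + b*sin(wt); for sin(wt + phi), a = sin(phi), b = cos(phi)
    M = np.column_stack([np.cos(2 * np.pi * f_osc * t),
                         np.sin(2 * np.pi * f_osc * t)])
    (a, b), *_ = np.linalg.lstsq(M, samples, rcond=None)
    return np.arctan2(a, b)

# Illustrative numbers (not from the paper): 200 MHz oscillation, 1 GS/s sampling.
f_osc, f_s = 200e6, 1e9
true_t0 = 3.7e-12                                  # 3.7 ps offset to recover
t = np.arange(256) / f_s
wave = np.sin(2 * np.pi * f_osc * (t + true_t0)) + 0.005 * np.random.randn(256)
phi = estimate_phase(wave, f_osc, f_s)
print("recovered offset: %.2f ps" % (phi / (2 * np.pi * f_osc) * 1e12))
```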
Effect of Body Mass Index on Digital Templating for Total Hip Arthroplasty.
Sershon, Robert A; Diaz, Alejandro; Bohl, Daniel D; Levine, Brett R
2017-03-01
Digital templating is becoming more prevalent in orthopedics. Recent investigations report high accuracy using digital templating in total hip arthroplasty (THA); however, the effect of body mass index (BMI) on templating accuracy is not well described. Digital radiographs of 603 consecutive patients (645 hips) undergoing primary THA by a single surgeon were digitally templated using OrthoView (Jacksonville, FL). A 25-mm metallic sphere was used as a calibration marker. Preoperative digital hip templates were compared with the final implant size. Hips were stratified into groups based on BMI: BMI <30 (315), BMI 30-35 (132), BMI 35-40 (97), and BMI >40 (101). Accuracy between templating and final size did not vary by BMI for acetabular or femoral components. Digital templating was within 2 sizes of the final acetabular and femoral implants in 99.1% and 97.1% of cases, respectively. Digital templating is an effective means of predicting the final size of THA components. BMI does not appear to play a major role in altering THA digital templating accuracy.
NASA Astrophysics Data System (ADS)
Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-02-01
Coherent modulation imaging, which provides fast convergence and high resolution from a single diffraction pattern, is a promising technique for meeting the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not yet been investigated. Using a series of simulations with twenty different sampling beams generated from the practical parameters and performance of HPLF, we performed a quantitative, statistics-based analysis of five different error sources. We found that detector background noise and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis suggest directions for further improving the final accuracy of parameter diagnostics, which is critically important for formal application in the daily routines of HPLF.
Accuracy of frozen section in the diagnosis of ovarian tumours.
Toneva, F; Wright, H; Razvi, K
2012-07-01
The purpose of our retrospective study was to assess the accuracy of intraoperative frozen section diagnosis compared to final paraffin diagnosis in ovarian tumours at a gynaecological oncology centre in the UK. We analysed 66 cases and observed that frozen section consultation agreed with final paraffin diagnosis in 59 cases, which provided an accuracy of 89.4%. The overall sensitivity and specificity for all tumours were 85.4% and 100%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) were 100% and 89.4%, respectively. Of the seven cases with discordant results, the majority were large, mucinous tumours, which is in line with previous studies. Our study demonstrated that despite its limitations, intraoperative frozen section has a high accuracy and sensitivity for assessing ovarian tumours; however, care needs to be taken with large, mucinous tumours.
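The reported figures follow from standard 2x2 confusion-matrix definitions, which a small helper makes explicit. The counts below are back-calculated for illustration, not taken from a table in the paper: 41 true positives and 7 false negatives reproduce the 85.4% sensitivity, zero false positives the 100% specificity and PPV, and 59/66 concordant cases the 89.4% accuracy (the published NPV of 89.4% evidently reflects a different positive-class breakdown, so it is not reproduced by these counts).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from 2x2 confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts (back-calculated, not from the paper's tables):
print(diagnostic_metrics(tp=41, fp=0, tn=18, fn=7))
```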
Design and experimental validation of novel 3D optical scanner with zoom lens unit
NASA Astrophysics Data System (ADS)
Huang, Jyun-Cheng; Liu, Chien-Sheng; Chiang, Pei-Ju; Hsu, Wei-Yan; Liu, Jian-Liang; Huang, Bai-Hao; Lin, Shao-Ru
2017-10-01
Optical scanners play a key role in many three-dimensional (3D) printing and CAD/CAM applications. However, existing optical scanners are generally designed to provide either a wide scanning area or a high 3D reconstruction accuracy from a lens with a fixed focal length. In the former case, the scanning area is increased at the expense of the reconstruction accuracy, while in the latter case, the reconstruction performance is improved at the expense of a more limited scanning range. In other words, existing optical scanners compromise between the scanning area and the reconstruction accuracy. Accordingly, the present study proposes a new scanning system including a zoom-lens unit, which combines both a wide scanning area and a high 3D reconstruction accuracy. In the proposed approach, the object is scanned initially under a suitable low-magnification setting for the object size (setting 1), resulting in a wide scanning area but a poor reconstruction resolution in complicated regions of the object. The complicated regions of the object are then rescanned under a high-magnification setting (setting 2) in order to improve the accuracy of the original reconstruction results. Finally, the models reconstructed after each scanning pass are combined to obtain the final reconstructed 3D shape of the object. The feasibility of the proposed method is demonstrated experimentally using a laboratory-built prototype. It is shown that the scanner has a high reconstruction accuracy over a large scanning area. In other words, the proposed optical scanner has significant potential for 3D engineering applications.
Zhang, Jiarui; Zhang, Yingjie; Chen, Bo
2017-12-20
The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields, and its measurement accuracy is mainly determined by the accuracy of the out-of-focus projector calibration. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. The paper first demonstrates experimentally that the projector exhibits noticeable distortion outside its focus plane. Based on this observation, the proposed method uses a high-order radial and tangential lens distortion model on the projection plane to correct the calibration residuals caused by projection distortion. The final parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, provided by coarsely calibrating the projector parameters on the focal and projection planes. Finally, the experimental results demonstrate that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocus.
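A minimal sketch of the correction model described above, assuming a standard Brown radial-tangential distortion parameterization and scipy's nonlinear least squares; the coefficient values and point sets are synthetic, and this is not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def distort(params, xy):
    """Apply radial (k1..k3) + tangential (p1, p2) distortion to normalized coords."""
    k1, k2, k3, p1, p2 = params
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.column_stack([xd, yd])

def fit_distortion(ideal_xy, observed_xy, x0=np.zeros(5)):
    """Find distortion coefficients mapping ideal projector coords to observed ones."""
    resid = lambda p: (distort(p, ideal_xy) - observed_xy).ravel()
    return least_squares(resid, x0).x   # good initial values help, per the paper

# Synthetic check: recover known coefficients from simulated observations.
ideal = np.random.uniform(-0.5, 0.5, (100, 2))
observed = distort([0.1, -0.02, 0.0, 1e-3, -5e-4], ideal)
print(fit_distortion(ideal, observed))
```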
Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker
2014-01-01
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and the final word in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. PMID:25085610
ERIC Educational Resources Information Center
Rudner, Lawrence
2016-01-01
In the machine learning literature, it is commonly accepted as fact that as calibration sample sizes increase, Naïve Bayes classifiers initially outperform Logistic Regression classifiers in terms of classification accuracy. Applied to subtests from an on-line final examination and from a highly regarded certification examination, this study shows…
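The claim is easy to probe empirically. A hedged sketch with scikit-learn on synthetic data (standing in for the examination response data of the study) compares the two classifiers as the calibration sample grows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000,
                                                  random_state=0)

for n in [50, 200, 1000, 5000]:                    # increasing calibration sizes
    Xn, yn = X_pool[:n], y_pool[:n]
    nb = GaussianNB().fit(Xn, yn)
    lr = LogisticRegression(max_iter=1000).fit(Xn, yn)
    print(n, round(nb.score(X_test, y_test), 3), round(lr.score(X_test, y_test), 3))
```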
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings.
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors, while the influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length, and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects for two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising trigrams and pentagrams, respectively. In the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high contrast, using black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Letter sequence recognition accuracy was consistently higher for trigrams than for pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string length conditions, the effect was larger for pentagrams and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements.
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming common practice. However, data analysis procedures must satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be used extensively. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the largest citrus producer in Europe and the fourth largest in the world; citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (2006-2007 campaign). The citrus GIS inventory, created in 2001, needs to be updated regularly in order to monitor changes quickly enough to allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, with the following processing scheme. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels, and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
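The final agreement-based update step can be sketched compactly. The unanimity rule below is one simple stand-in for the paper's confidence criterion, and the class codes are hypothetical:

```python
import numpy as np

def combine_by_agreement(preds):
    """preds: (n_classifiers, n_parcels) array of integer class labels.
    A parcel is auto-updated only when all classifiers agree on its class."""
    preds = np.asarray(preds)
    auto = np.all(preds == preds[0], axis=0)        # unanimous parcels
    labels = np.where(auto, preds[0], -1)           # -1 -> photo-interpretation
    return labels, auto

# Three classifiers (rows) voting on four parcels (columns):
labels, auto = combine_by_agreement([[1, 2, 0, 3],
                                     [1, 2, 1, 3],
                                     [1, 2, 0, 3]])
print(labels, "auto-updated: %.0f%%" % (100 * auto.mean()))
```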
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) method is proposed for testing optical free-form surfaces, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to flexibly compensate various low-order aberrations. The residual wavefront aberration is treated by the multi-configuration ray tracing (MCRT) algorithm, which is based on simultaneous ray tracing through multiple system models, each with a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted, together with correction of the surface misalignment aberration, after the initial system calibration. A flexible, high-accuracy test of free-form surfaces is thus achieved without an auxiliary device for monitoring the DM deformation. Experiments testing a bi-conic surface and a paraboloidal surface with a highly stable ALPAO DM88 demonstrate the feasibility, repeatability, and high accuracy of the ANI; the final test result for the paraboloidal surface was better than λ/20 PV. This is a successful attempt at flexible optical free-form surface metrology, with considerable potential for future applications as DM technology develops.
NASA Astrophysics Data System (ADS)
Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan
2014-08-01
In high-energy laser test systems, higher requirements are placed on the surface profile and finish of the optical elements. Taking a focusing aspherical Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. Guided by profilometer and high-power microscope measurements, and by testing and simulation analysis, the process parameters were improved continually during manufacturing. Mid- and high-frequency errors were trimmed and reduced so that the surface form gradually converged to the required accuracy. The experimental results show that the final surface-form error is less than 0.5 μm and the surface finish is □, which fulfils the accuracy requirement for the aspherical focusing lens in the optical system.
NASA Astrophysics Data System (ADS)
Rau, Thomas S.; Lexow, G. Jakob; Blume, Denise; Kluge, Marcel; Lenarz, Thomas; Majdani, Omid
2017-03-01
A new method for template-guided cochlear implantation surgery is proposed, developed to create a minimally invasive access to the inner ear. A first design of the surgical template was drafted, built, and finally tested for accuracy. For individual finalization of the micro-stereotactic frame, bone cement is utilized, as this well-known and well-established material offers ease of use, high clinical acceptance, and both sterile and rapid handling. The new concept includes an alignment device, based on a passive hexapod with manually adjustable legs, for temporary fixation of the separate parts in the patient-specific pose until the bone cement is spread and finally cured. Additionally, a corresponding evaluation method was developed to determine the accuracy of the micro-stereotactic frame in initial experiments. In total, 18 samples of the surgical template were fabricated based on previously planned trajectories. The mean positioning error at the target point was 0.30 mm, with a standard deviation of 0.25 mm.
NASA Astrophysics Data System (ADS)
Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore
2017-10-01
This study evaluates the impact of three Feature Selection (FS) algorithms in an Object-Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation-based Feature Selection (CFS), Mean Decrease in Accuracy (MDA), and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracies of the SVM and KNN classifiers are the most sensitive to FS. The RF appeared more robust to high dimensionality, although a significant increase in accuracy was found using the RFE method. In terms of classification accuracy, SVM performed best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and easing interpretation.
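A compact, hedged reproduction of this comparison with scikit-learn, using a stand-in dataset rather than the study's VHR object features: RF-based RFE selects a feature subset, and SVM/KNN/RF accuracies are compared before and after selection.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # stand-in for VHR object features
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=16).fit(X, y)
X_sel = rfe.transform(X)

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    full = cross_val_score(clf, X, y, cv=5).mean()
    sel = cross_val_score(clf, X_sel, y, cv=5).mean()
    print(f"{name}: all features {full:.3f} -> RFE-selected {sel:.3f}")
```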
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting the observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
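The scale-correction idea is simple enough to state in code. This is a sketch of the geometry only (plane parameters and camera height are invented), not the paper's full cue-combination pipeline: monocular SFM recovers the ground plane up to scale, and the known camera height fixes the metric scale.

```python
import numpy as np

def metric_scale(ground_plane, known_height_m):
    """ground_plane: (n, d) with plane n.x + d = 0 in SFM units, camera at origin.
    Distance from the camera to the plane is |d|/||n||; the ratio to the known
    real height gives meters per SFM unit."""
    n, d = ground_plane
    sfm_height = abs(d) / np.linalg.norm(n)
    return known_height_m / sfm_height

plane = (np.array([0.02, -0.99, 0.05]), 0.83)  # e.g. fit to tracked road points
s = metric_scale(plane, known_height_m=1.52)   # e.g. camera 1.52 m above the road
print(f"scale factor: {s:.3f} m per SFM unit")
```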
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface
Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.
2014-01-01
Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75-200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report three characteristic periods in the evolution of the high-gamma control signal during BCI training: an initial period of low task accuracy with correspondingly low power modulation in the gamma spectrum, a second period of improved task accuracy with increasing average power separation between activity and rest, and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications for the design and implementation of BCI control algorithms. PMID:25599079
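As a rough illustration of the control signal assumed above (parameters invented, not the study's pipeline), high-gamma band power can be computed from an ECoG channel with Welch's method and compared between rest and activity epochs:

```python
import numpy as np
from scipy.signal import welch

def high_gamma_power(x, fs, band=(75.0, 200.0)):
    """Mean power spectral density of x within the high-gamma band."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))      # 1-s windows
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

fs = 1000.0                                         # 1 kHz sampling, illustrative
rng = np.random.default_rng(0)
rest = rng.standard_normal(5 * int(fs))             # synthetic rest epoch
# Synthetic "activity" epoch with extra in-band (120 Hz) power:
active = rest + 0.5 * np.sin(2 * np.pi * 120 * np.arange(5 * int(fs)) / fs)
print(high_gamma_power(rest, fs), high_gamma_power(active, fs))
```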
NASA Astrophysics Data System (ADS)
Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei
2017-10-01
In developing countries, urban areas are expanding rapidly, and with such rapid development, short-term monitoring of urban change is important. Constant observation and the creation of high-accuracy, noise-free urban distribution maps are the key issues for short-term monitoring. SAR satellites, which can observe day or night regardless of weather conditions, are highly suitable for this type of study. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery based on a Convolutional Neural Network (CNN), which has shown outstanding results for image classification. Several improvements in SAR polarization combinations and dataset construction were made to increase accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to further improve accuracy. From the obtained results, a high-accuracy urban distribution map satisfying the quality requirements for short-term monitoring was generated. For evaluation, urban changes were extracted by differencing urban distribution maps, and change analysis with a time series of imagery revealed the locations of short-term urban change. Comparisons with optical satellites were performed to validate the results. Finally, analysis of urban changes combining X-band, L-band, and C-band SAR satellites was attempted to increase the opportunities for acquiring satellite imagery. Further analysis will be conducted as future work.
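A minimal CNN of the general kind described, sketched in PyTorch with an invented architecture (the paper does not specify one here); the DSM enters simply as an extra input band stacked with the SAR polarization channels:

```python
import torch
import torch.nn as nn

class UrbanPatchCNN(nn.Module):
    def __init__(self, in_channels=3, n_classes=2):   # e.g. 2 SAR pols + 1 DSM band
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, n_classes))

    def forward(self, x):                              # x: (batch, bands, 32, 32)
        return self.head(self.features(x))

model = UrbanPatchCNN()
logits = model(torch.randn(4, 3, 32, 32))              # 4 dummy 32x32 patches
print(logits.shape)                                    # torch.Size([4, 2])
```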
Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Oliver; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is important both for understanding the evolution of planetary systems and for preparing future space missions that aim to characterize low-mass planets in the habitable zones of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims to provide a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star for a representative ensemble of 30+ high-priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1-sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high-accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of about 500 ppm. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
An oscillation-free flow solver based on flux reconstruction
NASA Astrophysics Data System (ADS)
Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.
2018-07-01
In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined, eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions is defined for the reconstruction operator. To arrive at the final formulation, state-of-the-art velocity reconstruction procedures are reviewed and compared through an error analysis. A new operator is then obtained by means of a flux-difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested on a problem in which spurious numerical oscillations arise in the velocity field. The results show a remarkable performance of the proposed technique, eliminating high-frequency errors without losing accuracy.
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Fan, Bin; Wu, Fan
2017-08-01
The corrective calibration of the removal function plays an important role in high-accuracy magnetorheological finishing (MRF). This paper investigates the asymmetry of the MRF removal function shape and analyzes its influence on the residual surface error by means of an iteration algorithm and simulations. By comparing the ripple errors and convergence ratios obtained with the ideal MRF tool function and the deflected tool function, mathematical models for calibrating the deviation in the horizontal and flow directions are presented, and revised mathematical models for the coordinate transformation of an MRF machine are also established. Furthermore, a Ø140 mm fused silica plane and a Ø196 mm, f/1:1, fused silica concave sphere are used as test cases. After two runs, the plane mirror's final surface error reached PV 17.7 nm and RMS 1.75 nm, with a total polishing time of 16 min; after three runs, the sphere mirror's final surface error reached RMS 2.7 nm, with a total polishing time of 70 min. The convergence ratios were 96.2% and 93.5%, respectively. The spherical simulation error and the polishing result are almost consistent, which fully validates the efficiency and feasibility of the proposed calibration method for MRF removal function error in high-accuracy subaperture optical manufacturing.
NASA Technical Reports Server (NTRS)
Khanenya, Nikolay; Paciotti, Gabriel; Forzani, Eugenio; Blecha, Luc
2016-01-01
This paper describes a high-precision optical metrology system: unique ground test equipment designed and implemented for simultaneous, precise, contactless measurement of 6 degrees of freedom (3 translational + 3 rotational) of a space mechanism end-effector [1] in a thermally controlled ISO 5 clean environment. The contactless method reconstructs both position and attitude of the specimen from three cross-sections measured by 2D distance sensors [2]. Cleanliness is preserved by a hermetic test chamber filled with high-purity nitrogen, and the specimen's temperature is controlled by a thermostat [7]. The method excludes errors caused by thermal deformations and manufacturing inaccuracies of the test jig. Tests and simulations show that the measurement accuracy for an object's absolute position is 20 microns in-plane (XY) and about 50 microns out-of-plane (Z). The absolute attitude is typically determined with an accuracy better than 3 arcmin in rotation around X and Y, and better than 10 arcmin around Z. The metrology system can determine relative position and movement with errors one order of magnitude smaller than the absolute values: typical relative displacement measurement accuracies are better than 1 micron in X and Y and about 2 microns in Z. Finally, relative rotation can be measured with an accuracy better than 20 arcsec in any direction.
Owen Van Horne, Amanda J.; Green Fager, Melanie
2015-01-01
Purpose: Children with specific language impairment (SLI) frequently have difficulty producing the past tense. This study aimed to quantify the relative influence of telicity (i.e., the completedness of an event), verb frequency, and stem-final phonemes on the production of past tense by school-age children with SLI and their typically developing (TD) peers. Method: Archival elicited-production data from children with SLI between the ages of 6 and 9 and TD peers ages 4 to 8 were reanalyzed. Past tense accuracy was predicted using measures of telicity, verb frequency, and properties of the final consonant of the verb stem. Results: All children were highly accurate when verbs were telic, the inflected form was frequently heard in the past tense, and the word ended in a sonorant/non-alveolar consonant. All children were less accurate when verbs were atelic, rarely heard in the past tense, or ended in a word-final obstruent or alveolar consonant. SLI status depressed overall accuracy rates but did not influence how facilitative a given factor was. Conclusion: Some factors that have been believed to be useful only when children are first discovering the past tense, such as telicity, appear to be influential in later years as well. PMID:25879455
Benchmark solution of the dynamic response of a spherical shell at finite strain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Versino, Daniele; Brock, Jerry S.
2016-09-28
This paper describes the development of high-fidelity solutions for the study of homogeneous (elastic and inelastic) spherical shells subject to dynamic loading and undergoing finite deformations. The goal of the activity is to provide high-accuracy results that can be used as benchmark solutions for the verification of computational physics codes. The equilibrium equations for the geometrically nonlinear problem are solved through mode expansion of the displacement field, and the boundary conditions are enforced in strong form. Time integration is performed with high-order implicit Runge-Kutta schemes. Finally, we evaluate the accuracy and convergence of the proposed method by means of numerical examples with finite deformations and material nonlinearities and inelasticity.
Sensors and signal processing for high accuracy passenger counting: final report.
DOT National Transportation Integrated Search
2009-03-05
It is imperative for a transit system to track statistics about their ridership in order to plan bus routes. There exists a wide variety of methods for obtaining these statistics that range from relying on the driver to count people to utilizing came...
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear, and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview is given of the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation, the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino; (ii) the Experimental Cooler-Storage Ring at GSI, a mass spectrometer of medium size relative to other accelerators, for determining medium-heavy masses; and (iii) the Penning trap facility SHIPTRAP at GSI, the smallest mass spectrometer, for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future addresses the HITRAP project at GSI for fundamental studies with highly charged ions.
Thermal Effects on the Accuracy of Numerically Controlled Machine Tools (Final Report, Vol. 4)
Venugopal, Raghunath; Barash, M. M.
1985-10-01
Applications of wavelets in interferometry and artificial vision
NASA Astrophysics Data System (ADS)
Escalona Z., Rafael A.
2001-08-01
In this paper we present a different point of view on phase measurements performed in interferometry, image processing, and intelligent vision using the wavelet transform. In standard and white-light interferometry, the phase function is retrieved using phase-shifting, Fourier-transform, cosine-inversion, and other known algorithms. The novel technique presented here is faster, more robust, and shows excellent accuracy in phase determination. In our second application, fringes are no longer generated by an optical interaction but result from observing adapted strip-set patterns printed directly on the target of interest. The moving target is simply observed by a conventional vision system, and the usual phase-computation algorithms are adapted to image processing by wavelet transform in order to sense target position and displacements with high accuracy. In general, we find that the wavelet transform offers robustness, relatively fast computation, and very high accuracy in phase computations.
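A toy sketch of wavelet-based phase retrieval on a fringe signal (all parameters illustrative, not from the paper): convolving with a complex Morlet wavelet centered on the fringe frequency and removing the carrier leaves the fringe phase.

```python
import numpy as np

def complex_morlet(f0, fs, n_cycles=6):
    """Complex Morlet wavelet centered at frequency f0 (Hz), sampled at fs."""
    sigma = n_cycles / (2 * np.pi * f0)              # Gaussian envelope width
    n = int(6 * sigma * fs) | 1                      # odd length, +-3 sigma support
    t = (np.arange(n) - n // 2) / fs
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

fs, f_fringe, phi_true = 1000.0, 25.0, 0.7           # illustrative values
x = np.arange(1024) / fs
fringe = 1 + 0.8 * np.cos(2 * np.pi * f_fringe * x + phi_true)
coeffs = np.convolve(fringe, complex_morlet(f_fringe, fs), mode="same")
# Remove the carrier to leave the fringe phase (edges are unreliable):
demod = np.angle(coeffs * np.exp(-2j * np.pi * f_fringe * x))
print(demod[512])                                    # ~= 0.7
```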
Technology Transfer Program (TTP). Quality Assurance System. Volume 2. Appendices
1980-03-03
LSCo Report No. 2X23-5.1-4-1. Technology Transfer Program (TTP) Final Report, Quality Assurance System, Volume 2, Appendices: Appendix A, Accuracy Control System. Prepared by Livingston Shipbuilding Company, Orange, Texas, March 3, 1980.
Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.
Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng
2015-12-01
This paper describes the design, development, and testing of an AR system developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy, and precise alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
NASA Astrophysics Data System (ADS)
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed that introduces an adaptive genetic algorithm (AGA) and an RBF neural network. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solution steps are presented. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm offers high correction accuracy, a high running rate, and strong generalization ability.
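The RBF part of the idea can be sketched with plain scipy interpolation; this is not the paper's AGA-optimized network, and the ground control points below are hypothetical. The mapping from distorted image coordinates to ground coordinates is learned from the control points and then applied to arbitrary pixels:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical GCPs: (col, row) in the raw UAV image -> (east, north) on the ground
image_xy = np.array([[10, 12], [480, 15], [25, 500], [470, 490], [250, 260]], float)
ground_xy = np.array([[0.0, 0.0], [94.1, 1.2], [2.3, 97.8], [92.5, 96.0],
                      [48.7, 50.4]])

warp = RBFInterpolator(image_xy, ground_xy, kernel="thin_plate_spline")
print(warp(np.array([[240.0, 250.0]])))   # geocode an arbitrary pixel
```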
Parallelism measurement for base plate of standard artifact with multiple tactile approaches
NASA Astrophysics Data System (ADS)
Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie
2018-01-01
Nowadays, as workpieces become more precise and more specialized, artifacts acquire more sophisticated structures and tighter tolerances, which places higher requirements on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) is widely used in many industries. While calibrating a self-developed CMM with a self-made high-precision standard artifact, it was found that the parallelism of the base plate used to fix the standard artifact is an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods are employed, using an existing high-precision CMM, gauge blocks, a dial gauge, and a marble platform, and the measurement results are compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. These three approaches suit different measurement conditions, providing a basis for rapid, high-precision measurement under different equipment constraints.
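One of the tactile procedures can be sketched as follows, with hypothetical readings: with the base resting on the datum (the marble platform), the top face is probed at a grid of points, and the parallelism error is taken as the spread of the indicated heights.

```python
import numpy as np

readings_mm = np.array([            # dial-gauge readings over the plate, in mm
    [0.0021, 0.0018, 0.0024],
    [0.0019, 0.0020, 0.0026],
    [0.0015, 0.0019, 0.0023],
])
parallelism = readings_mm.max() - readings_mm.min()
print(f"parallelism error: {parallelism * 1000:.1f} um")   # micron level
```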
Efficacy of High Frequency Ultrasound in Localization and Characterization of Orbital Lesions
Gurushankar, G; Bhimarao; Kadakola, Bindushree
2015-01-01
Background: The complicated anatomy of the orbit and the wide spectrum of pathological conditions present a formidable challenge for early diagnosis, which is critical for management. Ultrasonography provides detailed cross-sectional anatomy of the entire globe with excellent topographic visualization and real-time display of the moving organ. Objectives: To evaluate the efficacy of high-frequency ultrasound in localizing orbital diseases and to characterize various orbital pathologies sonologically. Materials and Methods: One hundred eyes of 85 patients were examined with ultrasound using a linear high-frequency probe (5 to 17 MHz) of a Philips iU22 ultrasound system. Sonological diagnosis was based on location, acoustic characteristics, kinetic properties, and Doppler flow dynamics. Final diagnosis was based on clinical and laboratory findings, higher cross-sectional imaging, and/or surgery and histopathology, as applicable. The diagnostic accuracy of ultrasonography was evaluated against the final diagnosis. Results: The distinction between ocular and extraocular pathologies was made in 100% of cases. The overall sensitivity, specificity, NPV, and accuracy of ultrasonography were 94.2%, 98.8%, 92.2%, and 94.9%, respectively, for the diagnosis of ocular pathologies and 94.2%, 99.2%, 95.9%, and 95.2%, respectively, for extraocular pathologies. Conclusion: Ultrasonography is a readily available, simple, cost-effective, non-ionizing, and non-invasive modality with high overall diagnostic accuracy in localizing and characterizing orbital pathologies. It has higher spatial and temporal resolution than CT/MRI; however, CT/MRI may be indicated in certain cases to evaluate calcifications, bony involvement, extension to adjacent structures, and intracranial extension. PMID:26500977
Mino, Takuya; Maekawa, Kenji; Ueda, Akihiro; Higuchi, Shizuo; Sejima, Junichi; Takeuchi, Tetsuo; Hara, Emilio Satoshi; Kimura-Ono, Aya; Sonoyama, Wataru; Kuboki, Takuo
2015-04-01
The aim of this article was to compare the accuracy with which full-arch implant provisional restorations are reproduced in final restorations between a 3D scan/CAD/CAM technique and the conventional method. We fabricated two final restorations for rehabilitation of completely edentulous maxillary and mandibular arches and performed a computer-based comparative analysis of reproduction accuracy between the 3D scanning and CAD/CAM (Scan/CAD/CAM) technique and the conventional silicone-mold transfer technique. Final restorations fabricated by either method were successfully installed in the patient. The total concave/convex volume discrepancy observed with the Scan/CAD/CAM technique was 503.50 mm³ and 338.15 mm³ for the maxillary and mandibular implant-supported prostheses (ISPs), respectively. In contrast, the total concave/convex volume discrepancy observed with the conventional method was markedly higher (1106.84 mm³ and 771.23 mm³ for the maxillary and mandibular ISPs, respectively). These results suggest that the Scan/CAD/CAM method enables a more precise and accurate transfer of provisional restorations to final restorations than the conventional method.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
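The scoring rule is a one-liner. The sketch below assumes the weighted harmonic mean form described above, with the diversity measure left abstract and the weights normalized to sum to one:

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity (both in (0, 1])."""
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# An accurate but homogeneous ensemble vs. a more balanced one:
print(wad_score(0.95, 0.20))   # ~0.33 -- low diversity drags the score down
print(wad_score(0.85, 0.60))   # ~0.70 -- better balance, higher WAD
```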
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC; camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments, which show that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional system (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also believe that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products. PMID:25835187
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of the vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high classification accuracy. Initially, nine generalized vegetation/landcover classes were derived, with an overall classification accuracy of 91.7 percent. To refine the classification, a geographic information system (GIS) approach was employed: topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resulting Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification could now serve as input to a GIS to help answer park management questions, coupled with other ancillary data, in programs such as fire management.
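The Boolean refinement step translates directly into raster operations. A sketch with invented class codes and elevation breaks (the actual strata came from topographic and watershed data):

```python
import numpy as np

landcover = np.array([[3, 3, 1], [3, 2, 1], [3, 3, 2]])   # spectral class raster
elevation = np.array([[300, 900, 250],                     # co-registered DEM, meters
                      [1500, 700, 80],
                      [1200, 400, 60]])

CONIFER = 3                                # hypothetical spectral "conifer" class
lowland   = (landcover == CONIFER) & (elevation < 600)
montane   = (landcover == CONIFER) & (elevation >= 600) & (elevation < 1400)
subalpine = (landcover == CONIFER) & (elevation >= 1400)

refined = landcover.copy()                 # split one spectral class into three
refined[lowland], refined[montane], refined[subalpine] = 31, 32, 33
print(refined)
```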
76 FR 23713 - Wireless E911 Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-28
... Location Accuracy Requirements AGENCY: Federal Communications Commission. ACTION: Final rule; announcement... contained in regulations concerning wireless E911 location accuracy requirements. The information collection... standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these...
Predicting outcome of Morris water maze test in vascular dementia mouse model with deep learning
Higaki, Akinori; Mogi, Masaki; Iwanami, Jun; Min, Li-Juan; Bai, Hui-Yu; Shan, Bao-Shuai; Kukida, Masayoshi; Kan-no, Harumi; Ikeda, Shuntaro; Higaki, Jitsuo; Horiuchi, Masatsugu
2018-01-01
The Morris water maze test (MWM) is one of the most popular and established behavioral tests to evaluate rodents' spatial learning ability. The conventional training period is around 5 days, but there is no clear evidence or guidelines about the appropriate duration. In many cases, the final outcome of the MWM seems predictable from previous data and their trend. We therefore assumed that if we can predict the final result with high accuracy, the experimental period could be shortened and the burden on testers reduced. An artificial neural network (ANN) is a useful modeling method for datasets that enables us to obtain an accurate mathematical model. Therefore, we constructed an ANN system to estimate the final outcome in the MWM from the previously obtained 4 days of data in both normal mice and vascular dementia model mice. Ten-week-old male C57BL/6 mice (wild type, WT) were subjected to bilateral common carotid artery stenosis (WT-BCAS) or sham operation (WT-sham). At 6 weeks after surgery, we evaluated their cognitive function with the MWM. Mean escape latency was significantly longer in WT-BCAS than in WT-sham. All data were collected and used as training data and test data for the ANN system. We defined a multilayer perceptron (MLP) as a prediction model using an open source framework for deep learning, Chainer. After a certain number of updates, we compared the predicted values and actual measured values with test data. A significant correlation coefficient was derived from the updated ANN model in both WT-sham and WT-BCAS. Next, we analyzed the predictive capability of human testers with the same datasets. There was no significant difference in prediction accuracy between human testers and ANN models in both WT-sham and WT-BCAS. In conclusion, a deep learning method with an ANN could predict the final outcome in the MWM from 4 days of data with high predictive accuracy in a vascular dementia model. PMID:29415035
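The idea of the abstract above can be sketched in a few lines. The original work used Chainer; here scikit-learn's MLPRegressor stands in, and the latency data are synthetic placeholders, purely to illustrate predicting the final day's outcome from the first four days.

```python
# Sketch (not the authors' code): predict final Morris water maze escape
# latency from the first four days of training data with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mice = 80                                   # hypothetical sample size
days = rng.uniform(20, 60, size=(n_mice, 4))  # escape latencies, days 1-4 (s)
day5 = days[:, -1] * 0.8 + rng.normal(0, 3, n_mice)  # synthetic day-5 target

X_train, X_test, y_train, y_test = train_test_split(days, day5, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Correlate predictions with held-out measurements, as the paper does.
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"correlation on test data: {r:.2f}")
```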
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse, which is ideal for resolving the varying wavelength scales that occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fusion result of each radar's estimation is fed to the extended Kalman filter (EKF) to finish the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, thereby achieving a high-accuracy and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
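The first filtering stage described above is a standard EKF predict/update cycle. A minimal sketch follows; a linear 2D constant-velocity model and the noise covariances are assumptions standing in for the paper's radar geometry, and the SePDAF association step is omitted.

```python
# Sketch (not the paper's implementation): one predict/update cycle of the
# extended Kalman filter used as the first filtering stage.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.0]])   # observe position only
Q = 0.01 * np.eye(4)                            # process noise (assumed)
R = 0.5 * np.eye(2)                             # measurement noise (assumed)

def ekf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z (the fused radar estimate).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = ekf_step(np.zeros(4), np.eye(4), z=np.array([1.0, 2.0]))
print(x)  # state: [px, py, vx, vy]
```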
Two sides of the same coin: Monetary incentives concurrently improve and bias confidence judgments.
Lebreton, Maël; Langdon, Shari; Slieker, Matthijs J; Nooitgedacht, Jip S; Goudriaan, Anna E; Denys, Damiaan; van Holst, Ruth J; Luigjes, Judy
2018-05-01
Decisions are accompanied by a feeling of confidence, that is, a belief about the decision being correct. Confidence accuracy is critical, notably in high-stakes situations such as medical or financial decision-making. We investigated how incentive motivation influences confidence accuracy by combining a perceptual task with a confidence incentivization mechanism. By varying the magnitude and valence (gains or losses) of monetary incentives, we orthogonalized their motivational and affective components. Corroborating theories of rational decision-making and motivation, our results first reveal that the motivational value of incentives improves aspects of confidence accuracy. However, in line with a value-confidence interaction hypothesis, we further show that the affective value of incentives concurrently biases confidence reports, thus degrading confidence accuracy. Finally, we demonstrate that the motivational and affective effects of incentives differentially affect how confidence builds on perceptual evidence. Together, these findings may provide new hints about confidence miscalibration in healthy or pathological contexts.
Microcomputer Keyboarding Curriculum for Middle and Junior High School Students. Final Report.
ERIC Educational Resources Information Center
Regional School District No. 10, Burlington, CT.
This project report describes the development of a seventh-grade curriculum to promote microcomputer keyboarding skills, i.e., learning correct alpha-numeric reaches, developing proficiency in making appropriate reaches, using correct fingering without looking at the keyboard, and attaining a degree of speed and accuracy. Although the curriculum…
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references. PMID:29581722
Generation and performance assessment of the global TanDEM-X digital elevation model
NASA Astrophysics Data System (ADS)
Rizzoli, Paola; Martone, Michele; Gonzalez, Carolina; Wecklich, Christopher; Borla Tridon, Daniela; Bräutigam, Benjamin; Bachmann, Markus; Schulze, Daniel; Fritz, Thomas; Huber, Martin; Wessel, Birgit; Krieger, Gerhard; Zink, Manfred; Moreira, Alberto
2017-10-01
The primary objective of the TanDEM-X mission is the generation of a global, consistent, and high-resolution digital elevation model (DEM) with unprecedented global accuracy. The goal is achieved by exploiting the interferometric capabilities of the two twin SAR satellites TerraSAR-X and TanDEM-X, which fly in a close orbit formation, acting as an X-band single-pass interferometer. Between December 2010 and early 2015 all land surfaces were acquired at least twice, and difficult terrain up to seven or eight times. The acquisition strategy, data processing, and DEM calibration and mosaicking were systematically monitored and optimized throughout the entire mission duration in order to fulfill the specification. The processing of all data was finally completed in September 2016, and this paper reports on the final performance of the TanDEM-X global DEM and presents the acquisition and processing strategy which made the final DEM quality possible. The results confirm the outstanding global accuracy of the delivered product, which can now be utilized for both scientific and commercial applications.
NASA Astrophysics Data System (ADS)
Li, Dachao; Xu, Qingmei; Liu, Yu; Wang, Ridong; Xu, Kexin; Yu, Haixia
2017-11-01
A high-accuracy microdialysis method that can provide reference values of glucose concentration in interstitial fluid for the accurate evaluation of non-invasive and minimally invasive continuous glucose monitoring is reported in this study. The parameters of the microdialysis process were first optimized by testing and analyzing three main factors that impact microdialysis recovery: the perfusion rate, temperature, and glucose concentration in the area surrounding the microdialysis probe. The precision of the optimized microdialysis method was then determined in a simulation system, designed and established in this study, that reproduces continuous variations in glucose concentration in the human body. Finally, the microdialysis method was tested for in vivo interstitial glucose concentration measurement.
Do recommender systems benefit users? A modeling approach
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, along with push broom imaging, is one method of enlarging the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper thus uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. The rotating planar object reconstruction experiment yielded an average positional error of 0.44 +/- 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
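The point-based registration step can be sketched with the classic SVD-based rigid alignment (Kabsch/Horn). This is an illustration under assumed synthetic coordinates, not the authors' code; the rotation, offset, and noise levels are invented.

```python
# Sketch: rigidly align intersected 3D coordinates to independently surveyed
# coordinates via SVD, then report residual distances as the accuracy measure.
import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimizing ||R @ src + t - dst||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(1)
surveyed = rng.uniform(0, 100, size=(20, 3))   # hypothetical control points
angle = np.deg2rad(5)                          # simulated frame misalignment
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
intersected = surveyed @ Rz.T + np.array([3.0, -2.0, 1.0]) + rng.normal(0, 0.3, (20, 3))

R, t = rigid_register(intersected, surveyed)
residuals = np.linalg.norm((intersected @ R.T + t) - surveyed, axis=1)
print(f"mean positional error: {residuals.mean():.2f} +/- {residuals.std():.2f}")
```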
Local staging and assessment of colon cancer with 1.5-T magnetic resonance imaging
Blake, Helena; Jeyadevan, Nelesh; Abulafi, Muti; Swift, Ian; Toomey, Paul; Brown, Gina
2016-01-01
Objective: The aim of this study was to assess the accuracy of 1.5-T MRI in the pre-operative local T and N staging of colon cancer and identification of extramural vascular invasion (EMVI). Methods: Between 2010 and 2012, 60 patients with adenocarcinoma of the colon were prospectively recruited at 2 centres. 55 patients were included for final analysis. Patients received pre-operative 1.5-T MRI with high-resolution T2 weighted, gadolinium-enhanced T1 weighted and diffusion-weighted images. These were blindly assessed by two expert radiologists. Accuracy of the T-stage, N-stage and EMVI assessment was evaluated using post-operative histology as the gold standard. Results: Results are reported for two readers. Identification of T3 disease demonstrated an accuracy of 71% and 51%, sensitivity of 74% and 42% and specificity of 74% and 83%. Identification of N1 disease demonstrated an accuracy of 57% for both readers, sensitivity of 26% and 35% and specificity of 81% and 74%. Identification of EMVI demonstrated an accuracy of 74% and 69%, sensitivity 63% and 26% and specificity 80% and 91%. Conclusion: 1.5-T MRI achieved a moderate accuracy in the local evaluation of colon cancer, but cannot be recommended to replace CT on the basis of this study. Advances in knowledge: This study confirms that MRI is a viable alternative to CT for the local assessment of colon cancer, but this study does not reproduce the very high accuracy reported in the only other study to assess the accuracy of MRI in colon cancer staging. PMID:27226219
Stress and emotional valence effects on children's versus adolescents' true and false memory.
Quas, Jodi A; Rush, Elizabeth B; Yim, Ilona S; Edelstein, Robin S; Otgaar, Henry; Smeets, Tom
2016-01-01
Despite considerable interest in understanding how stress influences memory accuracy and errors, particularly in children, methodological limitations have made it difficult to examine the effects of stress independent of the effects of the emotional valence of to-be-remembered information in developmental populations. In this study, we manipulated stress levels in 7-8- and 12-14-year-olds and then exposed them to negative, neutral, and positive word lists. Shortly afterward, we tested their recognition memory for the words and false memory for non-presented but related words. Adolescents in the high-stress condition were more accurate than those in the low-stress condition, while children's accuracy did not differ across stress conditions. Also, among adolescents, accuracy and errors were higher for the negative than positive words, while in children, word valence was unrelated to accuracy. Finally, increases in children's and adolescents' cortisol responses, especially in the high-stress condition, were related to greater accuracy but not false memories and only for positive emotional words. Findings suggest that stress at encoding, as well as the emotional content of to-be-remembered information, may influence memory in different ways across development, highlighting the need for greater complexity in existing models of true and false memory formation.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-01-01
In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyzed the original interpolation cascaded integrator-comb (CIC) filter and proposed an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains fast. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a field-programmable gate array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Owing to its fast calculation, low computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
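A minimal sketch of an interpolating CIC filter may help fix the idea: comb (differentiator) stages run at the low rate, the signal is zero-stuffed by the interpolation factor, and integrator stages run at the high rate. The order and normalization below are illustrative, not the paper's parallel 8× architecture.

```python
# Sketch of an interpolating cascaded integrator-comb (CIC) filter.
import numpy as np

def cic_interpolate(x, factor=8, order=3):
    y = x.astype(float)
    for _ in range(order):                    # comb stages at the low rate
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * factor)
    up[::factor] = y                          # zero-stuff to the high rate
    for _ in range(order):                    # integrator stages
        up = np.cumsum(up)
    return up / factor ** (order - 1)         # gain = R^(N-1) for unit delay

t = np.linspace(0, 1, 64, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)
print(cic_interpolate(x).shape)               # 8x more samples
```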
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing suffers from limited measurement performance and large errors, and cannot meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. Firstly, the method moves the timing point corresponding to a fixed threshold forward by repeatedly amplifying the received signal. Then, timing information is sampled, and the timing points are fitted using algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by the repeated amplification of the received signal and the parameter-fitting algorithm, and a timing accuracy of 4.63 ps is achieved.
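The fitting step can be illustrated with a simple curve fit over sampled timing errors. The paper fits in MATLAB; numpy.polyfit stands in here, and the gains and timing samples below are synthetic placeholders, purely to show locating the minimum of the fitted function.

```python
# Sketch of the fitting step: fit timing-error samples versus amplification
# gain, then read off the minimum of the fitted curve.
import numpy as np

gain = np.array([1, 2, 4, 8, 16, 32], dtype=float)           # hypothetical gains
timing_err_ps = np.array([40.0, 22.0, 13.0, 8.5, 6.0, 5.1])  # synthetic samples

coeff = np.polyfit(np.log2(gain), timing_err_ps, deg=2)      # quadratic fit
fit = np.poly1d(coeff)

g = np.linspace(0, 5, 501)
best = g[np.argmin(fit(g))]
print(f"fitted minimum timing error: {fit(g).min():.2f} ps at gain 2^{best:.1f}")
```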
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the strip-theory model, which holds the advantage in both real-time performance and accuracy, can meet the requirement.
Celestial Reference Frames at Multiple Radio Wavelengths
NASA Technical Reports Server (NTRS)
Jacobs, Christopher S.
2012-01-01
In 1997 the IAU adopted the International Celestial Reference Frame (ICRF) built from S/X VLBI data. In response to IAU resolutions encouraging the extension of the ICRF to additional frequency bands, VLBI frames have been made at 24, 32, and 43 gigahertz. Meanwhile, the 8.4 gigahertz work has been greatly improved with the 2009 release of the ICRF-2. This paper discusses the motivations for extending the ICRF to these higher radio bands. Results to date will be summarized including evidence that the high frequency frames are rapidly approaching the accuracy of the 8.4 gigahertz ICRF-2. We discuss current limiting errors and prospects for the future accuracy of radio reference frames. We note that comparison of multiple radio frames is characterizing the frequency dependent systematic noise floor from extended source morphology and core shift. Finally, given Gaia's potential for high accuracy optical astrometry, we have simulated the precision of a radio-optical frame tie to be approximately 10-15 microarcseconds (1-sigma, per component).
Achieving Climate Change Absolute Accuracy in Orbit
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.;
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
Intra-Operative Frozen Sections for Ovarian Tumors – A Tertiary Center Experience
Arshad, Nur Zaiti Md; Ng, Beng Kwang; Paiman, Noor Asmaliza Md; Mahdy, Zaleha Abdullah; Noor, Rushdan Mohd
2018-01-01
Background: Accuracy of diagnosis with intra-operative frozen sections is extremely important in the evaluation of ovarian tumors so that appropriate surgical procedures can be selected. Study design: All patients with intra-operative frozen sections for ovarian masses in a tertiary center over nine years, from June 2008 until April 2017, were reviewed. Frozen section diagnoses and final histopathological reports were compared. Main outcome measures: Sensitivity, specificity, and positive and negative predictive values of intra-operative frozen section as compared with final histopathological results for ovarian tumors. Results: A total of 92 cases were recruited for final evaluation. The frozen section diagnoses were comparable with the final histopathological reports in 83.7% of cases. The sensitivity, specificity, positive predictive value and negative predictive value were 95.6%, 85.1%, 86.0% and 95.2% for benign tumors and 69.2%, 100%, 100% and 89.2% for malignant tumors, respectively. For borderline ovarian tumors, the sensitivity and specificity were 76.2% and 88.7%, respectively; the positive predictive value was 66.7% and the negative predictive value was 92.7%. Conclusion: The accuracy of intra-operative frozen section diagnoses for ovarian tumors is high, and this approach remains a reliable option in assessing ovarian masses intra-operatively. PMID:29373916
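The reported measures all follow from a 2x2 table comparing frozen section diagnosis against final histopathology. A minimal sketch, with hypothetical placeholder counts rather than the study's data:

```python
# Sketch: sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts, not the study's data.
print(diagnostic_metrics(tp=43, fp=7, fn=2, tn=40))
```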
High accuracy position method based on computer vision and error analysis
NASA Astrophysics Data System (ADS)
Chen, Shihao; Shi, Zhongke
2003-09-01
High-accuracy positioning systems are becoming a hotspot in the field of automatic control, and positioning is one of the most researched tasks in vision systems, so we address object locating using image processing. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a given running condition. The filter contains two main parts: an image-processing module, which implements edge detection and consists of multi-level self-adaptive threshold segmentation, edge detection and edge filtering; and an object-locating module, which reports the location of each object with high accuracy and is made up of median filtering and curve fitting. This paper presents an error analysis of the method to prove the feasibility of vision-based position detection. Finally, to verify the usefulness of the method, an example of positioning a worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify object attitude.
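The two-module pipeline can be sketched with standard OpenCV calls. This is an illustration under assumptions, not the paper's custom filter: median filtering plus Canny edge detection stand in for the image-processing module, and a contour centroid stands in for the curve-fitting locator.

```python
# Sketch of the two-module pipeline: filtering and edge detection, then
# object locating from the detected contour. Thresholds are illustrative.
import cv2
import numpy as np

img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (120, 90), 30, 255, -1)          # synthetic measured object

smooth = cv2.medianBlur(img, 5)                  # suppress impulse noise
edges = cv2.Canny(smooth, 50, 150)               # edge-detection module

# Object-locating module: centroid of the largest edge contour.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
m = cv2.moments(max(contours, key=cv2.contourArea))
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
print(f"object center: ({cx:.1f}, {cy:.1f})")
```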
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Shangang, E-mail: 1198685580@qq.com; Li, Chengli, E-mail: chenglilichina@yeah.net; Yu, Xuejuan, E-mail: yuxuejuan2011@126.com
2015-04-15
Objective: The purpose of our study was to evaluate the diagnostic accuracy of MRI-guided percutaneous transthoracic needle biopsy (PTNB) of solitary pulmonary nodules (SPNs). Methods: A retrospective review of 69 patients who underwent MR-guided PTNB of SPNs was performed. Each case was reviewed for complications. The final diagnosis was established by surgical pathology of the nodule or by clinical and imaging follow-up. Pneumothorax rate and diagnostic accuracy were compared between two groups according to nodule diameter (≤2 vs. >2 cm) using the χ² test and Fisher's exact test, respectively. Results: The success rate of single puncture was 95.6%. Twelve (17.4%) patients had pneumothorax, with 1 (1.4%) requiring chest tube insertion. Mild hemoptysis occurred in 7 (7.2%) patients. All of the sample material was sufficient for histological diagnostic evaluation. Pathological analysis of biopsy specimens showed 46 malignant, 22 benign, and 1 nondiagnostic nodule. The final diagnoses were 49 malignant nodules and 20 benign nodules based on postoperative histopathology and clinical follow-up data. One nondiagnostic sample was excluded from the calculation of diagnostic performance. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value in diagnosing SPNs were 95.8%, 100%, 97.0%, 100%, and 90.9%, respectively. Pneumothorax rate, diagnostic sensitivity, and accuracy were not significantly different between the two groups (P > 0.05). Conclusions: MRI-guided PTNB is a safe, feasible, and highly accurate diagnostic technique for the pathologic diagnosis of pulmonary nodules.
Koehlinger, Keegan; Oleson, Jacob; McCreery, Ryan; Moeller, Mary Pat
2015-01-01
Purpose: Production accuracy of s-related morphemes was examined in 3-year-olds with mild-to-severe hearing loss, focusing on perceptibility, articulation, and input frequency. Method: Morphemes with /s/, /z/, and /ɪz/ as allomorphs (plural, possessive, third-person singular –s, and auxiliary and copula "is") were analyzed from language samples gathered from 51 children (ages: 2;10 [years;months] to 3;8) who are hard of hearing (HH), all of whom used amplification. Articulation was assessed via the Goldman-Fristoe Test of Articulation–Second Edition, and monomorphemic word-final /s/ and /z/ production. Hearing was measured via better ear pure tone average, unaided Speech Intelligibility Index, and aided sensation level of speech at 4 kHz. Results: Unlike results reported for children with normal hearing, the group of children who are HH correctly produced the /ɪz/ allomorph more than the /s/ and /z/ allomorphs. Relative accuracy levels for morphemes and sentence positions paralleled those of children with normal hearing. The 4-kHz sensation level scores (but not the better ear pure tone average or Speech Intelligibility Index), the Goldman-Fristoe Test of Articulation–Second Edition, and word-final s/z use all predicted accuracy. Conclusions: Both better hearing and higher articulation scores are associated with improved morpheme production, and better aided audibility in the high frequencies and word-final production of s/z are particularly critical for morpheme acquisition in children who are HH. PMID:25650750
NASA Technical Reports Server (NTRS)
Frey, Bradley J.; Leviton, Douglas B.
2004-01-01
The optical designs of future NASA infrared (IR) missions and instruments, such as the James Webb Space Telescope's (JWST) Near-Infrared Camera (NIRCam), will rely on accurate knowledge of the index of refraction of various IR optical materials at cryogenic temperatures. To meet this need, we have developed a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS). In this paper we discuss the completion of the design and construction of CHARMS as well as the engineering details that constrained the final design and hardware implementation. In addition, we present our first-light, cryogenic, IR index of refraction data for LiF, BaF2, and CaF2, and compare our results to previously published data for these materials.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
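The contrast between dead reckoning and the polynomial model can be sketched on a synthetic decelerating approach. Everything below is invented for illustration; the paper's models also condition on wind, aircraft type, and weight class, which are omitted here.

```python
# Sketch: dead reckoning vs. a polynomial trajectory model on a synthetic
# decelerating final-approach ground track.
import numpy as np

t_hist = np.arange(0, 60, 5.0)                # past 60 s of track data (s)
speed = 80.0 - 0.3 * t_hist                   # decelerating approach (m/s)
pos = np.cumsum(speed * 5.0)                  # along-track position (m)

t_pred = 120.0                                # look-ahead time (s)

# Dead reckoning: extrapolate the latest velocity.
v_last = (pos[-1] - pos[-2]) / 5.0
dr = pos[-1] + v_last * (t_pred - t_hist[-1])

# Polynomial model: a quadratic fit captures the deceleration trend.
poly = np.poly1d(np.polyfit(t_hist, pos, deg=2))
print(f"dead reckoning: {dr:.0f} m, polynomial: {poly(t_pred):.0f} m")
```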
NASA Technical Reports Server (NTRS)
Carpenter, M. H.
1988-01-01
The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is less than 3.5 m CE90 without ground control, which is suitable for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1:2000 scale topographic mapping with few control points. Based on the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000 scale topographic maps. The point cloud obtained through WorldView-3 stereo image matching also has high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy almost reaches 1 m.
Wening, Stefanie; Keith, Nina; Abele, Andrea E
2016-06-01
In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.
A study of universal modulation techniques applied to satellite data collection
NASA Technical Reports Server (NTRS)
1980-01-01
A universal modulation and frequency control system for use with data collection platform (DCP) transmitters is examined. The final design discussed can, under software/firmware control, generate all of the specific digital data modulation formats currently used in the NASA satellite data collection service and can simultaneously synthesize the proper RF carrier frequencies. A novel technique for DCP time and frequency control is presented. The emissions of NBS radio station WWV/WWVH are received, detected, and finally decoded in microcomputer software to generate a highly accurate time base for the platform; with the assistance of external hardware, the microcomputer also directs the recalibration of all DCP oscillators to achieve very high frequency accuracy and low drift rates versus temperature, supply voltage, and time. The final programmable DCP design also employs direct microcomputer control of data reduction, formatting, transmitter switching, and system power management.
NASA Astrophysics Data System (ADS)
Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing
2017-09-01
The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth. This can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods. Thus, the SE method has accuracy as high as that of the spectral method and the ability to model complex geology inherited from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids, which helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity to Maxwell's equations and described the electric field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and we took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model and verified the high accuracy of the SE method. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for single anisotropic abnormal bodies, anisotropic surrounding rock, and a single anisotropic abnormal body embedded in anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.
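The GLL interpolation points mentioned above are the endpoints of [-1, 1] plus the roots of the derivative of the Legendre polynomial P_N. A minimal sketch of computing them, with the standard GLL quadrature weights w_i = 2 / (N(N+1) P_N(x_i)^2); this illustrates the basis construction only, not the paper's full solver.

```python
# Sketch: Gauss-Lobatto-Legendre (GLL) nodes and weights for an order-N
# spectral element.
import numpy as np
from numpy.polynomial import legendre as L

def gll_nodes_weights(N):
    cN = np.zeros(N + 1); cN[N] = 1.0              # coefficients of P_N
    interior = L.legroots(L.legder(cN)).real       # roots of P_N'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
    return x, w

x, w = gll_nodes_weights(4)                        # fourth order, as in the paper
print(np.round(x, 4))                              # 5 nodes per element
print(np.round(w, 4), w.sum())                     # weights sum to 2
```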
R Peak Detection Method Using Wavelet Transform and Modified Shannon Energy Envelope
Park, Jeong-Seon; Lee, Sang-Woong; Park, Unsang
2017-01-01
Rapid automatic detection of the fiducial points—namely, the P wave, QRS complex, and T wave—is necessary for early detection of cardiovascular diseases (CVDs). In this paper, we present an R peak detection method using the wavelet transform (WT) and a modified Shannon energy envelope (SEE) for rapid ECG analysis. The proposed WTSEE algorithm performs a wavelet transform to reduce the size and noise of ECG signals and creates SEE after first-order differentiation and amplitude normalization. Subsequently, the peak energy envelope (PEE) is extracted from the SEE. Then, R peaks are estimated from the PEE, and the estimated peaks are adjusted from the input ECG. Finally, the algorithm generates the final R features by validating R-R intervals and updating the extracted R peaks. The proposed R peak detection method was validated using 48 first-channel ECG records of the MIT-BIH arrhythmia database with a sensitivity of 99.93%, positive predictability of 99.91%, detection error rate of 0.16%, and accuracy of 99.84%. Considering the high detection accuracy and fast processing speed due to the wavelet transform applied before calculating SEE, the proposed method is highly effective for real-time applications in early detection of CVDs. PMID:29065613
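The Shannon energy envelope and peak-picking stages can be sketched as follows; the wavelet denoising stage is omitted, the ECG signal is a synthetic pulse train, and the smoothing window and peak-spacing parameters are illustrative rather than the paper's values.

```python
# Sketch of the SEE and peak-picking stages of a WTSEE-style detector.
import numpy as np
from scipy.signal import find_peaks

fs = 360                                           # MIT-BIH sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
for b in np.arange(0.5, 10, 0.8):                  # synthetic R waves every 0.8 s
    ecg += np.exp(-((t - b) / 0.01) ** 2)
ecg += 0.02 * np.random.default_rng(0).normal(size=t.size)

d = np.diff(ecg, prepend=ecg[0])                   # first-order differentiation
d /= np.max(np.abs(d))                             # amplitude normalization
e = np.clip(d ** 2, 1e-12, None)
see = -e * np.log(e)                               # Shannon energy
see = np.convolve(see, np.ones(31) / 31, mode="same")  # smooth into envelope

peaks, _ = find_peaks(see, distance=int(0.3 * fs), height=0.5 * see.max())
print(f"detected {peaks.size} R peaks in 10 s")
```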
Computerized lung cancer malignancy level analysis using 3D texture features
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei
2016-03-01
Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of the proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, owing to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy were 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, where the reported AUC and accuracy were 0.8901 and 0.9353.
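The MDS-then-RUSBoost stage can be sketched with scikit-learn and the imbalanced-learn package (assuming imblearn's RUSBoostClassifier is available). Features and labels below are synthetic stand-ins for the texture features, not LIDC data.

```python
# Sketch of the classification stage: MDS dimension reduction followed by
# RUSBoost on an imbalanced two-class problem (level 4 vs. level 5).
import numpy as np
from sklearn.manifold import MDS
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(172, 200))               # stand-in texture features
labels = (rng.uniform(size=172) < 0.3).astype(int)   # imbalanced labels

embedded = MDS(n_components=10, random_state=0).fit_transform(features)
clf = RUSBoostClassifier(random_state=0).fit(embedded, labels)
print(f"training accuracy: {clf.score(embedded, labels):.2f}")
```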
Polarized 3He target and Final State Interactions in SiDIS
Del Dotto, Alessio; Kaptari, Leonid; Pace, Emanuele; ...
2017-01-03
Jefferson Lab is starting a wide experimental program aimed at studying the neutron's structure, with a great emphasis on the extraction of the parton transverse-momentum distributions (TMDs). To this end, semi-inclusive deep-inelastic scattering (SiDIS) experiments on polarized 3He will be carried out, providing, together with proton and deuteron data, a sound flavor decomposition of the TMDs. Given the expected high statistical accuracy, it is crucial to disentangle nuclear and partonic degrees of freedom to get an accurate theoretical description of both initial and final states. In this contribution, a preliminary study of the final state interaction (FSI) in standard SiDIS, where a pion (or a kaon) is detected in the final state, is presented, in view of constructing a realistic description of the nuclear initial and final states.
Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore
Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.
2013-01-01
The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
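The accuracy computation behind the reported figures is a straightforward component-wise comparison against the surveyed control points. A minimal sketch, with synthetic coordinates whose noise levels mimic the reported east/north/height errors:

```python
# Sketch: component-wise RMSE between lidar-derived target coordinates and
# GPS-surveyed control points. Coordinates are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
surveyed = rng.uniform(0, 1000, size=(12, 3))      # 12 control targets (E, N, H)
lidar = surveyed + rng.normal(0, [0.06, 0.095, 0.053], size=(12, 3))

err = lidar - surveyed
rmse = np.sqrt((err ** 2).mean(axis=0))
for name, v in zip(("east", "north", "height"), rmse):
    print(f"{name} RMSE: {v:.3f} m")
```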
Cue usage in volleyball: a time course comparison of elite, intermediate and novice female players
Vaeyens, R; Zeuwts, L; Philippaerts, R; Lenoir, M
2014-01-01
This study compared visual search strategies in adult female volleyball players of three levels. Video clips of the attack of the opponent team were presented on a large screen and participants reacted to the final pass before the spike. Reaction time, response accuracy and eye movement patterns were measured. Elite players had the highest response accuracy (97.50 ± 3.5%) compared to the intermediate (91.50 ± 4.7%) and novice players (83.50 ± 17.6%; p<0.05). Novices had a remarkably high range of reaction time but no significant differences were found in comparison to the reaction time of elite and intermediate players. In general, the three groups showed similar gaze behaviour with the apparent use of visual pivots at moments of reception and final pass. This confirms the holistic model of image perception for volleyball and suggests that expert players extract more information from parafoveal regions. PMID:25609887
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the rising trend of moving routine blood testing away from centralized facilities toward miniaturized point-of-need tools that accelerate turnaround times, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step ensuring appropriate accuracy of the system. In this work, we implement reference holographic images of single white blood cells in suspension in order to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
The ship edge feature detection based on high and low threshold for remote sensing image
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Shengyang
2018-05-01
In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human visual system and the target features is analyzed, and the ship target is determined by detecting its edge features. First, a second-order differential method is used to enhance image quality. Second, the edge operator is improved: high and low thresholds are introduced to increase the contrast between edge and non-edge points, and, treating edges as foreground and non-edges as background, image segmentation is applied to achieve edge detection and remove false edges. Finally, the edge features are described based on the detection results, and the ship target is determined. The experimental results show that the proposed method can effectively reduce the number of false edges in edge detection and achieves high accuracy in remote sensing ship edge detection.
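The high/low-threshold idea described here is closely related to classical hysteresis thresholding: strong edge pixels seed the result, and weak edge pixels survive only where they connect to a strong one. A minimal sketch under that assumption (not the authors' exact operator), using scipy:

```python
import numpy as np
from scipy import ndimage

def hysteresis_edges(gradient_mag, low, high):
    """Keep weak edge pixels only if their region touches a strong pixel."""
    strong = gradient_mag >= high
    weak = gradient_mag >= low
    labels, n = ndimage.label(weak)         # connected regions of the weak mask
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True  # regions containing a strong pixel
    keep[0] = False                         # background label is never kept
    return keep[labels]
```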
A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhuang, Yu
1997-01-01
In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and is significantly faster than the conventional solver when a highly accurate solution is required. In addition, this newly proposed fourth-order Helmholtz solver is parallel in nature and readily suited to parallel and distributed computers. The compact scheme introduced in this study is likely extendable to sixth-order accurate algorithms and to more general elliptic equations.
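For the pure-Neumann case the fast direct idea can be illustrated with a standard second-order cosine-transform solver (not the paper's fourth-order compact scheme): the five-point Laplacian is diagonalized by the DCT, so the Helmholtz equation reduces to a pointwise division in transform space. A sketch, assuming a uniform grid with spacing `h` and homogeneous Neumann data:

```python
import numpy as np
from scipy.fft import dctn, idctn

def helmholtz_neumann(f, k2, h):
    """Solve (Laplacian + k2) u = f with homogeneous Neumann BCs.

    The DCT-II diagonalizes the 1-D second-difference operator, so the
    solve costs two transforms plus one elementwise division.
    """
    ny, nx = f.shape
    fhat = dctn(f, type=2, norm='ortho')
    lx = (2.0 * np.cos(np.pi * np.arange(nx) / nx) - 2.0) / h**2
    ly = (2.0 * np.cos(np.pi * np.arange(ny) / ny) - 2.0) / h**2
    denom = lx[None, :] + ly[:, None] + k2   # assumes k2 avoids a zero eigenvalue
    return idctn(fhat / denom, type=2, norm='ortho')
```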
Bridge Displacement Monitoring Method Based on Laser Projection-Sensing Technology
Zhao, Xuefeng; Liu, Hao; Yu, Yan; Xu, Xiaodong; Hu, Weitong; Li, Mingchu; Ou, Jingping
2015-01-01
Bridge displacement is the most basic evaluation index of the health status of a bridge structure. The existing measurement methods for bridge displacement basically fail to realize long-term, real-time dynamic monitoring of bridge structures because of their low degree of automation and insufficient precision, which creates bottlenecks and restrictions. To solve this problem, we propose a bridge displacement monitoring system based on laser projection-sensing technology. First, the laser spot recognition method was studied. Second, the software for the displacement monitoring system was developed. Finally, a series of experiments using this system were conducted, and the results show that such a system has high measurement accuracy and speed. We aim to develop a low-cost, high-accuracy and long-term monitoring method for bridge displacement based on these preliminary efforts. PMID:25871716
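Laser-spot recognition of this kind usually comes down to locating the centroid of the bright projected spot in each camera frame; the displacement then follows from the spot's drift in calibrated image coordinates. An illustrative sketch under that assumption (the paper's actual recognition algorithm may differ; `threshold` is a placeholder):

```python
import numpy as np

def spot_centroid(frame, threshold=200):
    """Intensity-weighted centroid (x, y) of a laser spot in a grayscale frame."""
    mask = frame >= threshold
    if not mask.any():
        return None                      # no spot found in this frame
    ys, xs = np.nonzero(mask)
    w = frame[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# displacement [mm] = pixel drift of the centroid * mm-per-pixel calibration
```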
Two-dimensional mesh embedding for Galerkin B-spline methods
NASA Technical Reports Server (NTRS)
Shariff, Karim; Moser, Robert D.
1995-01-01
A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.
Advanced Digital Signal Processing for Hybrid Lidar
2014-09-30
with a PC running LabVIEW performing the final calculations to obtain range measurements. A MATLAB-based system developed at Clarkson University in... the image contrast and resolution as well as the object ranging measurement accuracy. There have been various methods that attempt to reduce the... high speed modulation to help suppress backscatter while also providing an unambiguous range measurement. In general, it is desired to determine which
2008-01-01
color balancing, and trimming to the delivery tiles. For the Former Camp Beale site, orthophotos were created in blocks of 2.675 square kilometers, in... [table-of-contents residue: Data Density Effects - Orthophotos; Lidar and Orthophoto Positional Accuracy; Performance Assessment]
Design of all-weather celestial navigation system
NASA Astrophysics Data System (ADS)
Sun, Hongchi; Mu, Rongjun; Du, Huajun; Wu, Peng
2018-03-01
In order to realize autonomous navigation in the atmosphere, an all-weather celestial navigation system is designed. The research covers a comentropy-based discrimination method and an adaptive navigation algorithm based on the P value. The comentropy discrimination method is studied to realize independent switching between the two celestial navigation modes, starlight and radio. Finally, an adaptive filtering algorithm based on the P value is proposed, which can greatly improve the disturbance rejection capability of the system. The experimental results show that the three-axis attitude accuracy is better than 10″ and that the system can work in all weather conditions. In a perturbed environment, the position accuracy of the integrated navigation system can be increased by 20% compared with the traditional method. The system basically meets the requirements of an all-weather celestial navigation system and offers stability, reliability, high accuracy and strong anti-interference capability.
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for the management and estimation of agriculture. Because the features and surroundings are complex and fragmented under high-resolution conditions, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. Therefore, this paper proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN in a MATLAB deep learning toolbox, the crop classification finally achieved a correct rate of 99.66% after gradual parameter optimization during training. By improving the accuracy of image classification and image recognition, the application of CNN provides a reference value for the field of remote sensing in PA.
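As a hedged illustration of the kind of network involved (the abstract does not give the exact MATLAB architecture), here is a small patch-classification CNN in PyTorch; the 28x28 patch size and five-class output are placeholders:

```python
import torch
import torch.nn as nn

class CropPatchCNN(nn.Module):
    """Toy CNN classifying 28x28 single-band image patches into crop classes."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):                      # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

model = CropPatchCNN()
logits = model(torch.randn(4, 1, 28, 28))      # -> shape (4, 5)
```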
Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.
Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun
2018-01-01
In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, "kinematics + optics" hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
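The registration step described above, recovering the rigid transform between two coordinate frames from matched point pairs, has a standard closed-form solution; the quaternion method the authors cite and the SVD (Kabsch) solution sketched below are equivalent. This is illustrative, not the robot's actual code:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t such that Q_i ~ R @ P_i + t.

    P, Q : (N, 3) matched point sets in the two coordinate frames.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # guard against reflections
    t = cQ - R @ cP
    return R, t
```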
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
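To make the IPCW idea concrete, here is a deliberately simplified sketch of an Uno-type IPCW-weighted c-statistic in Python; it omits the truncation time, tie conventions and the asymptotic machinery the paper develops, and the O(n^2) pair loop is for clarity only:

```python
import numpy as np

def censoring_km(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t).

    Censored observations (event == 0) play the role of 'events' here.
    """
    order = np.argsort(time)
    t, d = time[order], event[order]
    at_risk = len(t) - np.arange(len(t))
    G = np.cumprod(np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0))

    def G_at(s):
        idx = np.searchsorted(t, s, side='right') - 1
        return G[idx] if idx >= 0 else 1.0
    return G_at

def ipcw_cindex(time, event, risk):
    """Concordance over pairs anchored at observed events, weighted 1/G(T_i)^2."""
    G = censoring_km(time, event)
    num = den = 0.0
    for i in range(len(time)):
        if event[i] != 1:
            continue                              # only events anchor pairs
        w = 1.0 / max(G(time[i]), 1e-12) ** 2
        for j in range(len(time)):
            if time[j] > time[i]:                 # j still event-free at T_i
                den += w
                num += w * ((risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j]))
    return num / den
```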
Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa
2018-06-05
Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. Detecting events accurately and efficiently poses two major challenges: achieving high accuracy despite a poor signal-to-noise ratio (SNR), and transmitting data in real time. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is designed, and this model is trained using previously obtained data. Once the model is fully trained, it is sent to the edge components for event detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data are delivered to the data center. Based on experimental results, a high detection accuracy (over 96%) with about 90% less transmitted data was achieved by using the proposed approach on a microseismic monitoring system. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.
NASA Astrophysics Data System (ADS)
Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad
2016-01-01
In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.
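The "one-against-one" and "one-against-all" decompositions used to extend the binary oRF to a multiclass land cover problem are standard wrappers; a sketch with scikit-learn, substituting an ordinary random forest for the oblique variant (which has no stock scikit-learn implementation) and synthetic features standing in for WorldView-2 bands:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# synthetic stand-in for per-pixel WorldView-2 band values and class labels
X, y = make_classification(n_samples=500, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0)
for name, clf in [("one-against-one", OneVsOneClassifier(base)),
                  ("one-against-all", OneVsRestClassifier(base))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```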
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by various FNA methods and EUS equipment. We therefore aimed to elucidate the instrumental and methodologic factors that determine the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA, compared a historical conventional-EUS group with a high-resolution imaging group, and finally analyzed the various factors affecting EUS-FNA accuracy. In total, 265 pancreatic solid masses of 260 patients were included in this study. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional image group, the high-resolution image group showed increased accuracy, sensitivity and specificity of EUS-FNA (71.3% vs 92.7%, 68.9% vs 91.9%, and 100% vs 100%, respectively). On multivariate analysis of the various instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were important factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
A new method for weakening the combined effect of residual errors on multibeam bathymetric data
NASA Astrophysics Data System (ADS)
Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue
2014-12-01
Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer, and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps are involved in the method: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend and the extracted microtopography, and accuracy evaluation. Experimental results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
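The core of the method, splitting the gridded bathymetry into a low-frequency trend and high-frequency microtopography, can be illustrated with a simple Gaussian filter pair; the paper's actual spectral separation and trend reconstruction are more elaborate, so treat this as a sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bathymetry(depth_grid, sigma=15.0):
    """Separate a gridded depth surface into trend plus microtopography.

    sigma (in grid cells) sets the cut between low and high frequency.
    """
    trend = gaussian_filter(depth_grid, sigma=sigma)  # long-wavelength part
    micro = depth_grid - trend                        # short-wavelength residual
    return trend, micro

# Residual systematic errors live mostly in the trend; after correcting it,
# the final surface is corrected_trend + micro.
```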
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
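A per-instrument correction function of the kind described can be built, for instance, by least-squares fitting the observed distance errors against reference distances with an offset, a scale term and a cyclic term at the distance meter's fine-measurement wavelength. The abstract does not specify the authors' correction model, so the form below is an assumption:

```python
import numpy as np

def fit_edm_correction(d_ref, d_meas, unit_wavelength=10.0):
    """Fit e(d) = a + b*d + c*sin(2*pi*d/U) + s*cos(2*pi*d/U) to the
    distance errors and return a function correcting new readings."""
    e = d_meas - d_ref
    w = 2 * np.pi * d_ref / unit_wavelength
    A = np.column_stack([np.ones_like(d_ref), d_ref, np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(A, e, rcond=None)

    def correct(d):
        wd = 2 * np.pi * d / unit_wavelength
        return d - (coef[0] + coef[1] * d
                    + coef[2] * np.sin(wd) + coef[3] * np.cos(wd))
    return correct
```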
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
Investigations of black-hole spectra: Purely-imaginary modes and Kerr ringdown radiation
NASA Astrophysics Data System (ADS)
Zalutskiy, Maxim P.
When black holes are perturbed they give rise to characteristic waves that propagate outwards carrying information about the black hole. In the linear regime these waves are described in terms of quasinormal modes (QNM). Studying QNM is an important topic which may provide a connection to the quantum theory of gravity in addition to their astrophysical applications. Quasinormal modes correspond to complex frequencies where the real part represents oscillation and the imaginary part represents damping. We have developed a new code for calculating QNM with high precision and accuracy, which we applied to the Schwarzschild and Kerr geometries. The high accuracy of our calculations was a significant improvement over prior work, allowing us to compute QNM much closer to the negative imaginary axis (NIA) than was previously possible. The existence of QNM on the NIA has remained poorly understood, but our high-accuracy studies have highlighted the importance of understanding their nature. In this work we show how the purely-imaginary modes can be calculated with the help of the theory of confluent Heun polynomials, with the conclusion that all modes on the NIA correspond to polynomial solutions. We also show that certain types of these modes correspond to Kerr QNM. Finally, using our highly accurate QNM data we model the ringdown, a remnant black hole's decaying radiation. Ringdown occurs in the final stages of such violent astrophysical events as supernovae and black hole collisions. We use our model to analyse the ringdown waveforms from the publicly available binary black hole coalescence catalog maintained by the SXS collaboration. In our analysis we use a number of methods: Fourier transform, multi-mode nonlinear fitting and waveform overlap. Both our fitting and overlap approaches allow inclusion of many modes in the ringdown model, with the goal being to extract information about the nature of the astrophysical source of the ringdown signal.
Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan
1998-01-01
A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photograph, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy when compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Individual sensor contribution to the improved classification was compared to that using only the six reflective TM bands. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, SAR and the green CIR band each improved overall accuracy by about 3% and 15% respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies from 2% to 70% with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.
Administrative review process for adjudicating initial disability claims. Final rule.
2006-03-31
The Social Security Administration is committed to providing the high quality of service the American people expect and deserve. In light of the significant growth in the number of disability claims and the increased complexity of those claims, the need to make substantial changes in our disability determination process has become urgent. We are publishing a final rule that amends our administrative review process for applications for benefits that are based on whether you are disabled under title II of the Social Security Act (the Act), or applications for supplemental security income (SSI) payments that are based on whether you are disabled or blind under title XVI of the Act. We expect that this final rule will improve the accuracy, consistency, and timeliness of decision-making throughout the disability determination process.
A new method for measuring the rotational accuracy of rolling element bearings
NASA Astrophysics Data System (ADS)
Chen, Ye; Zhao, Xiangsong; Gao, Weiguo; Hu, Gaofeng; Zhang, Shizhen; Zhang, Dawei
2016-12-01
The rotational accuracy of a machine tool spindle has critical influence upon the geometric shape and surface roughness of finished workpiece. The rotational performance of the rolling element bearings is a main factor which affects the spindle accuracy, especially in the ultra-precision machining. In this paper, a new method is developed to measure the rotational accuracy of rolling element bearings of machine tool spindles. Variable and measurable axial preload is applied to seat the rolling elements in the bearing races, which is used to simulate the operating conditions. A high-precision (radial error is less than 300 nm) and high-stiffness (radial stiffness is 600 N/μm) hydrostatic reference spindle is adopted to rotate the inner race of the test bearing. To prevent the outer race from rotating, a 2-degrees of freedom flexure hinge mechanism (2-DOF FHM) is designed. Correction factors by using stiffness analysis are adopted to eliminate the influences of 2-DOF FHM in the radial direction. Two capacitive displacement sensors with nano-resolution (the highest resolution is 9 nm) are used to measure the radial error motion of the rolling element bearing, without separating the profile error as the traditional rotational accuracy metrology of the spindle. Finally, experimental measurements are performed at different spindle speeds (100-4000 rpm) and axial preloads (75-780 N). Synchronous and asynchronous error motion values are evaluated to demonstrate the feasibility and repeatability of the developed method and instrument.
Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator
NASA Astrophysics Data System (ADS)
Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin
2016-06-01
The rhombic piezoelectric actuator (RPA), which employs a rhombic mechanism to amplify the small stroke of a PZT stack, has been widely used in many micro-positioning machines due to its remarkable properties such as high displacement resolution and compact structure. In order to achieve a large actuation range along with high accuracy, stroke maximization and compensation for the hysteresis are two concerns in the use of RPA. However, existing maximization methods based on a theoretical model can hardly predict the maximum stroke of RPA accurately because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is tedious and time-consuming since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model can be identified from only a small set of experimental data by combining the discrete Preisach model (DPM) with a particle swarm optimization (PSO) algorithm. The proposed novel hybrid modeling method can not only model the hysteresis with considerable accuracy but also significantly simplify the modeling procedure. Finally, the inversion of the hysteresis is introduced to compensate for the hysteresis non-linearity of RPA, and consequently a pseudo-linear system can be obtained.
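A discrete Preisach model of the kind identified here superposes elementary relay operators on a triangular grid of switching thresholds; a compact sketch follows, with uniform weights as a placeholder (in the paper, the weights are what the PSO step identifies from measured loops):

```python
import numpy as np

class DiscretePreisach:
    """Scalar discrete Preisach model: weighted sum of relay operators,
    each with switch-up threshold alpha >= switch-down threshold beta."""
    def __init__(self, n=20, lo=-1.0, hi=1.0):
        a, b = np.meshgrid(np.linspace(lo, hi, n), np.linspace(lo, hi, n))
        keep = a >= b                             # admissible (alpha, beta) pairs
        self.alpha, self.beta = a[keep], b[keep]
        self.weights = np.full(self.alpha.size, 1.0 / self.alpha.size)
        self.state = -np.ones(self.alpha.size)    # all relays start 'down'

    def step(self, u):
        self.state[u >= self.alpha] = 1.0         # relays switching up
        self.state[u <= self.beta] = -1.0         # relays switching down
        return float(self.weights @ self.state)

model = DiscretePreisach()
output = [model.step(u) for u in np.sin(np.linspace(0, 4 * np.pi, 200))]
```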
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.
Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y
2013-12-01
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing subjected to non-contact and precise loaded axial forces. The measurement results on axial loading and rotation constraint with five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Dual capacity displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as the ball diameter differences of and the geometrical errors in the grooves of rings.
Flame Kernel Interactions in a Turbulent Environment
2001-08-01
contours ranging from 1 (fully burned) at the centre to 0 (unburned) on the outer contour. In each case the flames can clearly be seen to propagate outwards... called SENGA. The code solves a fully compressible reacting flow in three dimensions. High accuracy numerical schemes have been employed which are... Finally, results are presented and discussed for simulations with different initial non-dimensional turbulence intensities ranging from 5 to 23.
Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal
2017-03-01
Background and objectives: Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method: A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results: Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made, and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations were 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. A high variability was observed between operators. Discussion: Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.
Chiang, Mao-Hsiung
2010-01-01
This study aims to develop an X-Y dual-axial intelligent servo pneumatic-piezoelectric hybrid actuator for position control with high response, large stroke (250 mm, 200 mm) and nanometer accuracy (20 nm). In each axis, the rodless pneumatic actuator serves to position in the coarse stroke and the piezoelectric actuator compensates in the fine stroke. Thus, the overall control system of each axis becomes a dual-input single-output (DISO) system. Although the rodless pneumatic actuator has a relatively large friction force, its mechanism is advantageous for multi-axial development. Thus, the X-Y dual-axial positioning system is developed based on the servo pneumatic-piezoelectric hybrid actuator. In addition, decoupling self-organizing fuzzy sliding mode control is developed as the intelligent control strategy. Finally, the proposed novel intelligent X-Y dual-axial servo pneumatic-piezoelectric hybrid actuators are implemented and verified experimentally. PMID:22319266
Technology of focus detection for 193nm projection lithographic tool
NASA Astrophysics Data System (ADS)
Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong
2012-10-01
With the shortening of the printing wavelength and the increasing numerical aperture of lithographic tools, the depth of focus (DOF) shows a rapid downward trend, reaching a scale of several hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. Given this, this article first introduces several focusing technologies and compares their advantages and disadvantages. The accuracy of the dual-grating focusing method is then derived through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is analyzed in terms of coarse and precise focusing, establishing an image processing model for coarse focusing and a photoelastic modulation model for precise focusing. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency and non-contact measurement of the focal plane, meeting the demands of focusing in 193nm projection lithography.
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
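The 2-D input construction described, an STFT magnitude image of a short ECG segment, can be sketched with scipy; the sampling rate, segment length and window settings below are placeholders rather than the paper's exact values:

```python
import numpy as np
from scipy.signal import stft

fs = 300                         # Hz, placeholder sampling rate
ecg = np.random.randn(5 * fs)    # stand-in for a 5 s ECG segment

f, t, Z = stft(ecg, fs=fs, nperseg=128, noverlap=64)
spectrogram = np.abs(Z)          # 2-D magnitude matrix fed to the CNN
print(spectrogram.shape)         # (n_freqs, n_frames)
```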
NASA Astrophysics Data System (ADS)
Mi, Qing; Wang, Qi; Zang, Siyao; Chai, Zhaoer; Zhang, Jinnan; Ren, Xiaomin
2018-05-01
In this study, we developed a multifunctional device based on SnO2@rGO-coated fibers utilizing plasma treatment, dip coating, and microwave irradiation in sequence, and finally realized highly sensitive human motion monitoring, relatively good ethanol detection, and an obvious photo response. Moreover, the high level of comfort and compactness derived from highly elastic and comfortable fabrics contributes to the long-term availability and test accuracy. As an attempt at multifunctional integration of smart clothing, this work provides an attractive and relatively practical research direction.
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.
Liu, Hua; Wu, Wen
2017-03-31
Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain a higher accuracy than cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing strong tracking filter (STF) into SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted on line so that the robustness of the filter and the capability of dealing with uncertainty factors is improved. In this way, the proposed algorithm has the advantages of both STF's strong robustness and SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm can get better estimation accuracy and greater robustness for maneuvering target tracking.
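The strong-tracking idea, inflating the predicted covariance with a time-varying fading factor so that the gain reopens after an abrupt maneuver, can be sketched on a plain linear Kalman filter; the paper embeds it in the spherical simplex-radial cubature rule, and the residual-based fading factor below is a simplification, not the authors' exact formula:

```python
import numpy as np

def stf_kalman_step(x, P, z, F, H, Q, R):
    """One Kalman step with a fading factor lam >= 1 that scales the
    propagated covariance when the innovation is unexpectedly large."""
    x_pred = F @ x
    y = z - H @ x_pred                                # innovation
    S_nom = H @ (F @ P @ F.T + Q) @ H.T + R           # nominal innovation cov
    lam = max(1.0, float(y @ np.linalg.solve(S_nom, y)) / len(y))
    P_pred = lam * (F @ P @ F.T) + Q                  # inflate propagated part
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```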
An Improved BLE Indoor Localization with Kalman-Based Fusion: An Experimental Study
Röbesaat, Jenny; Zhang, Peilin; Abdelaal, Mohamed; Theel, Oliver
2017-01-01
Indoor positioning has attracted great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter. PMID:28445421
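The trilateration half of the fusion admits a compact linear least-squares form: subtracting one range equation from the others removes the quadratic position terms. Illustrative only; beacon coordinates and RSSI-derived ranges are assumed given:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position from >= 3 anchors and measured distances.

    anchors : (N, 2) beacon coordinates; ranges : (N,) distances to them.
    """
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# usage: trilaterate(np.array([[0, 0], [10, 0], [0, 10]]),
#                    np.array([7.1, 7.0, 7.2]))
```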
Rogers, Katherine H; Le, Marina T; Buckels, Erin E; Kim, Mikayla; Biesanz, Jeremy C
2018-02-19
The Dark Tetrad traits (subclinical psychopathy, narcissism, Machiavellianism, and everyday sadism) have interpersonal consequences. At present, however, how these traits are associated with the accuracy and positivity of first impressions is not well understood. The present article addresses three primary questions. First, to what extent are perceiver levels of Dark Tetrad traits associated with differing levels of perceptive accuracy? Second, to what extent are target levels of Dark Tetrad traits associated with differing levels of expressive accuracy? Finally, to what extent can Dark Tetrad traits be differentiated when examining perceptions of and by others? In a round-robin design, undergraduate participants (N = 412) in small groups engaged in brief, naturalistic, unstructured dyadic interactions before providing impressions of their partner. Dark Tetrad traits were associated with being viewed and viewing others less distinctively accurately and more negatively. Interpersonal perceptions that included an individual scoring highly on one of the Dark Tetrad traits differed in important ways from interactions among individuals with more benevolent personalities. Notably, despite the similarities between the Dark Tetrad, traits had unique associations with interpersonal perceptions. © 2018 Wiley Periodicals, Inc.
Using learning automata to determine proper subset size in high-dimensional spaces
NASA Astrophysics Data System (ADS)
Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz
2017-03-01
In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and selected subsets of the list of different sizes are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness upon the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance of attaining the identified goal.
Sharma, Avnish Kumar; Patidar, Rajesh Kumar; Daiya, Deepak; Joshi, Anandverdhan; Naik, Prasad Anant; Gupta, Parshotam Dass
2013-04-20
In this paper, a new method for alignment of the pinhole of a spatial filter (SF) has been proposed and demonstrated experimentally. The effect of the misalignment of the pinhole on the laser beam profiles has been calculated for circular and elliptical Gaussian laser beams. Theoretical computation has been carried out to illustrate the effect of an intensity mask, placed before the focusing lens of the SF, on the spatial beam profile after the pinhole of the SF. It is shown, both theoretically and experimentally, that a simple intensity mask, consisting of a black dot, can be used to visually align the pinhole with a high accuracy of 5% of the pinhole diameter. The accuracy may be further improved using a computer-based image processing algorithm. Finally, the proposed technique has been demonstrated to align a vacuum SF of a compact 40 J Nd:phosphate glass laser system.
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy
2015-05-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate, slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.
Multi-stage robust scheme for citrus identification from high resolution airborne images
NASA Astrophysics Data System (ADS)
Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier
2008-10-01
Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources by using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana autonomous region (Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptron, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with the available visual photointerpretation resources. A level of confidence based on the agreement between classifiers allows effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
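The mutual-information criterion behind the automatic alignment of the ranging pole can be illustrated with a minimal sketch (function name and bin count are our assumptions); registration would search over candidate poses for the one maximizing this score:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Shannon mutual information from the joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```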
Detecting the Water-soluble Chloride Distribution of Cement Paste in a High-precision Way.
Chang, Honglei; Mu, Song
2017-11-21
To improve the accuracy of the chloride distribution along the depth of cement paste under cyclic wet-dry conditions, a new method is proposed to obtain a high-precision chloride profile. First, paste specimens are molded, cured, and exposed to cyclic wet-dry conditions. Then, powder samples at different specimen depths are ground when the exposure age is reached. Finally, the water-soluble chloride content is detected using a silver nitrate titration method, and chloride profiles are plotted. The key to improving the accuracy of the chloride distribution along the depth is to exclude the error introduced during powderization, which is the most critical step in testing the distribution of chloride. Based on the above concept, the grinding method in this protocol can be used to grind powder samples automatically layer by layer from the surface inward; a very thin grinding thickness (less than 0.5 mm) with a minimum error of less than 0.04 mm can be obtained. The chloride profile obtained by this method better reflects the chloride distribution in specimens, which helps researchers to capture distribution features that are often overlooked. Furthermore, this method can be applied to studies in the field of cement-based materials which require high chloride distribution accuracy.
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to the object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte Carlo simulations of a commercial gamma camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte Carlo simulations confirmed that TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
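For context, a minimal sketch of the standard TEW scatter estimate is given below; the window widths and function names are illustrative, and the authors' clinical implementation may differ in detail:

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """TEW estimate of scatter counts inside the photopeak window.

    c_lower, c_upper: counts in the narrow sub-windows flanking the
    photopeak; w_*: window widths in keV.
    """
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

def corrected_counts(c_main, c_lower, c_upper,
                     w_lower=4.0, w_upper=4.0, w_main=20.0):
    scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main)
    return max(c_main - scatter, 0.0)   # keep corrected counts non-negative
```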
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl
This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
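A minimal sketch of the two recommended metrics, assuming paired arrays of predicted and observed values (function names are ours):

```python
import numpy as np

def median_log_accuracy_ratio(pred, obs):
    """Bias: median of log(Q) with Q = predicted/observed (0 = unbiased)."""
    q = np.asarray(pred, float) / np.asarray(obs, float)
    return float(np.median(np.log(q)))

def median_symmetric_accuracy(pred, obs):
    """Accuracy: 100 * (exp(median |log Q|) - 1), in percent."""
    q = np.asarray(pred, float) / np.asarray(obs, float)
    return float(100.0 * (np.exp(np.median(np.abs(np.log(q)))) - 1.0))
```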
NASA Astrophysics Data System (ADS)
Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng
2009-02-01
Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung. It increases the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema, 95%; fibrosis/honeycombing, 84%; and ground glass, 97%.
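A hedged sketch of stage (3), the weighted combination of fuzzy memberships followed by a two-threshold decision (membership functions, weights, and threshold values below are illustrative, not the authors' settings):

```python
import numpy as np

def class_probability(feature_values, membership_fns, weights):
    """Weighted combination of fuzzy memberships for one pathology class."""
    m = np.array([fn(v) for fn, v in zip(membership_fns, feature_values)])
    w = np.asarray(weights, float)
    return float(np.dot(w, m) / w.sum())

def decide(prob, t_low=0.3, t_high=0.7):
    """Two-threshold decision on the combined probability."""
    if prob >= t_high:
        return "assign to class"
    return "uncertain" if prob >= t_low else "reject"
```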
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate. PMID:22368464
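The abstract does not give the exact localization formula, but a hedged sketch of the weighted-average, time-difference-of-arrival idea for a straight pipe section might look as follows (all names and the two-sensor formula convention are our assumptions):

```python
import numpy as np

def leak_position(length_m, readings, weights=None):
    """Weighted-average leak location on a pipe of length length_m.

    Each reading is (dt, v): arrival-time difference t_A - t_B between
    the two end sensors (s) and the acoustic wave speed (m/s).  One
    sensor pair gives x = (L + v * dt) / 2, measured from sensor A.
    """
    estimates = np.array([(length_m + v * dt) / 2.0 for dt, v in readings])
    return float(np.average(estimates, weights=weights))
```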
Lu, Yanjun; Zhu, Yaowu; Shen, Na; Tian, Lei; Sun, Ziyong
2018-02-08
Limited data on the diagnostic accuracy of the Xpert MTB/RIF assay using bronchoalveolar lavage fluid from patients with suspected pulmonary tuberculosis (PTB) have been reported in China. Therefore, a retrospective study was designed to evaluate the diagnostic accuracy of this assay. Clinical, radiological, and microbiological characteristics of 238 patients with suspected PTB were reviewed retrospectively. The sensitivity, specificity, positive predictive value, and negative predictive value for the diagnosis of active PTB were calculated for the Xpert MTB/RIF assay using TB culture or final diagnosis based on clinical and radiological evaluation as the reference standard. The sensitivity and specificity of the Xpert MTB/RIF assay were 84.5% and 98.9%, respectively, and those for smear microscopy were 36.2% and 100%, respectively, when compared to the culture method. When compared with final diagnosis based on clinical and radiological evaluation, the sensitivity and specificity of the assay were 72.9% and 98.7%, respectively, which were significantly higher than those for smear microscopy. The Xpert MTB/RIF assay on bronchoalveolar lavage fluid could serve as an additional rapid diagnostic tool for PTB in a high TB-burden country and improve the time to TB treatment initiation in patients with PTB. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
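As a reminder of how the four reported statistics are derived, a minimal sketch computing them from a 2x2 confusion table (function name is ours):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```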
High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".
Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo
2004-03-01
Three-dimensional digital modeling of heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello is described; this wooden statue, one of the major masterpieces of the Italian Renaissance, was swept away by the Florence flood of 1966 and subsequently restored. The paper reports all the steps of the acquisition procedure, from project planning to the solution of various problems due to range camera calibration and optically uncooperative material. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. This control has demonstrated how, in some situations, ICP-based alignment can lead to incorrect results. To circumvent this difficulty, we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.
Predict the fatigue life of crack based on extended finite element method and SVR
NASA Astrophysics Data System (ADS)
Song, Weizhen; Jiang, Zhansi; Jiang, Hui
2018-05-01
This study uses the extended finite element method (XFEM) and support vector regression (SVR) to predict the fatigue life of plate cracks. First, the XFEM is employed to calculate the stress intensity factors (SIFs) for given crack sizes. A prediction model can then be built from the functional relationship of the SIFs with the fatigue life or crack length. Finally, the prediction model is used to predict the SIFs at different crack sizes or different numbers of cycles. Because the accuracy of the forward Euler method is ensured only by a small step size, a new prediction method is presented to resolve this issue. Numerical examples were studied to demonstrate that the proposed method allows a larger step size and has high accuracy.
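The step-size sensitivity of forward Euler integration that motivates the paper can be illustrated with a hedged sketch of Paris-law crack growth (the paper's actual model couples XFEM-computed SIFs with an SVR surrogate; the constants and names here are illustrative only):

```python
import numpy as np

def fatigue_life_euler(a0, a_crit, C, m, delta_sigma, dN=100.0, Y=1.0):
    """Forward-Euler integration of Paris' law da/dN = C * (dK)^m.

    dK = Y * delta_sigma * sqrt(pi * a).  Accuracy degrades as the cycle
    step dN grows -- the limitation the surrogate model is meant to relax.
    """
    a, N = a0, 0.0
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)
        a += C * dK**m * dN      # Euler step on the crack length
        N += dN
    return N                     # predicted fatigue life in cycles
```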
NASA Astrophysics Data System (ADS)
Tao, Gang; Wei, Guohua; Wang, Xu; Kong, Ming
2018-03-01
There has been increasing interest over several decades in applying ground-based synthetic aperture radar (GB-SAR) to monitoring terrain displacement. GB-SAR can produce multitemporal surface deformation maps of the entire terrain with high spatial resolution and submillimetric accuracy, owing to its ability to continuously monitor an area day and night regardless of weather conditions. The accuracy of the interferometric measurement result is very important. In this paper, the basic principle of InSAR is expounded, and the influence of the platform's instability on the interferometric measurement results is analyzed. The error sources of deformation detection estimation are analyzed using the precise geometry of the imaging model. Finally, simulation results demonstrate the validity of our analysis.
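The basic interferometric relation between phase change and line-of-sight displacement can be written in a couple of lines (standard two-way relation; the sign convention varies between processors):

```python
import numpy as np

def los_displacement(dphi_rad, wavelength_m):
    """Line-of-sight displacement from an interferometric phase change.

    Two-way path: d = wavelength * dphi / (4 * pi).
    """
    return wavelength_m * np.asarray(dphi_rad) / (4.0 * np.pi)
```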
Elicitation of neurological knowledge with argument-based machine learning.
Groznik, Vida; Guid, Matej; Sadikov, Aleksander; Možina, Martin; Georgiev, Dejan; Kragelj, Veronika; Ribarič, Samo; Pirtošek, Zvezdan; Bratko, Ivan
2013-02-01
The paper describes the use of an expert's knowledge in practice and the efficiency of a recently developed technique called argument-based machine learning (ABML) in the knowledge elicitation process. We are developing a neurological decision support system to help neurologists differentiate between three types of tremors: Parkinsonian, essential, and mixed tremor (comorbidity). The system is intended to act as a second opinion for the neurologists, and most importantly to help them reduce the number of patients in the "gray area" that require a very costly further examination (DaTSCAN). We strive to elicit comprehensible and medically meaningful knowledge in such a way that it does not come at the cost of diagnostic accuracy. To alleviate the difficult problem of knowledge elicitation from data and domain experts, we used ABML. ABML guides the expert to explain critical special cases which cannot be handled automatically by machine learning. This very efficiently reduces the expert's workload and combines the expert's knowledge with learning data. A total of 122 patients were enrolled in the study. The classification accuracy of the final model was 91%. Equally important, the initial and final models were also evaluated for their comprehensibility by the neurologists. All 13 rules of the final model were deemed appropriate for supporting the system's decisions with good explanations. The paper demonstrates ABML's advantage in combining machine learning and expert knowledge. The accuracy of the system is very high with respect to the current state of the art in clinical practice, and the system's knowledge base is assessed to be very consistent from a medical point of view. This opens up the possibility of using the system also as a teaching tool. Copyright © 2012 Elsevier B.V. All rights reserved.
Ud Din, Nasir; Memon, Aisha; Idress, Romana; Ahmad, Zubair; Hasan, Sheema
2011-01-01
Intraoperative consultation for CNS lesions provides accurate diagnoses to neurosurgeons. Some lesions, however, may cause diagnostic difficulty. In this study, the accuracy of intraoperative consultations for CNS lesions and the discrepancies in diagnosis and deferrals were analysed. All CNS cases from May 1, 2004 to September 20, 2010 in which intraoperative frozen section (FS) had been performed, and which were reported in the Section of Histopathology, Aga Khan University Hospital, Karachi, Pakistan, were retrieved. The diagnoses given on FS were compared with the final diagnoses given on permanent sections (and additional material, if received), as indicated in the frozen section and final pathology reports. During the study period, 171 CNS cases were received for intraoperative consultation. In all cases, cryostat sections (FS) plus cytology smears were prepared. The ages of the patients ranged from 3 to 77 years; 106 were male and 65 were female. Of these 171 cases, 160 (94.1%) were concordant, 10 (5.8%) were discrepant, and one case was deferred until permanent sections. The diagnostic accuracy of frozen section was 88.9%. The sensitivity and specificity were 94.8% and 87.5%, respectively. The positive predictive value was 98.6% and the negative predictive value was 63.6%. All our cases in which intraoperative consultation was requested were sent for primary diagnosis; adequacy per se was not a criterion for sending cases for intraoperative consultation. Our results show a reasonably high percentage of accuracy in the intraoperative diagnosis of CNS lesions. However, there are limitations, and some lesions pose a diagnostic challenge. There is a need to improve our own diagnostic skills and establish better communication with neurosurgeons.
Dexheimer, Felippe Leopoldo; de Andrade, Juliana Mara Stormovski; Raupp, Ana Carolina Tabajara; Townsend, Raquel da Silva; Beltrami, Fabiana Gabe; Brisson, Hélène; Lu, Qin; Dalcin, Paulo de Tarso Roth
2015-01-01
Objective: Bedside lung ultrasound (LUS) is a noninvasive, readily available imaging modality that can complement clinical evaluation. The Bedside Lung Ultrasound in Emergency (BLUE) protocol has demonstrated high diagnostic accuracy in patients with acute respiratory failure (ARF). Recently, bedside LUS has been added to the medical training program of our ICU. The aim of this study was to investigate the accuracy of LUS based on the BLUE protocol, when performed by physicians who are not ultrasound experts, in guiding the diagnosis of ARF. Methods: Over a one-year period, all spontaneously breathing adult patients consecutively admitted to the ICU for ARF were prospectively included. After training, 4 non-ultrasound experts performed LUS within 20 minutes of patient admission. They were blinded to patient medical history. The LUS diagnosis was compared with the final clinical diagnosis made by the ICU team before patients were discharged from the ICU (gold standard). Results: Thirty-seven patients were included in the analysis (mean age, 73.2 ± 14.7 years; APACHE II, 19.2 ± 7.3). The LUS diagnosis showed good agreement with the final diagnosis in 84% of patients (overall kappa, 0.81). The most common etiologies of ARF were pneumonia (n = 17) and hemodynamic lung edema (n = 15). The sensitivity and specificity of LUS as measured against the final diagnosis were, respectively, 88% and 90% for pneumonia and 86% and 87% for hemodynamic lung edema. Conclusions: LUS based on the BLUE protocol was reproducible by physicians who are not ultrasound experts and accurate for the diagnosis of pneumonia and hemodynamic lung edema. PMID:25750675
Extreme ultraviolet interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Kenneth A.
EUV lithography is a promising and viable candidate for circuit fabrication with 0.1-micron critical dimension and smaller. In order to achieve diffraction-limited performance, all-reflective multilayer-coated lithographic imaging systems operating near 13-nm wavelength and 0.1 NA have system wavefront tolerances of 0.27 nm, or 0.02 waves RMS. Owing to the highly-sensitive resonant reflective properties of multilayer mirrors and the extraordinarily tight tolerances set forth for their fabrication, EUV optical systems require at-wavelength EUV interferometry for final alignment and qualification. This dissertation discusses the development and successful implementation of high-accuracy EUV interferometric techniques. Proof-of-principle experiments with a prototype EUV point-diffraction interferometer for the measurement of Fresnel zoneplate lenses first demonstrated sub-wavelength EUV interferometric capability. These experiments spurred the development of the superior phase-shifting point-diffraction interferometer (PS/PDI), which has been implemented for the testing of an all-reflective lithographic-quality EUV optical system. Both systems rely on pinhole diffraction to produce spherical reference wavefronts in a common-path geometry. Extensive experiments demonstrate EUV wavefront-measuring precision beyond 0.02 waves RMS. EUV imaging experiments provide verification of the high accuracy of the point-diffraction principle, and demonstrate the utility of the measurements in successfully predicting imaging performance. Complementary to the experimental research, several areas of theoretical investigation related to the novel PS/PDI system are presented. First-principles electromagnetic field simulations of pinhole diffraction are conducted to ascertain the upper limits of measurement accuracy and to guide selection of the pinhole diameter. Investigations of the relative merits of different PS/PDI configurations accompany a general study of the most significant sources of systematic measurement errors. To overcome a variety of experimental difficulties, several new methods in interferogram analysis and phase retrieval were developed: the Fourier-Transform Method of Phase-Shift Determination, which uses Fourier-domain analysis to improve the accuracy of phase-shifting interferometry; the Fourier-Transform Guided Unwrap Method, which was developed to overcome difficulties associated with a high density of mid-spatial-frequency blemishes and which uses a low-spatial-frequency approximation to the measured wavefront to guide the phase unwrapping in the presence of noise; and, finally, an expedient method of Gram-Schmidt orthogonalization which facilitates polynomial basis transformations in wavefront surface fitting procedures.
Price, Owen F; Penman, Trent; Bradstock, Ross; Borah, Rittick
2016-10-01
Wildfires are complex adaptive systems and have been hypothesized to exhibit scale-dependent transitions in the drivers of fire spread. Among other things, this makes the prediction of final fire size from conditions at ignition difficult. We test this hypothesis by conducting multi-scale statistical modelling of the factors determining whether fires reached 10 ha, then 100 ha, then 1000 ha, and the final size of fires >1000 ha. At each stage, the predictors were measures of weather, fuels, topography and fire suppression. The objectives were to identify differences among the models indicative of scale transitions, to assess the accuracy of the multi-step method for predicting fire size (compared to predicting final size from initial conditions), and to quantify the importance of the predictors. The data were 1116 fires that occurred in the eucalypt forests of New South Wales between 1985 and 2010. The models were similar at the different scales, though there were subtle differences. For example, the presence of roads affected whether fires reached 10 ha but not the larger scales. Weather was the most important predictor overall, though fuel load, topography and ease of suppression all showed effects. Overall, there was no evidence that fires have scale-dependent transitions in behaviour. The models had predictive accuracies of 73%, 66%, 72% and 53% at the 10 ha, 100 ha, 1000 ha and final-size scales, respectively. When these steps were combined, the overall accuracy for predicting the size of fires was 62%, while the accuracy of the one-step model was only 20%. Thus, the multi-scale approach was an improvement on the single-scale approach, even though the predictive accuracy was probably insufficient for use as an operational tool. The analysis has also provided further evidence of the important role of weather, compared to fuel, suppression and topography, in driving fire behaviour. Copyright © 2016. Published by Elsevier Ltd.
Warping an atlas derived from serial histology to 5 high-resolution MRIs.
Tullo, Stephanie; Devenyi, Gabriel A; Patel, Raihaan; Park, Min Tae M; Collins, D Louis; Chakravarty, M Mallar
2018-06-19
Previous work from our group demonstrated the use of multiple input atlases with a modified multi-atlas framework (MAGeT-Brain) to improve subject-based segmentation accuracy. Currently, segmentations of the striatum, globus pallidus and thalamus are generated from a single high-resolution and high-contrast MRI atlas derived from annotated serial histological sections. Here, we warp this atlas to five high-resolution MRI templates to create five de novo atlases. The overall goal of this work is to use these newly warped atlases as input to MAGeT-Brain in an effort to consolidate and improve the workflow presented in previous manuscripts from our group, allowing for simultaneous multi-structure segmentation. The work presented details the methodology used for the creation of the atlases using a previously proposed technique, in which atlas labels are modified to mimic the intensity and contrast profile of MRI to facilitate atlas-to-template nonlinear transformation estimation. Dice's Kappa metric was used to demonstrate high-quality registration and segmentation accuracy of the atlases. The final atlases are available at https://github.com/CobraLab/atlases/tree/master/5-atlas-subcortical.
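A minimal sketch of the Dice's Kappa computation used for the quality assessment, assuming two integer label volumes (function name is ours):

```python
import numpy as np

def dice_kappa(labels_a, labels_b, structure_id):
    """Dice's Kappa for one structure in two label volumes."""
    a = np.asarray(labels_a) == structure_id
    b = np.asarray(labels_b) == structure_id
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```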
Measurements of Trace Gases Using a Tunable Diode Laser
NASA Technical Reports Server (NTRS)
Jost, Hans-Juerg
2005-01-01
This report is the final report for "Measurements of Trace Gases Using a Tunable Diode Laser." The tasks outlined in the proposal are listed below with a brief comment. The publications and the conference presentations are listed. Finally, the important publications are attached. The Cooperative Agreement made possible a research effort to produce high-precision and high-accuracy in-situ measurements of carbon monoxide, methane and nitrous oxide on the WB-57 during the CRYSTAL-FACE and pre-AVE field campaigns and to analyze these measurements. These measurements of CO and CH4 were of utmost importance to studies of the radiative effects of clouds. Some important results of the CRYSTAL-FACE program were contained in two scientific papers (attached). This Cooperative Agreement allowed the participation of the Argus instrument in the program and the analysis of the data.
Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos
NASA Astrophysics Data System (ADS)
Miao, X.; Xie, H.
2015-12-01
High-resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds and submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of aerial photos, and their uncertainties are estimated.
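For reference, producer's and user's accuracies follow directly from the confusion matrix; a minimal sketch under the rows-are-reference convention (names ours):

```python
import numpy as np

def class_accuracies(cm, k):
    """Producer's and user's accuracy for class k.

    cm[i, j]: pixels of reference class i assigned to map class j
    (rows = reference, columns = classification).
    """
    producers = cm[k, k] / cm[k, :].sum()   # 1 - omission error
    users = cm[k, k] / cm[:, k].sum()       # 1 - commission error
    return producers, users
```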
Austin, Peter C; Lee, Douglas S
2011-01-01
Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data mining literature. PMID:22254181
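A hedged sketch of the boosting procedure described above, using scikit-learn's AdaBoost implementation (parameter values are illustrative; `X`/`y` are placeholders for the patient covariates and 30-day mortality labels, and the `estimator` keyword assumes scikit-learn >= 1.2):

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Boosting: each round reweights misclassified patients and the final
# label is a weighted majority vote over the tree sequence.
model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # weak learner
    n_estimators=200,
)
# X: patient covariates, y: 30-day mortality (0/1) -- placeholders.
# scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
```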
Renjith, Arokia; Manjula, P; Mohan Kumar, P
2015-01-01
Brain tumour is one of the main causes of increased mortality among children and adults. This paper proposes an improved method for brain image classification and segmentation based on Magnetic Resonance Imaging (MRI). Automated classification is encouraged by the need for high accuracy when dealing with a human life. The detection of brain tumours is a challenging problem, due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for the detection of brain tumours, as they are used in soft tissue determinations. First, image pre-processing is used to enhance the image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of the image. Feature extraction then extracts features from the image using the gray-level co-occurrence matrix (GLCM). Next, the Neuro-Fuzzy technique is used to classify the stages of brain tumour as benign, malignant or normal based on texture features. Finally, the tumour location is detected using Otsu thresholding. The classifier performance is evaluated based on classification accuracies. The simulated results show that the proposed classifier provides better accuracy than the previous method.
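The GLCM feature-extraction step might be sketched as follows, assuming scikit-image >= 0.19 (the exact feature set and parameters used by the authors are not specified in the abstract):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_u8):
    """Texture features of a 2-D uint8 region of interest."""
    glcm = graycomatrix(roi_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return {p: float(graycoprops(glcm, p).mean()) for p in props}
```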
Assessment of shrimp farming impact on groundwater quality using analytical hierarchy process
NASA Astrophysics Data System (ADS)
Anggie, Bernadietta; Subiyanto, Arief, Ulfah Mediaty; Djuniadi
2018-03-01
The growth of shrimp farming affects groundwater quality conditions. Conventional assessment of the impact of shrimp farming on groundwater quality has limited accuracy. This paper presents the implementation of the Analytical Hierarchy Process (AHP) method for assessing the impact of shrimp farming on groundwater quality. The data used are shrimp-farming impact data from one of the regions in Indonesia, covering 2006-2016. Eight criteria, divided into 49 sub-criteria, were used in this study. Weighting by AHP was performed to determine the importance levels of the criteria and sub-criteria. The final priority class of shrimp-farming impact was obtained from the calculated criterion and sub-criterion weights. Validation was done by comparing the priority class of shrimp-farming impact with water quality conditions. The results show that 50% of the total area was in the moderate-priority class, 37% in the low-priority class and 13% in the high-priority class. The validation results show that the impact assessment for shrimp farming agrees closely with the groundwater quality conditions. This study shows that an AHP-based assessment has higher accuracy for shrimp-farming impact and can be used as a basis for fisheries planning to deal with the impacts that have been generated.
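The AHP weighting step can be sketched with the standard principal-eigenvector method (the paper's pairwise comparison matrices are not given; the example matrix is illustrative and the random-index table is Saaty's):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise matrix."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = vals.real.argmax()                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                           # normalized priority vector
    ci = (vals.real[k] - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}.get(n, 1.45)
    return w, ci / ri                      # accept if ratio < 0.1

# Example: three criteria compared on Saaty's 1-9 scale.
w, cr = ahp_weights([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
```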
The Mira–Titan Universe: Precision Predictions for Dark Energy Surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
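As a toy illustration of emulation (not the Mira–Titan emulator itself), one can fit a Gaussian-process surrogate to a scalar observable sampled at a small number of cosmologies and then predict it, with uncertainty, anywhere in the parameter space; everything below is a stand-in:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=(26, 8))   # 26 sampled "cosmologies"
y = np.sin(theta @ rng.uniform(1.0, 3.0, 8))  # stand-in simulation output

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=0.3),
    normalize_y=True,
).fit(theta, y)

# Predict the observable, with uncertainty, at new parameter points.
y_new, y_std = gp.predict(rng.uniform(0.0, 1.0, size=(5, 8)), return_std=True)
```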
NASA Astrophysics Data System (ADS)
Becerra, Luis Omar
2009-01-01
This SIM comparison on the calibration of high-accuracy hydrometers was carried out among fourteen laboratories in the density range from 600 kg/m3 to 1300 kg/m3 in order to evaluate the degree of equivalence among participant laboratories. This key comparison anticipates the planned key comparison CCM.D-K4 and is intended to be linked with CCM.D-K4 when results are available. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
Monitoring techniques for high accuracy interference fit assembly processes
NASA Astrophysics Data System (ADS)
Liuti, A.; Vedugo, F. Rodriguez; Paone, N.; Ungaro, C.
2016-06-01
In the automotive industry, there are many assembly processes that require a high geometric accuracy, in the micrometer range; generally, open-loop controllers cannot meet these requirements. This results in an increased defect rate and high production costs. This paper presents an experimental study of an interference-fit process, aimed at evaluating the aspects that have the most impact on the uncertainty of the final positioning. The press-fitting process considered consists of a press machine operating with a piezoelectric actuator to press a plug into a sleeve. The plug and sleeve are designed and machined to obtain a known interference fit. Differential displacement and velocity measurements of the plug with respect to the sleeve are made by a fiber-optic differential laser Doppler vibrometer. Different driving signals for the piezo actuator provide insight into the differences between a linear and a pulsating press action. The paper highlights how the press-fit assembly process is characterized by two main phases: the first is an elastic deformation of the plug and sleeve, which produces a reversible displacement; the second is a sliding of the plug with respect to the sleeve, which results in an irreversible displacement and finally realizes the assembly. The simultaneous measurement of displacement and force has made it possible to define characteristic features in the signal that identify the start of the irreversible movement. These indicators could be used to develop a control logic for a press assembly process.
Deconvolution single shot multibox detector for supermarket commodity detection and classification
NASA Astrophysics Data System (ADS)
Li, Dejian; Li, Jian; Nie, Binling; Sun, Shouqian
2017-07-01
This paper proposes an image detection model to detect and classify commodities on supermarket shelves. Based on the principle that feature quality directly affects the accuracy of the final classification, feature maps are constructed to combine high-level features with low-level features. Fixed anchors are then set on those feature maps, and finally the label and position of each commodity are generated by box regression and classification. In this work, we propose a model named Deconvolution Single Shot MultiBox Detector and evaluate it using 300 images photographed from real supermarket shelves. Following the same protocol as other recent methods, the results show that our model outperforms the baseline methods.
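A hedged sketch of the deconvolution-based feature fusion described above, in PyTorch (module and layer choices are our assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class DeconvFusion(nn.Module):
    """Upsample a deep (coarse) feature map with a transposed convolution
    and merge it with a shallow (fine) one before anchor-based detection."""

    def __init__(self, deep_ch, shallow_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(deep_ch, out_ch, kernel_size=2, stride=2)
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, deep, shallow):
        # Assumes the deep map has exactly half the spatial resolution.
        return self.relu(self.up(deep) + self.lateral(shallow))

fused = DeconvFusion(512, 256, 256)(torch.randn(1, 512, 10, 10),
                                    torch.randn(1, 256, 20, 20))
```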
A New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With the improvement in the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, dense point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point-cloud segmentation method based on the vectors of neighboring points and a surface-fitting method based on moving least squares is proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. First, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Second, the registered point cloud was resampled and segmented based on the vectors of neighboring points to select suitable points. Third, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. A series of parallel sections obtained from the temporal series of fitted tunnel surfaces was then compared to analyse the deformation. Finally, the results of the approach in the z direction were compared with those of a fiber-optic displacement sensor, and the results in the x and y directions were compared with TS; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. Therefore, the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
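The moving-least-squares fitting step might be sketched as follows for a locally planar patch (a minimal stand-in; the paper's weight function and basis are not specified):

```python
import numpy as np

def mls_height(points, xy, radius=0.5):
    """Moving-least-squares height estimate at location xy.

    Fits a local plane z = a + b*x + c*y to nearby points with a
    Gaussian distance weight.  points: (n, 3) array of x, y, z.
    """
    d2 = ((points[:, :2] - xy) ** 2).sum(axis=1)
    w = np.exp(-d2 / radius**2)                       # distance weights
    A = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
    coeff, *_ = np.linalg.lstsq(A * w[:, None], points[:, 2] * w, rcond=None)
    return coeff[0] + coeff[1] * xy[0] + coeff[2] * xy[1]
```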
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
Multi-wavelength approach towards on-product overlay accuracy and robustness
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John
2018-03-01
The success of the diffraction-based overlay (DBO) technique [1,4,5] in the industry is due not just to its good precision and low tool-induced shift, but also to the measurement accuracy [2] and robustness that DBO can provide. Significant effort has been put into capitalizing on the potential DBO has to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) for how to use these wavelengths are highly important for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All of these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method to make overlay measurements immune to process stack variations and also to report health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported both by measurement data and by simulation of many product stacks.
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high-penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
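A minimal sketch of an SoC forecasting model of the kind being optimized, assuming a simple energy-balance formulation with a round-trip efficiency parameter (not the paper's exact model formulation):

```python
import numpy as np

def forecast_soc(soc0, power_kw, dt_h, capacity_kwh, eta_rt=0.90):
    """Energy-balance SoC forecast over a charge/discharge schedule.

    power_kw > 0 discharges, < 0 charges; a one-way efficiency of
    sqrt(eta_rt) is applied on each leg.
    """
    eta = np.sqrt(eta_rt)
    soc = [float(soc0)]
    for p in power_kw:
        e = p * dt_h                               # energy moved this step
        delta = e / eta if e > 0 else e * eta      # losses on either leg
        soc.append(float(np.clip(soc[-1] - delta / capacity_kwh, 0.0, 1.0)))
    return np.array(soc)
```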
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms
He, Li; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering place high demands on the computing power of the hardware platform. Parallel computing is a common solution to meet this demand, and the General Purpose Graphics Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering, such as evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity; additionally, it analyzes the upper and lower bounds of different-to-same mis-affiliation, where fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity, where smaller work-depth means superior parallelism. Through the proofs, we conclude that the accuracy of an incremental clustering algorithm is negatively related to evolving granularity, while parallelism is positively related to the granularity; these contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm, and the experimental results verified the theoretical conclusions. PMID:29123546
Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.
Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang
2017-01-14
In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm process of the AIKF and its recurrence formulas are presented. The performance and computational cost of the AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with the KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and a shorter convergence time, especially for the biases of the gyroscope and the accelerometer, and can meet the accuracy and rapidity requirements of transfer alignment.
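For orientation, a generic linear Kalman filter step is sketched below; the AIKF of the paper additionally processes measurements incrementally and adapts the noise covariances, which is not shown here:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                        # state prediction
    P = F @ P @ F.T + Q              # covariance prediction
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```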
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
This article proposes a multispectral image compression scheme using a nonlinear spectral transform for better colorimetric and spectral reproducibility. We show that colorimetric error under a defined viewing illuminant can be reduced and that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve spectral accuracy and has a generalized effect of improving colorimetric accuracy under viewing illuminants other than the defined one. Finally, we discuss the use of a first-order Markov model to form the analysis vectors for the higher-order channels in Labplus to reduce computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
Mining HIV protease cleavage data using genetic programming with a sum-product function.
Yang, Zheng Rong; Dalby, Andrew R; Qiu, Jing
2004-12-12
In order to design effective HIV inhibitors, studying and understanding the mechanism of HIV protease cleavage specificity is critical. Various methods have been developed to explore the specificity of HIV protease cleavage activity. However, succeeding at both extracting discriminant rules and maintaining high prediction accuracy is still challenging. An earlier study employed genetic programming with a min-max scoring function to extract discriminant rules with success; however, the decision ultimately degenerates to a single residue, making further improvement of the prediction accuracy difficult. The challenge of revising the min-max scoring function so as to improve the prediction accuracy motivated this study. This paper designs a new scoring function, called a sum-product function, for extracting HIV protease cleavage discriminant rules using genetic programming methods. The experiments show that the new scoring function is superior to the min-max scoring function. The software package can be obtained by request to Dr Zheng Rong Yang.
NASA Astrophysics Data System (ADS)
Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O.
2017-05-01
This article presents a coupled system consisting of a single-frequency GPS receiver and a light photogrammetric-quality camera embedded in an Unmanned Aerial Vehicle (UAV). The aim is to produce high-quality data that can be used in metrology applications. The issue of Integrated Sensor Orientation (ISO) of camera poses using only GPS measurements is presented and discussed. The accuracy reached by our system, based on sensors developed at the French Mapping Agency (IGN) Opto-Electronics, Instrumentation and Metrology Laboratory (LOEMI), is qualified. These sensors are specially designed for close-range aerial image acquisition with a UAV. Lever-arm calibration and time synchronization are explained and performed to reach maximum accuracy. All processing steps are detailed from data acquisition to quality control of the final products. We show that an accuracy of a few centimeters can be reached with this system, which couples a low-cost UAV and GPS module with the IGN-LOEMI in-house camera.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique.
Implementation of a close range photogrammetric system for 3D reconstruction of a scoliotic torso
NASA Astrophysics Data System (ADS)
Detchev, Ivan Denislavov
Scoliosis is a deformity of the human spine most commonly encountered in children. After detection, its progression is traditionally monitored through periodic X-ray examinations. However, due to the increased risk of cancer, a non-invasive and radiation-free methodology for scoliosis detection and progression monitoring is needed. Quantifying the scoliotic deformity through the torso surface is a valid alternative because of its high correlation with the internal spine curvature. This work proposes a low-cost multi-camera photogrammetric system for semi-automated 3D reconstruction of a torso surface with sub-millimetre accuracy. The thesis describes the system design and calibration for optimal accuracy. It also covers the methodology behind the reconstruction and registration procedures. The experimental results include the complete reconstruction of a scoliotic torso mannequin. The final accuracy is evaluated through the goodness of fit between the reconstructed surface and a more accurate set of points measured by a coordinate measuring machine.
Recommendation in evolving online networks
NASA Astrophysics Data System (ADS)
Hu, Xiao; Zeng, An; Shang, Ming-Sheng
2016-02-01
Recommender systems are an effective tool for finding the most relevant information for online users. By analyzing the historical selection records of users, a recommender system predicts the most likely future links in the user-item network and accordingly constructs a personalized recommendation list for each user. So far, the recommendation process has mostly been investigated in static user-item networks. In this paper, we propose a model which allows us to examine the performance of state-of-the-art recommendation algorithms in evolving networks. We find that the recommendation accuracy in general decreases with time if the evolution of the online network fully depends on the recommendation. Interestingly, some randomness in users' choices can significantly improve the long-term accuracy of the recommendation algorithm. When a hybrid recommendation algorithm is applied, we find that the optimal parameter gradually shifts towards the diversity-favoring recommendation algorithm, indicating that recommendation diversity is essential to maintaining high long-term recommendation accuracy. Finally, we confirm our conclusions by studying recommendation on networks with real evolution data.
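The hybrid family the abstract appears to refer to is the well-known mass-diffusion/heat-conduction (ProbS/HeatS) hybrid with a single tunable parameter; a compact sketch under that assumption:

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """ProbS/HeatS hybrid recommender on a user-item 0/1 matrix A.

    A has shape (n_users, n_items); lam = 1 recovers mass diffusion
    (ProbS, accuracy-favoring), lam = 0 heat conduction (HeatS,
    diversity-favoring). Zero-degree users/items are not guarded here.
    """
    k_user = A.sum(axis=1)                     # user degrees
    k_item = A.sum(axis=0)                     # item degrees
    core = (A / k_user[:, None]).T @ A         # sum_a A_ai A_aj / k_a
    W = core / np.outer(k_item ** (1.0 - lam), k_item ** lam)
    scores = (W @ A.T).T                       # users x items
    scores[A > 0] = -np.inf                    # mask already-collected items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(hybrid_scores(A, lam=0.8))
```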
Development of an integrated sub-picometric SWIFTS-based wavelength meter
NASA Astrophysics Data System (ADS)
Duchemin, Céline; Thomas, Fabrice; Martin, Bruno; Morino, Eric; Puget, Renaud; Oliveres, Robin; Bonneville, Christophe; Gonthiez, Thierry; Valognes, Nicolas
2017-02-01
SWIFTS technology has been known for over five years as a basis for compact, high-resolution laser spectrum analyzers. Growing demand for wavelength monitoring with ever better accuracy and resolution has driven the development of a SWIFTS-based wavelength meter, named LW-10. As a reminder, the SWIFTS principle consists of a waveguide in which a stationary wave is created, sampled, and read out by a linear image sensor array. Because of its inherent non-uniform subsampling and the resulting aliasing (per the Shannon-Nyquist criterion), the system offers only short spectral window bandwidths and therefore requires a priori knowledge of the working wavelength as well as thermal monitoring. Although SWIFTS-based devices are barely sensitive to atmospheric pressure, temperature control is a key factor in achieving both high accuracy and high resolution. Temperature control evolved from passive (temperature probing only) to active control (Peltier thermoelectric cooler) with milli-degree accuracy. On the software side, the Fourier-like transform is dropped in favor of a least-squares method applied directly to the interference pattern. Moreover, accounting for the system's chromatic behavior provides a "signature" for automated wavelength detection and discrimination. This new SWIFTS-based device, the LW-10, shows outstanding results in terms of absolute accuracy, resolution, and calibration robustness within a compact device, compared to other existing technologies. Over the 630-1100 nm range, the final device configuration allows monitoring of pulsed or CW lasers with 20 MHz resolution and 200 MHz absolute accuracy. Non-exhaustive applications include tunable laser control and frequency-locking experiments.
Low-Cost 3-D Flow Estimation of Blood With Clutter.
Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali
2017-05-01
Volumetric flow rate estimation is an important ultrasound medical imaging modality used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme that combines the low-complexity sum-of-absolute-differences and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and by replacing singular value decomposition with the less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter at beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
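Power iteration with re-orthonormalization (subspace iteration when several vectors are needed) suffices to estimate the dominant clutter subspace without a full SVD; a sketch of the idea, with the paper's subgroup processing omitted:

```python
import numpy as np

def dominant_subspace(X, n_vectors=1, n_iter=50, seed=0):
    """Dominant eigenvectors of X X^H via power/subspace iteration.

    X holds slow-time samples (channels x ensemble length); the strongest
    eigen-components correspond to near-stationary clutter. This stands in
    for a full SVD; the paper additionally processes in subgroups.
    """
    rng = np.random.default_rng(seed)
    C = X @ X.conj().T
    Q = rng.standard_normal((X.shape[0], n_vectors))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(C @ Q)             # re-orthonormalize each pass
    return Q

def remove_clutter(X, n_vectors=1):
    """Project the estimated clutter subspace out of the data."""
    Q = dominant_subspace(X, n_vectors)
    return X - Q @ (Q.conj().T @ X)
```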
Ramieri, Maria Teresa; Marandino, Ferdinando; Visca, Paolo; Salvitti, Tommaso; Gallo, Enzo; Casini, Beatrice; Giordano, Francesca Romana; Frigieri, Claudia; Caterino, Mauro; Carlini, Sandro; Rinaldi, Massimo; Ceribelli, Anna; Pennetti, Annarita; Alò, Pier Luigi; Marino, Mirella; Pescarmona, Edoardo; Filippetti, Massimo
2016-08-01
Conventional transbronchial needle aspiration (c-TBNA) has improved the bronchoscopic examination, allowing the sampling of lesions located even outside the tracheo-bronchial tree and in the hilo-mediastinal district, for both diagnostic and staging purposes. We evaluated the sensitivity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) of c-TBNA performed during the 2005-2015 period for suspected lung neoplasia and/or hilar and mediastinal lymphadenopathy at the Thoracic Endoscopy unit of the Thoracic Surgery Department of the Regina Elena National Cancer Institute, Rome. Data from 273 consecutive patients (205 males and 68 females) were analyzed. Among 158 (58%) adequate specimens, 112 (41%) were neoplastic or contained atypical cells, and 46 (17%) were negative or not diagnostic. We analyzed first the overall period; we then compared the findings of the first (2005-2011) and second (2012-2015) periods and, finally, those of adequate specimens only. Over the overall period, sensitivity and accuracy were 53% and 63%, respectively; in the first period they reached 41% and 53%, respectively; in the second period sensitivity and accuracy reached 60% and 68%. Considering only the adequate specimens, sensitivity and accuracy over the overall period were 80% and 82%, respectively; the values obtained for the first period were 68% and 72%. Finally, in the second period, sensitivity reached 86% and accuracy 89%. Carcinoma subtyping was possible in 112 cases, with adenocarcinoma diagnosed in 50 cases; further, in 30 cases molecular predictive data could be obtained. c-TBNA proved to be an efficient method for the diagnosis/staging of lung neoplasms and for the diagnosis of mediastinal lymphadenopathy. The endoscopist's skill and technical development, together with thin-prep cytology and rapid on-site examination (ROSE), enabled c-TBNA to provide a high diagnostic yield and molecular predictive data in advanced lung carcinomas.
Laufer, Shlomi; D'Angelo, Anne-Lise D; Kwan, Calvin; Ray, Rebbeca D; Yudkowsky, Rachel; Boulet, John R; McGaghie, William C; Pugh, Carla M
2017-12-01
The aim was to develop new performance evaluation standards for the clinical breast examination (CBE). There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was substantial (k = 0.79). Rubbing movement was 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) than vertical movement or piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). Our results support the measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills. The newly described piano-fingers palpation technique was noted to have an unusually high failure rate. Medical educators should be aware of potential differences in effectiveness among CBE techniques.
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium- to small-resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as from the SPOT 5 and IKONOS satellites, has helped Arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price: high-resolution imagery is very expensive and can add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolution and can be an acceptable compromise in the resolution-versus-cost trade-off. Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of Arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies. This research found that while the Hyperion sensor produced classification accuracies equivalent to the TM and ETM+ sensors (approximately 78%), it could not reach the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, uninterrupted monitoring of remote locations such as the Arctic is in jeopardy. Utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished, and keeping the Hyperion sensor as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage.
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong
2015-01-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate, slightly better than SATé trees, with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288
DOT National Transportation Integrated Search
2009-01-01
In 1992, Pickrell published a seminal piece examining the accuracy of ridership forecasts and capital cost estimates for fixed-guideway transit systems in the US. His research created heated discussions in the transit industry regarding the ability o...
Accuracy and repeatability of positioning of a high-performance lathe for non-circular turning
NASA Astrophysics Data System (ADS)
Majda, Paweł; Powałka, Bartosz
2017-11-01
This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear-cutting hob. The paper presents and discusses the interpretation of the results and the effects of calibrating the positioning errors in the lathe's numerical control system. Finally, it shows the geometric characteristics of a rope thread turned at various spindle speeds, before and after correction of the positioning error of the Xs axis.
NASA Astrophysics Data System (ADS)
Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe
1999-07-01
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods, with the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite or aircraft platforms. To achieve the most demanding horizontal accuracy requirement stated in ICAO Annex 14, for runway centerlines (0.50 meters), at present only images acquired from aircraft-based sensors can be used as source data. Still, ground reference by GCPs (Ground Control Points) is obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process and used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished with independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart/Germany airport. The results proved that the final accuracy was within the accuracy specification defined by ICAO Annex 14.
Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.
Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin
2015-08-21
To address the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. The DE and SPN are employed to describe light propagation in these two kinds of tissues, respectively, and are finally coupled using an established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE while avoiding their disadvantages, so that it provides a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results reveal that it achieves accuracy comparable to the SPN model with much less computation time, as well as much better accuracy than the DE one.
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
The smoothed particle hydrodynamics (SPH) method with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution, which can markedly improve local accuracy and overall computational efficiency, has seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on an error analysis, the splitting technique is designed to allow optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated with five basic cases, which demonstrate that the present SPH model with the particle splitting technique achieves high accuracy and efficiency and is capable of simulating a wide range of hydrodynamic problems.
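A minimal 2-D version of such a splitting step might look as follows; the daughter placement and the smoothing-length ratio are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

def split_particle(pos, vel, mass, h, alpha=0.85, eps=0.5):
    """Split one mother SPH particle into four daughters (2-D).

    Daughters inherit the mother's velocity and share its mass equally,
    so total mass, momentum and kinetic energy are conserved. The
    placement radius eps*h and smoothing-length ratio alpha are
    illustrative values, not the paper's optimized ones.
    """
    offsets = eps * h / np.sqrt(2.0) * np.array(
        [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    return [{"pos": pos + d,
             "vel": vel.copy(),     # same velocity -> momentum conserved
             "mass": mass / 4.0,    # equal share   -> mass conserved
             "h": alpha * h}        # refined smoothing length
            for d in offsets]

daughters = split_particle(np.zeros(2), np.array([1.0, 0.0]), mass=4.0, h=0.1)
```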
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
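A loose sketch of this training flow using common Python libraries (PyWavelets, scikit-learn); the feature summaries, model sizes, and the way the supervector is formed here are simplifications of the paper's GMM-UBM procedure:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def wavelet_features(trials, wavelet="db4", level=4):
    """Summarize wavelet sub-band coefficients per channel per trial.

    trials: (n_trials, n_channels, n_samples). Using db4 and mean/std
    summaries is an assumption; the paper linearly combines
    PCA-reduced wavelet feature vectors.
    """
    feats = []
    for trial in trials:
        f = []
        for ch in trial:
            for c in pywt.wavedec(ch, wavelet, level=level):
                f.extend([c.mean(), c.std()])
        feats.append(f)
    return np.asarray(feats)

rng = np.random.default_rng(0)                  # synthetic stand-in for EEG
X = wavelet_features(rng.standard_normal((40, 8, 256)))
y = rng.integers(0, 2, size=40)
X = PCA(n_components=10).fit_transform(X)
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)  # UBM stand-in
supervector = gmm.means_.ravel()                # stacked component means
clf = SVC(kernel="linear").fit(X, y)            # final classifier
```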
Status of Core Products of the International GNSS Service
NASA Astrophysics Data System (ADS)
Choi, K. K.
2014-12-01
The International GNSS Service (IGS) provides high-accuracy GNSS orbits, clocks and Earth Rotation Parameters (ERP) at three different latencies. The quality of the IGS core products has been monitored regularly since 2010, and the level of accuracy has not changed noticeably. The Final and Rapid orbit accuracies are about 2.5 cm, and the near-real-time (observed) Ultra-rapid orbit is accurate to about 3 cm. The real-time orbits obtained by propagating the Ultra-rapid orbits show an accuracy of about 5 cm. The most significant error source for the real-time orbit is the sub-daily variation of the Earth orientation. The number of IGS08 core sites has been decreasing at a rate of about 0.13 stations per week due to equipment changes and natural disasters such as earthquakes. This paper summarizes the quality state of the IGS core products for 2014, as well as the upcoming new official IGV product, a GLONASS Ultra-rapid orbit product that has been experimental for the last 4 years. Eight IGS Analysis Centers (ACs) have completed their efforts to participate in the second reprocessing campaign (repro2). Based on their input, this paper summarizes the models and methodologies each AC adopted for this campaign.
Wu, Guosheng; Robertson, Daniel H; Brooks, Charles L; Vieth, Michal
2003-10-01
The influence of various factors on the accuracy of protein-ligand docking is examined. The factors investigated include the role of a grid representation of protein-ligand interactions, the initial ligand conformation and orientation, the sampling rate of the energy hyper-surface, and the final minimization. A representative docking method is used to study these factors, namely CDOCKER, a molecular dynamics (MD) simulated-annealing-based algorithm. A major emphasis in these studies is to compare the relative performance and accuracy of various grid-based approximations to explicit all-atom force field calculations. In these docking studies, the protein is kept rigid while the ligands are treated as fully flexible, and a final minimization step is used to refine the docked poses. A docking success rate of 74% is observed when an explicit all-atom representation of the protein (full force field) is used, while accuracies of 66-76% are observed for grid-based methods. All docking experiments considered a 41-member protein-ligand validation set. A significant improvement in accuracy (76 vs. 66%) for grid-based docking is achieved if the explicit all-atom force field is used in a final minimization step to refine the docked poses. Statistical analysis shows that even lower-accuracy grid-based energy representations can be effectively used when followed by full force field minimization. The results of these grid-based protocols are statistically indistinguishable from the detailed atomic dockings and provide up to a sixfold reduction in computation time. For the test case examined here, improving the docking accuracy did not necessarily enhance the ability to estimate binding affinities using the docked structures. Copyright 2003 Wiley Periodicals, Inc.
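Grid-based docking replaces per-pair energy evaluation with a lookup into precomputed potential grids; trilinear interpolation, sketched below, is the usual scheme (CDOCKER's exact variant may differ):

```python
import numpy as np

def trilinear_energy(grid, origin, spacing, xyz):
    """Interpolate an interaction energy from a precomputed 3-D grid.

    grid[i, j, k] holds the potential at origin + (i, j, k) * spacing;
    xyz is assumed to lie inside the grid.
    """
    u = (np.asarray(xyz, dtype=float) - origin) / spacing
    i0 = np.floor(u).astype(int)
    f = u - i0                                   # in-cell fractions
    e = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((f[0] if di else 1 - f[0]) *
                     (f[1] if dj else 1 - f[1]) *
                     (f[2] if dk else 1 - f[2]))
                e += w * grid[i0[0] + di, i0[1] + dj, i0[2] + dk]
    return e

grid = np.random.default_rng(0).standard_normal((16, 16, 16))
print(trilinear_energy(grid, np.zeros(3), 0.5, [3.2, 1.7, 4.9]))
```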
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
With the rapid development of sensor technology, high-spatial-resolution imagery and airborne Lidar point clouds can now be captured, making classification, extraction, evaluation and analysis of a broad range of object features available. High-resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification. Refinement of object classification is therefore made possible for urban land cover. This paper presents an approach to object-based image analysis (OBIA) combining high-spatial-resolution imagery and airborne Lidar point clouds. The workflow for urban land cover is designed with four components. First, a colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Second, image objects are created via multi-resolution image segmentation integrating the scale parameter and the colour and shape properties with a compactness criterion, so that the image can be subdivided into separate object regions. Third, image objects are classified on the basis of the segmentation and a rule set derived from a knowledge decision tree, into six classes: water bodies, low vegetation/grass, trees, low buildings, high buildings and roads. Finally, to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points on the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area is the Vaihingen/Enz test site, and the test datasets come from the ISPRS WG III/4 benchmark test project. The classification results show high overall accuracy for most types of urban land cover: overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high-resolution imagery and Lidar ancillary data for classification of urban land cover.
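The overall accuracy and Kappa coefficient follow directly from the confusion matrix; a small sketch with the standard formulas and a toy matrix:

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix.

    Rows are reference classes, columns are mapped classes; these are
    the standard formulas of thematic-map accuracy assessment.
    """
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                        # observed agreement
    pe = (confusion.sum(0) @ confusion.sum(1)) / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Toy 3-class matrix; the paper reports 89.5% overall accuracy and
# Kappa = 0.865 over six urban classes.
cm = [[50, 2, 1],
      [3, 40, 2],
      [1, 1, 45]]
acc, kappa = overall_accuracy_and_kappa(cm)
```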
An Application of the Quadrature-Free Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Lockard, David P.; Atkins, Harold L.
2000-01-01
The process of generating a block-structured mesh with the smoothness required for high-accuracy schemes is still a time-consuming process often measured in weeks or months. Unstructured grids about complex geometries are more easily generated, and for this reason methods using unstructured grids have gained favor for aerodynamic analyses. The discontinuous Galerkin (DG) method is a compact finite-element projection method that provides a practical framework for the development of a high-order method using unstructured grids. Higher-order accuracy is obtained by representing the solution as a high-degree polynomial whose time evolution is governed by a local Galerkin projection. The traditional implementation of the discontinuous Galerkin method uses quadrature for the evaluation of the integral projections and is prohibitively expensive. Atkins and Shu introduced the quadrature-free formulation, in which the integrals are evaluated a priori and exactly for a similarity element. The approach has been demonstrated to possess the accuracy required for acoustics even in cases where the grid is not smooth. Other issues such as boundary conditions and the treatment of non-linear fluxes have also been studied in earlier work. This paper describes the application of the quadrature-free discontinuous Galerkin method to a two-dimensional shear layer problem. First, a brief description of the method is given. Next, the problem is described and the solution is presented. Finally, the resources required to perform the calculations are given.
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image correlation and digital volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and X-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve highly accurate, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, so originality is not claimed; rather, this paper aims to give a comprehensive overview and further insights into our work on thermal error analysis and compensation for DIC/DVC measurements.
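The reference sample compensation idea can be sketched in a few lines: a nominally motionless reference imaged alongside the test object sees only the thermal drift, which can then be subtracted. Fitting only a mean shift here is a simplification; the paper's error model also includes an expansion term.

```python
import numpy as np

def compensate_thermal_drift(u_sample, u_reference):
    """Subtract the drift seen by a stationary reference sample.

    u_sample and u_reference are displacement fields measured by DIC on
    the test object and on a nominally motionless reference in the same
    field of view. Removing only the mean shift is a simplification of
    the paper's approach, whose error model also includes expansion.
    """
    return u_sample - u_reference.mean()

rng = np.random.default_rng(1)
true_u = np.full(100, 0.20)                    # true displacement (pixels)
drift = 0.05                                   # common thermal drift
u_s = true_u + drift + 0.01 * rng.standard_normal(100)
u_r = drift + 0.01 * rng.standard_normal(100)
corrected = compensate_thermal_drift(u_s, u_r) # mean back near 0.20
```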
Improving Speaking Accuracy through Awareness
ERIC Educational Resources Information Center
Dormer, Jan Edwards
2013-01-01
Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…
A new polishing process for large-aperture and high-precision aspheric surface
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Dai, Yifan; Song, Ci
2013-07-01
High-precision aspheric surfaces are hard to achieve because of the mid-spatial frequency (MSF) error introduced in the finishing step. The influence of the MSF error is studied through simulations and experiments. In this paper, a new polishing process based on magnetorheological finishing (MRF), smooth polishing (SP) and ion beam figuring (IBF) is proposed, and a 400 mm aperture parabolic surface is polished with it. SP is applied after rough machining to control the MSF error. In the middle finishing step, most of the low-spatial frequency error is removed rapidly by MRF and the MSF error is then restricted by SP; finally, IBF is used to finish the surface. The surface accuracy is improved from an initial 37.691 nm (rms, 95% aperture) to a final 4.195 nm. The results show that the new polishing process is effective for manufacturing large-aperture, high-precision aspheric surfaces.
Lidestam, Björn; Rönnberg, Jerker
2016-01-01
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
Assessing participation in community-based physical activity programs in Brazil.
Reis, Rodrigo S; Yan, Yan; Parra, Diana C; Brownson, Ross C
2014-01-01
This study aimed to develop and validate a risk prediction model to examine the characteristics associated with participation in community-based physical activity programs in Brazil. We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as the outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14-4.71), having less than a high school degree (OR = 1.71, 95% CI = 1.16-2.53), reporting good health (OR = 1.58, 95% CI = 1.02-2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05-2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26-2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18-2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil.
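A schematic of this validation recipe with scikit-learn, on synthetic stand-in data (the real survey variables and the exact bootstrap-optimism procedure are in the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.utils import resample

# Synthetic stand-in for the survey: five binary predictors (sex,
# education, self-rated health, comorbidity, perceived safety) and
# program participation as the outcome.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 5)).astype(float)
y = (rng.random(1000) < 0.05 + 0.08 * X[:, 0]).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("C statistic:", roc_auc_score(y, p))       # discrimination
print("Brier score:", brier_score_loss(y, p))    # calibration

# Simplified bootstrap check of the C statistic (the paper's procedure
# corrects for optimism; here each refit is simply scored on the data).
aucs = [roc_auc_score(y, LogisticRegression()
                      .fit(*resample(X, y, random_state=b))
                      .predict_proba(X)[:, 1]) for b in range(200)]
print("bootstrap C statistic:", np.mean(aucs))
```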
The design of a high-precision temperature control system for an InGaAs short-wave infrared detector
NASA Astrophysics Data System (ADS)
Wang, Zheng-yun; Hu, Yadong; Ni, Chen; Huang, Lin; Zhang, Aiwen; Sun, Xiao-bing; Hong, Jin
2018-02-01
The InGaAs short-wave infrared detector is a temperature-sensitive device. Accurate temperature control can effectively reduce the background signal and improve the detection accuracy, detection sensitivity, and SNR of the detection system. First, the relationship between temperature and both the detection background and the noise-equivalent power (NEP) is analyzed, and the principle of the thermoelectric cooler (TEC) and the formula relating cooling power, cooling current and the hot-cold interface temperature difference are introduced. Then, a high-precision constant-current drive circuit based on a triode voltage-controlled current source is presented, together with an incremental algorithm model based on deviation-tracking compensation and PID control, which effectively suppresses temperature overshoot, overcomes thermal inertia, and is strongly robust. Finally, the detector and temperature control system are tested. The results show that the lower the detector temperature, the smaller the temperature fluctuation and the higher the detection accuracy and sensitivity. The temperature control system achieves high-precision temperature control with a cooling rate of 7-8 °C/min and a temperature fluctuation better than ±0.04 °C.
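The incremental (velocity-form) PID recursion outputs a change in drive current rather than an absolute value, which makes clamping the TEC current step natural; a sketch, with the paper's deviation-tracking compensation omitted:

```python
class IncrementalPID:
    """Incremental (velocity-form) PID controller.

    Outputs a control increment rather than an absolute value:
        du_k = Kp*(e_k - e_km1) + Ki*e_k + Kd*(e_k - 2*e_km1 + e_km2)
    The paper augments this with deviation-tracking compensation; only
    the plain velocity form is shown here, with illustrative gains.
    """

    def __init__(self, kp, ki, kd, du_limit=None):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.du_limit = du_limit               # clamp on the current step
        self.e_km1 = 0.0
        self.e_km2 = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        du = (self.kp * (e - self.e_km1) + self.ki * e
              + self.kd * (e - 2.0 * self.e_km1 + self.e_km2))
        if self.du_limit is not None:
            du = max(-self.du_limit, min(self.du_limit, du))
        self.e_km2, self.e_km1 = self.e_km1, e
        return du                              # increment to TEC current

pid = IncrementalPID(kp=2.0, ki=0.5, kd=0.1, du_limit=0.05)
current = 0.0
current += pid.step(setpoint=-10.0, measurement=-8.7)
```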
NASA Astrophysics Data System (ADS)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.
2018-03-01
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
Ferry, Barbara; Gifu, Elena-Patricia; Sandu, Ioana; Denoroy, Luc; Parrot, Sandrine
2014-03-01
Electrochemical methods are very often used to detect catecholamine and indolamine neurotransmitters separated by conventional reverse-phase high performance liquid chromatography (HPLC). The present paper describes the development of a chromatographic method to detect monoamines present in low-volume brain dialysis samples using a capillary column filled with sub-2 μm particles. Several parameters (repeatability, linearity, accuracy, limit of detection) of this new ultrahigh performance liquid chromatography (UHPLC) method with electrochemical detection were examined after optimization of the analytical conditions. Noradrenaline, adrenaline, serotonin, dopamine and its metabolite 3-methoxytyramine were separated in a 1 μL injected sample volume; they were detected above concentrations of 0.5-1 nmol/L, with 2.1-9.5% accuracy and intra-assay repeatability equal to or less than 6%. The final method was applied to very low volume dialysates from rat brain containing monoamine traces. The study demonstrates that capillary UHPLC with electrochemical detection is suitable for monitoring dialysate monoamines collected at a high sampling rate. Copyright © 2014 Elsevier B.V. All rights reserved.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSMs) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is essential to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SfM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e. nadir view versus ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. The airphoto datasets were processed with an SfM algorithm and the resulting point clouds were georeferenced. The surface representations were then compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared with those obtained by the smoothed particle hydrodynamics method and by the finite difference method on a regular grid. Finally, the strong forms of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions are solved in two and three dimensions using the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.
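A plain Taylor-series least-squares Laplacian on scattered points, which the paper's renormalised operators refine, can be written compactly; on the quadratic test function below it should be reproduced exactly:

```python
import numpy as np

def ls_laplacian(points, values, i, neighbors):
    """Laplacian at point i from a 2-D Taylor least-squares fit.

    For each neighbor j: u_j - u_i ~ dx*ux + dy*uy + dx^2/2*uxx
    + dy^2/2*uyy + dx*dy*uxy. Solving in the least-squares sense and
    summing uxx + uyy gives a generic meshless Laplacian; the paper's
    renormalised operators refine this construction.
    """
    d = points[neighbors] - points[i]
    A = np.column_stack([d[:, 0], d[:, 1],
                         0.5 * d[:, 0] ** 2, 0.5 * d[:, 1] ** 2,
                         d[:, 0] * d[:, 1]])
    b = values[neighbors] - values[i]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[2] + coef[3]                   # uxx + uyy

# Check on u = x^2 + y^2, whose Laplacian is exactly 4.
rng = np.random.default_rng(0)
pts = rng.random((60, 2))
u = (pts ** 2).sum(axis=1)
nbrs = np.argsort(np.linalg.norm(pts - pts[0], axis=1))[1:13]
print(ls_laplacian(pts, u, 0, nbrs))           # ~4.0 (exact for quadratics)
```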
Schlösser, Magnus; Seitz, Hendrik; Rupp, Simone; Herwig, Philipp; Alecu, Catalin Gabriel; Sturm, Michael; Bornschein, Beate
2013-03-05
Highly accurate, in-line, and real-time composition measurements of gases are mandatory in many processing applications. The quantitative analysis of mixtures of hydrogen isotopologues (H2, D2, T2, HD, HT, and DT) is of high importance in such fields as DT fusion, neutrino mass measurements using tritium β-decay, and photonuclear experiments where HD targets are used. Raman spectroscopy is a favorable method for these tasks. In this publication we present a method for the in-line calibration of Raman systems for the non-radioactive hydrogen isotopologues. It is based on precise volumetric gas mixing of the homonuclear species H2/D2 and controlled catalytic production of the heteronuclear species HD. Systematic effects, such as spurious exchange reactions with wall materials, are carefully considered during the procedure. A detailed discussion of statistical and systematic uncertainties is presented, which finally yields a calibration accuracy of better than 0.4%.
High precision predictions for exclusive VH production at the LHC
Li, Ye; Liu, Xiaohui
2014-06-04
We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the tt̄ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H, which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.
Kobayashi, Tsuneo
2018-03-01
Diagnosis using a single specific tumor marker is difficult because the sensitivity of such detection is under 20%. Herein, a tumor marker combination assay, combining a growth-related tumor marker with associated tumor markers (Cancer, 73(7), 1994), was employed. This double-blind tumor marker combination assay (TMCA) showed 87.5% sensitivity but low specificity, ranging from 30% to 76%. To overcome this low specificity, we exploited complex markers, multivariate analysis, and serum fractionation by biochemical biopsy. Thus, in this study, a combination of new techniques was used to re-evaluate these serum samples. Three serum panels, containing 90, 120, and 97 samples, were obtained from the Mayo Clinic. The final results showed 80-90% sensitivity, 84-85% specificity, and 83-88% accuracy. We demonstrated a notable tumor marker combination assay with high accuracy. This TMCA should be applicable for primary cancer detection and recurrence prevention. © 2018 The Author. Cancer Medicine published by John Wiley & Sons Ltd.
"Sturdy as a house with four windows," the star tracker of the future
NASA Astrophysics Data System (ADS)
Duivenvoorde, Tom; Leijtens, Johan; van der Heide, Erik J.
2017-11-01
Ongoing miniaturization of spacecraft demands a reduction in the size of Attitude and Orbit Control Systems (AOCS). TNO has therefore created a new design for a multi-aperture, high-performance, miniaturized star tracker. The innovative design incorporates the latest developments in camera technology, attitude calculation and mechanical design into a system with 5 arc-second accuracy, making it usable for many applications. In this paper the results of the system design and analysis are presented, as well as the performance predictions for the Multi Aperture Baffled Star Tracker (MABS). The highly integrated system consists of multiple apertures without the need for external baffles, resulting in major advantages in mass, volume, alignment with the spacecraft and relative aperture stability. In the analysis part of this paper, the thermal and mechanical stability are discussed. In the final part, the simulation results that have led to the predicted accuracy of the star tracker system are described, and a peek into the future of attitude sensors is given.
Lin, You-Yu; Hsieh, Chia-Hung; Chen, Jiun-Hong; Lu, Xuemei; Kao, Jia-Horng; Chen, Pei-Jer; Chen, Ding-Shinn; Wang, Hurng-Yi
2017-04-26
The accuracy of metagenomic assembly is usually compromised by high levels of polymorphism, because divergent reads from the same genomic region are recognized as different loci when sequenced and assembled together. A viral quasispecies is a group of abundant and diversified, genetically related viruses found in a single carrier. Current mainstream assembly methods, such as Velvet and SOAPdenovo, were not originally intended for the assembly of such metagenomic data, so there is a demand for new methods that provide accurate and informative assembly results for metagenomic data. In this study, we present a hybrid method for assembling highly polymorphic data that combines the partial de novo-reference assembly (PDR) strategy with the BLAST-based assembly pipeline (BBAP). The PDR strategy generates in situ reference sequences through de novo assembly of a randomly extracted partial data set, which is subsequently used for reference assembly of the full data set. BBAP employs a greedy algorithm to assemble polymorphic reads. We used 12 hepatitis B virus quasispecies NGS data sets from a previous study to assess and compare the performance of PDR and BBAP. Analyses suggest that the high polymorphism of a full metagenomic data set leads to fragmented de novo assembly results, whereas the biased or limited representation of external reference sequences includes fewer reads in the assembly, with lower assembly accuracy and variation sensitivity. In comparison, the PDR-generated in situ reference sequence incorporated more reads into the final PDR assembly of the full metagenomic data set, with greater accuracy and higher variation sensitivity. BBAP assembly results also suggest higher assembly efficiency and accuracy compared with other assembly methods. Additionally, BBAP assembly recovered HBV structural variants that were not observed in the assembly results of other methods. Together, the PDR/BBAP assembly results were significantly better than those of the other compared methods. Both PDR and BBAP independently increased the assembly efficiency and accuracy of highly polymorphic data, and assembly performance was further improved when they were used together. BBAP also provides nucleotide frequency information. Together, PDR and BBAP provide powerful tools for metagenomic data studies.
Peatland classification of West Siberia based on Landsat imagery
NASA Astrophysics Data System (ADS)
Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.
2014-12-01
Increasing interest in peatlands for the prediction of environmental changes requires an understanding of their geographical distribution. The West Siberian Plain is the biggest peatland area in Eurasia and is situated at high latitudes that are experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented toward environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and the 5th channel was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB and then classified in Multispec (Purdue Research Foundation) using the maximum-likelihood algorithm of the supervised classification method. Training sample selection was mostly based on spectral signatures due to limited ancillary and high-resolution image data. As an additional source of information, we applied our field knowledge resulting from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. Overall accuracy assessment was based on 439 randomly assigned test sites and showed a final map accuracy of 80%. Total peatland area was estimated at 73.0 Mha. Various ridge-hollow and ridge-hollow-pool bog complexes prevail, occupying 34.5 Mha. They are followed by lakes (11.1 Mha), fens (10.7 Mha), pine-dwarf-shrub sphagnum bogs (9.3 Mha) and palsa complexes (7.4 Mha).
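A sketch of the scene-wise threshold masking step with NumPy; the threshold values are placeholders, since per the text they were tuned for each scene:

```python
import numpy as np

def peatland_water_masks(green, red, swir, grvi_thresh=0.0, swir_thresh=0.1):
    """Scene-wise threshold masks for Landsat reflectance bands.

    GRVI = (green - red) / (green + red) separates peatlands from other
    land cover; low SWIR (band 5) reflectance flags water. Both
    thresholds are placeholders and were tuned per scene in the study.
    """
    grvi = (green - red) / np.clip(green + red, 1e-6, None)
    water = swir < swir_thresh
    peatland = (grvi > grvi_thresh) & ~water
    return peatland, water
```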
Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias
2016-03-01
Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be attached directly to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to optimize the configuration of the mechanism patient-specifically with respect to accuracy and stability; furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimens. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimens.
Evaluation of targeting errors in ultrasound-assisted radiotherapy
Wang, Michael; Rohling, Robert; Duzenli, Cheryl; Clark, Brenda; Archip, Neculai
2014-01-01
A method for validating the start-to-end accuracy of a 3D ultrasound-based patient positioning system for radiotherapy is described. A radiosensitive polymer gel is used to record the actual dose delivered to a rigid phantom after being positioned using 3D ultrasound guidance. Comparison of the delivered dose with the treatment plan allows accuracy of the entire radiotherapy treatment process, from simulation to 3D ultrasound guidance, and finally delivery of radiation, to be evaluated. The 3D ultrasound patient positioning system has a number of features for achieving high accuracy and reducing operator dependence. These include using tracked 3D ultrasound scans of the target anatomy acquired using a dedicated 3D ultrasound probe during both the simulation and treatment sessions, automatic 3D ultrasound-to-ultrasound registration, and use of infra-red LED (IRED) markers of the optical position sensing system for registering simulation CT to ultrasound data. The mean target localization accuracy of this system was 2.5 mm for four target locations inside the phantom, compared to 1.6 mm obtained using the conventional patient positioning method of laser alignment. Since the phantom is rigid, this represents the best possible set-up accuracy of the system. Thus, these results suggest that 3D ultrasound-based target localization is practically feasible and potentially capable of increasing the accuracy of patient positioning for radiotherapy in sites where day-to-day organ shifts are greater than 1 mm in magnitude. PMID:18723271
Yan, Liping; Chen, Benyong; Zhang, Enzheng; Zhang, Shihua; Yang, Ye
2015-08-01
A novel method for the precision measurement of the refractive index of air (n(air)), combining laser synthetic wavelength interferometry with Edlén equation estimation, is proposed. First, an estimate n(air_e) is calculated from the modified Edlén equation according to environmental parameters measured by low-precision sensors with an uncertainty of 10⁻⁶. Second, the unique integral fringe number N corresponding to n(air) is determined based on the calculated n(air_e). Then, a fractional fringe ε corresponding to n(air) is obtained with high accuracy according to the principle of fringe subdivision in laser synthetic wavelength interferometry. Finally, a highly accurate measurement of n(air) is achieved from the determined fringes N and ε. The merit of the proposed method is that it not only solves the problem of the measurement accuracy of n(air) being limited by the accuracies of environmental sensors, but also avoids the complicated vacuum pumping needed to measure the integral fringe N in conventional laser interferometry. To verify the feasibility of the proposed method, comparison experiments with the Edlén equations over short and long time scales were performed. Experimental results show that the measurement accuracy of n(air) is better than 2.5 × 10⁻⁸ in short-time tests and 6.2 × 10⁻⁸ in long-time tests.
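A worked sketch of the fringe arithmetic: the coarse Edlén estimate fixes the integral fringe N, the interferometer supplies the fractional fringe ε, and the two are combined into n(air). The double-pass factor of 2 and the toy numbers are assumptions, not taken from the paper:

```python
def refractive_index_of_air(N, eps, lam_synthetic, path_length):
    """Illustrative reconstruction: the optical path difference
    (n_air - 1) * L is expressed as (N + eps) * lam_s / 2 for an assumed
    double-pass interferometer; the phase convention is an assumption."""
    return 1.0 + (N + eps) * lam_synthetic / (2.0 * path_length)

def integral_fringe_from_edlen(n_air_edlen, lam_synthetic, path_length):
    """Pick the unique integer N consistent with the coarse Edlén estimate."""
    return round((n_air_edlen - 1.0) * 2.0 * path_length / lam_synthetic)

# toy numbers: n_air ~ 1.000271, synthetic wavelength 100 um, path 0.2 m
N = integral_fringe_from_edlen(1.000271, 100e-6, 0.2)
print(N, refractive_index_of_air(N, 0.084, 100e-6, 0.2))  # -> 1, 1.000271
```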
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of tolerances on each component is finally determined. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Alanazi, Mohammed R; Alamry, Ahmed; Al-Surimi, Khaled
One of the main purposes of healthcare organizations is to serve patients by providing safe and high-quality patient-centered care. Patients are considered the most appropriate source to assess the quality level of healthcare services. The objectives of this paper were to describe the translation and adaptation process of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey for Arabic-speaking populations, examine the degree of equivalence between the original English version and the Arabic translated version, and estimate and report the validity and reliability of the translated Arabic HCAHPS version. The translation process had four main steps: (1) qualified bilingual translators translated the HCAHPS from English to Arabic; (2) the Arabic version was translated back to English and reviewed by experts to ensure content accuracy (content equivalence); (3) both Arabic and English versions were verified for accuracy and validity of the translation, checking for similarities and differences (semantic equivalence); (4) finally, two independent bilinguals reviewed and made the final revision of both the Arabic and English versions separately and agreed on one final version equivalent to the original English version in content and meaning. The findings showed that the overall Cronbach's α for the Arabic HCAHPS version was 0.90, indicating good internal consistency across the 9 separate domains, whose Cronbach's α values ranged from 0.70 to 0.97. The correlation coefficient between each statement within each domain revealed a highly significant positive correlation ranging from 0.72 to 0.89. The results of the study provide empirical evidence of the validity and reliability of the Arabic version of HCAHPS. Moreover, the Arabic version of HCAHPS presented good internal consistency, and it is highly recommended that it be replicated and applied in the context of other Arab countries. Copyright © 2017. Published by Elsevier Ltd.
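For reference, a minimal sketch of how internal-consistency figures like the reported Cronbach's α of 0.90 are computed from an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```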
Deficient symbol processing in Alzheimer disease.
Toepper, Max; Steuwe, Carolin; Beblo, Thomas; Bauer, Eva; Boedeker, Sebastian; Thomas, Christine; Markowitsch, Hans J; Driessen, Martin; Sammer, Gebhard
2014-01-01
Symbols and signs have been suggested to improve the orientation of patients suffering from Alzheimer disease (AD). However, there are hardly any studies that confirm whether AD patients benefit from signs or symbols and which symbol characteristics might improve or impede their symbol comprehension. To address these issues, 30 AD patients and 30 matched healthy controls performed a symbol processing task (SPT) with 4 different item categories. A repeated-measures analysis of variance was run to identify the impact of the different item categories on performance accuracy in both experimental groups. Moreover, SPT scores were correlated with neuropsychological test scores in a broad range of other cognitive domains. Finally, diagnostic accuracy of the SPT was calculated by a receiver-operating characteristic curve analysis. Results revealed a global symbol processing dysfunction in AD that was associated with semantic memory and executive deficits. Moreover, AD patients showed a disproportionate performance decline on SPT items with visual distraction. Finally, the SPT total score showed high sensitivity and specificity in differentiating between AD patients and healthy controls. The present findings suggest that specific symbol features impede symbol processing in AD and argue for a diagnostic benefit of the SPT in neuropsychological assessment.
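A minimal sketch of the receiver-operating characteristic analysis used above, assuming patients score lower than controls on the SPT total; the cutoff sweep and rank-based AUC are generic, not the authors' software:

```python
import numpy as np

def roc_auc(scores_patients, scores_controls):
    """Empirical ROC analysis of a diagnostic score: sweep a cutoff over all
    observed scores, report (sensitivity, specificity) pairs, and compute the
    area under the curve via the rank identity AUC = P(patient < control)
    with a tie correction, for a test where patients score lower."""
    sp = np.asarray(scores_patients, float)
    sc = np.asarray(scores_controls, float)
    cuts = np.unique(np.concatenate([sp, sc]))
    sens = np.array([(sp <= c).mean() for c in cuts])   # patients flagged
    spec = np.array([(sc > c).mean() for c in cuts])    # controls cleared
    auc = (sp[:, None] < sc[None, :]).mean() \
        + 0.5 * (sp[:, None] == sc[None, :]).mean()
    return sens, spec, auc
```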
NASA Astrophysics Data System (ADS)
Morales, Abed; Quiroga, Aldo; Daued, Arturo; Cantero, Diana; Sequeira, Francisco; Castro, Luis Carlos; Becerra, Luis Omar; Salazar, Manuel; Vega, Maria
2017-01-01
A supplementary comparison was made between SIM laboratories concerning the calibration of four hydrometers within the range of 600 kg/m³ to 2000 kg/m³. The main objective of the comparison was to evaluate the degrees of equivalence of SIM NMIs in the calibration of high-accuracy hydrometers. The participating NMIs were: CENAM, IBMETRO, INEN, INDECOPI, INM, INTN and LACOMET. The main text of this paper appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Morphology and spelling in French students with dyslexia: the case of silent final letters.
Quémart, Pauline; Casalis, Séverine
2017-04-01
Spelling is a challenge for individuals with dyslexia. Phoneme-to-grapheme correspondence rules are highly inconsistent in French, which makes them very difficult to master, in particular for dyslexics. One recurrent manifestation of this inconsistency is the presence of silent letters at the end of words. Many of these silent letters perform a morphological function. The current study examined whether students with dyslexia (aged between 10 and 15 years) benefit from the morphological status of silent final letters when spelling. We compared their ability to spell words with silent final letters that are either morphologically justified (e.g., tricot, "knit," where the final "t" is pronounced in morphologically related words such as tricoter, "to knit," and tricoteur, "knitter") or not morphologically justified (e.g., effort, "effort") to that of a group of younger children matched for reading and spelling level. Results indicated that the dyslexic students' spelling of silent final letters was impaired in comparison to the control group. Interestingly, morphological status helped the dyslexics improve the accuracy of their choice of final letters, unlike the control group. This finding provides new evidence of morphological processing during spelling in dyslexia.
NASA Astrophysics Data System (ADS)
Wiskin, James; Klock, John; Iuanow, Elaine; Borup, Dave T.; Terry, Robin; Malik, Bilal H.; Lenox, Mark
2017-03-01
There has been a great deal of research into ultrasound tomography for breast imaging over the past 35 years. Few successful attempts have been made to reconstruct high-resolution images using transmission ultrasound. To this end, advances have been made in 2D and 3D algorithms that utilize either time of arrival or full wave data to reconstruct images with high spatial and contrast resolution suitable for clinical interpretation. The highest resolution and quantitative accuracy result from inverse scattering applied to full wave data in 3D. However, this has been prohibitively computationally expensive, meaning that full inverse scattering ultrasound tomography has not been considered clinically viable. Here we show the results of applying a nonlinear inverse scattering algorithm to 3D data in a clinically useful time frame. This method yields Quantitative Transmission (QT) ultrasound images with high spatial and contrast resolution. We reconstruct sound speeds for various 2D and 3D phantoms and verify these values with independent measurements. The data are fully 3D as is the reconstruction algorithm, with no 2D approximations. We show that 2D reconstruction algorithms can introduce artifacts into the QT breast image which are avoided by using a full 3D algorithm and data. We show high resolution gross and microscopic anatomic correlations comparing cadaveric breast QT images with MRI to establish imaging capability and accuracy. Finally, we show reconstructions of data from volunteers, as well as an objective visual grading analysis to confirm clinical imaging capability and accuracy.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
The end-to-end simulator for the E-ELT HIRES high resolution spectrograph
NASA Astrophysics Data System (ADS)
Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca
2017-06-01
We present the design, architecture and results of the end-to-end simulator model of the high-resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). This system can be used as a tool to characterize the spectrograph by both engineers and scientists. The model makes it possible to simulate the behavior of photons from the scientific object (modeled bearing in mind the main science drivers) to the detector, taking calibration light sources into account as well, and to evaluate different parameters of the spectrograph design. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial in next-generation astronomical observation projects like the E-ELT, given their high complexity and long design and development times. Finally, we present synthetic images obtained with the current version of the end-to-end simulator based on the E-ELT HIRES requirements (especially the high radial velocity accuracy). Once ingested in the Data Reduction Software (DRS), they will allow verification that the instrument design can achieve the radial velocity accuracy needed by the HIRES science cases.
Effects of spatial frequency content on classification of face gender and expression.
Aguado, Luis; Serrano-Pedraza, Ignacio; Rodríguez, Sonia; Román, Francisco J
2010-11-01
The role of different spatial frequency bands in face gender and expression categorization was studied in three experiments. Accuracy and reaction time were measured for unfiltered, low-pass (cutoff frequency of 1 cycle/deg) and high-pass (cutoff frequency of 3 cycles/deg) filtered faces. Filtered and unfiltered faces were equated in root-mean-squared contrast. For low-pass filtered faces, reaction times were longer than for unfiltered and high-pass filtered faces in both categorization tasks. In the expression task, these results were obtained with expressive faces presented in isolation (Experiment 1) and also with neutral-expressive dynamic sequences where each expressive face was preceded by a briefly presented neutral version of the same face (Experiment 2). For high-pass filtered faces, different effects were observed on gender and expression categorization. While both speed and accuracy of gender categorization were reduced compared with unfiltered faces, the efficiency of expression classification remained similar. Finally, we found no differences between expressive and non-expressive faces in the effects of spatial frequency filtering on gender categorization (Experiment 3). These results show a common role of information from the high spatial frequency band in the categorization of face gender and expression.
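A minimal sketch of the stimulus preparation described above: an ideal (hard-edged) radial filter in the Fourier domain followed by rescaling to a fixed root-mean-squared contrast. The hard cutoff and the target contrast value are simplifying assumptions:

```python
import numpy as np

def filter_and_equate(img, cutoff_cpd, pix_per_deg, lowpass=True,
                      target_rms=0.2):
    """Keep radial frequencies below (low-pass) or above (high-pass) the
    cutoff in cycles/deg, then rescale so RMS contrast (std/mean) matches a
    fixed target, as when filtered and unfiltered faces are equated."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=1.0 / pix_per_deg))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=1.0 / pix_per_deg))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # cycles/deg
    keep = radius <= cutoff_cpd if lowpass else radius >= cutoff_cpd
    out = np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))
    rms = out.std()
    return img.mean() + out * (target_rms * img.mean() / max(rms, 1e-12))
```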
Research on precision grinding technology of large scale and ultra thin optics
NASA Astrophysics Data System (ADS)
Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua
2018-03-01
The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. In order to realize high-precision grinding of these elements, a low-deformation vacuum chuck was designed first, which clamps the optics with high supporting rigidity over the full aperture. The optics were then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optics was measured on-machine using a displacement sensor after elastic restitution. The flatness was converged to high accuracy by compensation machining, whose trajectories were integrated with the measurement result. To obtain high parallelism, the optics were turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on a large-scale, ultra-thin fused silica optic with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optic was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.
Development and calibration of an accurate 6-degree-of-freedom measurement system with total station
NASA Astrophysics Data System (ADS)
Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui
2016-12-01
To meet the demands of high accuracy, long range and portable use in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on a total station, utilizing its advantages of long range and relatively high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by the total station, imaging system and inclinometer to the six output pose variables. The model must be calibrated at two levels: the intrinsic parameters of the imaging system, and the rotation matrix between the coordinate systems of the camera and the inclinometer. Corresponding approaches are presented for both. For the first level, we introduce a precise two-axis rotary table as a calibration reference. For the second level, we propose a calibration method based on varying the pose of a rigid body carrying the target sensor and a reference prism. Finally, through simulations and various experiments, the feasibility of the measurement model and calibration methods is validated, and the measurement accuracy of the system is evaluated.
Lee, Seungeun; Yamamoto, Naomichi
2015-12-01
This study characterized the accuracy of high-throughput amplicon sequencing for identifying species within the genus Aspergillus. To this end, we sequenced the internal transcribed spacer 1 (ITS1), β-tubulin (BenA), and calmodulin (CaM) gene encoding sequences as DNA markers from eight reference Aspergillus strains with known identities using 300-bp sequencing on the Illumina MiSeq platform, and compared them with the BLASTn outputs. Identifications with sequences longer than 250 bp were accurate at the section rank, with some ambiguities observed at the species rank, mostly due to cross-detection of sibling species. Additionally, in silico analysis was performed to predict the identification accuracy for all species in the genus Aspergillus, where 107, 210, and 187 species were predicted to be identifiable down to the species rank based on ITS1, BenA, and CaM, respectively. Finally, air filter samples were analysed to quantify the relative abundances of Aspergillus species in outdoor air. The results were reproducible across biological duplicates at both the species and section ranks, but not strongly correlated between ITS1 and BenA, suggesting that Aspergillus detection can be taxonomically biased depending on the selection of DNA markers and/or primers. Copyright © 2015 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
Phase Quantization Study of Spatial Light Modulator for Extreme High-contrast Imaging
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing
2016-11-01
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracies and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problem so that the correction is matched to the controllable phase step of the SLM. Two optical configurations are discussed, with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, requiring phase accuracies of 0.4/1000 and 1/1000 waves, respectively, to achieve a contrast of 10⁻¹⁰. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10⁻¹⁰ in comparison to that obtained by using a deformable mirror.
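A minimal sketch of the quantization step the study revolves around: rounding a wave-front correction to the SLM's controllable phase step and reporting the RMS residual that ultimately limits contrast. The interface is hypothetical:

```python
import numpy as np

def quantize_phase(phase_waves, step_waves):
    """Round a correction phase map (in waves) to the SLM's controllable
    step (e.g. 1/1000 wave) and return the quantized map plus the RMS
    residual, the quantity that limits the achievable contrast."""
    q = np.round(phase_waves / step_waves) * step_waves
    return q, np.sqrt(np.mean((phase_waves - q) ** 2))
```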
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high-quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time-consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high-quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing their various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
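A minimal sketch of the three-way metajoin decision described above: a logistic model scores a candidate record pair from comparison features, and two thresholds route the pair to match, non-match, or human review. The weights, bias and thresholds are illustrative, not fitted values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def metajoin_decision(features, weights, bias, t_match=0.9, t_nonmatch=0.1):
    """Score a candidate record pair from comparison features (e.g. name
    similarity, address agreement) with a logistic model, then split the
    score into match / non-match / 'send to a human expert'."""
    p = sigmoid(np.dot(weights, features) + bias)
    if p >= t_match:
        return "match", p
    if p <= t_nonmatch:
        return "non-match", p
    return "review", p

# toy pair: [name similarity, street match, zip match, phone match]
print(metajoin_decision(np.array([0.95, 1.0, 1.0, 0.0]),
                        weights=np.array([4.0, 2.0, 1.5, 1.0]),
                        bias=-4.0))
```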
Liu, Yufei; Du, Zhebin; Zhang, Jin; Jiang, Haowen
2017-05-30
The accuracy of renal mass biopsy in diagnosing malignancy can be affected by multiple factors. Here, we investigated the feasibility of Raman spectroscopy for distinguishing malignant and benign renal tumors using biopsy specimens. Samples were collected from 63 patients who underwent radical or partial nephrectomy; masses suspicious for cancer and distal parenchyma were obtained from the resected kidney using an 18-gauge biopsy needle. Four Raman spectra were obtained for each sample, and discriminant analysis was applied for data analysis. A total of 383 Raman spectra were eventually gathered, and each type of tumor had its characteristic spectrum. Raman could separate tumoral and normal tissues with an accuracy of 82.53%, and distinguish malignant and benign tumors with a sensitivity of 91.79% and specificity of 71.15%. It could classify low-grade and high-grade tumors with an accuracy of 86.98%. Besides, clear cell renal carcinoma was differentiated from oncocytoma and angiomyolipoma with accuracies of 100% and 89.25%, respectively, and histological subtypes of renal cell carcinoma were distinguished with an accuracy of 93.48%. When compared with final pathology and biopsy, Raman spectroscopy was able to correctly identify 7 of 11 "missed" biopsy diagnoses. These results suggest that Raman spectroscopy may serve as a promising non-invasive approach for pre-operative diagnosis in the future.
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
Generating highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. Such maps are important for understanding road environments and making driving decisions, since robust localization is one of the critical challenges for autonomous driving cars. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar mounted on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a machine learning classification algorithm. Objects can be distinguished by geometric features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the output will be utilized to generate a highly precise road map.
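A minimal sketch of the geometric feature extraction described above: per-point surface normals (and a planarity measure) from a k-neighbourhood principal component analysis, the kind of descriptor that would then be fed to an SVM. The brute-force neighbour search and feature choices are simplifications:

```python
import numpy as np

def normals_from_neighbourhoods(points, k=20):
    """Per-point surface normal and planarity from a k-neighbourhood PCA.
    Brute-force neighbour search keeps the sketch dependency-free; a
    KD-tree would be used in practice for large clouds."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    normals, planarity = [], []
    for nb in points[idx]:
        cov = np.cov(nb.T)
        evals, evecs = np.linalg.eigh(cov)      # ascending eigenvalues
        normals.append(evecs[:, 0])             # normal = smallest-variance axis
        planarity.append((evals[1] - evals[0]) / max(evals[2], 1e-12))
    return np.array(normals), np.array(planarity)
```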
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of its use are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
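For reference, a minimal sketch of the uncorrected Cramer-Rao bound computation for a maximum-likelihood fit; the colored-noise and modeling-error corrections discussed above are not included:

```python
import numpy as np

def cramer_rao_bound(jacobian, noise_cov):
    """Cramer-Rao lower bound on parameter standard deviations: with output
    sensitivities J = d(response)/d(theta) and measurement covariance R,
    the Fisher information is J^T R^{-1} J and the bound is the square root
    of the diagonal of its inverse."""
    J = np.asarray(jacobian, float)
    Rinv = np.linalg.inv(np.asarray(noise_cov, float))
    fisher = J.T @ Rinv @ J
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```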
Accuracy Analysis of a Wireless Indoor Positioning System Using Geodetic Methods
NASA Astrophysics Data System (ADS)
Wagner, Przemysław; Woźniak, Marek; Odziemczyk, Waldemar; Pakuła, Dariusz
2017-12-01
Ubisense RTLS is an indoor positioning system using Ultra Wide Band (UWB) signals. AOA and TDOA methods are used as the positioning principles. The accuracy of positioning depends primarily on the accuracy of the determined angles and distance differences. The paper presents the results of accuracy research comprising a theoretical accuracy prediction and a practical test. Theoretical accuracy was calculated for two variants of system component geometry, assuming the parameters declared by the system manufacturer. Total station measurements were taken as a reference during the practical test. The results of the analysis are presented in graphical form. A sample implementation (MagMaster) developed by Globema is presented in the final part of the paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sittler, O.D.; Agogino, M.M.
1979-05-01
This project was undertaken to improve the data base for estimating solar energy influx in eastern New Mexico. A precision pyranometer station has been established at Eastern New Mexico University in Portales. A program of careful calibration and data management procedures is conducted to maintain high standards of precision and accuracy. Data from the first year of operation were used to upgrade insolation data of moderate accuracy which had been obtained at this site with an inexpensive pyranograph. Although not as accurate as the data expected from future years of operation of this station, these upgraded pyranograph measurements show that eastern New Mexico receives somewhat less solar energy than would be expected from published data. A detailed summary of these upgraded insolation data is included.
Optically powered oil tank multichannel detection system with optical fiber link
NASA Astrophysics Data System (ADS)
Yu, Zhijing
1998-08-01
A novel optically powered multichannel detection system for measuring integrated oil tank parameters is presented. To realize optically powered, micro-power-consumption, multichannel and multi-parameter detection, the system employs PWM/PPM modulation, ratio measurement, time-division multiplexing and pulse-width-division multiplexing techniques. Moreover, the system uses a special pulse-width discriminator and a single-chip microcomputer to accomplish signal pulse separation, PPM/PWM signal demodulation, error correction of overlapping pulses, and data processing. The new transducer achieves high performance: the experimental transmission distance is 500 m; total power consumption of the probes is less than 150 μW; and measurement errors are ±0.5 °C and ±0.2% FS. The measurement accuracy of the liquid level and reserves is mainly determined by the pressure accuracy. Finally, some details of the experiment are given.
Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.
He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang
2017-01-01
The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units which reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. The syllables are first extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified into those with "quasi-unvoiced" initials and those with "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies for syllables with quasi-unvoiced initials are higher than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 over all syllables is 91.69%. For the control samples, P30 over all syllables is 91.24%.
Zhao, Hua; Wang, Xiaoting; Liu, Dawei; Zhang, Hongmin; He, Huaiwu; Long, Yun
2015-12-15
To evaluate the diagnostic value and potential therapeutic impact of Peking Union Medical College Hospital critical ultrasonic management (PCUM) in the early management of critically ill patients with acute respiratory failure (ARF), patients admitted to the ICU of Peking Union Medical College Hospital for ARF were consecutively recruited over an 18-month period. Patients were randomly divided into a conventional group and a PCUM group (critical care ultrasonic examination added to conventional examinations). The two groups were compared with respect to time to preliminary diagnosis, time to final diagnosis, diagnostic accuracy, time to treatment response, and time to other examinations. A total of 187 patients were included in this study. The two groups showed no significant differences in general clinical information or final diagnosis (P > 0.05). The PCUM group had a shorter time to preliminary diagnosis, time to final diagnosis, time to treatment response, and time to X-ray/CT examination, and a higher diagnostic accuracy than the conventional group (P < 0.001). PCUM had high sensitivity and specificity for the diagnosis of acute respiratory distress syndrome (ARDS) (sensitivity 92.0%, specificity 98.5%), acute pulmonary edema (sensitivity 94.7%, specificity 96.1%), pulmonary consolidation (sensitivity 85.7%, specificity 98.6%), and COPD/asthma (sensitivity 84.2%, specificity 98.7%). PCUM appears to be an attractive complementary diagnostic tool, able to contribute to an early therapeutic decision for patients with ARF.
High-precision processing and detection of the high-caliber off-axis aspheric mirror
NASA Astrophysics Data System (ADS)
Dai, Chen; Li, Ang; Xu, Lingdi; Zhang, Yingjie
2017-10-01
To achieve efficient, controllable, digital processing and high-precision detection of high-caliber off-axis aspheric mirrors, meeting the development needs of modern high-resolution, wide-field space optical remote sensing cameras, we carried out research on high-precision machining and testing technology for off-axis aspheric mirrors. First, an off-axis aspheric sample with an aperture of 574 mm × 302 mm was formed by milling, and then intelligent robotic equipment was used for high-precision polishing of the off-axis asphere. After fine polishing with ion-beam equipment, the surface of the sample was measured using off-axis aspheric contact profilometry and off-axis aspheric interferometric testing. The final surface accuracy was 12 nm RMS.
Clinical evaluation of the FreeStyle Precision Pro system.
Brazg, Ronald; Hughes, Kristen; Martin, Pamela; Coard, Julie; Toffaletti, John; McDonnell, Elizabeth; Taylor, Elizabeth; Farrell, Lausanne; Patel, Mona; Ward, Jeanne; Chen, Ting; Alva, Shridhara; Ng, Ronald
2013-06-05
New versions of the international standard (ISO 15197) and CLSI guideline (POCT12), with more stringent accuracy criteria, are nearing publication. We evaluated the glucose test performance of the FreeStyle Precision Pro system, a new blood glucose monitoring system (BGMS) designed to enhance accuracy for point-of-care testing (POCT). Precision, interference and system accuracy with 503 blood samples from capillary, venous and arterial sources were evaluated in a multicenter study. Study results were analyzed and presented in accordance with the specifications and recommendations of the final draft ISO 15197 and the new POCT12. The FreeStyle Precision Pro system demonstrated acceptable precision (CV <5%), no interference across a hematocrit range of 15-65%, and, except for xylose, no interference from 24 of 25 potentially interfering substances. It also met all accuracy criteria specified in the final draft ISO 15197 and POCT12, with 97.3-98.9% of the individual results for the various blood sample types agreeing within ±12 mg/dl of the laboratory analyzer values at glucose concentrations <100 mg/dl and within ±12.5% of the laboratory analyzer values at glucose concentrations ≥100 mg/dl. The FreeStyle Precision Pro system met the tighter accuracy requirements, providing a means for enhancing accuracy of point-of-care blood glucose monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
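A worked sketch of the agreement criterion quoted above (±12 mg/dl below 100 mg/dl, ±12.5% at or above), which yields pass rates comparable to the reported 97.3-98.9%:

```python
def meets_accuracy_criterion(meter, reference):
    """Per-sample agreement rule: within ±12 mg/dl of the laboratory value
    below 100 mg/dl, within ±12.5% at or above 100 mg/dl."""
    if reference < 100.0:
        return abs(meter - reference) <= 12.0
    return abs(meter - reference) <= 0.125 * reference

def pass_rate(pairs):
    """Fraction of (meter, reference) pairs meeting the criterion."""
    hits = sum(meets_accuracy_criterion(m, r) for m, r in pairs)
    return hits / len(pairs)
```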
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2015-12-01
The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are used to measure the discriminative capability of each band subset. Finally, the band subset that covers the most useful and valuable information with the smallest number of spectral bands is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results reveal the superiority of the proposed method and indicate that it can considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.
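A minimal sketch of a wrapper-style fitness function in the spirit of the method above: a binary band mask is scored by SVM cross-validated accuracy with a penalty on the number of retained bands. The weighting and the 3-fold CV are assumptions; the paper's exact fitness definition may differ:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_subset_fitness(mask, X, y, alpha=0.95):
    """Fitness of a boolean band mask: reward SVM cross-validated accuracy,
    penalize the fraction of retained bands."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

# usage with a random mask over, say, 220 AVIRIS-like bands:
# mask = np.random.default_rng(1).random(220) < 0.5
# print(band_subset_fitness(mask, X, y))
```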
Fluency and belief bias in deductive reasoning: new indices for old effects
Trippas, Dries; Handley, Simon J.; Verde, Michael F.
2014-01-01
Models based on signal detection theory (SDT) have occupied a prominent role in domains such as perception, categorization, and memory. Recent work by Dube et al. (2010) suggests that the framework may also offer important insights in the domain of deductive reasoning. Belief bias in reasoning has traditionally been examined using indices based on raw endorsement rates, indices that critics have claimed are highly problematic. We discuss a new set of SDT indices fit for the investigation of belief bias and apply them to new data examining the effect of perceptual disfluency on belief bias in syllogisms. In contrast to the traditional approach, the SDT indices do not violate important statistical assumptions, resulting in a decreased Type 1 error rate. Based on analyses using these novel indices, we demonstrate that perceptual disfluency leads to decreased reasoning accuracy, contrary to predictions. Disfluency also appears to eliminate the typical link found between cognitive ability and the effect of beliefs on accuracy. Finally, replicating previous work, we demonstrate that cognitive ability leads to an increase in reasoning accuracy and a decrease in the response bias component of belief bias. PMID:25009515
Parkinson's disease detection based on dysphonia measurements
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Assessing dysphonic symptoms is a noninvasive and effective approach to detecting Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from a ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88% ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94% ± 0.03 accuracy. A refinement of the original pattern space by removing dysphonia measurements with similar variation across healthy and PD subjects allows achieving 97.03% ± 0.03 accuracy. The latter performance is higher than what has been reported in the literature on the same dataset with a ten-fold cross-validation technique. Finally, it was found that measures of the ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms for detecting PD subjects, as they achieve 99.64% ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.
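A minimal sketch of the pattern-space refinement described above: rank each dysphonia measurement by how differently it varies across healthy and PD groups and drop the least discriminative ones. The Fisher-style scoring rule is an assumption, not the paper's exact criterion:

```python
import numpy as np

def discriminative_features(X, y, keep_ratio=0.7):
    """Rank features by a Fisher-style separation score,
    |mu1 - mu0| / (s1 + s0), and keep the top fraction."""
    X0, X1 = X[y == 0], X[y == 1]
    score = np.abs(X1.mean(0) - X0.mean(0)) / (X1.std(0) + X0.std(0) + 1e-12)
    k = max(1, int(keep_ratio * X.shape[1]))
    return np.argsort(score)[::-1][:k]   # indices of retained measurements
```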
Identifying Autism from Resting-State fMRI Using Long Short-Term Memory Networks.
Dvornek, Nicha C; Ventola, Pamela; Pelphrey, Kevin A; Duncan, James S
2017-09-01
Functional magnetic resonance imaging (fMRI) has helped characterize the pathophysiology of autism spectrum disorders (ASD) and carries promise for producing objective biomarkers for ASD. Recent work has focused on deriving ASD biomarkers from resting-state functional connectivity measures. However, current efforts that have identified ASD with high accuracy were limited to homogeneous, small datasets, while classification results for heterogeneous, multi-site data have shown much lower accuracy. In this paper, we propose the use of recurrent neural networks with long short-term memory (LSTMs) for classification of individuals with ASD and typical controls directly from the resting-state fMRI time-series. We used the entire large, multi-site Autism Brain Imaging Data Exchange (ABIDE) I dataset for training and testing the LSTM models. Under a cross-validation framework, we achieved classification accuracy of 68.5%, which is 9% higher than previously reported methods that used fMRI data from the whole ABIDE cohort. Finally, we presented interpretation of the trained LSTM weights, which highlight potential functional networks and regions that are known to be implicated in ASD.
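A minimal sketch, in PyTorch, of the model family described above: an LSTM reads the resting-state time-series and a linear head maps the final hidden state to a diagnosis logit. Layer sizes are illustrative, not the published architecture:

```python
import torch
import torch.nn as nn

class ASDClassifier(nn.Module):
    """An LSTM reads one vector of ROI signals per time point; a linear
    layer maps the final hidden state to a single logit per subject."""
    def __init__(self, n_rois=200, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_rois, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, n_rois)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)    # logit per subject

# usage: probabilities = torch.sigmoid(ASDClassifier()(batch_of_series))
```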
DOE Office of Scientific and Technical Information (OSTI.GOV)
Issen, Kathleen
2017-06-05
This project employed a continuum approach to formulate an elastic constitutive model for Castlegate sandstone. The resulting constitutive framework for high porosity sandstone is thermodynamically sound (i.e., does not violate the 1st and 2nd laws of thermodynamics), represents known material constitutive response, and can be calibrated using available mechanical response data. To authenticate the accuracy of this model, a series of validation criteria were employed, using an existing mechanical response data set for Castlegate sandstone. The resulting constitutive framework is applicable to high porosity sandstones in general, and is tractable for scientists and researchers endeavoring to solve problems of practical interest.
Research on axisymmetric aspheric surface numerical design and manufacturing technology
NASA Astrophysics Data System (ADS)
Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng
2006-02-01
The key technologies for aspheric machining are generating an exact machining path and machining aspheric lenses with high accuracy and efficiency, even as traditional manual manufacturing has developed into modern numerical control (NC) machining. This paper presents a mathematical model relating virtual cone and aspheric surface equations, and discusses techniques for uniform wear of the grinding wheel and error compensation in aspheric machining. Finally, a software system for high-precision aspheric surface manufacturing is designed and realized based on the above. This software system can compute the grinding wheel path according to input parameters and generate NC machining programs for aspheric surfaces.
Study on photoelectric parameter measurement method of high capacitance solar cell
NASA Astrophysics Data System (ADS)
Zhang, Junchao; Xiong, Limin; Meng, Haifeng; He, Yingwei; Cai, Chuan; Zhang, Bifeng; Li, Xiaohui; Wang, Changshi
2018-01-01
High-efficiency solar cells usually exhibit high capacitance, so measuring their photoelectric performance usually requires a long pulse width and a long sweep time. The effects of irradiance non-uniformity, probe shielding and spectral mismatch on I-V curve measurement are analyzed experimentally. A compensation method for the irradiance loss caused by probe shielding is proposed, realizing accurate measurement of the irradiance intensity during I-V curve measurement of solar cells. Based on the sensitivity of the open-circuit voltage of a solar cell to its junction temperature, an accurate method for measuring the temperature of solar cells under continuous irradiation is proposed. Finally, a measurement method with high accuracy and a wide application range for high-capacitance solar cells is presented.
Canopy Density Mapping on Ultracam-D Aerial Imagery in Zagros Woodlands, Iran
NASA Astrophysics Data System (ADS)
Erfanifard, Y.; Khodaee, Z.
2013-09-01
Canopy density maps express different characteristics of forest stands, especially in woodlands. Obtaining such maps by field measurements is expensive and time-consuming, so it is necessary to find suitable techniques to produce them for use in the sustainable management of woodland ecosystems. In this research, a robust procedure is suggested for obtaining these maps from very high spatial resolution aerial imagery. This study aimed to produce canopy density maps from UltraCam-D aerial imagery newly acquired over Zagros woodlands by the Iran National Geographic Organization (NGO). A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in Zagros woodlands, Iran. The very high spatial resolution aerial imagery of the plot, purchased from the NGO, was classified using the kNN technique and the tree crowns were extracted precisely. The canopy density was determined in each cell of meshes of different sizes overlaid on the study area map. The accuracy of the final maps was assessed against ground truth obtained by complete field measurements. The results show that the proposed method of obtaining canopy density maps was sufficiently efficient in the study area. The final canopy density map obtained with a mesh of 30 are (3000 m²) cell size had 80% overall accuracy and a KHAT (kappa) coefficient of agreement of 0.61, which shows good agreement with the observed samples. This method can also be tested in other case studies to reveal its capability for canopy density map production in woodlands.
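For reference, a minimal sketch of the KHAT (Cohen's kappa) coefficient behind the 0.61 reported above, computed from a confusion matrix of mapped versus observed classes:

```python
import numpy as np

def khat(confusion):
    """KHAT (Cohen's kappa) coefficient of agreement from a confusion
    matrix of mapped vs. observed classes."""
    c = np.asarray(confusion, float)
    n = c.sum()
    po = np.trace(c) / n                         # observed agreement
    pe = (c.sum(0) * c.sum(1)).sum() / n**2      # chance agreement
    return (po - pe) / (1.0 - pe)
```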
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false How to dispute the accuracy of Healthcare... HUMAN SERVICES GENERAL ADMINISTRATION HEALTHCARE INTEGRITY AND PROTECTION DATA BANK FOR FINAL ADVERSE INFORMATION ON HEALTH CARE PROVIDERS, SUPPLIERS AND PRACTITIONERS Disclosure of Information by the Healthcare...
Maheshwari, Amita; Gupta, Sudeep; Kane, Shubhada; Kulkarni, Yogesh; Goyal, Lt Col Bhupesh Kumar; Tongaonkar, Hemant B
2006-01-01
Background Epithelial ovarian neoplasms are an important cause of morbidity and mortality in women. The surgical management of ovarian neoplasms depends on their correct categorization as benign, borderline or malignant. This study was undertaken to evaluate the accuracy of intra-operative frozen section in the diagnosis of various categories of ovarian neoplasms. Methods Intraoperative frozen section diagnosis was retrospectively evaluated in 217 patients with suspected ovarian neoplasms who underwent surgery as primary line of therapy at our institution. This was compared with the final histopathologic diagnosis on paraffin sections. Results In 7 patients (3.2%) no opinion on frozen section was possible. In the remaining 210 patients frozen section report had a sensitivity of 100%, 93.5% and 45.5% for benign, malignant and borderline tumors. The corresponding specificities were 93.2%, 98.3% and 98.5% respectively. The overall accuracy of frozen section diagnosis was 91.2%. The majority of cases of disagreement were in the mucinous and borderline tumors. Conclusion Intraoperative frozen section has high accuracy in the diagnosis of suspected ovarian neoplasms. It is a valuable tool to guide the surgical management of these patients and should be routinely used in all major oncology centers. PMID:16504109
Accuracy of contrast-enhanced ultrasound in the detection of bladder cancer
Nicolau, C; Bunesch, L; Peri, L; Salvador, R; Corral, J M; Mallofre, C; Sebastia, C
2011-01-01
Objective To assess the accuracy of contrast-enhanced ultrasound (CEUS) in bladder cancer detection, using transurethral biopsy at conventional cystoscopy as the reference standard, and to determine whether CEUS improves the bladder cancer detection rate of baseline ultrasound. Methods 43 patients with suspected bladder cancer underwent conventional cystoscopy with transurethral biopsy of the suspicious lesions. 64 bladder cancers were confirmed in 33 out of 43 patients. Baseline ultrasound and CEUS were performed the day before surgery, and the accuracy of both techniques for bladder cancer detection and the number of detected tumours were analysed and compared with the final diagnosis. Results CEUS was significantly more accurate than ultrasound in determining the presence or absence of bladder cancer: 88.37% vs 72.09%. Seven of eight uncertain baseline ultrasound results were correctly diagnosed using CEUS. CEUS sensitivity was also better than that of baseline ultrasound per number of tumours: 65.62% vs 60.93%. CEUS sensitivity for bladder cancer detection was very high for tumours larger than 5 mm (94.7%) but very low for tumours <5 mm (20%), with a very low negative predictive value (28.57%) for tumours <5 mm. Conclusion CEUS provided higher accuracy than baseline ultrasound for bladder cancer detection, being especially useful in inconclusive baseline ultrasound studies. PMID:21123306
Quality Analysis of Open Street Map Data
NASA Astrophysics Data System (ADS)
Wang, M.; Li, Q.; Hu, Q.; Zhou, M.
2013-05-01
Crowd-sourced geographic data are open geographic data contributed by many non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, data from social websites such as Twitter and Facebook, POIs tagged by Jiepang users, and so on. After processing, these data can provide canonical geographic information to the public. Compared with conventional geographic data collection and updating methods, crowd-sourced geographic data from non-professionals have the advantages of large data volume, high currency, abundant information and low cost, and have become a research hotspot in international geographic information science in recent years. Large-volume crowd-sourced geographic data with high currency provide a new solution for geospatial database updating, but the quality problem of data obtained from non-professionals must be solved. In this paper, a quality analysis model for OpenStreetMap crowd-sourced geographic data is proposed. First, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Second, a quality assessment model for OSM data is presented for three quality elements: completeness, thematic accuracy and positional accuracy. Finally, taking the OSM data of Wuhan as an example, the paper analyses and assesses the quality of OSM data with the 2011 version of a navigation map as reference. The results show that the high-level roads and urban traffic network in the OSM data have high positional accuracy and completeness, so these OSM data can be used for updating urban road network databases.
NASA Astrophysics Data System (ADS)
Li, Yong-Fu; Xiao-Pei, Kou; Zheng, Tai-Xiong; Li, Yin-Guo
2015-05-01
In transportation cyber-physical systems (T-CPS), vehicle-to-vehicle (V2V) communications play an important role in the coordination between individual vehicles as well as between vehicles and roadside infrastructure, and engine cylinder pressure is significant for online engine diagnosis and torque control within the information exchange process under V2V communications. However, parametric uncertainties caused by measurement noise in T-CPS degrade the dynamic performance of engine cylinder pressure estimation. Considering the high accuracy requirement under V2V communications, a high-gain observer based on the engine dynamic model is designed to improve the accuracy of pressure estimation. Then, the convergence, convergence speed and stability of the corresponding error model are analyzed using Laplace and Lyapunov methods. Finally, numerical experiments combining Simulink with GT-Power, together with comparisons, demonstrate the effectiveness of the proposed approach with respect to robustness and accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 61304197), the Scientific and Technological Talents of Chongqing, China (Grant No. cstc2014kjrc-qnrc30002), the Key Project of Application and Development of Chongqing, China (Grant No. cstc2014yykfB40001), the Natural Science Funds of Chongqing, China (Grant No. cstc2014jcyjA60003), and the Doctoral Start-up Funds of Chongqing University of Posts and Telecommunications, China (Grant No. A2012-26).
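The abstract does not give the engine model, so the sketch below only shows the generic structure of a high-gain observer for a second-order system, with output-injection gains scaled by 1/ε and 1/ε²; the placeholder dynamics, gains and noise level are illustrative, not the paper's design.

```python
import numpy as np

def simulate(eps=0.05, dt=1e-4, T=0.2):
    """High-gain observer sketch for x1' = x2, x2' = f(x) with output y = x1.
    Shrinking eps speeds convergence but amplifies measurement noise, the
    trade-off the paper addresses."""
    k1, k2 = 2.0, 1.0
    x = np.array([1.0, 0.0])                  # true state (signal and its rate)
    xh = np.zeros(2)                          # observer estimate
    f = lambda s: -40.0 * s[0] - 4.0 * s[1]   # placeholder dynamics
    for _ in range(int(T / dt)):
        y = x[0] + np.random.normal(0, 1e-3)  # noisy measurement
        e = y - xh[0]
        x = x + dt * np.array([x[1], f(x)])
        xh = xh + dt * np.array([xh[1] + (k1 / eps) * e,
                                 f(xh) + (k2 / eps**2) * e])
    return x, xh

print(simulate())   # estimate should track the true state closely
```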
NASA Astrophysics Data System (ADS)
Rasztovits, S.; Dorninger, P.
2013-07-01
Terrestrial Laser Scanning (TLS) is an established method for reconstructing the geometric surface of objects. Current systems allow fast and efficient generation of 3D models with high accuracy and richness of detail. Alternatively, 3D reconstruction services use images to reconstruct the surface of an object. While the instrumental expenses for laser scanning systems are high, emerging free web services as well as open-source software packages enable the generation of 3D models using digital consumer cameras. In addition, processing TLS data still requires an experienced user, while recent web services operate completely automatically. An indisputable advantage of image-based 3D modeling is its implicit capability for model texturing. However, the achievable accuracy and resolution of the 3D models are lower than those of laser scanning data. Within this contribution, we investigate the results of automated web services for image-based 3D model generation with respect to a TLS reference model. For this, a copper sculpture was acquired using a laser scanner and image series from different digital cameras. Two web services, namely Arc3D and AutoDesk 123D Catch, were used to process the image data. The geometric accuracy was compared for the entire model and for some highly structured details. The results are presented and interpreted based on difference models. Finally, an economic comparison of model generation is given, considering interactive and processing time costs.
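Difference models of this kind boil down to nearest-neighbour distances between the web-service model and the TLS reference. A minimal sketch on synthetic point clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_difference(reference, test):
    """Nearest-neighbour distance from each test point to the reference cloud,
    a simple stand-in for the difference models used in such comparisons."""
    d, _ = cKDTree(reference).query(test)
    return d.mean(), np.sqrt((d**2).mean()), d.max()

# Synthetic example: a noisy copy of a random reference surface
ref = np.random.rand(10000, 3)
tst = ref + np.random.normal(0, 0.002, ref.shape)
print("mean / RMS / max deviation:", cloud_difference(ref, tst))
```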
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.
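For orientation, the sketch below solves the 1D wave equation in second-order form with the usual three-level update plus a dissipation term acting on the time difference, which is the general flavor of the upwind modification described above. It is an illustration only, not the authors' scheme: their dissipation comes from an embedded Riemann solve, and the coefficient beta here is ad hoc.

```python
import numpy as np

c, L, n = 1.0, 1.0, 201
dx = L / (n - 1)
lam = 0.5                       # CFL number c*dt/dx
beta = 0.5                      # dissipation strength (illustrative)

x = np.linspace(0, L, n)
u_old = np.exp(-200 * (x - 0.5) ** 2)   # initial pulse, zero initial velocity
u = u_old.copy()

def d2(v):                      # D+D- second difference, homogeneous Dirichlet
    out = np.zeros_like(v)
    out[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
    return out

for _ in range(400):
    # non-dissipative leapfrog plus dissipation on the time difference
    u_new = 2 * u - u_old + lam**2 * d2(u) + beta * lam * d2(u - u_old)
    u_old, u = u, u_new

print("max |u| after 400 steps:", float(np.abs(u).max()))
```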
NASA Astrophysics Data System (ADS)
Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang
2016-09-01
Building extraction is currently important in applications of high-resolution remote sensing imagery. Quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information and the trade-off between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information from Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), overall accuracy (OA) and Kappa are used in the final assessment. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interfering details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
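A morphological building index rests on the observation that bright, compact structures produce a white top-hat response along every direction, while elongated features such as roads do not. The sketch below is a simplified index in that spirit (directional line SEs, mean over directions and scales); it is not the IMBI of the paper.

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, direction):
    """Boolean line structuring element along one of four directions."""
    fp = np.zeros((length, length), dtype=bool)
    mid = length // 2
    if direction == "h":   fp[mid, :] = True
    elif direction == "v": fp[:, mid] = True
    elif direction == "d": np.fill_diagonal(fp, True)
    else:                  np.fill_diagonal(np.fliplr(fp), True)
    return fp

def building_index(brightness, lengths=(7, 11, 15)):
    """Simplified morphological building index: mean white top-hat response
    over directional line SEs and scales. Compact bright objects respond in
    all directions once the SE is longer than the object; roads do not."""
    responses = [ndimage.white_tophat(brightness, footprint=line_footprint(s, d))
                 for s in lengths for d in "hvdx"]
    return np.mean(responses, axis=0)

# Toy image: one bright 8x8 "building" on a dark background
img = np.zeros((64, 64)); img[20:28, 30:38] = 1.0
print("peak index inside building:", building_index(img)[20:28, 30:38].max())
```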
Assessing Participation in Community-Based Physical Activity Programs in Brazil
REIS, RODRIGO S.; YAN, YAN; PARRA, DIANA C.; BROWNSON, ROSS C.
2015-01-01
Purpose This study aimed to develop and validate a risk prediction model to examine the characteristics that are associated with participation in community-based physical activity programs in Brazil. Methods We used pooled data from three surveys conducted from 2007 to 2009 in state capitals of Brazil with 6166 adults. A risk prediction model was built considering program participation as an outcome. The predictive accuracy of the model was quantified through discrimination (C statistic) and calibration (Brier score) properties. Bootstrapping methods were used to validate the predictive accuracy of the final model. Results The final model showed sex (women: odds ratio [OR] = 3.18, 95% confidence interval [CI] = 2.14–4.71), having less than high school degree (OR = 1.71, 95% CI = 1.16–2.53), reporting a good health (OR = 1.58, 95% CI = 1.02–2.24) or very good/excellent health (OR = 1.62, 95% CI = 1.05–2.51), having any comorbidity (OR = 1.74, 95% CI = 1.26–2.39), and perceiving the environment as safe to walk at night (OR = 1.59, 95% CI = 1.18–2.15) as predictors of participation in physical activity programs. Accuracy indices were adequate (C index = 0.778, Brier score = 0.031) and similar to those obtained from bootstrapping (C index = 0.792, Brier score = 0.030). Conclusions Sociodemographic and health characteristics as well as perceptions of the environment are strong predictors of participation in community-based programs in selected cities of Brazil. PMID:23846162
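A compact sketch of the validation recipe described above: fit a logistic model, report the apparent C statistic and Brier score, then bootstrap to check the optimism of those estimates. The five predictors and the outcome below are synthetic stand-ins for the survey variables, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
# Synthetic stand-ins for sex, education, self-rated health, comorbidity,
# perceived safety (all assumptions for illustration)
X = rng.normal(size=(6166, 5))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.2, 0.5, 0.5, 0.6, 0.5] - 3))))

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print(f"apparent C={roc_auc_score(y, p):.3f}, Brier={brier_score_loss(y, p):.3f}")

# Bootstrap validation: refit on resamples, evaluate on the original sample
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    m = LogisticRegression().fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print(f"bootstrap C={np.mean(aucs):.3f}")
```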
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2016-06-17
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.
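For readers unfamiliar with the gating paradigm, an isolation point can be extracted from per-gate responses roughly as below; the 40 ms gate step is an assumed value for illustration.

```python
def isolation_point(responses, gate_ms=40):
    """Isolation point: duration (ms) of the shortest gate from which the
    listener identifies the stimulus correctly and stays correct at all
    longer gates. `responses` is a per-gate list of booleans."""
    for g in range(len(responses)):
        if all(responses[g:]):
            return (g + 1) * gate_ms
    return None  # never reliably identified

# Correct from the 4th gate onward -> IP = 160 ms
print(isolation_point([False, False, False, True, True, True]))
```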
NASA Astrophysics Data System (ADS)
Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.
2016-12-01
Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems has been addressed by a number of authors. Little work has been done, however, on integrating virtual outcrops into fundamental structural geology workflows and on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared with data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity to outcrop geology of virtual outcrops generated using different acquisition techniques and how this affects model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAS (unmanned aerial system) yields highly accurate virtual outcrops when compared with terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of a suitable acquisition method are, however, of key importance for best results.
Assessment of global precipitation measurement satellite products over Saudi Arabia
NASA Astrophysics Data System (ADS)
Mahmoud, Mohammed T.; Al-Zahrani, Muhammad A.; Sharif, Hatim O.
2018-04-01
Most hydrological analysis and modeling studies require reliable and accurate precipitation data for successful simulations. However, precipitation measurements should be more representative of the true precipitation distribution. Many approaches and techniques are used to collect precipitation data. Recently, hydrometeorological and climatological applications of satellite precipitation products have experienced a significant improvement with the emergence of the latest satellite products, namely, the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG) products, which can be utilized to estimate and analyze precipitation data. This study focuses on the validation of the IMERG early, late and final run rainfall products using ground-based rain gauge observations throughout Saudi Arabia for the period from October 2015 to April 2016. The accuracy of each IMERG product is assessed using six statistical performance measures to conduct three main evaluations, namely, regional, event-based and station-based evaluations. The results indicate that the early run product performed well in the middle and eastern parts as well as some of the western parts of the country; meanwhile, the satellite estimates for the other parts fluctuated between an overestimation and an underestimation. The late run product showed an improved accuracy over the southern and western parts; however, over the northern and middle parts, it showed relatively high errors. The final run product revealed significantly improved precipitation estimations and successfully obtained higher accuracies over most parts of the country. This study provides an early assessment of the performance of the GPM satellite products over the Middle East. The study findings can be used as a beneficial reference for the future development of the IMERG algorithms.
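Evaluations of this kind typically combine continuous error measures with categorical skill scores computed against a rain/no-rain threshold. The six measures below are representative choices, not necessarily the paper's exact set, and the data are synthetic.

```python
import numpy as np

def validation_stats(sat, gauge, thresh=1.0):
    """Common continuous and categorical measures for satellite rainfall
    against gauges (threshold in the same units as the data)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    bias = sat.mean() - gauge.mean()
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    cc = np.corrcoef(sat, gauge)[0, 1]
    hits   = np.sum((sat >= thresh) & (gauge >= thresh))
    misses = np.sum((sat <  thresh) & (gauge >= thresh))
    false  = np.sum((sat >= thresh) & (gauge <  thresh))
    pod = hits / (hits + misses)            # probability of detection
    far = false / (hits + false)            # false alarm ratio
    csi = hits / (hits + misses + false)    # critical success index
    return dict(bias=bias, rmse=rmse, cc=cc, pod=pod, far=far, csi=csi)

rng = np.random.default_rng(1)
gauge = rng.gamma(0.5, 4.0, 1000)             # synthetic daily rainfall
sat = gauge * rng.lognormal(0, 0.4, 1000)     # satellite with random error
print(validation_stats(sat, gauge))
```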
Calibration of mass and conventional mass of weights 2 kg, 1 kg, 200 g, 50 g, 1 g and 200 mg
NASA Astrophysics Data System (ADS)
Becerra, Luis Omar; Peña, Luis Manuel; Escalante Vargas, Boris; Cori Almonte, Luz; Martín Quiroga Rojas, Aldo; Bermúdez Coronel, Álvaro; Escobar Soto, Jhon J.; Naula, Wilson; Florencio, Arnaldo; Lourdes Valenzuela, María; Ramos Alfaro, Olman; Prenda Peña, Marcela
2018-01-01
This report describes the results of a supplementary comparison between SIM NMIs, carried out to evaluate the consistency of calibrations of high-accuracy mass standards (2 kg, 1 kg, 200 g, 50 g, 1 g and 200 mg) using the normalized error criterion. The supplementary comparison was carried out from April 2012 to July 2013. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
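The normalized error criterion named above compares each laboratory to the reference value in units of the combined expanded uncertainty; |En| <= 1 indicates consistency. A one-line sketch with hypothetical numbers:

```python
def normalized_error(x_lab, U_lab, x_ref, U_ref):
    """En criterion used in key/supplementary comparisons: |En| <= 1 means
    the laboratory's result is consistent with the reference value, given
    the expanded (k=2) uncertainties."""
    return (x_lab - x_ref) / (U_lab**2 + U_ref**2) ** 0.5

# Hypothetical 1 kg calibration: lab offset +0.12 mg (U = 0.16 mg) against a
# reference offset of +0.05 mg (U = 0.10 mg)
print(f"En = {normalized_error(0.12, 0.16, 0.05, 0.10):.2f}")
```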
An Integrated Use of Topography with RSI in Gully Mapping, Shandong Peninsula, China
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley, Qixia city, Shandong province, as the study data, we propose a new method that uses an image fusing topography with remote sensing imagery (RSI) to achieve high-precision interpretation of gully edge lines. The technique first transforms the remote sensing imagery from RGB to HSV color space. Slope threshold values for the gully edge line and gully thalweg are then obtained through field survey, and the slope data are segmented by thresholding. Based on the fused image in combination with the gully thalweg threshold vectors, the gully thalweg vectors are amended. Lastly, the gully edge line is interpreted based on the amended gully thalweg vectors, the fused image, the gully edge line threshold vectors, and the slope data. A testing region was selected in the study area to assess accuracy, and the gully information interpreted from the remote sensing imagery alone and from the fused image was assessed using the deviation, kappa coefficient, and overall accuracy of the error matrix. Compared with interpreting remote sensing imagery alone, the overall accuracy and kappa coefficient increased by 24.080% and 264.364%, respectively, and the average deviations of the gully head and gully edge line were reduced by 60.448% and 67.406%, respectively. The test results show that the thematic and positional accuracy of gullies interpreted by the new method are significantly higher. Finally, the error sources for the interpretation accuracy of the two methods were analyzed. PMID:25302333
3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis
NASA Astrophysics Data System (ADS)
Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin
2018-03-01
In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro-tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three different experiments with increasing complexity indicate that a single crowdsourcing task can be solved in a very short time of less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity and precision of 3D crowdsourcing are high for most information extraction problems. For our first experiment (binary classification with a single answer) we obtain an accuracy of 91%, a sensitivity of 95% and a precision of 92%. For the more complex tasks of the second experiment (multiple-answer classification) the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment, the determination of the crown base height of individual trees, our study highlights that crowdsourcing can be a tool to obtain values with even higher accuracy than an automated computer-based approach. Finally, we found that the accuracy of the crowdsourced results for all experiments is hardly influenced by characteristics of the input point cloud data and of the users. Importantly, the results' accuracy can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.
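The intrinsic quality indicator mentioned at the end, agreement among volunteers, can be computed directly from the raw votes. A minimal sketch:

```python
from collections import Counter

def aggregate(answers):
    """Majority vote over volunteer answers for one micro-task, with the
    agreement ratio serving as the intrinsic accuracy indicator."""
    label, count = Counter(answers).most_common(1)[0]
    return label, count / len(answers)

# Five volunteers classify one segmented point cloud as tree / non-tree
print(aggregate(["tree", "tree", "tree", "non-tree", "tree"]))
# -> ('tree', 0.8): high agreement suggests a reliable crowd answer
```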
Fautrelle, L; Barbieri, G; Ballay, Y; Bonnetblanc, F
2011-10-27
The time required to complete a fast and accurate movement is a function of its amplitude and the target size. This phenomenon refers to the well-known speed-accuracy trade-off. Some interpretations have suggested that the speed-accuracy trade-off is already integrated into the movement planning phase. More specifically, pointing movements may be planned to minimize the variance of the final hand position. However, goal-directed movements can be altered at any time, if, for instance, the target location is changed during execution. Thus, one possible limitation of these interpretations may be that they underestimate feedback processes. To further investigate this hypothesis we designed an experiment in which the speed-accuracy trade-off was unexpectedly varied at hand movement onset by modifying the target distance or size separately, or both simultaneously. These pointing movements were executed from an upright standing position. Our main results showed that the movement time increased when the size or location of the target changed, while the terminal variability of finger position did not change. In other words, movement velocity is modulated according to target size and distance during motor programming or during the final approach, independently of the final variability of the hand position. This suggests that when the speed-accuracy trade-off is unexpectedly modified, terminal feedback based on intermediate representations of the endpoint velocity is used to monitor and control the hand displacement. There is clearly no obvious perception-action coupling in this case; rather, intermediate processing may be involved. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
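The speed-accuracy trade-off investigated here is classically formalized by Fitts' law, in which movement time grows with the index of difficulty log2(2D/W); the experiment manipulates exactly these two quantities. A sketch with illustrative coefficients:

```python
from math import log2

def fitts_mt(a, b, distance, width):
    """Fitts' law: MT = a + b * ID with ID = log2(2D/W).
    Coefficients a, b are task-specific (illustrative values here)."""
    return a + b * log2(2 * distance / width)

# Doubling target distance or halving its width raises predicted MT equally
for d, w in [(0.20, 0.04), (0.40, 0.04), (0.20, 0.02)]:
    print(f"D={d:.2f} m, W={w:.2f} m -> MT={fitts_mt(0.1, 0.15, d, w):.3f} s")
```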
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch for the high-voltage electric power live-line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve this, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps: firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views, which is a typical coarse-to-fine strategy; secondly, the system calculates the epipolar line, a sequence of regions containing candidate matching points is generated from the neighborhood of the epipolar line, and the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence using correlation matching; finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and in the world coordinate system within 3 mm, so the positioning accuracy of the binocular vision system satisfies the requirements of dismounting and assembling the drop switch.
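A stripped-down version of the fine matching stage: slide the left-image template along a band around the epipolar line of the right image and keep the column with the highest normalized cross-correlation. Rectified images, the band width and all sizes are assumptions for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_along_epipolar(template, right, row, band=2):
    """Best-matching column in a horizontal epipolar band of the right image,
    a simplified stand-in for the correlation matching stage above."""
    th, tw = template.shape
    best = (-1.0, 0)
    for r in range(row - band, row + band + 1):
        for c in range(right.shape[1] - tw):
            best = max(best, (ncc(template, right[r:r + th, c:c + tw]), c))
    return best  # (score, column)

rng = np.random.default_rng(2)
right = rng.random((60, 120))
tpl = right[28:36, 70:80].copy()    # the true match sits at column 70
print(match_along_epipolar(tpl, right, row=30))
```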
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-01-01
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level and texture features of docked ships and their connected dock regions are nearly indistinguishable, most popular detection methods are limited in calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve robustness regarding the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples, far smaller than the number of known miRNAs; consequently, their prediction accuracy on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity, optimization strategies that can bring serious limitations in applications. Moreover, to meet continuously rising expectations on these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor is also shown to be less dependent on the datasets, as well as to have a refined balance between sensitivity and specificity. The importance of this study is two-fold: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is provided to the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
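The meta-prediction pipeline, a non-linear transformation of the base predictors' outputs followed by a small neural network, can be sketched as below. The five base-predictor scores are synthetic, and the logit is one plausible choice of transformation, not necessarily the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
# Synthetic scores from five hypothetical base predictors on 2000 sequences
y = rng.integers(0, 2, 2000)
scores = np.clip(y[:, None] * 0.35 + rng.beta(2, 2, (2000, 5)) * 0.65,
                 1e-6, 1 - 1e-6)

# Non-linear transformation before the meta-network: the logit spreads
# scores that pile up near 0 and 1
logit = np.log(scores / (1 - scores))

meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(logit[:1500], y[:1500])
print("held-out meta-accuracy:", meta.score(logit[1500:], y[1500:]))
```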
Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks
Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh
2017-01-01
In wireless sensor networks, sensor nodes collect a large amount of data in each time period. If all of the data were transmitted to a Fusion Center (FC), the power of the sensor nodes would run out rapidly. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model that can save the energy of the sensor nodes while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed and confirmed by an optimization method with some prior knowledge. Besides, calculation methods for important parameters are investigated, which make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with other related models, the MHEEFE shows better performance in accuracy, energy efficiency and fault tolerance. PMID:29280950
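The weighting idea behind the fusion step can be illustrated with the classical inverse-variance rule, where a sensor with an inflated variance estimate is automatically down-weighted. This is a simplified stand-in for the paper's iteration-based weight algorithm, with made-up readings:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion: noisier sensors get smaller weights,
    so a faulty sensor (huge variance) contributes little, which is the
    fault-tolerance effect described above."""
    v = np.asarray(variances, float)
    w = (1.0 / v) / np.sum(1.0 / v)
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / v)
    return fused, fused_var, w

# Three body-network sensors reading the same quantity; the third is faulty
est, var, w = fuse([36.6, 36.7, 39.5], [0.01, 0.02, 1.0])
print(f"fused={est:.2f}, variance={var:.4f}, weights={np.round(w, 3)}")
```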
Magnetic resonance imaging-ultrasound fusion biopsy for prediction of final prostate pathology.
Le, Jesse D; Stephenson, Samuel; Brugger, Michelle; Lu, David Y; Lieu, Patricia; Sonn, Geoffrey A; Natarajan, Shyam; Dorey, Frederick J; Huang, Jiaoti; Margolis, Daniel J A; Reiter, Robert E; Marks, Leonard S
2014-11-01
We explored the impact of magnetic resonance imaging-ultrasound fusion prostate biopsy on the prediction of final surgical pathology. A total of 54 consecutive men undergoing radical prostatectomy at UCLA after fusion biopsy were included in this prospective, institutional review board approved pilot study. Using magnetic resonance imaging-ultrasound fusion, tissue was obtained from a 12-point systematic grid (mapping biopsy) and from regions of interest detected by multiparametric magnetic resonance imaging (targeted biopsy). A single radiologist read all magnetic resonance imaging, and a single pathologist independently rereviewed all biopsy and whole mount pathology, blinded to prior interpretation and matched specimen. Gleason score concordance between biopsy and prostatectomy was the primary end point. Mean patient age was 62 years and median prostate specific antigen was 6.2 ng/ml. Final Gleason score at prostatectomy was 6 (13%), 7 (70%) and 8-9 (17%). A tertiary pattern was detected in 17 (31%) men. Of 45 high suspicion (image grade 4-5) magnetic resonance imaging targets 32 (71%) contained prostate cancer. The per core cancer detection rate was 20% by systematic mapping biopsy and 42% by targeted biopsy. The highest Gleason pattern at prostatectomy was detected by systematic mapping biopsy in 54%, targeted biopsy in 54% and a combination in 81% of cases. Overall 17% of cases were upgraded from fusion biopsy to final pathology and 1 (2%) was downgraded. The combination of targeted biopsy and systematic mapping biopsy was needed to obtain the best predictive accuracy. In this pilot study magnetic resonance imaging-ultrasound fusion biopsy allowed for the prediction of final prostate pathology with greater accuracy than that reported previously using conventional methods (81% vs 40% to 65%). If confirmed, these results will have important clinical implications. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for $\mathscr{O}(N)$ Kohn–Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw–Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw–Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. Here, we further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect $\mathscr{O}(N)$ scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
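The quadrature at the heart of the SQ method is Clenshaw–Curtis. Below is a self-contained sketch of its nodes and weights from the standard cosine-series formula, checked on a smooth integrand; the SQ machinery of applying such rules to spatially localized bilinear forms is not reproduced here.

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes x_k = cos(k*pi/n) and weights of the (n+1)-point
    Clenshaw-Curtis rule on [-1, 1]."""
    k = np.arange(n + 1)
    theta = k * np.pi / n
    w = np.ones(n + 1)
    for j in range(1, n // 2 + 1):
        b = 1.0 if 2 * j == n else 2.0
        w -= b * np.cos(2 * j * theta) / (4 * j**2 - 1)
    w *= 2.0 / n
    w[0] /= 2.0          # endpoint weights are halved
    w[-1] /= 2.0
    return np.cos(theta), w

x, w = clenshaw_curtis(16)
print(np.dot(w, np.exp(x)), "vs exact", np.exp(1) - np.exp(-1))
```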
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Comparing with the existing WG algorithm for solving the same type problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in L∞ norm for both C1 and H2 continuous solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan Hesthaven
2012-02-06
Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven, Division of Applied Mathematics, Brown University, Box F, Providence, RI 02912, Jan.Hesthaven@Brown.edu, February 6, 2012. Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results: The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.
Zhang, Ke; Zhang, Honglin; Wang, Ying; Tian, Yanqing; Zhao, Jiupeng; Li, Yao
2017-01-05
Fluorinated acrylate polymer has received great interest in recent years due to its extraordinary characteristics such as high oxygen permeability, good stability, low surface energy and refractive index. In this work, a platinum octaethylporphyrin/poly(methylmethacrylate-co-trifluoroethyl methacrylate) (PtOEP/poly(MMA-co-TFEMA)) oxygen sensing film was prepared by immobilizing PtOEP in a poly(MMA-co-TFEMA) matrix, and its optical response was established based on the principle of luminescence quenching. It was found that the oxygen-sensing performance could be improved by optimizing the monomer ratio (MMA/TFEMA = 1:1), tributylphosphate (TBP, 0.05 mL) and PtOEP (5 μg) content. Under these conditions, the maximum quenching ratio I0/I100 of the oxygen sensing film is about 8.16 and the Stern-Volmer equation is I0/I = 1.003 + 2.663[O2] (R² = 0.999), exhibiting a linear relationship, good photo-stability, and high sensitivity and accuracy. Finally, the synthesized PtOEP/poly(MMA-co-TFEMA) sensing film was used for DO detection in different water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
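Given the fitted Stern-Volmer line reported above, recovering an oxygen level from a measured intensity ratio is a one-line inversion; the concentration comes out in whatever units the calibration used.

```python
def oxygen_from_intensity(I0_over_I, a=1.003, b=2.663):
    """Invert the fitted Stern-Volmer relation I0/I = a + b[O2] to recover
    the oxygen level from a measured intensity ratio (a, b are the fit
    reported above; units follow the calibration)."""
    return (I0_over_I - a) / b

for ratio in (1.5, 4.0, 8.16):   # 8.16 is the film's maximum quenching ratio
    print(f"I0/I = {ratio:.2f} -> [O2] = {oxygen_from_intensity(ratio):.3f}")
```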
Tracking accuracy assessment for concentrator photovoltaic systems
NASA Astrophysics Data System (ADS)
Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.
2010-10-01
The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.
NASA Astrophysics Data System (ADS)
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements by a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases dramatically as the number of projection rays grows, up to about 180 rays for a 20 × 20 grid; beyond that point, additional projection rays have little influence on reconstruction accuracy. The temperature reconstructions are more accurate than the water vapor concentrations obtained by the traditional concentration calculation method. An innovative way to reduce the error of the concentration reconstruction and greatly improve reconstruction quality is also proposed, and its capability is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simple experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and this reconstruction approach is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
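The ART family referenced above is a row-action (Kaczmarz) iteration over the projection rays. A minimal dense-matrix sketch on a toy system; the paper's parallel-beam geometry, absorbance model and algorithmic improvements are not reproduced.

```python
import numpy as np

def art(A, b, iters=100, relax=0.5):
    """Kaczmarz-type ART: sweep over rays, correcting the image x by each
    ray's residual. A is the (rays x grid cells) path-length matrix."""
    x = np.zeros(A.shape[1])
    norms = (A * A).sum(axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / norms[i] * A[i]
    return x

# Toy phantom on a 10x10 grid with random sparse ray geometry (illustration
# only; real geometries are far more structured)
rng = np.random.default_rng(4)
A = rng.random((180, 100)) * (rng.random((180, 100)) < 0.2)
truth = rng.random(100)
x = art(A, A @ truth)
print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```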
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul
2014-09-01
This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
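The fitting step named above, weighted least-squares linear regression of QM energies and forces on bispectrum descriptors, has this basic shape. The descriptor matrix and weights below are synthetic; the real pipeline is FitSnap.py driving LAMMPS and DAKOTA.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(500, 20))          # stand-in bispectrum descriptor rows
beta_true = rng.normal(size=20)         # "true" SNAP-like coefficients
b = A @ beta_true + rng.normal(0, 0.01, 500)   # noisy QM-like targets
w = rng.uniform(0.5, 2.0, 500)          # per-observation weights

# Weighted least squares via row scaling: minimize || sqrt(w) (A beta - b) ||
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
print("max coefficient error:", np.abs(beta - beta_true).max())
```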
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foucher, J.; Faurie, P.; Dourthe, L.
2011-11-10
Measurement accuracy is becoming one of the major components that must be controlled in order to guarantee sufficient production yield. Already at the R&D level, we have to come up with accurate measurements of sub-40 nm dense trenches and contact holes produced by 193 nm immersion lithography or E-Beam lithography. Current production CD (critical dimension) metrology techniques such as CD-SEM (CD scanning electron microscope) and OCD (optical critical dimension) are limited in relative accuracy for various reasons (i.e., electron proximity effects, correlation of output parameters, stack influence, electron interaction with materials...). Therefore, time for R&D increases, process windows degrade, and production yield can finally decrease, because you cannot manufacture correctly if you are unable to measure correctly. A new high-volume manufacturing (HVM) CD metrology solution has to be found to improve the relative accuracy of the production environment; otherwise, current CD metrology solutions will very soon run out of steam. In this paper, we present a potential hybrid CD metrology solution that smartly combines 3D-AFM (3D atomic force microscope) and CD-SEM data in order to add accuracy both in R&D and in production. The final goal for chip makers is to improve yield and save R&D and production costs through a real-time feedback loop implemented in CD metrology routines. Such a solution can be implemented and extended to any kind of CD metrology solution. In a 2nd part, we discuss and present results regarding a new 3D-AFM probe breakthrough: the introduction of full-carbon tips made with an E-Beam Deposition process. The goal is to overcome the current limitations of conventional flared silicon tips, which are definitely not suitable for sub-32 nm node production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang, E-mail: cyzhao@pmo.ac.cn
The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve astrometric precision for space debris, based on mathematical morphology operators. Variable structural elements along multiple directions are adopted for image transformation, and all the resultant images are then stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometric accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and reference stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
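The core operation, morphological filtering with structural elements along multiple directions followed by stacking, can be sketched as a maximum over directional grey openings. The SE length, the number of directions, and the choice of opening and maximum-stacking are all assumptions here, not the paper's exact operator.

```python
import numpy as np
from scipy import ndimage

def directional_stack(img, length=9, n_dir=4):
    """Grey-level opening with line SEs along several directions, then a
    pixelwise maximum over the stack: elongated features survive along
    their own direction while isolated noise is suppressed."""
    results = []
    for k in range(n_dir):
        fp = np.zeros((length, length), bool)
        fp[length // 2, :] = True                     # horizontal line SE
        fp = ndimage.rotate(fp.astype(float), k * 180.0 / n_dir,
                            reshape=False, order=0) > 0.5
        results.append(ndimage.grey_opening(img, footprint=fp))
    return np.max(results, axis=0)

rng = np.random.default_rng(6)
frame = rng.normal(0, 5, (64, 64))
frame[30:33, 10:50] += 80          # a streaked (trailed) source
print("peak after filtering:", directional_stack(frame).max())
```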
Initial Investigation of Preclinical Integrated SPECT and MR Imaging
Hamamura, Mark J.; Ha, Seunghoon; Roeck, Werner W.; Wagenaar, Douglas J.; Meier, Dirk; Patt, Bradley E.; Nalcioglu, Orhan
2014-01-01
Single-photon emission computed tomography (SPECT) can provide specific functional information while magnetic resonance imaging (MRI) can provide high-spatial-resolution anatomical information as well as complementary functional information. In this study, we utilized a dual-modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source. PMID:20082527
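As a small illustration of the Wiener filtering step, scipy's local-statistics Wiener filter applied to a noisy reconstruction slice; the study optimizes the filter using MRI-derived information, which is not modeled here, and the kernel size and noise power below are illustrative.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(7)
slice_true = np.zeros((64, 64))
slice_true[24:40, 24:40] = 10.0                     # hot region
noisy = slice_true + rng.normal(0, 2.0, slice_true.shape)

filtered = wiener(noisy, mysize=5, noise=4.0)       # noise power = sigma^2
print("RMS error before/after:",
      np.sqrt(((noisy - slice_true) ** 2).mean()),
      np.sqrt(((filtered - slice_true) ** 2).mean()))
```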
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, a Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
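A minimal firefly optimizer with one simple dynamic control parameter (a decaying random-walk amplitude), applied to a toy two-parameter fit in place of the full Bouc-Wen identification; the constants and the decay schedule are generic, not the paper's tuned modification.

```python
import numpy as np

def firefly_minimize(f, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.25):
    """Each firefly moves toward every brighter (lower-cost) one, with
    attractiveness decaying in distance, plus a shrinking random walk."""
    lo, hi = np.asarray(bounds, float).T
    rng = np.random.default_rng(8)
    x = rng.uniform(lo, hi, (n, len(lo)))
    cost = np.apply_along_axis(f, 1, x)
    for t in range(iters):
        a = alpha * (1 - t / iters)            # dynamic control parameter
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                            + a * rng.uniform(-0.5, 0.5, len(lo)) * (hi - lo)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
    best = int(np.argmin(cost))
    return x[best], cost[best]

# Toy identification: recover two parameters of a damped oscillator from data
t = np.linspace(0, 5, 100)
data = np.exp(-0.7 * t) * np.cos(3.1 * t)
err = lambda p: np.sum((np.exp(-p[0] * t) * np.cos(p[1] * t) - data) ** 2)
print(firefly_minimize(err, [(0, 2), (0, 5)]))   # expect roughly (0.7, 3.1)
```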
Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi
2018-06-05
Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Underwater photogrammetric theoretical equations and technique
NASA Astrophysics Data System (ADS)
Fan, Ya-bing; Huang, Guiping; Qin, Gui-qin; Chen, Zheng
2011-12-01
In order to achieve a high level of measurement accuracy in underwater close-range photogrammetry, this article studies three varieties of model equations according to the way of imaging above the water. First, the paper carefully analyzes the first two varieties of theoretical equations, finds that they have serious limitations in practical application, and studies the third model equation in depth. Second, a special measurement project is designed accordingly. Finally, a rigid antenna is measured by underwater photogrammetry. The experimental results show that the precision of 3D coordinate measurement is 0.94 mm, which validates the availability and operability of the third equation in practical application. It can satisfy the measurement requirements of refraction correction and improve the accuracy of underwater close-range photogrammetry, with strong anti-jamming capability and stability.
Schacht, M J; Toustrup, C B; Madsen, L B; Martiny, M S; Larsen, B B; Simonsen, J T
2016-10-01
Rapid on-site evaluation (ROSE) of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), followed by a subsequent preliminary adequacy assessment and a preliminary diagnosis, was performed at Aarhus University Hospital by biomedical scientists (BMS). The aim of this study was to evaluate the accuracy of the BMS-rendered ROSE adequacy assessment, preliminary adequacy assessment and preliminary diagnosis as compared with the cytopathologist-rendered final adequacy assessment and final diagnosis. The BMS-rendered assessments for 717 sites from 319 consecutive patients over a 4-month period were compared with the cytopathologist-rendered assessments. Comparisons of adequacy and preliminary diagnoses were based on the inter-observer Cohen's kappa coefficient with a 95% confidence interval (CI). Strong correlations between ROSE and final adequacy assessments [kappa coefficient of 0.90 (CI: 0.85-0.96)] and between the preliminary and final adequacy assessments [kappa coefficient of 0.93 (CI: 0.87-0.99)] were found. As for the correlation between the preliminary and final diagnoses, the kappa coefficient was 0.99 (CI: 0.98-1). Both ROSE and preliminary adequacy assessments as well as preliminary diagnoses, all performed by BMS, were highly accurate when compared with the final assessment by the cytopathologist. © 2016 John Wiley & Sons Ltd.
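For readers unfamiliar with the agreement statistic used above, the short Python sketch below computes Cohen's kappa between two raters with a simple large-sample confidence interval. The adequacy labels are hypothetical, and the standard-error formula is the common normal approximation, not necessarily the exact variance estimator used in the study.

import numpy as np

def cohens_kappa_ci(r1, r2, z=1.96):
    """Cohen's kappa for two raters' categorical labels, with an approximate 95% CI."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.unique(np.concatenate([r1, r2]))
    n = len(r1)
    # confusion matrix between the two raters
    cm = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats] for a in cats], float)
    po = np.trace(cm) / n                       # observed agreement
    pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2   # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / n) / (1 - pe)  # simple large-sample standard error
    return kappa, (kappa - z * se, kappa + z * se)

# Hypothetical adequacy calls (1 = adequate, 0 = inadequate) by BMS vs. cytopathologist
bms   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
final = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1]
print(cohens_kappa_ci(bms, final))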
Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
One of the most important signs of systemic disease that presents on the retina is vascular abnormality, such as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but require extensive reader interaction, thus limiting the software-aided efficiency. Automation therefore holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose a fully automated software system that acts as a second reader for comprehensive assessment of the retinal vasculature and aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology such as tortuosity and branching angles, as well as highlights of areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, in order for the reader to make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with the software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make computer-assisted vasculature assessments with high accuracy and consistency, at a reduced reading time.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for determining predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Writers Identification Based on Multiple Windows Features Mining
NASA Astrophysics Data System (ADS)
Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila
2016-03-01
Nowadays, writer identification is in high demand, the goal being to identify the original writer of a script with high accuracy. One of the main challenges in writer identification is how to extract discriminative features from different authors' scripts for precise classification. In this paper, an adaptive division method for offline Latin script is implemented using several window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. The proposed approach has been tested with various parameters in terms of text division and window sizes. It is observed that selection of the right window size yields a well-positioned window division. The proposed approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), which is a constraint-free script database. Finally, the achieved results are compared with several techniques reported in the literature.
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The reticle stage (RS) is designed to perform high-speed scan motion at nanometer scale over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of mechanical structure, forms of motion, and control method. Based on that, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large-stroke stage (LS) and the short-stroke stage (SS), and movement of the measurement reference. In particular, different forms of coupling between the SS and LS are discussed in detail. After the theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after settling time, that caused by the coupling between the SS and LS about 2.19 nm, and that caused by movement of the measurement reference about 0.6 nm.
Design of control system for optical fiber drawing machine driven by double motor
NASA Astrophysics Data System (ADS)
Yu, Yue Chen; Bo, Yu Ming; Wang, Jun
2018-01-01
The microchannel plate (MCP) is a kind of large-area array electron multiplier with high two-dimensional spatial resolution, used as a high-performance night-vision intensifier. High-precision control of the fiber is the key technology in the MCP manufacturing process; in this paper it is achieved by controlling an optical fiber drawing machine driven by dual motors. First, utilizing an STM32 chip, the servo motor drive and control circuit was designed to realize dual-motor synchronization. Second, a neural network PID control algorithm was designed to control the fiber diameter with high precision. Finally, hexagonal fiber was manufactured by this system, showing a multifilament diameter accuracy of ±1.5 μm.
Discrimination in measures of knowledge monitoring accuracy
Was, Christopher A.
2014-01-01
Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
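As a concrete companion to the measures discussed above, the sketch below computes Goodman-Kruskal gamma together with one common discrimination/bias pair from 2x2 knowledge-monitoring counts. The counts are hypothetical, and the hit-rate-minus-false-alarm-rate definition is one standard choice, not necessarily the exact indexes used in the study.

def monitoring_indices(a, b, c, d):
    """2x2 knowledge-monitoring counts:
    a: judged known & correct      b: judged known & incorrect
    c: judged unknown & correct    d: judged unknown & incorrect
    Returns Goodman-Kruskal gamma plus a common discrimination/bias pair."""
    gamma = (a * d - b * c) / (a * d + b * c)
    hit_rate = a / (a + c)   # claimed "known" among items answered correctly
    fa_rate = b / (b + d)    # claimed "known" among items answered incorrectly
    discrimination = hit_rate - fa_rate
    bias = (a + b) / (a + b + c + d)   # overall tendency to claim knowledge
    return gamma, discrimination, bias

print(monitoring_indices(a=40, b=10, c=15, d=35))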
Quasi-model free control for the post-capture operation of a non-cooperative target
NASA Astrophysics Data System (ADS)
She, Yuchen; Sun, Jun; Li, Shuang; Li, Wendan; Song, Ting
2018-06-01
This paper investigates a quasi-model free control (QMFC) approach for the post-capture control of a non-cooperative space object. The innovation of this paper lies in the following three aspects, which correspond to the three challenges presented in the mission scenario. First, an excitation-response mapping search strategy is developed based on the linearization of the system in terms of a set of parameters, which is efficient in handling the combined spacecraft with a strong coupling effect in the inertia matrix. Second, a virtual coordinate system is proposed to efficiently compute the center of mass (COM) of the combined system, which improves the COM tracking efficiency for time-varying COM positions. Third, a linear online corrector is built to reduce the control error and further improve the control accuracy, which helps the controller track the combined system despite its time-varying inertia matrix. Finally, simulation analyses show that the proposed control framework is able to realize combined-spacecraft post-capture control in extremely unfavorable conditions with high control accuracy.
Village Building Identification Based on Ensemble Convolutional Neural Networks
Guo, Zhiling; Chen, Qi; Xu, Yongwei; Shibasaki, Ryosuke; Shao, Xiaowei
2017-01-01
In this study, we present the Ensemble Convolutional Neural Network (ECNN), an elaborate CNN framework formulated by ensembling state-of-the-art CNN models, to identify village buildings from open high-resolution remote sensing (HRRS) images. First, to optimize and mine the capability of CNN for village mapping and to ensure compatibility with our classification targets, a few state-of-the-art models were carefully optimized and enhanced based on a series of rigorous analyses and evaluations. Second, rather than directly implementing building identification with these models, we exploited most of their advantages by ensembling their feature-extractor parts into a stronger model called ECNN, based on the multiscale feature learning method. Finally, the generated ECNN was applied to a pixel-level classification framework to implement object identification. The proposed method can serve as a viable tool for village building identification with high accuracy and efficiency. The experimental results obtained from the test area in Savannakhet province, Laos, prove that the proposed ECNN model significantly outperforms existing methods, improving overall accuracy from 96.64% to 99.26% and kappa from 0.57 to 0.86. PMID:29084154
NASA Astrophysics Data System (ADS)
Cogliati, M.; Tonelli, E.; Battaglia, D.; Scaioni, M.
2017-12-01
Archive aerial photos represent a valuable heritage that provides information about land cover and topography in past years. Today, the availability of low-cost and open-source solutions for photogrammetric processing of close-range and drone images offers the chance to produce outputs such as DEMs and orthoimages in an easy way. This paper aims at demonstrating how, and to which level of accuracy, digitized archive aerial photos may be used within such low-cost software (Agisoft Photoscan Professional®) to generate photogrammetric outputs. The different steps of the photogrammetric processing workflow are presented and discussed. The main conclusion is that this procedure can provide some final products, which however do not feature the high accuracy and resolution that may be obtained using high-end photogrammetric software packages specifically designed for aerial survey projects. In the last part, a case study is presented on the use of a four-epoch archive of aerial images to analyze an area where a tunnel has to be excavated.
An Improved Image Matching Method Based on Surf Algorithm
NASA Astrophysics Data System (ADS)
Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.
2018-04-01
Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature-matching methods achieve high operating efficiency but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to improve feature point detection and matching accuracy. First, the color invariant transformation is applied to the two matching images, aiming at obtaining more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function and a projective invariant are employed to eliminate mismatches so as to improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
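A minimal sketch of the detect-match-filter pipeline follows. ORB stands in for SURF here (SURF lives in opencv-contrib and is patent-encumbered), and a RANSAC homography check is a simplified stand-in for the paper's Delaunay/projective-invariant constraints; file paths and parameters are hypothetical.

import cv2
import numpy as np

def match_images(path1, path2):
    """Feature matching with a ratio test plus RANSAC mismatch elimination."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Lowe-style ratio test: keep matches clearly better than their runner-up
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Geometric consistency: RANSAC homography flags the remaining mismatches
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers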
Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard.
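The central claim, that accuracy depends on the sample distribution even when the test itself is unchanged, can be reproduced in a few lines. The sketch below mirrors the study design with hypothetical numbers: the rapid-vs-gold relationship (gold plus fixed noise) is held constant while only the sample distribution changes.

import numpy as np

rng = np.random.default_rng(1)

def accuracy(chol_gold, noise_sd=0.2, cutoff=5.0):
    """Agreement of a 'rapid' test with the gold standard at a fixed cutoff.
    The numerical relationship (gold + noise) never changes; only the sample does."""
    chol_rapid = chol_gold + rng.normal(0, noise_sd, chol_gold.size)
    return np.mean((chol_rapid > cutoff) == (chol_gold > cutoff))

# Same test, same cutoff -- different sample distributions
narrow = rng.normal(5.0, 0.3, 10000)   # values clustered near the cutoff
wide   = rng.normal(5.0, 2.0, 10000)   # values spread far from the cutoff
print("accuracy, narrow distribution:", accuracy(narrow))  # much lower
print("accuracy, wide distribution:  ", accuracy(wide))    # much higher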
Leveraging transcript quantification for fast computation of alternative splicing profiles.
Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo
2015-09-01
Alternative splicing plays an essential role in many cellular processes and bears major relevance in the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA accuracy is comparable and sometimes superior to that of standard methods, as assessed on simulated as well as real RNA-sequencing data against experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
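The core computation, a relative inclusion value (PSI) derived from transcript abundances, is simple enough to sketch. The snippet below follows the general logic of transcript-based PSI tools like SUPPA, not its exact implementation; transcript identifiers and TPM values are hypothetical.

def event_psi(tpm, inclusion_ids, total_ids):
    """Percent-spliced-in for one event from transcript abundances (TPM):
    abundance of transcripts carrying the inclusion form over the abundance
    of all transcripts compatible with the event."""
    inc = sum(tpm[t] for t in inclusion_ids)
    tot = sum(tpm[t] for t in total_ids)
    return inc / tot if tot > 0 else float("nan")

tpm = {"ENST_A": 30.0, "ENST_B": 10.0, "ENST_C": 5.0}
print(event_psi(tpm, inclusion_ids=["ENST_A"],
                total_ids=["ENST_A", "ENST_B", "ENST_C"]))  # 0.667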
NASA Astrophysics Data System (ADS)
Grenzdörffer, G. J.; Naumann, M.
2016-06-01
UAS have become a very valuable tool for coastal morphology, not only for mapping but also for change detection and a better understanding of processes along and across the shore. This contribution investigates the possibilities of UAS to determine the water depth in clear shallow waters by means of so-called "photo bathymetry". From the results of several test flights it became clear that three factors influence the ability and the accuracy of bathymetric sea floor measurements. First, weather conditions: sunny weather is not always good, because due to the high image resolution the sunlight gets focused even by very small waves, causing moving patterns on shallow grounds with high reflection properties, such as sand. This effect is invisible under overcast weather conditions. Waves may also introduce problems and mismatches. Second, the quality and the accuracy of the georeferencing with SfM algorithms: as multi-image key-point matching will not work over water, the proposed approach will only work for projects close to the coastline with enough control on land. Third, the software used and the intensity of post-processing and filtering: refraction correction and the final interpolation of the point cloud into a DTM are the last steps. If everything is done appropriately, accuracies in the bathymetry in the range of 10-50 cm, depending on the water depth, are possible.
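As a pointer to the refraction-correction step mentioned above, the tiny sketch below applies the textbook small-angle approximation, through a flat air-water interface at near-nadir viewing, photogrammetry underestimates depth by roughly the refractive index of water. This is a first-order illustration, not the paper's exact workflow.

def refraction_corrected_depth(apparent_depth, n_water=1.34):
    """Small-angle refraction correction for photo bathymetry: the apparent
    (SfM-derived) depth is scaled up by the refractive index of water."""
    return apparent_depth * n_water

print(refraction_corrected_depth(0.75))  # ~1.0 m true depth from 0.75 m apparent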
Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L
2013-01-01
In this study we aim to investigate the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the expert swimmer's hand trajectories were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration results and trajectory reconstruction, both move towards quantitative 3D underwater motion analysis.
Realization and performance of cryogenic selection mechanisms
NASA Astrophysics Data System (ADS)
Aitink-Kroes, Gabby; Bettonvil, Felix; Kragt, Jan; Elswijk, Eddy; Tromp, Niels
2014-07-01
Within infrared large-bandwidth instruments, the use of mechanisms for the selection of observation modes, filters, dispersing elements, pinholes or slits is inevitable. The cryogenic operating environment poses several challenges to these mechanisms, such as differential thermal shrinkage, changes in the physical properties of materials, limited use of lubrication, high feature density, and limited space. MATISSE, the mid-infrared interferometric spectrograph and imager for ESO's VLT interferometer (VLTI) at Paranal in Chile, coherently combines the light from 4 telescopes. Within the Cold Optics Bench (COB) of MATISSE, two concepts of selection mechanisms based on the same design principles can be distinguished: linear selection mechanisms (sliders) and rotating selection mechanisms (wheels). Both sliders and wheels are used at a temperature of 38 Kelvin. The selection mechanisms have to provide high accuracy and repeatability. The sliders/wheels have integrated tracks that run on small, accurately located, spring-loaded precision bearings. Special indents are used for selection of the slider/wheel position. For maximum accuracy and repeatability, the guiding/selection system is separated from the actuation, in this case a cryogenic actuator inside the cryostat. The paper discusses the detailed design of the mechanisms and the final realization for the MATISSE COB. Limited lifetime and performance tests determine the accuracy, warm and cold, and the reliability/wear during the life of the instrument. The test results and further improvements to the mechanisms are discussed.
High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data
NASA Technical Reports Server (NTRS)
Lee, Seung-Kuk; Ryu, Joo-Hyung
2017-01-01
This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environments and erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 5-7-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic-GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.
A high performance sensor for triaxial cutting force measurement in turning.
Zhao, You; Zhao, Yulong; Liang, Songbo; Zhou, Guanwu
2015-04-03
This paper presents a high-performance triaxial cutting force sensor with excellent accuracy, favorable natural frequency and acceptable cross-interference for the high-speed turning process. An octagonal ring is selected as the sensitive element of the designed sensor, drawing inspiration from ring theory. A novel structure of two mutually perpendicular octagonal rings is proposed, and three Wheatstone full-bridge circuits are specially arranged in order to obtain the triaxial cutting force components and restrain cross-interference. Firstly, the newly developed sensor is tested in static calibration; test results indicate that the sensor possesses outstanding accuracy in the range of 0.38%-0.83%. Secondly, impact modal tests are conducted to identify the natural frequencies of the sensor in the triaxial directions (i.e., 1147 Hz, 1122 Hz and 2035 Hz), which implies that the devised sensor can be used for cutting force measurement in a high-speed lathe when the spindle speed does not exceed 17,205 rev/min in continuous cutting conditions. Finally, an application of the sensor in the turning process is presented to show its performance for real-time cutting force measurement; the measured cutting forces demonstrate good accordance with the variation of cutting parameters. Thus, the developed sensor possesses excellent properties and great potential for real-time cutting force measurement in turning. PMID:25855035
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low-frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensor, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low-frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unification and high-precision attitude output. Finally, we realize the low-frequency error model construction and optimal estimation of model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper describes the law of low-frequency error variation well. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.
Asynchronous BCI control using high-frequency SSVEP.
Diez, Pablo F; Mut, Vicente A; Avila Perona, Enrique M; Laciar Leber, Eric
2011-07-14
Steady-state visual evoked potential (SSVEP) is a visual cortical response evoked by repetitive stimuli with a light source flickering at frequencies above 4 Hz; these can be classified into three ranges: low (up to 12 Hz), medium (12-30 Hz) and high frequency (>30 Hz). SSVEP-based brain-computer interfaces (BCI) are principally focused on the low and medium ranges of frequencies, whereas there are only a few projects in the high-frequency range, and they only evaluate the performance of different methods to extract SSVEP. This research proposes a high-frequency SSVEP-based asynchronous BCI to control the navigation of a mobile object on the screen through a scenario and to reach its final destination. This could help impaired people navigate a robotic wheelchair. There were three different scenarios with different difficulty levels (easy, medium and difficult). The signal processing method is based on the Fourier transform and three EEG measurement channels. The research obtained classification accuracies ranging from 65% to 100%, with information transfer rates varying from 9.4 to 45 bits/min. Our proposed method allowed all subjects participating in the study to control the mobile object and to reach a final target without prior training.
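A minimal sketch of Fourier-based SSVEP detection over three EEG channels follows, picking the candidate flicker frequency with the most spectral power; this illustrates the general idea rather than the authors' exact pipeline, and the synthetic data and band width are assumptions.

import numpy as np

def detect_ssvep(eeg, fs, stim_freqs, band=0.2):
    """Pick the attended stimulus frequency from multi-channel EEG via FFT power.
    eeg: (n_channels, n_samples); stim_freqs: candidate flicker frequencies (Hz)."""
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    scores = []
    for f in stim_freqs:
        sel = (freqs > f - band) & (freqs < f + band)
        scores.append(psd[:, sel].sum())      # power near f, summed over channels
    return stim_freqs[int(np.argmax(scores))], scores

# Synthetic 3-channel example with a 38 Hz target (high-frequency range)
fs = 256
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 38 * t)
eeg = np.vstack([sig + np.random.randn(t.size) for _ in range(3)])
print(detect_ssvep(eeg, fs, [30.0, 34.0, 38.0, 42.0])[0])  # -> 38.0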
High-speed cell recognition algorithm for ultrafast flow cytometer imaging system.
Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang
2018-04-01
An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
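The two-stage structure described above can be sketched compactly with standard image-processing libraries. The snippet below is an illustration in the spirit of the abstract (thresholded region extraction, then distance transform plus watershed to split clusters, then GMM classification on per-cell features); thresholds, feature choices and parameters are assumptions, not the authors' settings.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from sklearn.mixture import GaussianMixture

def detect_cells(img):
    """Stage 1: extract cell regions; stage 2: split clustered cells."""
    mask = img > threshold_otsu(img)
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=5, labels=mask)  # one marker per cell
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)  # labeled image, one label per cell

def classify_cells(features, n_classes=2):
    """GMM classification on per-cell feature vectors (e.g. area, mean intensity)."""
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(features)
    return gmm.predict(features)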
Prediction of global ionospheric VTEC maps using an adaptive autoregressive model
NASA Astrophysics Data System (ADS)
Wang, Cheng; Xin, Shaoming; Liu, Xiaolu; Shi, Chuang; Fan, Lei
2018-02-01
In this contribution, an adaptive autoregressive model is proposed and developed to predict global ionospheric vertical total electron content (VTEC) maps. Specifically, the spherical harmonic (SH) coefficients are predicted based on the autoregressive model, and the order of the autoregressive model is determined adaptively using the F-test method. To test our method, final CODE and IGS global ionospheric map (GIM) products, as well as altimeter TEC data collected by JASON during periods of low and mid-to-high solar activity, are used to evaluate the precision of our forecast products. Results indicate that the predicted products derived from the proposed model have good consistency with the final GIMs in low solar activity, where the annual mean root-mean-square value is approximately 1.5 TECU. However, the predicted vertical TEC in periods of mid-to-high solar activity is less accurate than during low solar activity, especially in the equatorial ionization anomaly region and the Southern Hemisphere. Additionally, in comparison with the forecast products, the final IGS GIMs have the best consistency with the altimeter TEC data. Future work is needed to investigate the performance of forecast products using the proposed method in an operational environment, rather than using the SH coefficients from the final CODE products, to understand the real-time applicability of the method.
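The adaptive-order idea, grow the AR order only while an F-test says the extra lag helps, is easy to sketch. The snippet below fits AR models by least squares and would be applied per SH-coefficient time series; it is a simplified illustration (the two compared fits differ by one sample, which a production implementation would align), not the authors' code.

import numpy as np
from scipy import stats

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients, residual sum of squares, n."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid), len(y)

def adaptive_order(x, p_max=12, alpha=0.05):
    """Increase the AR order while the added lag is significant under an F-test."""
    p = 1
    _, rss_p, _ = fit_ar(x, p)
    while p < p_max:
        _, rss_q, n = fit_ar(x, p + 1)
        F = (rss_p - rss_q) / (rss_q / (n - p - 1))
        if stats.f.sf(F, 1, n - p - 1) > alpha:   # no significant improvement: stop
            break
        p, rss_p = p + 1, rss_q
    return p

def forecast(x, coef, steps):
    """Iterated one-step AR predictions from the end of the series."""
    hist = list(x)
    for _ in range(steps):
        hist.append(np.dot(coef, hist[-1 : -len(coef) - 1 : -1]))
    return hist[-steps:]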
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
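The model structure described, a linear trend plus periodic terms whose frequencies are found by FFT analysis of the linear-fit residuals, can be illustrated in a few lines. This is a minimal sketch of the idea, not the operational implementation; the number of periodic terms and the basis-fit details are assumptions.

import numpy as np

def predict_clock(t, bias, t_future, n_periodics=2):
    """Short-term clock prediction: linear trend + dominant FFT periodics."""
    # 1) linear fit and its residuals
    a, b = np.polyfit(t, bias, 1)
    resid = bias - (a * t + b)
    # 2) dominant residual periods via FFT (skip the DC bin)
    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    idx = np.argsort(np.abs(spec)[1:])[::-1][:n_periodics] + 1
    # 3) least-squares fit of trend + sinusoids at those frequencies
    cols = [np.ones_like(t), t]
    for i in idx:
        cols += [np.cos(2 * np.pi * freqs[i] * t), np.sin(2 * np.pi * freqs[i] * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), bias, rcond=None)
    # 4) extrapolate with the same basis
    cols_f = [np.ones_like(t_future), t_future]
    for i in idx:
        cols_f += [np.cos(2 * np.pi * freqs[i] * t_future),
                   np.sin(2 * np.pi * freqs[i] * t_future)]
    return np.column_stack(cols_f) @ coef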
ERIC Educational Resources Information Center
Tonkyn, Alan Paul
2012-01-01
This paper reports a case study of the nature and extent of progress in speaking skills made by a group of upper intermediate instructed learners, and also assessors' perceptions of that progress. Initial and final interview data were analysed using several measures of Grammatical and Lexical Complexity, Language Accuracy and Fluency. These…
Laser light-section sensor automating the production of textile-reinforced composites
NASA Astrophysics Data System (ADS)
Schmitt, R.; Niggemann, C.; Mersmann, C.
2009-05-01
Due to their advanced weight-specific mechanical properties, the application of fibre-reinforced plastics (FRP) has been established as a key technology in several engineering areas. Textile-based reinforcement structures (preforms) in particular achieve a high structural integrity due to the multi-dimensional build-up of dry-fibre layers combined with 3D sewing and further textile processes. The final composite parts provide enhanced damage tolerance through excellent crash-energy-absorbing characteristics. For these reasons, structural parts (e.g. frames) will be integrated in next-generation airplanes. However, many manufacturing processes for FRP still involve manual production steps without integrated quality control. The non-automated production implies considerable process dispersion and a high rework rate, and before the final inspection there is no reliable information about the production status. This work sets metrology as the key to automation and thus to an economically feasible production, applying a laser light-section sensor system (LLSS) to measure process quality and feed the results back to close the control loops of the production system. The developed method derives 3D measurements from height profiles acquired by the LLSS. To assure the textile's quality, a full surface scan is conducted, detecting defects or misalignment by comparing the measurement results with a CAD model of the lay-up. The method focuses on signal processing of the height profiles to ensure sub-pixel accuracy, using a novel algorithm based on non-linear least-squares fitting to a set of sigmoid functions. To compare the measured surface points to the CAD model, material characteristics are incorporated into the method. This ensures that only the fibre layer of the textile's surface is included and gaps between the fibres or overlying seams are neglected. Finally, determining the uncertainty in measurement according to the GUM standard proved the sensor system's accuracy. First tests under industrial conditions showed that applying this sensor after the drapery of each textile layer reduces the scrap quota by approximately 30%.
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
High-accuracy calculations of the rotation-vibration spectrum of H3+
NASA Astrophysics Data System (ADS)
Tennyson, Jonathan; Polyansky, Oleg L.; Zobov, Nikolai F.; Alijah, Alexander; Császár, Attila G.
2017-12-01
Calculation of the rotation-vibration spectrum of H3+, as well as of its deuterated isotopologues, with near-spectroscopic accuracy requires the development of sophisticated theoretical models, methods, and codes. The present paper reviews the state-of-the-art in these fields. Computation of rovibrational states on a given potential energy surface (PES) has now become standard for triatomic molecules, at least up to intermediate energies, due to developments achieved by the present authors and others. However, highly accurate Born-Oppenheimer energies leading to highly accurate PESs are not accessible even for this two-electron system using conventional electronic structure procedures (e.g. configuration-interaction or coupled-cluster techniques with extrapolation to the complete (atom-centered Gaussian) basis set limit). For this purpose, highly specialized techniques must be used, e.g. those employing explicitly correlated Gaussians and nonlinear parameter optimizations. It has also become evident that a very dense grid of ab initio points is required to obtain reliable representations of the computed points extending from the minimum to the asymptotic limits. Furthermore, adiabatic, relativistic, and quantum electrodynamic correction terms need to be considered to achieve near-spectroscopic accuracy during calculation of the rotation-vibration spectrum of H3+. The remaining and most intractable problem is then the treatment of the effects of non-adiabatic coupling on the rovibrational energies, which, in the worst cases, may lead to corrections on the order of several cm-1. A promising way of handling this difficulty is the further development of effective, motion- or even coordinate-dependent, masses and mass surfaces. Finally, the unresolved challenge of how to describe and elucidate the experimental pre-dissociation spectra of H3+ and its isotopologues is discussed.
Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C
2012-01-27
In the present work it is shown that the linear elution strength (LES) model, which was adapted from temperature-programming gas chromatography (GC), can also be employed for systematic method development in high-temperature liquid chromatography (HT-HPLC). The ability to predict isothermal retention times based on temperature-gradient as well as isothermal input data was investigated. For a small temperature interval of ΔT = 40°C, both approaches result in very similar predictions. Average relative errors of predicted retention times of 2.7% and 1.9% were observed for simulations based on isothermal and temperature-gradient measurements, respectively. Concurrently, it was investigated whether the accuracy of retention time predictions of segmented temperature gradients can be further improved by temperature-dependent calculation of the parameter S(T) of the LES relationship. It was found that the accuracy of retention time predictions of multi-step temperature gradients can be improved to around 1.5% if S(T) is also treated as temperature dependent. The adjusted experimental design, making use of four temperature-gradient measurements, was applied to the systematic method development of selected food additives by high-temperature liquid chromatography. Method development was performed within a temperature interval from 40°C to 180°C using water as the mobile phase. Two separation methods were established in which the selected food additives were baseline-separated. In addition, a good agreement between simulation and experiment was observed, with an average relative error of predicted retention times for complex segmented temperature gradients of less than 5%. Finally, a schedule of recommendations to assist the practitioner during systematic method development in high-temperature liquid chromatography was established. Copyright © 2011 Elsevier B.V. All rights reserved.
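The prediction step behind such LES-based simulations can be sketched as a numerical integration: under a temperature program T(t), the solute elutes when its accumulated fractional migration through the column reaches one. The snippet below assumes an LES-style linear dependence of log k on temperature with purely illustrative parameter values; it shows the mechanism, not the paper's calibrated model.

import numpy as np

def k_of_T(T, logk_ref=2.0, S=0.02, T_ref=40.0):
    """LES-style retention factor: log10 k decreases linearly with temperature.
    logk_ref, S and T_ref are hypothetical illustration values."""
    return 10 ** (logk_ref - S * (T - T_ref))

def temperature_program(t, T0=40.0, rate=10.0, T_final=180.0):
    """Single linear temperature ramp (deg C per minute), capped at T_final."""
    return min(T0 + rate * t, T_final)

def gradient_retention_time(t0=1.0, dt=0.001, t_max=60.0):
    """Integrate the fractional migration 1/(t0*k(T(t))) until the solute elutes."""
    migrated, t = 0.0, 0.0
    while migrated < 1.0 and t < t_max:
        migrated += dt / (t0 * k_of_T(temperature_program(t)))
        t += dt
    return t + t0   # add the hold-up time for the final, essentially unretained transit

print("predicted retention time: %.2f min" % gradient_retention_time())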
Diagnostic validity of methods for assessment of swallowing sounds: a systematic review.
Taveira, Karinna Veríssimo Meira; Santos, Rosane Sampaio; Leão, Bianca Lopes Cavalcante de; Neto, José Stechman; Pernambuco, Leandro; Silva, Letícia Korb da; De Luca Canto, Graziela; Porporatti, André Luís
2018-02-03
Oropharyngeal dysphagia is a highly prevalent comorbidity in neurological patients and presents a serious health threat, which may lead to outcomes of aspiration pneumonia ranging from hospitalization to death. This assessment proposes a non-invasive, acoustic-based method to differentiate between individuals with and without signs of penetration and aspiration. This systematic review evaluated the diagnostic validity of different methods for the assessment of swallowing sounds, compared with the Videofluoroscopic Swallowing Study (VFSS), to detect oropharyngeal dysphagia. Articles in which the primary objective was to evaluate the accuracy of swallowing sounds were searched in five electronic databases with no language or time limitations. Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v. 5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. The final electronic search revealed 554 records, but only 3 studies met the inclusion criteria. The accuracy values (area under the curve) were 0.94 for the microphone, 0.80 for Doppler, and 0.60 for the stethoscope. Based on limited evidence of low methodological quality (few included studies with small sample sizes), among the index tests found in this systematic review, Doppler showed excellent diagnostic accuracy for the discrimination of swallowing sounds, the microphone showed good accuracy in discriminating the swallowing sounds of dysphagic patients, and the stethoscope performed best as a screening test. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially, and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approximately 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach magnitudes similar to those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses, particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Yang, Zhongyi; Cheng, Jingyi; Pan, Lingling; Hu, Silong; Xu, Junyan; Zhang, Yongping; Wang, Mingwei; Zhang, Jianping; Ye, Dingwei; Zhang, Yingjian
2012-08-01
Because of the urinary excretion of fluorine-18 fluorodeoxyglucose ((18)F-FDG), FDG-PET or PET/CT is thought to be of little value in patients with bladder cancer. The purpose of our study was to investigate the value of (18)F-FDG PET/CT with additional pelvic images in the detection of recurrent bladder cancer. From December 2006 to August 2010, 35 bladder cancer patients (median age 56 years, range 35 to 96) underwent routine (18)F-FDG PET/CT. To better detect bladder lesions, a new method called oral hydration-voiding-refilling was introduced: all patients first received oral hydration, were then required to void frequently, and finally were asked to hold back urine while the additional pelvic images were scanned. Lesions were confirmed by either histopathology or clinical follow-up for at least 6 months. Finally, 12 recurrent cases among the 35 patients were confirmed by cystoscopy. PET/CT correctly detected 11 of them. Among these 11 true-positive patients, 5 (45.5%) were detected only on the additional pelvic images. Lichenoid lesions on the bladder wall were missed, which caused 1 false-negative result. All three false-positive cases were shown to be inflammatory tissue by cystoscopy. Therefore, the sensitivity, specificity and accuracy of PET/CT were 91.7% (11/12), 87.0% (20/23) and 88.6% (31/35), respectively. PET/CT with additional pelvic images can reliably detect recurrent lesions in residual bladder tissue. With its high accuracy and better patient tolerance, the method could potentially find wider application.
High-quality seamless DEM generation blending SRTM-1, ASTER GDEM v2 and ICESat/GLAS observations
NASA Astrophysics Data System (ADS)
Yue, Linwei; Shen, Huanfeng; Zhang, Liangpei; Zheng, Xianwei; Zhang, Fan; Yuan, Qiangqiang
2017-01-01
The absence of a high-quality seamless global digital elevation model (DEM) dataset has been a challenge for the Earth-related research fields. Recently, the 1-arc-second Shuttle Radar Topography Mission (SRTM-1) data have been released globally, covering over 80% of the Earth's land surface (60°N-56°S). However, voids and anomalies still exist in some tiles, which has prevented the SRTM-1 dataset from being directly used without further processing. In this paper, we propose a method to generate a seamless DEM dataset blending SRTM-1, ASTER GDEM v2, and ICESat laser altimetry data. The ASTER GDEM v2 data are used as the elevation source for the SRTM void filling. To get a reliable filling source, ICESat GLAS points are incorporated to enhance the accuracy of the ASTER data within the void regions, using an artificial neural network (ANN) model. After correction, the voids in the SRTM-1 data are filled with the corrected ASTER GDEM values. The triangular irregular network based delta surface fill (DSF) method is then employed to eliminate the vertical bias between them. Finally, an adaptive outlier filter is applied to all the data tiles. The final result is a seamless global DEM dataset. ICESat points collected from 2003 to 2009 were used to validate the effectiveness of the proposed method, and to assess the vertical accuracy of the global DEM products in China. Furthermore, channel networks in the Yangtze River Basin were also extracted for the data assessment.
NASA Astrophysics Data System (ADS)
Gonulalan, Cansu
In recent years, there has been an increasing demand for applications to monitor land-use-related targets using remote sensing images, and advances in remote sensing satellites have given rise to research in this area. Many applications, ranging from urban growth planning to homeland security, have already used algorithms for automated object recognition from remote sensing imagery. However, these still have problems such as low detection accuracy and algorithms that work only for a specific area. In this thesis, we focus on an automatic approach to classify and detect building footprints, road networks and vegetation areas. The automatic interpretation of visual data is a comprehensive task in the computer vision field, and machine learning approaches improve classification capability in an intelligent way. We propose a method with high accuracy in detection and classification, and develop multi-class classification for detecting multiple objects. We present an AdaBoost-based approach along with the supervised learning algorithm; the combination of AdaBoost with an "attentional cascade" is adopted from Viola and Jones [1]. This combination decreases the computation time and opens the way to real-time applications. For the feature extraction step, our contribution is to combine Haar-like features that include corner, rectangle and Gabor responses. Among all features, AdaBoost selects only critical features and generates an extremely efficient cascade-structured classifier. Finally, we present and evaluate our experimental results. The overall system is tested and high detection performance is achieved; the precision rate of the final multi-class classifier is over 98%.
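The AdaBoost-plus-attentional-cascade structure can be sketched with scikit-learn. The class below is a minimal Viola-Jones-style illustration, not the thesis implementation: each stage is an AdaBoost classifier (scikit-learn's default weak learner is a depth-1 decision stump), later stages train only on windows earlier stages did not reject, and a window must pass every stage to be accepted. Feature extraction (Haar-like corner/rectangle/Gabor responses) is assumed done upstream, so X is an (n_windows, n_features) array; stage sizes are hypothetical.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

class AttentionalCascade:
    """Cheap early stages discard most negatives; positives must pass all stages."""

    def __init__(self, stage_sizes=(5, 20, 50)):
        self.stages = [AdaBoostClassifier(n_estimators=n) for n in stage_sizes]

    def fit(self, X, y):
        keep = np.ones(len(y), dtype=bool)
        for stage in self.stages:
            stage.fit(X[keep], y[keep])
            passed = stage.predict(X) == 1
            keep &= passed | (y == 1)   # keep all positives; drop rejected negatives
        return self

    def predict(self, X):
        accepted = np.ones(len(X), dtype=bool)
        for stage in self.stages:
            accepted &= stage.predict(X) == 1
        return accepted.astype(int)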
ArcticDEM Validation and Accuracy Assessment
NASA Astrophysics Data System (ADS)
Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.
2017-12-01
ArcticDEM comprises a growing inventory of Digital Elevation Models (DEMs) covering all land above 60°N. As of August 2017, ArcticDEM had openly released 2-m resolution individual DEMs covering over 51 million km2, which includes areas of repeat coverage for change detection, as well as over 15 million km2 of 5-m resolution seamless mosaics. By the end of the project, over 80 million km2 of 2-m DEMs will be produced, averaging four repeats of the 20 million km2 Arctic landmass. ArcticDEM is produced from sub-meter resolution, stereoscopic imagery using open-source software (SETSM) on the NCSA Blue Waters supercomputer. These DEMs have known biases of several meters due to errors in the sensor models generated from satellite positioning. These systematic errors are removed through three-dimensional registration to high-precision Lidar or other control datasets. ArcticDEM is registered to seasonally subsetted ICESat elevations due to their global coverage and high reported accuracy (~10 cm). The vertical accuracy of ArcticDEM is then obtained from the statistics of the fit to the ICESat point cloud, which averages -0.01 m ± 0.07 m. ICESat, however, has a relatively coarse measurement footprint (~70 m), which may impact the precision of the registration. Further, the ICESat data predate the ArcticDEM imagery by a decade, so temporal changes in the surface may also impact the registration. Finally, biases may exist between the different sensors in the ArcticDEM constellation. Here we assess the accuracy of ArcticDEM and the ICESat registration through comparison to multiple high-resolution airborne lidar datasets acquired within one year of the imagery used in ArcticDEM. We find the ICESat dataset performs as anticipated, introducing no systematic bias during the coregistration process and reducing vertical errors to within the uncertainty of the airborne lidars. Preliminary sensor comparisons show no significant difference post-coregistration, suggesting that there is no sensor bias between platforms and all data are suitable for analysis without further correction. Here we present accuracy assessments, observations and comparisons over diverse terrain in parts of Alaska and Greenland.
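The registration-and-assessment logic can be illustrated with a deliberately reduced, vertical-only sketch (the actual workflow performs a full three-dimensional registration): estimate the systematic bias of DEM samples against control elevations robustly, remove it, and report the residual statistics. All numbers below are hypothetical.

import numpy as np

def register_vertical(dem_heights, control_heights):
    """Remove the systematic vertical bias of a DEM against control points.
    dem_heights: DEM values sampled at control-point locations;
    control_heights: corresponding control elevations (e.g. ICESat or lidar).
    Returns the bias plus post-registration mean/std used to report accuracy."""
    diff = dem_heights - control_heights
    bias = np.median(diff)      # median is robust to surface-change outliers
    resid = diff - bias
    return bias, float(np.mean(resid)), float(np.std(resid))

dem     = np.array([102.1, 98.7, 150.3, 75.9])   # hypothetical DEM samples
control = np.array([ 99.0, 95.5, 147.1, 72.8])
print(register_vertical(dem, control))           # bias ~3.2 m; residuals near zero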
Rzouq, Fadi; Vennalaganti, Prashanth; Pakseresht, Kavous; Kanakadandi, Vijay; Parasa, Sravanthi; Mathur, Sharad C; Alsop, Benjamin R; Hornung, Benjamin; Gupta, Neil; Sharma, Prateek
2016-02-01
Optimal teaching methods for disease recognition using probe-based confocal laser endomicroscopy (pCLE) have not been developed. Our aim was to compare in-class didactic teaching vs. self-directed teaching of Barrett's neoplasia diagnosis using pCLE. This randomized controlled trial was conducted at a tertiary academic center. Study participants with no prior pCLE experience were randomized to in-class didactic (group 1) or self-directed teaching groups (group 2). For group 1, an expert conducted a classroom teaching session using standardized educational material. Participants in group 2 were provided with the same material on an audio PowerPoint. After initial training, all participants graded an initial set of 20 pCLE videos and reviewed correct responses with the expert (group 1) or on audio PowerPoint (group 2). Finally, all participants completed interpretations of a further 40 videos. Eighteen trainees (8 medical students, 10 gastroenterology trainees) participated in the study. Overall diagnostic accuracy for neoplasia prediction by pCLE was 77 % (95 % confidence interval [CI] 74.0 % - 79.2 %); of predictions made with high confidence (53 %), the accuracy was 85 % (95 %CI 81.8 % - 87.8 %). The overall accuracy and interobserver agreement were significantly higher in group 1 than in group 2 for all predictions (80.4 % vs. 73 %; P = 0.005) and for high confidence predictions (90 % vs. 80 %; P < 0.001). Following feedback (after the initial 20 videos), the overall accuracy improved from 73 % to 79 % (P = 0.04), mainly driven by a significant improvement in group 1 (74 % to 84 %; P < 0.01). Accuracy of prediction significantly improved with time in endoscopy training (72 % students, 77 % FY1, 82 % FY2, and 85 % FY3; P = 0.003). For novice trainees, in-class didactic teaching enables significantly better recognition of the pCLE features of Barrett's esophagus than self-directed teaching. The in-class didactic group had a shorter learning curve and were able to achieve 90 % accuracy for their high confidence predictions. © Georg Thieme Verlag KG Stuttgart · New York.
Balouch, F; Jalalian, E; Nikkheslat, M; Ghavamian, R; Toopchi, Sh; Jallalian, F; Jalalian, S
2013-01-01
Statement of Problem: Various impression techniques have different effects on the accuracy of final cast dimensions. Meanwhile, there are some controversies about the best technique. Purpose: This study was performed to compare two implant impression methods (open tray and closed tray) on 15 degree angled implants. Materials and Method: In this experimental study, a steel model, 8 cm in diameter and 3 cm in height, was produced with 3 holes devised inside to stabilize 3 implants. The central implant was straight and the other two implants were 15° angled. The two angled implants had 5 cm distance from each other and 3.5 cm from the central implant. High-strength dental stone (type IV) was used for the main casts. Impression trays were filled with polyether, and then the two impression techniques (open tray and closed tray) were compared. To evaluate the positions of the implants, each cast was analyzed by a CMM device in 3 dimensions (x, y, z). Differences between the measurements obtained from the final casts and the laboratory model were analyzed using the t-test. Results: The obtained results indicated that the closed tray impression technique was significantly different in dimensional accuracy when compared with the open tray method. Dimensional changes were 129 ± 37 μm and 143.5 ± 43.67 μm in closed tray and open tray, while the coefficients of variation in closed tray and open tray were 27.2% and 30.4%, respectively. Conclusion: The closed tray impression technique had less dimensional change in comparison with the open tray method, so this study suggests that the closed tray impression technique is more accurate. PMID:24724130
NASA Astrophysics Data System (ADS)
Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk
2016-07-01
Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that the overall accuracies decline with increasing segmentation scale. All other investigated methods are more prone than RF and SVM to lower accuracy due to broken objects at fine scales. In contrast to some previous studies, the RF classifier yielded the best results and the k-nearest neighbor classifier the worst, in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it is suggested that RF should be considered in most cases for agricultural mapping.
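A hedged sketch of the kind of classifier comparison the study performs, assuming per-object feature vectors (spectral and texture statistics per segment) have already been extracted; the random arrays below merely stand in for real segment features, and all parameter choices are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-in object-level features
y = rng.integers(0, 4, size=500)        # stand-in crop-class labels

for name, clf in [("RF", RandomForestClassifier(n_estimators=200)),
                  ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: overall accuracy = {acc:.3f}")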
Playing Chemical Plant Environmental Protection Games with Historical Monitoring Data.
Zhu, Zhengqiu; Chen, Bin; Reniers, Genserik; Zhang, Laobing; Qiu, Sihang; Qiu, Xiaogang
2017-09-29
The chemical industry is very important for the world economy and this industrial sector represents a substantial income source for developing countries. However, existing regulations on controlling atmospheric pollutants, and the enforcement of these regulations, are often insufficient in such countries. As a result, the deterioration of surrounding ecosystems and a decrease in the quality of the atmospheric environment can be observed. Previous works in this domain fail to generate executable and pragmatic solutions for inspection agencies due to practical challenges. In addressing these challenges, we introduce a so-called Chemical Plant Environment Protection Game (CPEP) to generate reasonable schedules of high-accuracy air quality monitoring stations (i.e., daily management plans) for inspection agencies. First, so-called Stackelberg Security Games (SSGs) in conjunction with source estimation methods are applied to this research. Second, high-accuracy air quality monitoring stations as well as gas sensor modules are modeled in the CPEP game. Third, simplified data analysis of the regular discharging behavior of chemical plants is utilized to construct the CPEP game. Finally, an illustrative case study is used to investigate the effectiveness of the CPEP game, and a realistic case study is conducted to illustrate how the models and algorithms proposed in this paper work in daily practice. Results show that playing a CPEP game can reduce the operational costs of high-accuracy air quality monitoring stations. Moreover, evidence suggests that playing the game leads to more compliance from the chemical plants towards the inspection agencies. Therefore, the CPEP game is able to assist the environmental protection authorities in daily management work and reduce the potential risks of gaseous pollutant dispersion incidents.
Fifer, Matthew S.; Johannes, Matthew S.; Katyal, Kapil D.; Para, Matthew P.; Armiger, Robert; Anderson, William S.; Thakor, Nitish V.; Wester, Brock A.; Crone, Nathan E.
2016-01-01
Objective We used native sensorimotor representations of fingers in a brain-machine interface to achieve immediate online control of individual prosthetic fingers. Approach Using high gamma responses recorded with a high-density ECoG array, we rapidly mapped the functional anatomy of cued finger movements. We used these cortical maps to select ECoG electrodes for a hierarchical linear discriminant analysis classification scheme to predict: 1) if any finger was moving, and, if so, 2) which digit was moving. To account for sensory feedback, we also mapped the spatiotemporal activation elicited by vibrotactile stimulation. Finally, we used this prediction framework to provide immediate online control over individual fingers of the Johns Hopkins University Applied Physics Laboratory (JHU/APL) Modular Prosthetic Limb (MPL). Main Results The balanced classification accuracy for detection of movements during the online control session was 92% (chance: 50%). At the onset of movement, finger classification was 76% (chance: 20%), and 88% (chance: 25%) if the pinky and ring finger movements were coupled. Balanced accuracy of fully flexing the cued finger was 64%, and 77% had we combined pinky and ring commands. Offline decoding yielded a peak finger decoding accuracy of 96.5% (chance: 20%) when using an optimized selection of electrodes. Offline analysis demonstrated significant finger-specific activations throughout sensorimotor cortex. Activations either prior to movement onset or during sensory feedback led to discriminable finger control. Significance Our results demonstrate the ability of ECoG-based BMIs to leverage the native functional anatomy of sensorimotor cortical populations to immediately control individual finger movements in real time. PMID:26863276
NASA Astrophysics Data System (ADS)
Hotson, Guy; McMullen, David P.; Fifer, Matthew S.; Johannes, Matthew S.; Katyal, Kapil D.; Para, Matthew P.; Armiger, Robert; Anderson, William S.; Thakor, Nitish V.; Wester, Brock A.; Crone, Nathan E.
2016-04-01
Objective. We used native sensorimotor representations of fingers in a brain-machine interface (BMI) to achieve immediate online control of individual prosthetic fingers. Approach. Using high gamma responses recorded with a high-density electrocorticography (ECoG) array, we rapidly mapped the functional anatomy of cued finger movements. We used these cortical maps to select ECoG electrodes for a hierarchical linear discriminant analysis classification scheme to predict: (1) if any finger was moving, and, if so, (2) which digit was moving. To account for sensory feedback, we also mapped the spatiotemporal activation elicited by vibrotactile stimulation. Finally, we used this prediction framework to provide immediate online control over individual fingers of the Johns Hopkins University Applied Physics Laboratory modular prosthetic limb. Main results. The balanced classification accuracy for detection of movements during the online control session was 92% (chance: 50%). At the onset of movement, finger classification was 76% (chance: 20%), and 88% (chance: 25%) if the pinky and ring finger movements were coupled. Balanced accuracy of fully flexing the cued finger was 64%, and 77% had we combined pinky and ring commands. Offline decoding yielded a peak finger decoding accuracy of 96.5% (chance: 20%) when using an optimized selection of electrodes. Offline analysis demonstrated significant finger-specific activations throughout sensorimotor cortex. Activations either prior to movement onset or during sensory feedback led to discriminable finger control. Significance. Our results demonstrate the ability of ECoG-based BMIs to leverage the native functional anatomy of sensorimotor cortical populations to immediately control individual finger movements in real time.
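A minimal sketch of the hierarchical scheme both abstracts describe: a first linear discriminant detects whether any finger is moving, and a second, trained only on movement epochs, predicts which digit. The real inputs would be high gamma power per ECoG electrode; the arrays below are simulated stand-ins and the function name is hypothetical.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 32))             # 32 electrode features (stand-in)
finger = rng.integers(0, 5, size=300)      # which digit moved (0-4)
moving = rng.integers(0, 2, size=300)      # 1 = movement epoch, 0 = rest

detector = LinearDiscriminantAnalysis().fit(X, moving)
classifier = LinearDiscriminantAnalysis().fit(X[moving == 1], finger[moving == 1])

def predict(x):
    # Stage 1: movement detection; stage 2: digit classification.
    x = x.reshape(1, -1)
    if detector.predict(x)[0] == 0:
        return None                        # no movement detected
    return int(classifier.predict(x)[0])   # which digit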
NASA Astrophysics Data System (ADS)
Anastasopoulos, Dimitrios; Moretti, Patrizia; Geernaert, Thomas; De Pauw, Ben; Nawrot, Urszula; De Roeck, Guido; Berghmans, Francis; Reynders, Edwin
2017-03-01
The presence of damage in a civil structure alters its stiffness and consequently its modal characteristics. The identification of these changes can provide engineers with useful information about the condition of a structure and constitutes the basic principle of vibration-based structural health monitoring. While eigenfrequencies and mode shapes are the most commonly monitored modal characteristics, their sensitivity to structural damage may be low relative to their sensitivity to environmental influences. Modal strains or curvatures could offer an attractive alternative, but current measurement techniques encounter difficulties in capturing the very small strain (sub-microstrain) levels occurring during ambient or operational excitation with sufficient accuracy. This paper investigates the ability to obtain sub-microstrain accuracy with standard fiber-optic Bragg gratings (FBGs) using a novel optical signal processing algorithm that identifies the wavelength shift with high accuracy and precision. The novel technique is validated in an extensive experimental modal analysis test on a steel I-beam instrumented with FBG sensors at its top and bottom flanges. The raw wavelength FBG data are processed into strain values using both the novel correlation-based processing technique and a conventional peak tracking technique. Subsequently, the strain time series are used to identify the beam's modal characteristics. Finally, the accuracy of both algorithms in identifying the modal characteristics is extensively investigated.
Accuracy of the HST Standard Astrometric Catalogs w.r.t. Gaia
NASA Astrophysics Data System (ADS)
Kozhurina-Platais, V.; Grogin, N.; Sabbi, E.
2018-02-01
The goal of astrometric calibration of the HST ACS/WFC and WFC3/UVIS imaging instruments is to provide a coordinate system free of distortion to the precision level of 0.1 pixel (4-5 mas) or better. This astrometric calibration is based on two HST astrometric standard fields in the vicinity of the globular clusters 47 Tuc and omega Cen, respectively. The derived calibration of the geometric distortion is assumed to be accurate down to 2-3 mas. Is this accuracy in agreement with the true value? Now, with access to globally accurate positions from the first Gaia data release (DR1), we found that there are measurable offsets, rotation, scale and other deviations of distortion parameters in the two HST standard astrometric catalogs. These deviations from a distortion-free and properly aligned coordinate system should be accounted for and corrected, so that the high precision HST positions are free of any systematic errors. We also found that the precision of the HST pixel coordinates is substantially better than the accuracy listed in Gaia DR1. Therefore, in order to finalize the components of distortion in the HST standard catalogs, the next release of Gaia data is needed.
NASA Astrophysics Data System (ADS)
Islam, Nurul Kamariah Md Saiful; Harun, Wan Sharuzi Wan; Ghani, Saiful Anwar Che; Omar, Mohd Asnawi; Ramli, Mohd Hazlen; Ismail, Muhammad Hussain
2017-12-01
Selective Laser Melting (SLM) exemplifies the 21st century's manufacturing infrastructure: powdered raw material is melted by a high-energy focused laser and built up layer by layer until it forms three-dimensional metal parts. The SLM process involves a variety of process parameters which affect the final material properties. 316L stainless steel compacts were manufactured by SLM while manipulating the building orientation and powder layer thickness parameters. The effects of the manipulated parameters on the relative density and dimensional accuracy of the as-built 316L stainless steel compacts were examined and analysed. The relationship between the microstructures and the physical properties of the fabricated 316L stainless steel compacts was also investigated in this study. The results revealed that the 90° building orientation gives higher relative density and dimensional accuracy than the 0° building orientation. Building orientation was found to have a more significant effect on the dimensional accuracy and relative density of SLM compacts than powder layer thickness. Nevertheless, the presence of a large number of pores of various sizes greatly lowers the achieved density.
The outlook for precipitation measurements from space
NASA Technical Reports Server (NTRS)
Atlas, D.; Eckerman, J.; Meneghini, R.; Moore, R. K.
1981-01-01
To provide useful precipitation measurements from space, two requirements must be met: adequate spatial and temporal sampling of the storm and sufficient accuracy in the estimate of precipitation intensity. Although presently no single instrument or method completely satisfies both requirements, the visible/IR, microwave radiometer and radar methods can be used in a complementary manner. Visible/IR instruments provide good temporal sampling and rain area depiction, but recourse must be made to microwave measurements for quantitative rainfall estimates. The inadequacy of microwave radiometer measurements over land suggests, in turn, the use of radar. Several recently developed attenuating-wavelength radar methods are discussed in terms of their accuracy, dynamic range and system implementation. Traditionally, the requirements of high resolution and adequate dynamic range led to fairly costly and complex radar systems. Some simplifications and cost reductions can be made, however, by using K-band wavelengths, which have the advantages of greater sensitivity at low rain rates and higher resolution capabilities. Several recently proposed methods of this kind are reviewed in terms of accuracy and system implementation. Finally, an adaptive-pointing multi-sensor instrument is described that would exploit certain advantages of the IR, radiometric and radar methods.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and hence difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing images and the expansion of their scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. The method uses testing points with different accuracies and reliabilities, sourced from high-accuracy reference data and field measurements. It solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
Fixed-interval matching-to-sample: intermatching time and intermatching error runs
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
Intraoperative consultation of central nervous system lesions. Frozen section, cytology or both?
Sharifabadi, Ali Haidari; Haeri, Hayedeh; Zeinalizadeh, Mehdi; Zargari, Neda; Razavi, Amirnader Emami; Shahbazi, Nargess; Tahvildari, Malahat; Azmoudeh-Ardalan, Farid
2016-03-01
Frozen section is the traditional method of assessing central nervous system (CNS) lesions intraoperatively. Our aim is to determine the diagnostic accuracy of frozen section and/or cytological evaluation of CNS lesions in our center. A total of 157 patients with CNS lesions underwent open surgical biopsy or excision in our center during a period of 2 years (2012-2013). All specimens were studied cytologically; of these specimens, 146 cases were also examined by frozen section. Cytology and frozen section slides were studied separately by two general pathologists who were blind to final diagnoses. The final diagnoses were based on permanent sections and IHC studies. The accuracy rates of frozen section analysis and cytological evaluation were 87% and 86%, respectively. If the two methods were considered together, the accuracy rate improved to about 95%. Cytological evaluation is an acceptable alternative to frozen section analysis and also a great supplement to the diagnosis of CNS lesions. Copyright © 2015 Elsevier GmbH. All rights reserved.
L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.
Hamada, Megumi
2017-10-01
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. This study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than the middle, the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.
The development of visual speech perception in Mandarin Chinese-speaking children.
Chen, Liang; Lei, Jianghua
2017-01-01
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.
Predicting metabolic syndrome using decision tree and support vector machine methods.
Karimi-Alavijeh, Farzaneh; Jalili, Saeed; Sadeghi, Masoumeh
2016-05-01
Metabolic syndrome, which underlies the increased prevalence of cardiovascular disease and Type 2 diabetes, is considered a group of metabolic abnormalities including central obesity, hypertriglyceridemia, glucose intolerance, hypertension, and dyslipidemia. Recently, artificial intelligence based health-care systems have been highly regarded because of their success in diagnosis, prediction, and choice of treatment. This study employs machine learning techniques to predict metabolic syndrome; specifically, it aims to employ decision tree and support vector machine (SVM) methods to predict the 7-year incidence of metabolic syndrome. This is a practical study in which data from 2107 participants of the Isfahan Cohort Study were utilized. Subjects without metabolic syndrome according to the ATPIII criteria were selected. The features used in this data set include: gender, age, weight, body mass index, waist circumference, waist-to-hip ratio, hip circumference, physical activity, smoking, hypertension, antihypertensive medication use, systolic blood pressure (BP), diastolic BP, fasting blood sugar, 2-hour blood glucose, triglycerides (TGs), total cholesterol, low-density lipoprotein, high density lipoprotein-cholesterol, mean corpuscular volume, and mean corpuscular hemoglobin. Metabolic syndrome was diagnosed based on ATPIII criteria, and the decision tree and SVM methods were used to predict it, with sensitivity, specificity and accuracy as the validation criteria. Sensitivity, specificity and accuracy were 0.774 (0.758), 0.74 (0.72) and 0.757 (0.739) for the SVM (decision tree) method, respectively. The results show that the SVM method is more efficient than the decision tree in terms of sensitivity, specificity and accuracy. The decision tree results show that TG is the most important feature in predicting metabolic syndrome. According to this study, in cases where only the final result of the decision is regarded as significant, the SVM method can be used with acceptable accuracy in medical decision-making. This method had not been implemented in previous research.
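A hedged sketch of the validation computation described above (sensitivity, specificity and accuracy from a confusion matrix, for both SVM and decision tree); the data here are synthetic stand-ins for the cohort features, not the study's actual data or tuning.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(2107, 21))                        # 21 clinical features (stand-in)
y = (X[:, 0] + X[:, 5] + rng.normal(size=2107)) > 1    # synthetic 7-year outcome

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("SVM", SVC()), ("DecisionTree", DecisionTreeClassifier())]:
    tn, fp, fn, tp = confusion_matrix(yte, clf.fit(Xtr, ytr).predict(Xte)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    print(f"{name}: sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")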
Song, H; Li, L; Ma, P; Zhang, S; Su, G; Lund, M S; Zhang, Q; Ding, X
2018-06-01
This study investigated the efficiency of adding markers identified by a genome-wide association study (GWAS) to genomic prediction, using a data set of high-density (HD) markers imputed from 54K markers in Chinese Holsteins. Among 3,056 Chinese Holsteins with imputed HD data, 2,401 individuals born before October 1, 2009, were used for GWAS and as a reference population for genomic prediction, and the 220 younger cows were used as a validation population. In total, 1,403, 1,536, and 1,383 significant single nucleotide polymorphisms (SNP; false discovery rate at 0.05) associated with conformation final score, mammary system, and feet and legs were identified, respectively. About 2 to 3% of the genetic variance of the 3 traits was explained by these significant SNP. Only a very small proportion of the significant SNP identified by GWAS was included in the 54K marker panel. Three new marker sets (54K+) were therefore produced by adding the significant SNP obtained by a linear mixed model for each trait to the 54K marker panel. Genomic breeding values were predicted using a Bayesian variable selection (BVS) model. The accuracies of genomic breeding values by BVS based on the 54K+ data were 2.0 to 5.2% higher than those based on the 54K data. The imputed HD markers yielded 1.4% higher accuracy on average (BVS) than the 54K data. Both the 54K+ and HD data generated lower bias of genomic prediction, and the 54K+ data yielded the lowest bias in all situations. Our results show that the imputed HD data were not very useful for improving the accuracy of genomic prediction, whereas adding the significant markers derived from the imputed HD marker panel could improve the accuracy of genomic prediction and decrease its bias. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
"Tech-check-tech": a review of the evidence on its safety and benefits.
Adams, Alex J; Martin, Steven J; Stolpe, Samuel F
2011-10-01
The published evidence on state-authorized programs permitting final verification of medication orders by pharmacy technicians, including the programs' impact on pharmacist work hours and clinical activities, is reviewed. Some form of "tech-check-tech" (TCT)--the checking of a technician's order-filling accuracy by another technician rather than a pharmacist--is authorized for use by pharmacies in at least nine states. The results of 11 studies published since 1978 indicate that technicians' accuracy in performing final dispensing checks is very comparable to pharmacists' accuracy (mean ± S.D., 99.6% ± 0.55% versus 99.3% ± 0.68%, respectively). In 6 of those studies, significant differences in accuracy or error detection rates favoring TCT were reported (p < 0.05), although published TCT studies to date have had important limitations. In states with active or pilot TCT programs, pharmacists surveyed have reported that the practice has yielded time savings (estimates range from 10 hours per month to 1 hour per day), enabling them to spend more time providing clinical services. States permitting TCT programs require technicians to complete special training before assuming TCT duties, which are generally limited to restocking automated dispensing machines and filling unit dose batches of refills in hospitals and other institutional settings. The published evidence demonstrates that pharmacy technicians can perform as accurately as pharmacists, perhaps more accurately, in the final verification of unit dose orders in institutional settings. Current TCT programs have fairly consistent elements, including the limitation of TCT to institutional settings, advanced education and training requirements for pharmacy technicians, and ongoing quality assurance.
NUMERICAL INTEGRAL OF RESISTANCE COEFFICIENTS IN DIFFUSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Q. S., E-mail: zqs@ynao.ac.cn
2017-01-10
The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of scattering angle. There are possible singularities in the case of an attractive potential. This may result in a problem for the numerical integral. In order to avoid the problem, I have used a proper scheme, e.g., splitting into many subintervals where the width of each subinterval is determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated by using Romberg's method, therefore the accuracy is high (i.e., ∼10^-12). The results of collision integrals and their derivatives for −7 ≤ ψ ≤ 5 are listed. By using Hermite polynomial interpolation from those data, the collision integrals can be obtained with an accuracy of 10^-10. For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for collision integrals are available with an accuracy of 10^-11. I have compared the final results of resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
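A minimal sketch of the numerical scheme the abstract names: Romberg's method (Richardson-extrapolated trapezoid rule), applied over subintervals whose widths can track the integrand's variation. The integrand below is a placeholder, not the screened-Coulomb scattering kernel.

import numpy as np

def romberg(f, a, b, max_iter=20, tol=1e-12):
    # Romberg's method: trapezoid rule refined by Richardson extrapolation.
    R = [[0.5 * (b - a) * (f(a) + f(b))]]
    for k in range(1, max_iter):
        h = (b - a) / 2**k
        mids = a + h * np.arange(1, 2**k, 2)        # only the new midpoints
        row = [0.5 * R[-1][0] + h * np.sum(f(mids))]
        for m in range(1, k + 1):                   # extrapolation columns
            row.append(row[m - 1] + (row[m - 1] - R[-1][m - 1]) / (4**m - 1))
        R.append(row)
        if abs(R[-1][-1] - R[-2][-1]) < tol:
            return R[-1][-1]
    return R[-1][-1]

# Split the interval into subintervals, as the abstract recommends for
# integrands whose variation is uneven; here the split is uniform for brevity.
edges = np.linspace(0.0, 1.0, 9)
total = sum(romberg(np.exp, lo, hi) for lo, hi in zip(edges[:-1], edges[1:]))
print(total, np.e - 1.0)   # should agree to roughly 1e-12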
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios; however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
NASA Astrophysics Data System (ADS)
Tane, Z.; Ramirez, C.; Roberts, D. A.; Koltunov, A.; Sweeney, S.
2016-12-01
There is considerable scientific and public interest in the ongoing drought- and bark-beetle-driven conifer mortality in the Central and Southern Sierra Nevada, the scale of which has not been seen previously in California's recorded history. Just before and during this mortality event (2013-2016), Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were acquired seasonally over part of the affected area as part of the HyspIRI Preparatory Mission. In this study, we used 11 AVIRIS flight lines from 8 seasonal flights (from spring 2013 to summer 2015) to detect conifer mortality. In addition to the standard pre-processing completed by NASA's Jet Propulsion Lab, AVIRIS images were co-registered and georeferenced between time steps, and images were resampled to the spatial resolution and signal-to-noise ratio expected from the proposed HyspIRI satellite. We used summer 2015 high-spatial-resolution WorldView-2 and WorldView-3 images from across the study area to collect training data from five scenes, and independent validation data from five additional scenes. A cover class map developed with a machine-learning algorithm separated pixels into green conifer, red-attack conifer, and non-conifer dominant cover, yielding high accuracy (above 85% on the independent validation data) in the final tree mortality map. Discussion will include the effects of temporal information and input dimensionality on classification accuracy, comparison with multi-spectral classification accuracy, the ecological and forest management implications of this work, incorporating 2016 AVIRIS images to detect 2016 mortality, and future work in understanding the spatial patterns underlying the mortality.
Roads Data Conflation Using Update High Resolution Satellite Images
NASA Astrophysics Data System (ADS)
Abdollahi, A.; Riyahi Bakhtiari, H. R.
2017-11-01
Urbanization, industrialization and modernization are growing rapidly in developing countries. New industrial cities, with all the problems brought on by rapid population growth, need infrastructure to support the growth. This has led to the expansion and development of the road network. A great deal of road network data was produced using traditional methods in past years. Over time, a large amount of descriptive information has been assigned to these map data, but their geometric accuracy and precision no longer meet today's needs. In this regard, it is necessary to improve the geometric accuracy of road network data while preserving the descriptive data attributed to them, and to update the existing geodatabases. Due to the size and extent of the country, updating the road network maps using traditional methods is time consuming and costly. Conversely, using remote sensing technology and geographic information systems can reduce costs, save time and increase accuracy and speed. With the increasing availability of high resolution satellite imagery and geospatial datasets, there is an urgent need to combine geographic information from overlapping sources to retain accurate data, minimize redundancy, and reconcile data conflicts. In this research, an innovative method for vector-to-imagery conflation by integrating several image-based and vector-based algorithms is presented. The SVM method was used for image classification, the Level Set method to extract the roads, and morphological operators to extract the different types of road intersections from the imagery. For matching the extracted points and finding the corresponding points, a matching function based on the nearest-neighbour method was applied. Finally, after identifying the matching points, the rubber-sheeting method was used to align the two datasets. Residuals and RMSE were used as accuracy criteria. The results demonstrated excellent performance: the average root-mean-square error decreased from 11.8 to 4.1 m.
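A hedged sketch of the matching step described above: each intersection extracted from imagery is paired with its nearest counterpart in the legacy vector data via a k-d tree, with a distance gate to discard implausible pairs. Units and the gate value are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.spatial import cKDTree

def match_points(extracted_xy, vector_xy, max_dist=25.0):
    # Nearest-neighbour matching with a distance gate (metres).
    tree = cKDTree(np.asarray(vector_xy))
    d, idx = tree.query(np.asarray(extracted_xy), distance_upper_bound=max_dist)
    ok = np.isfinite(d)                     # inf marks unmatched points
    return np.asarray(extracted_xy)[ok], np.asarray(vector_xy)[idx[ok]], d[ok]

def rmse(residuals):
    return float(np.sqrt(np.mean(np.square(residuals))))

xy_img = [(100.0, 200.0), (400.0, 120.0)]   # intersections from imagery
xy_vec = [(103.0, 198.0), (395.0, 125.0), (900.0, 900.0)]  # legacy vector nodes
a, b, d = match_points(xy_img, xy_vec)
print(rmse(d))   # positional residual before rubber-sheeting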
New Generation High-Energy Spectra of the Blazar 3C 279 with XMM-Newton and GLAST
NASA Astrophysics Data System (ADS)
Collmar, Werner
2007-10-01
We propose two 20 ksec XMM-Newton observations of the X-ray bright gamma-ray blazar 3C 279 simultaneous with GLAST/LAT. The main goal is to measure its X-ray properties (spectrum, variability) in order to (1) improve our knowledge of the X-ray emission of the blazar, and (2) supplement and correlate them with simultaneous GLAST/LAT gamma-ray observations (30 MeV-300 GeV). Simultaneous GLAST observations of 3C 279 are guaranteed (assuming proper operation then). The high-energy data will be supplemented by ground-based measurements, finally adding up to multifrequency spectra of unprecedented accuracy that extend up to high-energy gamma-rays. Such high-quality SEDs will provide severe constraints on their modeling and have the potential to discriminate among models.
A low noise photoelectric signal acquisition system applying in nuclear magnetic resonance gyroscope
NASA Astrophysics Data System (ADS)
Lu, Qilin; Zhang, Xian; Zhao, Xinghua; Yang, Dan; Zhou, Binquan; Hu, Zhaohui
2017-10-01
The nuclear magnetic resonance (NMR) gyroscope serves as a new-generation strong support for the development of high-tech weapons; it addresses the core problem that limits the development of long-duration seamless navigation and positioning. In the NMR gyroscope, the output signal at the atomic precession frequency is detected by the probe light, and the final photoelectric signal of the probe light directly decides the quality of the gyro signal. The output signal, however, demands high sensitivity, resolution and measurement accuracy from the photoelectric detection system. In order to detect the measured signal better, this paper proposes a rapid acquisition system for weak photoelectric signals, which has a high SNR and a signal response frequency of up to 100 kHz, so that the weak, high-frequency output signal of the NMR gyroscope can be detected more reliably.
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and its Application in Life Sciences
NASA Astrophysics Data System (ADS)
Xu, Gu-feng; Wang, Hong-mei
2001-08-01
Inductively-coupled plasma mass spectrometry (ICP-MS) has made much progress since its birth in the late 1990s. This paper gives a rather systematic overview of the use of this technique in new devices and technologies related to the plasma source, sample-introduction device and detecting spectrometer. In this overview, emphasis is put on the evaluation of the ICP-MS technique in combination with a series of physical, chemical and biological techniques, such as laser ablation (LA), capillary electrophoresis (CE) and high performance liquid chromatography (HPLC), which offer high accuracy and high sensitivity. Finally, comprehensive and fruitful applications of ICP-MS and its combined techniques in the detection of trace metallic elements and isotopes in complex biological and environmental samples are reviewed.
Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P
2018-01-01
Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
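A minimal simulation in the spirit of the study: the error model linking the two tests is held fixed while only the sample's cholesterol distribution changes, yet categorical accuracy against a diagnostic cut-off moves substantially. All numbers are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(7)

def accuracy(true_chol, cutoff=5.0, noise=0.3):
    # Fixed error model for the rapid test; only the sample changes.
    rapid = true_chol + rng.normal(0.0, noise, size=true_chol.size)
    return np.mean((rapid > cutoff) == (true_chol > cutoff))

narrow = rng.normal(5.0, 0.3, 10_000)   # values clustered near the cut-off
wide = rng.normal(5.0, 2.0, 10_000)     # values spread far from the cut-off
print(f"narrow distribution: accuracy = {accuracy(narrow):.3f}")
print(f"wide distribution:   accuracy = {accuracy(wide):.3f}")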
One-way coupling of an atmospheric and a hydrologic model in Colorado
Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.
2006-01-01
This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera serves as an assistant to a wide angle camera: it can capture a high accuracy image of an object of interest in the view field of the wide angle camera, providing enough information for recognition when the resolution of the traffic sign in the wide angle image is too low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle camera and the telephoto camera. Furthermore, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a type of color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
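The paper's exact colour transformation is not reproduced here; as a hedged stand-in, a normalised-chromaticity transform illustrates the general idea of cancelling a common brightness factor, so that a sign's colour signature survives illumination changes.

import numpy as np

def chromaticity(rgb):
    # rgb: float array (..., 3). Returns per-pixel normalised (r, g);
    # b = 1 - r - g, so two channels carry all the chromatic information.
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9   # avoid division by zero
    return rgb[..., :2] / s

pixel = np.array([200.0, 40.0, 30.0])            # a "red sign" pixel
dim = 0.3 * pixel                                # same surface, darker scene
print(chromaticity(pixel), chromaticity(dim))    # nearly identical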
A new method to obtain ground control points based on SRTM data
NASA Astrophysics Data System (ADS)
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
GCPs are widely used in remote sensing image registration and geometric correction. Normally, DRG and DOM products are the major sources from which GCPs are extracted, but high accuracy DRG and DOM products are usually costly to obtain; some free products exist, yet without any accuracy guarantee. In order to balance cost and accuracy, this paper proposes a method for extracting GCPs from SRTM data. The method consists of artificial assistance, binarization, data resampling and reshaping. Artificial assistance is used to find out which parts of the SRTM data could serve as GCPs, such as islands or sharp coastlines. A binarization algorithm then retains the shape information of the region while excluding everything else. The binary data are resampled to a suitable resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging type to obtain usable GCPs. The proposed method has three advantages. Firstly, it is easy to implement: unlike DRG or DOM data, which are expensive, SRTM data are totally free to access without any restrictions. Secondly, SRTM has a high accuracy of about 90 m promised by its producer, so GCPs derived from it are also of high quality. Finally, given that SRTM covers nearly all the land surface of the earth between latitudes -60° and +60°, GCPs produced by the method can cover most important regions of the world. The method can be used for meteorological satellite images or similar situations with relatively low accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
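A minimal sketch of two of the steps described above: binarisation of an SRTM height tile into a land/water shape mask, followed by block-average resampling to a coarser grid. The sea-level threshold and block size are illustrative assumptions.

import numpy as np

def srtm_to_gcp_mask(tile, sea_level=0.0, block=3):
    # tile: 2-D SRTM heights (m). Returns a coarsened binary shape mask.
    mask = (tile > sea_level).astype(float)          # binarisation: land = 1
    h = (mask.shape[0] // block) * block
    w = (mask.shape[1] // block) * block
    coarse = mask[:h, :w].reshape(h // block, block, w // block, block)
    return coarse.mean(axis=(1, 3)) > 0.5            # block-average resample

tile = np.zeros((9, 9))
tile[2:7, 3:8] = 35.0                                # a toy "island"
print(srtm_to_gcp_mask(tile).astype(int))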
NASA Astrophysics Data System (ADS)
Zakaria, Zarabizan; Ismail, Syuhaida; Yusof, Aminah Md
2013-04-01
Federal road maintenance needs a systematic and effective mechanism to ensure that the roads are in good condition and provide comfort to road users. In implementing effective maintenance, budget is the main limiting factor. Thus, the Public Works Department (PWD) Malaysia used the Highway Development and Management (HDM-4) System to help its management determine the location and length of roads to be repaired, prioritized according to the system's analysis. For that purpose, PWD Malaysia applied a Pavement Management System (PMS) that utilizes HDM-4 as the analysis engine to conduct technical and economic analyses and generate annual work programs for pavement maintenance. As a result, much feedback and comment have been received from the Supervisory and Roads Maintenance Unit (UPPJ) Zones on the accuracy of the system output and the problems that arise in closing the final account. Therefore, the objectives of this paper are to evaluate the current system's accuracy in generating the annual work program for periodic pavement maintenance, to identify factors contributing to the system's inaccuracy in selecting the location and length of roads requiring treatment, and to propose measures for improving the system's accuracy. The factors affecting the closing of the final account caused by results received from the pavement management system are also defined. The scope of this paper is the existing HDM-4 System covering four states, namely Perlis, Selangor, Kelantan and Johor, analysed via the work program output data for the purpose of evaluating the system's accuracy. The methods used include case study, interview, discussion and analysis of the HDM-4 System output data. This paper identifies work histories that are not updated and analyses that do not use current data as factors degrading the system's accuracy. The results show that the accuracy of the HDM-4 system used by PWD Malaysia attains an average of only 65 per cent, below the 80 per cent level set by PWD Malaysia. Hence, this paper reveals the causes of these occurrences in the pavement management system in Malaysian construction projects and investigates the consequences of the late payments and final account problems confronted by contractors in Malaysia, finally proposing strategic actions that contractors could take to secure their payments.
The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo
2017-02-01
The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most methods of on-orbit calibration are based on self-calibration using additional parameters; with these methods, different numbers of additional parameters may lead to different results. Triangulation bundle adjustment is another way to calibrate the geometric parameters of a camera, and it can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model avoids the systematic deformation caused by the rate of attitude changes. Concerning the stereo camera, the influence of the intersection angle should also be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on Line-Matrix CCD (LMCCD) imagery can remove the systematic distortion of the strip model and obtain high location accuracy without using GCPs. In this paper, triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD imagery. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 testing fields. After on-orbit calibration, the 3D geometric accuracy improves from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved by the on-orbit calibration of the geometric parameters.
NASA Astrophysics Data System (ADS)
Li, Bang-Jian; Wang, Quan-Bao; Duan, Deng-Ping; Chen, Ji-An
2018-05-01
Intensity saturation can cause a decorrelation phenomenon and decrease the measurement accuracy in digital image correlation (DIC). In this paper, a grey intensity adjustment strategy is proposed to improve the measurement accuracy of DIC in the presence of intensity saturation. First, the grey intensity adjustment strategy is described in detail; it can recover the truncated grey intensities of the saturated pixels and reduce the decorrelation phenomenon. Simulated speckle patterns are then employed to demonstrate the efficacy of the proposed strategy, indicating that the displacement accuracy can be improved by about 40%. Finally, a real experimental image is used to show the feasibility of the proposed strategy, indicating that the displacement accuracy can be increased by about 10%.
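To see why saturation decorrelates subsets, a minimal sketch using the zero-normalised cross-correlation (ZNCC) criterion, a standard DIC matching measure (not necessarily the one used in this paper): clipping the grey intensities of an otherwise identical subset lowers the correlation.

import numpy as np

def zncc(f, g):
    # Zero-normalised cross-correlation of two equally sized subsets.
    f, g = f - f.mean(), g - g.mean()
    return float((f * g).sum() / np.sqrt((f**2).sum() * (g**2).sum()))

rng = np.random.default_rng(3)
subset = rng.uniform(0, 255, size=(21, 21))   # a synthetic speckle subset
saturated = np.clip(subset, 0, 180)           # truncated grey intensities
print(zncc(subset, subset))                   # 1.0 (perfect match)
print(zncc(subset, saturated))                # < 1.0: decorrelation from clipping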
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitra, Neepa
2016-07-14
This project investigates the accuracy of currently-used functionals in time-dependent density functional theory, which is today routinely used to predict and design materials and computationally model processes in solar energy conversion. The rigorously-based electron-ion dynamics method developed here sheds light on traditional methods and overcomes challenges those methods have. The fundamental research undertaken here is important for building reliable and practical methods for materials discovery. The ultimate goal is to use these tools for the computational design of new materials for solar cell devices of high efficiency.
NASA Technical Reports Server (NTRS)
Mcpherron, R. L.
1977-01-01
Procedures are described for the calibration of a vector magnetometer of high absolute accuracy. It is assumed that the calibration will be performed in the magnetic test facility of Goddard Space Flight Center (GSFC). The first main section of the report describes the test equipment and facility calibrations required. The second presents procedures for calibrating individual sensors. The third discusses the calibration of the sensor assembly. In a final section recommendations are made to GSFC for modification of the test facility required to carry out the calibration procedures.
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.; Galindo-Israel, V.; Mittra, R.
1980-01-01
The planar configuration with a probe scanning a polar geometry is discussed with reference to its usefulness in the determination of a far field from near-field measurements. The accuracy of the method is verified numerically, using the concept of probe compensation as a vector deconvolution. Advantages of the Jacobi-Bessel series over the fast Fourier transforms for the plane-polar geometry are demonstrated. Finally, the far-field pattern of the Viking high gain antenna is constructed from the plane-polar near-field measured data and compared with the previously measured far-field pattern.
Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach
NASA Astrophysics Data System (ADS)
Yu, Bing; Shu, Wenjun
2017-03-01
The nonlinear model for a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation of an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from those data sets. Finally, simulations are carried out to verify the precision and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
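A hedged sketch of a linear-in-parameters NARX one-step-ahead model of the general kind described, regressing the next output on lagged outputs and lagged inputs by least squares; the toy data below are not the JT9D simulation data, and the lag orders are illustrative.

import numpy as np

def narx_fit(u, y, nu=2, ny=2):
    # Fit y[k] ~ f(y[k-1..k-ny], u[k-1..k-nu]) by linear least squares.
    k0 = max(nu, ny)
    rows = [np.concatenate(([1.0], y[k - ny:k][::-1], u[k - nu:k][::-1]))
            for k in range(k0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[k0:], rcond=None)
    return theta

# Toy data: a first-order lag responding to a fuel-flow-like input u.
u = np.sin(np.linspace(0, 20, 400))
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
theta = narx_fit(u, y)
print(theta)   # should recover coefficients near 0.9 and 0.1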
Implementing an Automated Antenna Measurement System
NASA Technical Reports Server (NTRS)
Valerio, Matthew D.; Romanofsky, Robert R.; VanKeuls, Fred W.
2003-01-01
We developed an automated measurement system using a PC running a LabView application, a Velmex BiSlide X-Y positioner, and an HP8510C network analyzer. The system provides high positioning accuracy and requires no user supervision. After the user inputs the necessary parameters into the LabView application, LabView controls the motor positioning and performs the data acquisition. Current parameters and measured data are shown on the PC display in two 3-D graphs and updated after every data point is collected. The final output is a formatted data file for later processing.
Research on motion model for the hypersonic boost-glide aircraft
NASA Astrophysics Data System (ADS)
Xu, Shenda; Wu, Jing; Wang, Xueying
2015-11-01
A motion model for the hypersonic boost-glide (HBG) aircraft is proposed in this paper, and the precision of the model is analyzed through simulation. First, the trajectory of the HBG aircraft is analyzed and divided into two parts, with a motion model built for each part. Second, a constrained model of the boosting stage and a constrained model of J2 perturbation are established, and the observation model is set up. Finally, analysis of the simulation results shows the feasibility and high accuracy of the model and motivates further research.
Solution methods for one-dimensional viscoelastic problems
NASA Technical Reports Server (NTRS)
Stubstad, John M.; Simitses, George J.
1987-01-01
A recently developed differential methodology for the solution of one-dimensional nonlinear viscoelastic problems is presented. Using the example of an eccentrically loaded cantilever beam-column, the results from the differential formulation are compared to results generated using a previously published integral solution technique. It is shown that the results obtained from these distinct methodologies exhibit a surprisingly high degree of correlation with one another. A discussion of the various factors affecting the numerical accuracy and rate of convergence of these two procedures is also included. Finally, the influences of some 'higher order' effects, such as straining along the centroidal axis, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nierman, William C.
At TIGR, human Bacterial Artificial Chromosome (BAC) end sequencing and trimming were performed with an overall sequencing success rate of 65%. CalTech human BAC libraries A, B, C and D as well as Roswell Park Cancer Institute's library RPCI-11 were used. To date, we have generated >300,000 end sequences from >186,000 human BAC clones with an average read length of ~460 bp, for a total of 141 Mb covering ~4.7% of the genome. Over sixty percent of the clones have BAC end sequences (BESs) from both ends, representing over five-fold coverage of the genome by the paired-end clones. The average phred Q20 length is ~400 bp. This high accuracy makes our BESs match the human finished sequences with an average identity of 99%, a match length of 450 bp, and a frequency of one match per 12.8 kb of contig sequence. Our sample tracking has ensured a clone tracking accuracy of >90%, which gives researchers high confidence in (1) retrieving the right clone from the BAC libraries based on the sequence matches; and (2) building a minimum tiling path of sequence-ready clones across the genome and genome assembly scaffolds.
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10^6 copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Triebl, Alexander; Trötzmüller, Martin; Hartler, Jürgen; Stojakovic, Tatjana; Köfeler, Harald C
2018-01-01
An improved approach for selective and sensitive identification and quantitation of lipid molecular species using reversed phase chromatography coupled to high resolution mass spectrometry was developed. The method is applicable to a wide variety of biological matrices using a simple liquid-liquid extraction procedure. Together, this approach combines three selectivity criteria: Reversed phase chromatography separates lipids according to their acyl chain length and degree of unsaturation and is capable of resolving positional isomers of lysophospholipids, as well as structural isomers of diacyl phospholipids and glycerolipids. Orbitrap mass spectrometry delivers the elemental composition of both positive and negative ions with high mass accuracy. Finally, automatically generated tandem mass spectra provide structural insight into numerous glycerolipids, phospholipids, and sphingolipids within a single run. Method validation resulted in a linearity range of more than four orders of magnitude, good values for accuracy and precision at biologically relevant concentration levels, and limits of quantitation of a few femtomoles on column. Hundreds of lipid molecular species were detected and quantified in three different biological matrices, which cover well the wide variety and complexity of various model organisms in lipidomic research. Together with a reliable software package, this method is a prime choice for global lipidomic analysis of even the most complex biological samples. PMID:28415015
Strategy for alignment of electron beam trajectory in LEReC cooling section
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seletskiy, S.; Blaskiewicz, M.; Fedotov, A.
2016-09-23
We considered the steps required to align the electron beam trajectory through the LEReC cooling section. We devised a detailed procedure for the beam-based alignment of the cooling section (CS) solenoids. We showed that it is critical to have individual control of each CS solenoid current. Finally, we modeled the alignment procedure and showed that with a two-BPM fit the solenoid shift can be measured with 40 μm accuracy and the solenoid inclination with 30 μrad accuracy. These accuracies are well within the tolerances of the cooling section solenoid alignment.
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades, Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image-based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. Using class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). To demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm or adaptation of parameters is particularly noteworthy.
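A minimal sketch of the rasterization step, assuming synthetic points in place of ALS data; a DSM keeps the highest return per cell, and the ground filtering needed for a true DEM is omitted:

```python
import numpy as np

def rasterize_dsm(points, cell=1.0):
    """Grid the highest return per cell into a Digital Surface Model (DSM)."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    dsm = np.full((iy.max() + 1, ix.max() + 1), -np.inf)
    np.maximum.at(dsm, (iy, ix), z)        # keep the maximum elevation per cell
    dsm[np.isinf(dsm)] = np.nan            # cells that received no returns
    return dsm

# synthetic stand-in for an ALS point cloud: x, y in metres, z elevations
pts = np.random.default_rng(2).uniform([0, 0, 100], [50, 50, 120], size=(5000, 3))
dsm = rasterize_dsm(pts, cell=1.0)
ndsm = dsm - np.nanmin(dsm)                # crude height above minimum, standing in for DSM - DEM
print(dsm.shape, float(np.nanmax(ndsm)))
```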
NASA Astrophysics Data System (ADS)
Liu, Chao; Yang, Guigeng; Zhang, Yiqun
2015-01-01
The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs that accounts for the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic finite element model is established and solved by the Newton-Raphson method, and the coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying uniform prestress to the membrane design shape and optimizing the voltages, where the optimal voltages are computed by a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared, and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.
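A generic Newton-Raphson driver of the kind used for such residual formulations, illustrated on a toy one-degree-of-freedom residual that balances a linear spring against a parallel-plate electrostatic attraction (a stand-in, not the membrane model; all constants are made up):

```python
import numpy as np

def newton_raphson(residual, q0, tol=1e-10, max_iter=50, eps=1e-7):
    """Solve R(q) = 0 for a coupled-field state vector q with Newton-Raphson,
    using a finite-difference Jacobian for brevity."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = residual(q)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((len(r), len(q)))
        for j in range(len(q)):            # numerical Jacobian, column by column
            dq = np.zeros_like(q)
            dq[j] = eps
            J[:, j] = (residual(q + dq) - r) / eps
        q -= np.linalg.solve(J, r)
    return q

k, V, g = 1000.0, 10.0, 1.0                # stiffness, voltage, initial gap (illustrative)
res = lambda q: np.array([k * q[0] - 0.5 * V**2 / (g - q[0])**2])
print("equilibrium displacement:", newton_raphson(res, [0.0])[0])
```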
Rossini, Paolo M; Buscema, Massimo; Capriotti, Massimiliano; Grossi, Enzo; Rodriguez, Guido; Del Percio, Claudio; Babiloni, Claudio
2008-07-01
It has been shown that a new procedure (implicit function as squashing time, IFAST) based on artificial neural networks (ANNs) is able to compress eyes-closed resting electroencephalographic (EEG) data into spatial invariants of the instant voltage distributions for automatic classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) subjects, with individual-subject classification accuracy higher than 92%. Here we tested the hypothesis that this is also the case for the classification of individual normal elderly (Nold) vs. MCI subjects, an important issue for the screening of large populations at high risk of AD. Eyes-closed resting EEG data (10-20 electrode montage) were recorded in 171 Nold and 115 amnesic MCI subjects. The data inputs for classification by IFAST were the weights of the connections within a nonlinear auto-associative ANN trained to generate the instant voltage distributions of 60-s artifact-free EEG data. The most relevant features were selected and, at the same time, the dataset was split into two halves for the final binary classification (training and testing) performed by a supervised ANN. The classification of the individual Nold and MCI subjects reached 95.87% sensitivity and 91.06% specificity (93.46% accuracy). These results indicate that IFAST can reliably distinguish the eyes-closed resting EEG of individual Nold and MCI subjects. IFAST may be used for large-scale periodic screening of populations at risk of AD and for personalized care.
Del Cura, Jose Luis; Coronado, Gloria; Zabala, Rosa; Korta, Igone; López, Ignacio
2018-01-31
To review the diagnostic accuracy of ultrasound-guided core-needle biopsy (CNB) in the diagnosis of salivary gland tumours (SGT). Retrospective, institutional-review-board-approved analysis of the CNBs of SGT performed at our centre over 8 years. We used an automatic 18-G spring-loaded device. The final diagnosis was based on surgery in the cases that were operated on, and on clinical evolution and biopsy findings in the rest. Four hundred and nine biopsies were performed in 381 patients (ages 2-97 years; mean 55.9). There were two minor complications. Biopsy was diagnostic in 98.3%. There were eight false negatives. The diagnostic values for malignancy were: sensitivity 89.6%, specificity 100%, positive predictive value (PPV) 100% and negative predictive value (NPV) 98%. For the detection of neoplasms, they were: sensitivity 98.7%, specificity 99%, PPV 99.7% and NPV 96.1%. The accuracy of CNB in SGT is very high, with very high sensitivity and an absolutely reliable diagnosis of malignancy. The complication rate is very low. CNB should be considered the technique of choice when an SGT is detected. A result of normal tissue warrants repeating the biopsy. • Ultrasound-guided core biopsy is the technique of choice for salivary gland nodules • Sensitivity and specificity for detecting neoplasms (which should be resected) are around 99% • Diagnosis of malignancy on core biopsy is absolutely reliable • A CNB result of "normal tissue", however, warrants repeating the biopsy • The complication rate is very low.
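For reference, these diagnostic values follow directly from a 2x2 confusion table; the sketch below uses illustrative counts (only the eight false negatives are taken from the abstract, the other cells are made up to roughly reproduce the reported malignancy figures):

```python
def diagnostic_values(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# illustrative counts: 8 false negatives as reported; other cells are assumptions
print(diagnostic_values(tp=69, fp=0, tn=392, fn=8))
```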
Transferring elements of a density matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allahverdyan, Armen E.; Hovhannisyan, Karen V.; Yerevan State University, A. Manoogian Street 1, Yerevan
2010-01-15
We study restrictions imposed by quantum mechanics on the process of matrix-element transfer. This problem is at the core of quantum measurements and state transfer. Given two systems A and B with initial density matrices λ and r, respectively, we consider interactions that lead to transferring certain matrix elements of the unknown λ into those of the final state r̃ of B. We find that this process eliminates the memory on the transferred (or certain other) matrix elements from the final state of A. If one diagonal matrix element is transferred, r̃_{aa} = λ_{aa}, the memory on each nondiagonal element λ_{ab} (a ≠ b) is completely eliminated from the final density operator of A. Consider the following three quantities: Re λ_{ab}, Im λ_{ab}, and λ_{aa} − λ_{bb} (the real and imaginary parts of a nondiagonal element and the corresponding difference between diagonal elements). Transferring one of them, e.g., Re r̃_{ab} = Re λ_{ab}, erases the memory on the two others from the final state of A. Generalization of these setups to a finite-accuracy transfer brings in a trade-off between the accuracy and the amount of preserved memory. This trade-off is expressed via system-independent uncertainty relations that account for local aspects of the accuracy-disturbance trade-off in quantum measurements. Thus, the general aspect of state disturbance in quantum measurements is elimination of memory on nondiagonal elements, rather than diagonalization.
Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.
Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran
2018-06-15
Feature selection, which identifies a set of the most informative features from the original feature space, has been widely used to simplify predictors. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective for data dimension reduction and efficiency increase. RFE produces a ranking of features, as well as candidate subsets with their corresponding accuracies. The subset with highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, and if there is no prior knowledge about this preset number, final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore decision variants applied after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants for automatically selecting the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two very different molecular biology datasets, one from a toxicogenomic study and the other from protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
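A hedged sketch of RF-RFE with scikit-learn on synthetic data; note that RFECV's built-in stopping rule corresponds to the highest-accuracy (HA) variant, which the paper's decision variants and voting strategy would replace:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, n_features=100, n_informative=8,
                           random_state=0)
selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=100, random_state=0),
    step=5,                          # features eliminated per iteration
    cv=StratifiedKFold(5),
    scoring="accuracy",
    min_features_to_select=1,
)
selector.fit(X, y)
print("selected subset size:", selector.n_features_)
print("kept features:", selector.support_.nonzero()[0])
```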
Sensitivity analysis of gene ranking methods in phenotype prediction.
deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan L; Sonis, Stephen T
2016-12-01
It has become clear that noise generated during the assay and analytical processes has the ability to disrupt accurate interpretation of genomic studies. Not only does such noise impact the scientific validity and costs of studies, but when assessed in the context of clinically translatable indications such as phenotype prediction, it can lead to inaccurate conclusions that could ultimately impact patients. We applied a sequence of ranking methods to damp noise associated with microarray outputs, and then tested the utility of the approach in three disease indications using publicly available datasets. This study was performed in three phases. We first theoretically analyzed the effect of noise in phenotype prediction problems, showing that it can be expressed as a modeling error that partially falsifies the pathways. Second, via synthetic modeling, we performed a sensitivity analysis of the main gene ranking methods to different types of noise. Finally, we studied the predictive accuracy of the gene lists provided by these ranking methods in synthetic data and in three different datasets related to cancer, rare diseases and neurodegenerative diseases, to better understand the translational aspects of our findings. In the case of synthetic modeling, we showed that Fisher's Ratio (FR) was the most robust gene ranking method in terms of precision for all the types of noise at different levels. Significance Analysis of Microarrays (SAM) provided slightly lower performance, and the rest of the methods (fold change, entropy and maximum percentile distance) were much less precise and accurate. The predictive accuracy of the smallest set of highly discriminatory probes was similar for all the methods in the case of Gaussian and Log-Gaussian noise. In the case of class assignment noise, the predictive accuracy of SAM and FR is higher. Finally, for real datasets (Chronic Lymphocytic Leukemia, Inclusion Body Myositis and Amyotrophic Lateral Sclerosis) we found that FR and SAM provided the highest predictive accuracies with the smallest number of genes. Biological pathways were found with an expanded list of genes whose discriminatory power has been established via FR. We have shown that noise in expression data and class assignment partially falsifies the sets of discriminatory probes in phenotype prediction problems. FR and SAM better exploit the principle of parsimony and are able to find subsets with a smaller number of highly discriminatory genes. Predictive accuracy and precision are two different metrics for selecting the important genes, since in the presence of noise the most predictive genes do not completely coincide with those that are related to the phenotype. Based on the synthetic results, FR and SAM are recommended to unravel the biological pathways involved in disease development. Copyright © 2016 Elsevier Inc. All rights reserved.
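A minimal NumPy sketch of Fisher's Ratio ranking on synthetic expression data (the data, the number of informative probes, and the small regularization constant are illustrative assumptions, not the study's):

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-gene Fisher's Ratio for a two-class phenotype:
    FR_j = (mu1_j - mu2_j)^2 / (s1_j^2 + s2_j^2)."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0) + 1e-12   # guard against zero variance
    return num / den

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500))                      # 60 samples x 500 probes (synthetic)
y = np.r_[np.zeros(30, dtype=int), np.ones(30, dtype=int)]
X[y == 1, :10] += 1.5                               # 10 truly discriminatory probes
ranking = np.argsort(fisher_ratio(X, y))[::-1]
print("top-ranked probes:", ranking[:10])
```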
NASA Astrophysics Data System (ADS)
Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when the visibility of Galileo and BeiDou satellites is substantially long within an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to position accuracy and convergence time that deviate from satisfactory geodetic thresholds. By contrast, the respective multi-GNSS PPP solutions not only show improvement, but also reach geodetic-level accuracies even at a 30° elevation cut-off. Finally, the GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases, which are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. Analogous improvement is obtained in the respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.
Breast cancer detection via Hu moment invariant and feedforward neural network
NASA Astrophysics Data System (ADS)
Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah
2018-04-01
One in eight women will develop breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross-validation, we tested the out-of-sample accuracy of our method. Finally, we found that our method can improve the accuracy of breast cancer detection and reduce the difficulty of diagnosis.
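A sketch of such a pipeline, assuming OpenCV and scikit-learn, with random placeholder ROIs (real mammogram patches and labels would replace them, so the printed score here is chance level):

```python
import cv2
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier

def hu_features(image):
    """Seven Hu moment invariants, log-scaled so magnitudes are comparable."""
    hu = cv2.HuMoments(cv2.moments(image)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# random placeholder "ROIs"; real mammogram patches and labels would go here
rng = np.random.default_rng(4)
images = rng.uniform(0, 255, size=(100, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, 100)

X = np.array([hu_features(im) for im in images])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, labels, cv=KFold(5, shuffle=True, random_state=0))
print("k-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```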
Note: Suppression of kHz-frequency switching noise in digital micro-mirror devices
NASA Astrophysics Data System (ADS)
Hueck, Klaus; Mazurenko, Anton; Luick, Niclas; Lompe, Thomas; Moritz, Henning
2017-01-01
High resolution digital micro-mirror devices (DMDs) make it possible to produce nearly arbitrary light fields with high accuracy, reproducibility, and low optical aberrations. However, using these devices to trap and manipulate ultracold atomic systems for, e.g., quantum simulation is often complicated by the presence of kHz-frequency switching noise. Here we demonstrate a simple hardware extension that solves this problem and makes it possible to produce truly static light fields. This modification leads to a 47-fold increase in the time that we can hold ultracold 6Li atoms in a dipole potential created with the DMD. Finally, we provide reliable and user-friendly APIs written in Matlab and Python to control the DMD.
Hwang, Chang Yun; Song, Tae Jun; Moon, Sung-Hoon; Lee, Don; Park, Do Hyun; Seo, Dong Wan; Lee, Sung Koo; Kim, Myung-Hwan
2009-01-01
Background/Aims: Although endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) has been introduced and its use has been increasing in Korea, there have not been many reports on its performance. The aim of this study was to assess the utility of EUS-FNA without an on-site cytopathologist in establishing the diagnosis of solid pancreatic and peripancreatic masses at a single institution in Korea. Methods: Medical records of 139 patients who underwent EUS-FNA for a pancreatic or peripancreatic solid mass in 2007 were retrospectively reviewed. By comparing the cytopathologic diagnosis of FNA with the final diagnosis, sensitivity, specificity, and accuracy were determined, and factors influencing accuracy as well as complications were analyzed. Results: One hundred twenty of the 139 cases had a final diagnosis of malignancy. Sensitivity, specificity, and accuracy of EUS-FNA were 82%, 89%, and 83%, respectively, and the positive and negative predictive values were 100% and 46%, respectively. As for factors influencing the accuracy of FNA, lesion size was marginally significant (p = 0.08) by multivariate analysis. Conclusions: EUS-FNA performed without an on-site cytopathologist was found to be accurate and safe, and thus EUS-FNA should be part of the standard management algorithm for pancreatic and peripancreatic masses. PMID:20431733
Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A
2015-08-01
This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with the built-in camera of a smartphone or with a photoplethysmograph. An optimization process for the signal preprocessing stage was also carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for the smartphone and the photoplethysmograph, and examine whether the smartphone error can reasonably be explained by variations in pulse transit time. The results reveal that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error on smartphones is slightly higher than on a photoplethysmograph.
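A hedged sketch of one such fiducial detector, the peak of the first derivative, with cutoffs in the ranges the study found best; a synthetic two-harmonic waveform stands in for real camera PPG data, and the filter order and beat spacing are arbitrary choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess(ppg, fs, hp=0.5, lp=2.7):
    """Band-limit the PPG; the study's best cutoffs were a high-pass between
    0.1 and 0.8 Hz and a low-pass around 2.7 or 3.5 Hz."""
    b, a = butter(2, [hp / (fs / 2), lp / (fs / 2)], btype="band")
    return filtfilt(b, a, ppg)

def pulse_arrivals(ppg, fs):
    """Fiducial point: the peak of the first derivative of each pulse wave."""
    d1 = np.gradient(preprocess(ppg, fs), 1.0 / fs)
    peaks, _ = find_peaks(d1, distance=int(0.5 * fs))   # >= 0.5 s between beats
    return peaks / fs

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)  # synthetic 72-bpm pulse
print(np.diff(pulse_arrivals(ppg, fs))[:5])             # inter-beat intervals, ~0.83 s
```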
NASA Technical Reports Server (NTRS)
Simpson, Robert W.
1993-01-01
This presentation outlines a concept for an adaptive, interactive decision support system to assist controllers at a busy airport in achieving efficient use of multiple runways. The concept is being implemented as a computer code called FASA (Final Approach Spacing for Aircraft), and will be tested and demonstrated in ATCSIM, a high fidelity simulation of terminal area airspace and airport surface operations. Objectives are: (1) to provide automated cues to assist controllers in the sequencing and spacing of landing and takeoff aircraft; (2) to provide the controller with a limited ability to modify the sequence and spacings between aircraft, and to insert takeoffs and missed approach aircraft in the landing flows; (3) to increase spacing accuracy using more complex and precise separation criteria while reducing controller workload; and (4) achieve higher operational takeoff and landing rates on multiple runways in poor visibility.
[Evaluation of accuracy of virtual occlusal definition in Angle class I molar relationship].
Wu, L; Liu, X J; Li, Z L; Wang, X
2018-02-18
To evaluate the accuracy of virtual occlusal definition for non-Angle class I molar relationships and to assess its clinical feasibility. Twenty pairs of models from orthognathic patients were included in this study. The inclusion criteria were: (1) pre-surgical orthodontic treatment finished and (2) stable final occlusion. The exclusion criteria were: (1) distorted teeth, (2) need for segmentation, (3) dentition defects other than orthodontic extractions, and (4) tooth spacing. The tooth-extracted test group included 10 models with two premolars extracted during preoperative orthodontic treatment; their molar relationships were not Angle class I. The non-tooth-extracted test group included another 10 models without extractions, so their molar relationships were Angle class I. Defining the final occlusion in a virtual environment involved two steps: (1) the morphology of the upper and lower dentitions was digitalized with a surface scanner (Smart Optics/Activity 102; Model-Tray GmbH, Hamburg, Germany); (2) the virtual relationships were defined using 3Shape software. The control-standard final occlusion was manually defined using gypsum models and then digitalized with the surface scanner. The final occlusions of the test groups and the control standard were superimposed according to lower dentition morphology, and errors were evaluated as the distances between corresponding reference points in the test-group and control-standard locations. The overall errors for the upper dentition between the test groups and the control-standard location were (0.51±0.18) mm in the non-tooth-extracted group and (0.60±0.36) mm in the tooth-extracted group; the difference between the two groups was significant (P<0.05). Within each group, however, the errors of the individual teeth in a dentition did not differ from one another. The errors in the tooth-extracted group were not significantly different from 1 mm (P>0.05), while the errors in the non-tooth-extracted group were significantly smaller than 1 mm (P<0.05). The error of virtual occlusal definition for non-class I molar relationships is thus higher than that for class I relationships, with an accuracy of about 1 mm; this accuracy is nevertheless feasible for clinical application.
Impact of Simulation Technology on Die and Stamping Business
NASA Astrophysics Data System (ADS)
Stevens, Mark W.
2005-08-01
Over the last ten years, we have seen an explosion in the use of simulation-based techniques to improve the engineering, construction, and operation of GM production tools. The impact has been as profound as the overall switch to CAD/CAM from the old manual design and construction methods. The changeover to N/C machining from duplicating milling machines brought advances in accuracy and speed to our construction activity. It also brought significant reductions in fitting sculptured surfaces. Changing over to CAD design brought similar advances in accuracy, and today's use of solid modeling has enhanced that accuracy gain while finally leading to the reduction in lead time and cost through the development of parametric techniques. Elimination of paper drawings for die design, along with the process of blueprinting and distribution, provided the savings required to install high capacity computer servers, high-speed data transmission lines and integrated networks. These historic changes in the application of CAE technology in manufacturing engineering paved the way for the implementation of simulation to all aspects of our business. The benefits are being realized now, and the future holds even greater promise as the simulation techniques mature and expand. Every new line of dies is verified prior to casting for interference free operation. Sheet metal forming simulation validates the material flow, eliminating the high costs of physical experimentation dependent on trial and error methods of the past. Integrated forming simulation and die structural analysis and optimization has led to a reduction in die size and weight on the order of 30% or more. The latest techniques in factory simulation enable analysis of automated press lines, including all stamping operations with corresponding automation. This leads to manufacturing lines capable of running at higher levels of throughput, with actual results providing the capability of two or more additional strokes per minute. As we spread these simulation techniques to the balance of our business, from blank de-stacking to the racking of parts, we anticipate continued reduction in lead-time and engineering expense while improving quality and start-up execution. The author will provide an overview of technology and business evolution of the math-based process that brought an historical transition and revitalization to the die and stamping industry in the past decade. Finally, the author will give an outlook for future business needs and technology development directions.
A simplified lumped model for the optimization of post-buckled beam architecture wideband generator
NASA Astrophysics Data System (ADS)
Liu, Weiqun; Formosa, Fabien; Badel, Adrien; Hu, Guangdi
2017-11-01
Buckled beam structures are a classical kind of bistable energy harvester, attracting more and more interest because of their capability to scavenge energy over a large frequency band in comparison with linear generators. The usual modeling approach uses the Galerkin mode-discretization method with relatively high complexity, while simplification to a single-mode solution lacks accuracy. Design rests on optimizing the features of the energy potential to finally define the physical and geometrical parameters. Therefore, in this paper, a simple lumped model with an explicit relationship between the potential shape and the parameters is proposed to allow efficient design of bistable-beam-based generators. The accuracy of the approximate model is studied and the effectiveness of its application analyzed. Moreover, an important fact is found: the bending stiffness has little influence on the potential shape once the buckling level is low and the sectional area is determined. This feature extends the applicable range of the model by allowing designs with a high moment of inertia. Numerical investigations demonstrate that the proposed model is a simple and reliable design tool. An optimization example using the proposed model is presented with satisfactory performance.
Automatic breast tissue density estimation scheme in digital mammography images
NASA Astrophysics Data System (ADS)
Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero
2017-03-01
Cases of breast cancer have increased substantially each year. However, radiologists are subject to subjectivity and interpretation failures that may affect the final diagnosis of this examination. High density in breast tissue is an important factor related to these failures. Thus, among many functions, some CADx (Computer-Aided Diagnosis) schemes classify breasts according to the predominant density. To aid in such a procedure, this work describes automated software for classification and statistical reporting of the percentage change in breast tissue density, through analysis of subregions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted. An MLP artificial neural network was then used to categorize the ROIs. Experienced radiologists previously determined the density classification of the ROIs, which served as the reference for evaluating the software. In tests, its average accuracy was 88.7% for ROI classification and 83.25% for classification of whole-breast density into the 4 BI-RADS density classes, on a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low density), the classifier accuracy reached 93.5%, with AUC = 0.95.
Superpixel-based segmentation of glottal area from videolaryngoscopy images
NASA Astrophysics Data System (ADS)
Turkmen, H. Irem; Albayrak, Abdulkadir; Karsligil, M. Elif; Kocak, Ismail
2017-11-01
Segmentation of the glottal area with high accuracy is one of the major challenges in developing systems for computer-aided diagnosis of vocal-fold disorders. We propose a hybrid model combining conventional methods with a superpixel-based segmentation approach. We first employed a superpixel algorithm to reveal the glottal area by eliminating the local pixel variances caused by bleeding, blood vessels, and light reflections from the mucosa. Then, the glottal area was detected by a seeded region-growing algorithm in a fully automatic manner. The experiments were conducted on videolaryngoscopy images obtained both from patients with pathologic vocal folds and from healthy subjects. Finally, the proposed hybrid approach was compared with conventional region-growing and active-contour model-based glottal area segmentation algorithms. The performance of the proposed method was evaluated in terms of segmentation accuracy and elapsed time. The F-measure, true negative rate, and Dice coefficient of the hybrid method were 82%, 93%, and 82%, respectively, which are superior to state-of-the-art glottal-area segmentation methods. The proposed hybrid model achieved high success rates and robustness, making it suitable for a computer-aided diagnosis system usable in clinical routine.
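A rough sketch of the idea, combining SLIC superpixels with a seeded growing pass over superpixel adjacencies; the test image, seed choice, and colour tolerance are placeholders, not the paper's parameters:

```python
import numpy as np
from skimage.data import coffee
from skimage.segmentation import slic

def grow_from_seed(image, labels, seed_label, tol=30.0):
    """Seeded region growing over superpixels: absorb adjacent superpixels
    whose mean colour stays within `tol` of the seed superpixel's mean."""
    n = labels.max() + 1
    means = np.array([image[labels == i].mean(axis=0) for i in range(n)])
    pairs = set()                                   # superpixel adjacency from 4-neighbours
    pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
    neighbours = [set() for _ in range(n)]
    for a, b in pairs:
        if a != b:
            neighbours[a].add(b)
            neighbours[b].add(a)
    region, frontier = {seed_label}, [seed_label]
    while frontier:
        for nb in neighbours[frontier.pop()]:
            if nb not in region and np.linalg.norm(means[nb] - means[seed_label]) < tol:
                region.add(nb)
                frontier.append(nb)
    return np.isin(labels, list(region))

img = coffee()                                      # stand-in for a videolaryngoscopy frame
labels = slic(img, n_segments=300, compactness=10, start_label=0)
seed = labels[img.shape[0] // 2, img.shape[1] // 2] # seed superpixel at the image centre
print("segmented fraction:", grow_from_seed(img, labels, seed).mean())
```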
NASA Technical Reports Server (NTRS)
Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.
1996-01-01
The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a high degree of accuracy. This report presents results of benchmark computations based on an upwind, high-resolution, finite-difference Navier-Stokes solver. The purpose of the computations is to evaluate the ability, accuracy, and performance of the solver in simulating detailed features of viscous flows. Features of interest include flow separation and reattachment, and surface pressure and skin friction distributions; those features are particularly relevant to propulsor analysis. Test cases with a wide range of Reynolds numbers are selected so that the effects of the convective and diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers; flows over a flat plate with and without turbulence effects; and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.
Zhang, Yihui; Webb, Richard Chad; Luo, Hongying; Xue, Yeguang; Kurniawan, Jonas; Cho, Nam Heon; Krishnan, Siddharth; Li, Yuhang; Huang, Yonggang
2016-01-01
Long-term, continuous measurement of core body temperature is of high interest, due to the widespread use of this parameter as a key biomedical signal for clinical judgment and patient management. Traditional approaches rely on devices or instruments in rigid and planar forms, not readily amenable to intimate or conformable integration with soft, curvilinear, time-dynamic, surfaces of the skin. Here, materials and mechanics designs for differential temperature sensors are presented which can attach softly and reversibly onto the skin surface, and also sustain high levels of deformation (e.g., bending, twisting, and stretching). A theoretical approach, together with a modeling algorithm, yields core body temperature from multiple differential measurements from temperature sensors separated by different effective distances from the skin. The sensitivity, accuracy, and response time are analyzed by finite element analyses (FEA) to provide guidelines for relationships between sensor design and performance. Four sets of experiments on multiple devices with different dimensions and under different convection conditions illustrate the key features of the technology and the analysis approach. Finally, results indicate that thermally insulating materials with cellular structures offer advantages in reducing the response time and increasing the accuracy, while improving the mechanics and breathability. PMID:25953120
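A minimal sketch of the underlying dual-heat-flux idea behind such differential sensors, assuming steady one-dimensional conduction; K is a calibration constant and the temperature values are illustrative, not from the study:

```python
def core_temperature(t_near, t_far, k):
    """Dual-heat-flux estimate under steady 1-D conduction: with two sensors
    separated by an insulator, T_core = T_near + K * (T_near - T_far), where
    K is the tissue-to-insulator thermal resistance ratio (calibration constant)."""
    return t_near + k * (t_near - t_far)

# illustrative values only: skin-side sensor 36.2 C, outer sensor 35.4 C, K = 1
print(core_temperature(36.2, 35.4, 1.0))   # -> 37.0 C
```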
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
A new family of high-order compact upwind difference schemes with good spectral resolution
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Yao, Zhaohui; He, Feng; Shen, M. Y.
2007-12-01
This paper presents a new family of high-order compact upwind difference schemes. The unknowns in the proposed schemes are not only the values of the function but also those of its first and higher derivatives. Derivative terms appear only on the upwind side of the stencil. One can calculate all the first derivatives exactly as one solves explicit schemes when the boundary conditions of the problem are non-periodic. When the proposed schemes are applied to periodic problems, only periodic bi-diagonal matrix inversions or periodic block-bi-diagonal matrix inversions are required. Resolution optimization is used to enhance the spectral representation of the first derivative, producing a scheme with the highest spectral accuracy among all known compact schemes. For non-periodic boundary conditions, boundary schemes constructed by virtue of the assistant scheme make the schemes not only stable for any selective length scale at every point in the computational domain but also compliant with the principle of optimal resolution. Also, an improved shock-capturing method is developed. Finally, both the effectiveness of the new hybrid method and the accuracy of the proposed schemes are verified on four benchmark test cases.
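For orientation, the classical fourth-order centred Padé scheme, of which upwind compact schemes like these are generalizations, can be written and verified in a few lines (periodic grid; a dense solve is used for brevity rather than the efficient banded inversion):

```python
import numpy as np

def pade_first_derivative(f, h):
    """Classical 4th-order compact (Pade) scheme on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                    # periodic wrap-around
    rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
err = np.abs(pade_first_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print("max error:", err)                          # on the order of 1e-6, falling as h^4
```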
Asadi, Hamed; Kok, Hong Kuan; Looby, Seamus; Brennan, Paul; O'Hare, Alan; Thornton, John
2016-12-01
To identify factors influencing outcome in brain arteriovenous malformations (BAVM) treated with endovascular embolization. We also assessed the feasibility of using machine learning techniques to prognosticate and predict outcome and compared this to conventional statistical analyses. A retrospective study of patients undergoing endovascular treatment of BAVM during a 22-year period in a national neuroscience center was performed. Clinical presentation, imaging, procedural details, complications, and outcome were recorded. The data was analyzed with artificial intelligence techniques to identify predictors of outcome and assess accuracy in predicting clinical outcome at final follow-up. One hundred ninety-nine patients underwent treatment for BAVM, with a mean follow-up duration of 63 months. The commonest clinical presentation was intracranial hemorrhage (56%). During the follow-up period, there were 51 further hemorrhagic events, comprising spontaneous hemorrhage (n = 27) and procedure-related hemorrhage (n = 24). All spontaneous events occurred in previously embolized BAVMs remote from the procedure. Complications included ischemic stroke in 10%, symptomatic hemorrhage in 9.8%, and a mortality rate of 4.7%. A standard regression analysis model had an accuracy of 43% in predicting final outcome (mortality), with the type of treatment complication identified as the most important predictor. The machine learning model showed a superior accuracy of 97.5% in predicting outcome and identified the presence or absence of nidal fistulae as the most important factor. BAVMs can be treated successfully by endovascular techniques, or combined with surgery and radiosurgery, with an acceptable risk profile. Machine learning techniques can predict final outcome with greater accuracy and may help individualize treatment based on key predicting factors. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
de Argandoña, Eneko Saenz; Mendiguren, Joseba; Otero, Irune; Mugarra, Endika; Otegi, Nagore; Galdos, Lander
2018-05-01
Steel has been used in vehicles since the automotive industry's inception. Different steel grades are continually being developed to satisfy new fuel economy requirements. For example, advanced high strength steel (AHSS) grades are widely used due to their good strength-to-weight ratio. Because each steel grade has a different microstructural composition and hardness, the grades behave differently when subjected to different strain paths. Similarly, their friction behavior under different contact pressures varies considerably. In the present paper, four steel grades, ZSt380, DP600, DP780 and Fortiform 1050, are characterized in depth using uniaxial and cyclic tension-compression tests. The coefficient of friction (COF) is also obtained using strip drawing tests. These results were used to calibrate mixed kinematic-hardening material models as well as pressure-dependent friction models. Finally, the geometrical accuracy of the different material and friction models was evaluated by comparing the numerical predictions with experimental demonstrators obtained using a U-Drawing tester.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, decision-threshold generation, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717
NASA Astrophysics Data System (ADS)
Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran
2018-05-01
High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with differential sensing structure is proposed for long-range linear displacement measurements based on an approach denoted time grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference will result in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group of induction electrodes by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and the stability of the sensor are substantially improved after adopting the differential sensing structure. Finally, a prototype sensor achieves a measurement accuracy of ±200 nm over the full 200 mm measurement range of the sensor.
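A numerical sketch of why the half-pitch-shifted second electrode group suppresses common-mode interference, using an idealized signal model (the carrier frequency, pitch, and interference terms are arbitrary assumptions, not the prototype's values):

```python
import numpy as np

W = 1.0                       # spatial pitch of the electrode pattern (arbitrary units)
x_true = 0.123 * W            # displacement to be recovered
f, fs, T = 1e3, 1e6, 0.01     # excitation frequency, sample rate, record length
t = np.arange(0.0, T, 1.0 / fs)

carrier = 2.0 * np.pi * f * t
phase = 2.0 * np.pi * x_true / W
common = 0.2 * np.sin(2.0 * np.pi * 50.0 * t) + 0.05   # common-mode interference + offset
s1 = np.sin(carrier + phase) + common                   # first induction electrode group
s2 = -np.sin(carrier + phase) + common                  # second group, half a pitch away

diff = 0.5 * (s1 - s2)                                  # common-mode terms cancel
# synchronous (I/Q) demodulation recovers the displacement-encoding phase
I = 2.0 * np.mean(diff * np.sin(carrier))
Q = 2.0 * np.mean(diff * np.cos(carrier))
print("estimated displacement:", np.arctan2(Q, I) / (2.0 * np.pi) * W)
```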
Modeling the compliance of polyurethane nanofiber tubes for artificial common bile duct
NASA Astrophysics Data System (ADS)
Moazeni, Najmeh; Vadood, Morteza; Semnani, Dariush; Hasani, Hossein
2018-02-01
The common bile duct is one of the body's most sensitive organs, and a polyurethane nanofiber tube can be used as a prosthesis for it. Compliance is one of the most important properties of such a prosthesis, which should remain adequately compliant for as long as possible to preserve its behavioral integrity. In the present paper, the prosthesis compliance was measured and modeled using a regression method and an artificial neural network (ANN) based on electrospinning process parameters such as polymer concentration, voltage, tip-to-collector distance and flow rate. Because the ANN model contains several parameters that directly affect prediction accuracy, a genetic algorithm (GA) was used to optimize the ANN parameters. It was observed that the GA-optimized ANN model can predict the compliance with high accuracy (mean absolute percentage error = 8.57%). Moreover, the contribution of the variables to the compliance was investigated through relative importance analysis, and the optimum parameter values for ideal compliance were determined.
Using Affordable Data Capturing Devices for Automatic 3d City Modelling
NASA Astrophysics Data System (ADS)
Alizadehashrafi, B.; Abdul-Rahman, A.
2017-11-01
In this research project, many videos of UTM Kolej 9, Skudai, Johor Bahru (see Figure 1) were taken with an AR.Drone 2.0. Since the AR.Drone 2.0 has a liquid lens, the frames extracted from the in-flight videos showed significant distortions and deformations. Passive remote sensing (RS) applications based on image matching and epipolar lines, such as Agisoft PhotoScan, were tested to create point clouds and meshes along with 3D models and textures. As the result was not acceptable (see Figure 2), the previous Dynamic Pulse Function based on the Ruby programming language was enhanced and utilized to create the 3D models automatically in LoD3. The accuracy of the final 3D model is about 10 to 20 cm. After rectification and parallel projection of the photos based on some tie points and targets, all the parameters were measured and used as input to the system to create the 3D model automatically in LoD3 with very high accuracy.
Outer planet mission guidance and navigation for spinning spacecraft
NASA Technical Reports Server (NTRS)
Paul, C. K.; Russell, R. K.; Ellis, J.
1974-01-01
The orbit determination accuracies, maneuver results, and navigation system specification for spinning Pioneer planetary probe missions are analyzed to aid in determining the feasibility of deploying probes into the atmospheres of the outer planets. Radio-only navigation suffices for a direct Saturn mission and the Jupiter flyby of a Jupiter/Uranus mission. Saturn ephemeris errors (1000 km) plus rigid entry constraints at Uranus result in very high velocity requirements (140 m/sec) on the final legs of the Saturn/Uranus and Jupiter/Uranus missions if only Earth-based tracking is employed. The capabilities of a conceptual V-slit sensor are assessed to supplement radio tracking with star/satellite observations. By processing the optical measurements with a batch filter, entry conditions at Uranus can be controlled to acceptable mission-defined levels (±3°) and the Saturn-Uranus leg velocity requirements can be reduced by a factor of 6 (from 139 to 23 m/sec) if the nominal specified accuracies of the sensor can be realized.
The Voronoi Implicit Interface Method for computing multiphase physics
Saye, Robert I.; Sethian, James A.
2011-11-21
In this paper, we introduce a numerical framework, the Voronoi Implicit Interface Method, for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. Finally, we test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.
Cost Factors in Scaling in SfM Collections and Processing Solutions
NASA Astrophysics Data System (ADS)
Cherry, J. E.
2015-12-01
In this talk I will discuss the economics of scaling Structure from Motion (SfM)-style collections from 1 km² and below to hundreds and thousands of square kilometers. Considerations include the costs of the technical equipment: comparisons of small-, medium-, and large-format camera systems, as well as various GPS-INS systems and their impact on processing accuracy for various Ground Sampling Distances. Tradeoffs between camera formats and flight time are central. Weather conditions and planning high-altitude versus low-altitude flights are another economic factor, particularly in areas of persistently bad weather and in areas where ground logistics (i.e., hotel rooms and pilot incidentals) are expensive. Unique costs associated with UAS collections and experimental payloads will be discussed. Finally, the costs of equipment and labor differ between SfM processing and conventional orthomosaic and LiDAR processing. There are opportunities for 'economies of scale' in SfM collections under certain circumstances, but whether the accuracy specifications are firm/fixed or 'best effort' makes a difference.
Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery
NASA Astrophysics Data System (ADS)
Li, Z.; Cai, G.; Ren, H.
2018-04-01
There are many shadows on high spatial resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the extraction of land cover or land use information, they provide auxiliary information for building extraction, which is hard to achieve at a satisfactory accuracy through image classification alone. This paper focused on building shadow extraction by designing a fully convolutional network and training it on samples collected from GF-2 satellite imagery over the urban region of Changchun city. By means of spatial filtering and the computation of adjacency relationships along the sunlight direction, small patches from vegetation or bridges were eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted by the proposed method was compared with the results from traditional object-oriented supervised classification algorithms. It showed that the deep learning network approach can improve the accuracy to a large extent.
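The small-patch elimination step is essentially connected-component filtering; a generic sketch of that idea (the mask, threshold, and helper name are invented, not from the paper):

```python
import numpy as np
from scipy import ndimage

def remove_small_patches(shadow_mask, min_pixels=50):
    # label connected shadow components and keep only those of sufficient size
    labels, n = ndimage.label(shadow_mask)
    sizes = ndimage.sum(shadow_mask, labels, index=range(1, n + 1))
    big_labels = np.flatnonzero(sizes >= min_pixels) + 1   # label ids start at 1
    return np.isin(labels, big_labels)

mask = np.random.default_rng(0).random((200, 200)) > 0.7   # stand-in shadow mask
print(remove_small_patches(mask).sum(), mask.sum())
```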
Efficient Wide Baseline Structure from Motion
NASA Astrophysics Data System (ADS)
Michelini, Mario; Mayer, Helmut
2016-06-01
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed that formulates image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
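The paper's Hamming embedding is its own construction; as a generic illustration of the underlying idea only (binary codes whose Hamming distances are cheap to compare for similarity ranking), here is a random-hyperplane sketch, with the 128-D descriptor size and 64-bit code length assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
planes = rng.standard_normal((128, 64))      # 128-D descriptors -> 64-bit codes

def embed(descriptors):
    # sign of random projections packs each descriptor into a binary code
    return np.packbits(descriptors @ planes > 0, axis=-1)

def rank_by_similarity(query_code, database_codes):
    # smaller Hamming distance (popcount of XOR) = more similar image
    d = np.unpackbits(query_code ^ database_codes, axis=-1).sum(axis=-1)
    return np.argsort(d)

codes = embed(rng.normal(size=(1000, 128)))     # fake per-image descriptors
print(rank_by_similarity(codes[0], codes)[0])   # 0: the query matches itself
```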
Study on Octahedral Spherical Hohlraum
NASA Astrophysics Data System (ADS)
Lan, Ke; Liu, Jie; Huo, Wenyi; Li, Zhichao; Yang, Dong; Li, Sanwei; Ren, Guoli; Chen, Yaohua; Jiang, Shaoen; He, Xian-Tu; Zhang, Weiyan
2015-11-01
In this talk, we report our recent study on the octahedral spherical hohlraum, which has six laser entrance holes (LEHs). First, our study shows that octahedral hohlraums have robustly high symmetry during the capsule implosion at hohlraum-to-capsule radius ratios larger than 3.7 and have potential superiority in low backscatter without supplementary technology. Second, we study the laser arrangement and constraints of the octahedral hohlraums and give their laser arrangement design for an ignition facility. Third, we propose a novel octahedral hohlraum with LEH shields and cylindrical LEHs, in order to increase the laser coupling efficiency, improve the capsule symmetry, and mitigate the influence of the wall blowoff on laser transport. Fourth, we study the sensitivity of capsule symmetry inside the octahedral hohlraums to laser power balance, pointing accuracy, deviations from the optimal position, and target fabrication accuracy, and compare the results with those of traditional cylinders and rugby hohlraums. Finally, we present our recent experimental studies on the octahedral hohlraums at the SGIII prototype laser facility.
"Frequent frames" in German child-directed speech: a limited cue to grammatical categories.
Stumper, Barbara; Bannard, Colin; Lieven, Elena; Tomasello, Michael
2011-08-01
Mintz (2003) found that in English child-directed speech, frequently occurring frames formed by linking the preceding (A) and succeeding (B) word (A_x_B) could accurately predict the syntactic category of the intervening word (x). This has been successfully extended to French (Chemla, Mintz, Bernal, & Christophe, 2009). In this paper, we show that, as for Dutch (Erkelens, 2009), frequent frames in German do not enable such accurate lexical categorization. This can be explained by the characteristics of German including a less restricted word order compared to English or French and the frequent use of some forms as both determiner and pronoun in colloquial German. Finally, we explore the relationship between the accuracy of frames and their potential utility and find that even some of those frames showing high token-based accuracy are of limited value because they are in fact set phrases with little or no variability in the slot position. Copyright © 2011 Cognitive Science Society, Inc.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. To improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines Fisher vectors with random projection. The method reduces the trajectory features by projecting the high-dimensional trajectory descriptors into a low-dimensional subspace via random projection, after defining and analyzing a Gaussian mixture model. A GMM-FV hybrid model is introduced to encode the trajectory feature vectors and reduce their dimension; random projection lowers the computational complexity by shrinking the Fisher coding vectors. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with several existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
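As a rough, simplified stand-in for the GMM-FV pipeline (not the authors' code; a full Fisher vector also aggregates first- and second-order statistics, which are omitted here), the following shows random projection of invented descriptors followed by GMM soft-assignment encoding into a feature one could feed to a linear SVM:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.random_projection import GaussianRandomProjection

# invented stand-in for improved-dense-trajectory descriptors (426-D is the
# usual IDT length, but the data here are random)
descs = np.random.default_rng(0).normal(size=(5000, 426))

# project the high-dimensional descriptors into a low-dimensional subspace
low = GaussianRandomProjection(n_components=64, random_state=0).fit_transform(descs)

# fit a GMM and encode the whole clip by averaged soft assignments
gmm = GaussianMixture(n_components=16, covariance_type='diag', random_state=0).fit(low)
clip_code = gmm.predict_proba(low).mean(axis=0)   # feed codes like this to LinearSVC
print(clip_code.shape)                            # (16,)
```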
NASA Astrophysics Data System (ADS)
Qu, Hongquan; Yuan, Shijiao; Wang, Yanping; Yang, Dan
2018-04-01
To improve the recognition performance of an optical fiber prewarning system (OFPS), this study proposed a hierarchical recognition algorithm (HRA). Traditional methods raise the recognition rate by employing a single complex algorithm that combines multiple extracted features with complex classifiers, at a considerable cost in recognition speed. HRA instead takes advantage of the continuity of intrusion events, creating a staged recognition flow inspired by the stress reaction, and is thus expected to achieve high recognition accuracy with less time consumption. First, this work analyzed the continuity of intrusion events and then presented the algorithm based on the stress-reaction mechanism. Finally, it verified the time consumption through theoretical analysis and experiments, and the recognition accuracy was obtained through experiments. Experimental results show that the processing speed of HRA is 3.3 times faster than that of a traditional complicated algorithm, with a similar recognition rate of 98%. The study is of great significance to fast intrusion event recognition in OFPS.
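A staged flow of this kind is, in essence, a cheap screening stage with an escalation path; the sketch below is my generic reading of the idea, with toy stand-ins for the OFPS feature extractors and classifiers (the threshold and helpers are assumptions, not from the paper):

```python
import numpy as np

def hierarchical_recognize(signal, cheap_stage, full_stage, threshold=0.9):
    # Stage 1: a cheap feature/classifier screens the continuous event stream.
    probs = cheap_stage(signal)
    if probs.max() >= threshold:
        return int(probs.argmax())          # confident early decision
    # Stage 2: only ambiguous events pay for the multi-feature classifier.
    return full_stage(signal)

# toy stand-ins for the two stages
cheap = lambda s: np.array([0.95, 0.05]) if s.std() < 1.0 else np.array([0.5, 0.5])
full = lambda s: int(s.mean() > 0.0)
print(hierarchical_recognize(np.random.default_rng(1).normal(0.0, 0.5, 256), cheap, full))
```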
Soh, Jae Seung; Lee, Ho-Su; Lee, Seohyun; Bae, Jungho; Lee, Hyo Jeong; Park, Sang Hyoung; Yang, Dong-Hoon; Kim, Kyung-Jo; Ye, Byong Duk; Myung, Seung-Jae; Yang, Suk-Kyun; Kim, Jin-Ho
2015-01-01
Background/Aims: Endoscopic ultrasound-guided fine needle aspiration and/or biopsy (EUS-FNA/B) have been used to diagnose subepithelial tumors (SETs) and extraluminal lesions in the gastrointestinal tract. Our group previously reported the usefulness of EUS-FNA/B for rectal and perirectal lesions. This study reports our expanded experience with EUS-FNA/B for rectal and perirectal lesions in terms of diagnostic accuracy and safety. We also included our new experience with EUS-FNB using the recently introduced ProCore needle. Methods: From April 2009 to March 2014, EUS-FNA/B for rectal and perirectal lesions was performed in 30 consecutive patients. We evaluated EUS-FNA/B performance by comparing histological diagnoses with final results. We also investigated factors affecting diagnostic accuracy. Results: Among 10 patients with SETs, EUS-FNA/B specimens revealed a gastrointestinal stromal tumor in 4 patients and malignant lymphoma in 1 patient. The diagnostic accuracy of EUS-FNA/B was 50% for SETs (5/10). Among 20 patients with non-SET lesions, 8 patients were diagnosed with malignant disease and 7 with benign disease based on both EUS-FNA/B and the final results. The diagnostic accuracy of EUS-FNA/B for non-SET lesions was 75% (15/20). Lesion size was the only factor related to diagnostic accuracy (P=0.027). Two complications, mild fever and asymptomatic pneumoperitoneum, occurred after EUS-FNA/B. Conclusions: The overall diagnostic accuracy of EUS-FNA/B for rectal and perirectal lesions was 67% (20/30). EUS-FNA/B is a clinically useful method for the cytological and histological diagnosis of rectal and perirectal lesions. PMID:25931998
Nonlinear Legendre Spectral Finite Elements for Wind Turbine Blade Dynamics: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Q.; Sprague, M. A.; Jonkman, J.
2014-01-01
This paper presents a numerical implementation and examination of a new wind turbine blade finite element model based on Geometrically Exact Beam Theory (GEBT) and a high-order spectral finite element method. The displacement-based GEBT is presented, which includes the coupling effects that exist in composite structures and geometric nonlinearity. Legendre spectral finite elements (LSFEs) are high-order finite elements with nodes located at the Gauss-Legendre-Lobatto points. LSFEs can be an order of magnitude more efficient than low-order finite elements for a given accuracy level. Interpolation of the three-dimensional rotation, a major technical barrier in large-deformation simulation, is discussed in the context of LSFEs. It is shown, by numerical example, that the high-order LSFEs, where weak forms are evaluated with nodal quadrature, do not suffer from a drawback that exists in low-order finite elements where the tangent-stiffness matrix is calculated at the Gauss points. Finally, the new LSFE code is implemented in the new FAST Modularization Framework for dynamic simulation of highly flexible composite-material wind turbine blades. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples showing validation and LSFE performance will be provided in the final paper.
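For readers unfamiliar with the node set: the Gauss-Legendre-Lobatto points of degree p are the interval endpoints together with the roots of P_p'(x). A small sketch that computes them (an assumed helper using only NumPy, not code from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(p):
    # endpoints plus the roots of P_p'(x): the Gauss-Legendre-Lobatto points
    cp = np.zeros(p + 1)
    cp[p] = 1.0                                  # P_p in the Legendre basis
    interior = legendre.legroots(legendre.legder(cp))
    return np.concatenate(([-1.0], interior, [1.0]))

print(gll_nodes(4))   # [-1, -0.6547, 0, 0.6547, 1] for a quartic element
```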
On the accuracy of ERS-1 orbit predictions
NASA Technical Reports Server (NTRS)
Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.
1993-01-01
Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained, and problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy is compiled for about 250 days since launch; the average accuracy lies in the range of 50-100 ms and has improved considerably.
Performance Evaluation and Analysis for Gravity Matching Aided Navigation.
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-04-05
Simulation tests were accomplished in this paper to evaluate the performance of gravity matching aided navigation (GMAN). Four essential factors were focused in this study to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. Marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that the improvement of DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while the effects of them were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were concluded based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN.
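To make the matching idea concrete, here is a toy 1-D matcher (invented data; real GMAN matches 2-D tracks against a marine gravity anomaly grid): it slides a measured anomaly profile along each database row and reports the best-fitting position.

```python
import numpy as np

def match_position(db, profile):
    # slide the measured anomaly profile along each DB row; the cell with the
    # smallest RMS misfit is the estimated position of the profile's end point
    n, best, best_rms = profile.size, None, np.inf
    for i in range(db.shape[0]):
        for j in range(db.shape[1] - n + 1):
            rms = np.sqrt(np.mean((db[i, j:j + n] - profile) ** 2))
            if rms < best_rms:
                best, best_rms = (i, j + n - 1), rms
    return best, best_rms

rng = np.random.default_rng(2)
db = rng.normal(0.0, 30.0, size=(50, 200))          # gravity anomaly grid, mGal
track = db[17, 60:70] + rng.normal(0.0, 2.0, 10)    # noisy measured profile
print(match_position(db, track))                    # recovers (17, 69)
```

As the abstract notes, the achievable accuracy of such a search improves with DB resolution, gravimetry accuracy, sample count, and the gravity variability of the area; with a flat field all positions fit equally well.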
Cantiello, Francesco; Gangemi, Vincenzo; Cascini, Giuseppe Lucio; Calabria, Ferdinando; Moschini, Marco; Ferro, Matteo; Musi, Gennaro; Butticè, Salvatore; Salonia, Andrea; Briganti, Alberto; Damiano, Rocco
2017-08-01
To assess the diagnostic accuracy of copper-64 prostate-specific membrane antigen (64Cu-PSMA) positron emission tomography/computed tomography (PET/CT) in the primary lymph node (LN) staging of a selected cohort of intermediate- to high-risk prostate cancer (PCa) patients. An observational prospective study was performed in 23 patients with intermediate- to high-risk PCa, who underwent 64Cu-PSMA PET/CT for local and lymph nodal staging before laparoscopic radical prostatectomy with extended pelvic LN dissection. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 64Cu-PSMA PET/CT for LN status were calculated using the final pathological findings as reference. Furthermore, we evaluated the correlation of intraprostatic tumor extent and grading with 64Cu-PSMA intraprostatic distribution. Pathological analysis of LN involvement in the 413 LNs harvested from our study cohort identified a total of 22 LN metastases (5%) in 8 of the 23 PCa patients (35%). Imaging-based LN staging in a per-patient analysis showed that 64Cu-PSMA PET/CT was positive in 7 of the 8 LN-positive patients, with a sensitivity of 87.5%, specificity of 100%, PPV of 100%, and NPV of 93.7%, taking the maximum standardized uptake value (SUVmax) at 4 hours as our reference. The receiver operating characteristic curve was characterized by an area under the curve of 0.938. A significant positive association was observed between SUVmax at 4 hours and Gleason score, index tumor volume, and cumulative tumor volume. In our cohort of intermediate- to high-risk PCa patients, we showed the high diagnostic accuracy of 64Cu-PSMA PET/CT for primary LN staging before radical prostatectomy. Copyright © 2017 Elsevier Inc. All rights reserved.
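The quoted per-patient figures follow from a simple 2×2 table (a worked check, not text from the paper): 8 node-positive patients of whom 7 were PET-positive, and no false positives among the remaining 15.

```python
TP, FN, FP, TN = 7, 1, 0, 15          # per-patient counts implied above
sensitivity = TP / (TP + FN)          # 0.875
specificity = TN / (TN + FP)          # 1.0
ppv = TP / (TP + FP)                  # 1.0
npv = TN / (TN + FN)                  # 0.9375
print(sensitivity, specificity, ppv, npv)
```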
NASA Astrophysics Data System (ADS)
Peng, F.; Cai, X.; Tan, W.
2017-09-01
Within-class spectral variation and between-class spectral confusion in remotely sensed imagery degrade the performance of built-up area detection when using planar texture, shape, and spectral features. Terrain slope and building height are often used to refine the results, but they are extracted from auxiliary data (e.g., LIDAR data, DSMs). Moreover, the auxiliary data must be acquired around the same time as the imagery; otherwise, built-up area detection accuracy is affected. Unlike single remotely sensed images, stereo imagery incorporates both planar and height information. Stereo imagery acquired by many satellites (e.g., Worldview-4, Pleiades-HR, ALOS-PRISM, and ZY-3) can therefore serve as a data source for identifying built-up areas. A new method of identifying built-up areas with high accuracy from stereo imagery is achieved by using a combination of planar and height features. The digital surface model (DSM) and digital orthophoto map (DOM) are first generated from the stereo images. Then, height values of above-ground objects (e.g., buildings) are calculated from the DSM and used to obtain raw built-up areas. Other raw built-up areas are obtained from the DOM using Pantex and Gabor features, respectively. Final high-accuracy built-up area results are achieved from these raw built-up areas using decision-level fusion. Experimental results show that accurate built-up areas can be achieved from stereo imagery. The height information used in the proposed method is derived from the stereo imagery itself, with no need for auxiliary height data (e.g., LIDAR data). The proposed method is suitable for spaceborne and airborne stereo pairs and triplets.
Lidar detection of underwater objects using a neuro-SVM-based architecture.
Mitra, Vikramjit; Wang, Chia-Jiu; Banerjee, Satarupa
2006-05-01
This paper presents a neural network architecture using a support vector machine (SVM) as an inference engine (IE) for the classification of light detection and ranging (Lidar) data. Lidar data give a sequence of laser backscatter intensities obtained from laser shots generated from an airborne platform at various altitudes above the earth's surface. The Lidar data are pre-filtered to remove high-frequency noise. As the Lidar shots are taken from above the earth's surface, they contain some air backscatter information, which is of no importance for detecting underwater objects. The air backscatter information is therefore eliminated from the data, and a segment of the data is subsequently selected to extract features for classification. This segment is then encoded using linear predictive coding (LPC) and polynomial approximation. The coefficients thus generated are used as inputs to the two branches of a parallel neural architecture. The decisions obtained from the two branches are vector multiplied, and the result is fed to an SVM-based IE that presents the final inference. Two parallel neural architectures, using a multilayer perceptron (MLP) and a hybrid radial basis function (HRBF), are considered in this paper. The proposed structure fits the Lidar data classification task well due to the inherent classification efficiency of neural networks and the accurate decision-making capability of SVMs. A Bayesian classifier and a quadratic classifier were considered for the Lidar data classification task, but they failed to offer high prediction accuracy, as did a single-layered artificial neural network (ANN) classifier. The parallel ANN architecture proposed in this paper offers high prediction accuracy (98.9%) and is found to be the most suitable architecture for the proposed task of Lidar data classification.
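LPC itself is standard; a compact autocorrelation-method implementation via the Levinson-Durbin recursion (a generic sketch, not the paper's feature extractor, with an invented test waveform) looks like this:

```python
import numpy as np

def lpc(signal, order):
    # autocorrelation method: solve the Yule-Walker equations by the
    # Levinson-Durbin recursion; returns [1, a1..ap] and the residual power
    r = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a, err

t = np.arange(512)
wave = np.sin(0.3 * t) + 0.01 * np.random.default_rng(0).normal(size=512)
print(lpc(wave, order=4)[0])   # coefficients of the all-pole waveform model
```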
Playing Chemical Plant Environmental Protection Games with Historical Monitoring Data
Reniers, Genserik; Zhang, Laobing; Qiu, Xiaogang
2017-01-01
The chemical industry is very important for the world economy and represents a substantial income source for developing countries. However, existing regulations on controlling atmospheric pollutants, and the enforcement of these regulations, are often insufficient in such countries. As a result, the deterioration of surrounding ecosystems and a decline in atmospheric quality can be observed. Previous works in this domain fail to generate executable and pragmatic solutions for inspection agencies due to practical challenges. In addressing these challenges, we introduce a so-called Chemical Plant Environment Protection Game (CPEP) to generate reasonable schedules for high-accuracy air quality monitoring stations (i.e., daily management plans) for inspection agencies. First, so-called Stackelberg Security Games (SSGs) in conjunction with source estimation methods are applied in this research. Second, high-accuracy air quality monitoring stations as well as gas sensor modules are modeled in the CPEP game. Third, simplified data analysis of the regular discharges of chemical plants is utilized to construct the CPEP game. Finally, an illustrative case study is used to investigate the effectiveness of the CPEP game, and a realistic case study is conducted to illustrate how the models and algorithms proposed in this paper work in daily practice. Results show that playing a CPEP game can reduce the operational costs of high-accuracy air quality monitoring stations. Moreover, evidence suggests that playing the game leads to more compliance from the chemical plants towards the inspection agencies. Therefore, the CPEP game is able to assist the environmental protection authorities in daily management work and reduce the potential risks of gaseous pollutant dispersion incidents. PMID:28961188
Chaux, Alcides
2015-10-01
To evaluate the accuracy of previously published risk group systems for predicting inguinal nodal metastases in patients with penile carcinoma. Two hundred three cases of invasive penile squamous cell carcinoma (SCC) were stratified using the following systems: Solsona et al (J Urol 2001;165:1509), Hungerhuber et al (Urology 2006;68:621), and the system proposed by the European Association of Urology (EAU; Eur Urol 2004;46:1). Receiver operating characteristic (ROC) analysis was carried out to compare accuracy in predicting final nodal status and cancer-related death. Most cases were pT2/pT3 high-grade tumors, with a small percentage of low-grade pT1 carcinomas. The metastatic rates for the Solsona et al, EAU, and Hungerhuber et al systems in the high-risk category were 15 of 73 (21%), 16 of 103 (16%), and 10 of 35 (29%) in patients with clinically negative inguinal lymph nodes, and 52 of 75 (69%), 55 of 93 (59%), and 34 of 47 (72%) in patients with palpable inguinal lymph nodes, respectively. ROC analysis showed low accuracy for all stratification systems, although the Solsona et al and Hungerhuber et al systems performed better than the EAU system. Patients in intermediate-risk categories with clinically palpable inguinal lymph nodes were more likely to have nodal metastasis than patients with clinically negative lymph nodes in the same category. These stratification systems may be useful for patients with low-grade superficial tumors and less accurate for evaluating patients with high-grade locally advanced penile carcinomas. These data may be useful for the therapeutic planning of patients with penile SCC. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ogawa, Kinya; Kobayashi, Hidetoshi; Sugiyama, Fumiko; Horikawa, Keitaro
Thermal activation theory is well known to be useful for explaining the mechanical behaviour of various metals over wide ranges of temperature and strain rate. In this study, a number of trials to obtain the lower yield stress or flow stress at high strain rates from quasi-static data were carried out using the data given in the report titled "The final report of research group on high-speed deformation of steels for automotive use". A relation between the thermal component of stress and strain rate obtained from experiments on α-Fe, together with the temperature-strain rate parameter, was used with thermal activation theory. The predictions were successful and showed that the stress-strain behaviour at high strain rates can be evaluated from quasi-static data with good accuracy.
Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye
2017-01-01
The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct high-dynamic-range images of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted within a 100×100 pixel region; the position with the largest correlation coefficient between the side-lobe image and the cut main-lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves the precision and efficiency of focal-spot reconstruction. PMID:28207758
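The final step, fitting a circle center by least squares, can be done with the linear (Kasa) formulation; a generic sketch with invented data, not the paper's implementation:

```python
import numpy as np

def fit_circle(x, y):
    # Kasa fit: x^2 + y^2 = 2ax + 2by + c is linear in (a, b, c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)    # center (a, b) and radius

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = 3.0 + 5.0 * np.cos(t) + rng.normal(0.0, 0.05, t.size)
y = -2.0 + 5.0 * np.sin(t) + rng.normal(0.0, 0.05, t.size)
print(fit_circle(x, y))    # ~ (3, -2, 5)
```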
Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images
Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle
2008-01-01
The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86/m², low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e., the ground mask was analysed against the canopy mask. A manual classification of the aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first-pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile, and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees to tree species. PMID:27873799
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period, in order to verify if their quality fits the desired features and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality also at the final user level. Image quality is defined by some parameters, such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in a suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
DNS of Flow in a Low-Pressure Turbine Cascade Using a Discontinuous-Galerkin Spectral-Element Method
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo Tibor; Murman, Scott; Madavan, Nateri
2015-01-01
A new computational capability under development for accurate and efficient high-fidelity direct numerical simulation (DNS) and large eddy simulation (LES) of turbomachinery is described. This capability is based on an entropy-stable Discontinuous-Galerkin spectral-element approach that extends to arbitrarily high orders of spatial and temporal accuracy and is implemented in a computationally efficient manner on a modern high-performance computer architecture. A validation study using this method to perform DNS of flow in a low-pressure turbine airfoil cascade is presented. Preliminary results indicate that the method captures the main features of the flow. Discrepancies between the predicted results and the experiments are likely due to the effects of freestream turbulence not being included in the simulation and will be addressed in the final paper.
NASA Astrophysics Data System (ADS)
Krzan, Grzegorz; Stępniak, Katarzyna
2017-09-01
In high-accuracy GNSS positioning, the most common solution is still relative positioning using double-differenced dual-frequency observations. An increasingly popular alternative to relative positioning are undifferenced approaches, which are designed to make full use of modern satellite systems and signals. Positions referenced to the global International Terrestrial Reference Frame (ITRF2008) obtained from Precise Point Positioning (PPP) or Undifferenced (UD) network solutions have to be transformed to a national (regional) reference frame, which introduces additional biases related to the transformation process. In this paper, satellite observations from two test networks using different observation time series were processed. The first test concerns the positioning accuracy obtained from processing one year of dual-frequency GPS observations from 14 EUREF Permanent Network (EPN) stations using NAPEOS 3.3.1 software. The results were transformed into a national reference frame (PL-ETRF2000) and compared to positions from an EPN cumulative solution, which was adopted as the truth. Daily observations were processed using PPP and UD multi-station solutions to determine the final accuracy resulting from satellite positioning, the transformation to national coordinate systems, and Eurasian intraplate velocities. The second numerical test involved similar post-processing strategies carried out using different observation time series (30 min, 1 hour, 2 hours, daily) and different classes of GNSS receivers. The centimeter-level accuracy of the results presented in the national coordinate system satisfies the requirements of many surveying and engineering applications.
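Frame transformations of this kind are commonly expressed as a 7-parameter (Helmert) similarity transformation; below is a small-angle sketch (sign conventions differ between standards, and all parameter values here are arbitrary placeholders, not the ITRF2008-to-PL-ETRF2000 set):

```python
import numpy as np

def helmert7(xyz, t, rx, ry, rz, s_ppm):
    # small-angle 7-parameter similarity transform: X' = X + T + s*X + R*X,
    # with rotations in radians, scale in parts per million, t in metres
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    return xyz + t + (s_ppm * 1e-6) * xyz + xyz @ R.T

p = np.array([[3655333.0, 1403901.0, 5018038.0]])       # an example ECEF point, m
print(helmert7(p, t=np.array([0.05, 0.04, -0.03]),
               rx=2.4e-8, ry=0.0, rz=1.1e-8, s_ppm=0.01))
```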
Jeon, Hyo Keun; Ryu, Ho Yoel; Cho, Mee Yon; Kim, Hyun-Soo; Kim, Jae Woo; Park, Hong Jun; Kim, Moon Young; Baik, Soon Koo; Kwon, Sang Ok; Park, Su Yeon; Won, Sung Ho
2014-10-01
Larger biopsy specimens or an increased number of biopsies may improve the diagnostic accuracy of gastric epithelial neoplasia (GEN). The aims of this study were to compare the diagnostic accuracies of conventional and jumbo forceps biopsy of GEN before endoscopic submucosal dissection (ESD) and to confirm whether increasing the number of biopsies is useful for the diagnosis of GEN. One hundred and sixty GENs from 148 patients were randomized into two groups; finally, 67 GENs in 61 patients and 65 GENs in 63 patients were allocated to the conventional group (CG) and jumbo group (JG), respectively. Four endoscopic forceps biopsy (EFB) specimens were obtained from each lesion with conventional (6.8 mm) forceps or jumbo (8 mm) forceps, and the histological concordance rate between the 4 EFB specimens and the ESD specimens was investigated in the two groups. The concordance rate between EFB and ESD specimens was not significantly different between the two groups [83.1% (54/65) in JG vs. 79.1% (53/67) in CG]. On multivariate analyses, two or four EFBs significantly increased the cumulative concordance rate [coefficients; twice: 5.1 (P = 0.01), four times: 5.9 (P = 0.02)], but the concordance rate was decreased in high-grade dysplasia (coefficient -40.32, P = 0.006). Before ESD, the diagnostic accuracy of GENs was significantly increased not by the use of jumbo forceps but by increasing the number of biopsies.
Automated dental implantation using image-guided robotics: registration results.
Sun, Xiaoyan; McKenzie, Frederic D; Bawab, Sebastian; Li, Jiang; Yoon, Yongki; Huang, Jen-K
2011-09-01
One of the most important factors affecting the outcome of dental implantation is the accurate insertion of the implant into the patient's jaw bone, which requires a high degree of anatomical accuracy. Given the accuracy and stability of robots, image-guided robotics is expected to provide more reliable and successful outcomes for dental implantation. Here, we propose the use of a robot for drilling the implant site in preparation for the insertion of the implant. An image-guided robotic system for automated dental implantation is described in this paper. Patient-specific 3D models are reconstructed from preoperative Cone-beam CT images, and implantation planning is performed with these virtual models. A two-step registration procedure is applied to transform the preoperative plan of the implant insertion into intra-operative operations of the robot with the help of a Coordinate Measurement Machine (CMM). Experiments are carried out with a phantom that is generated from the patient-specific 3D model. Fiducial Registration Error (FRE) and Target Registration Error (TRE) values are calculated to evaluate the accuracy of the registration procedure. FRE values are less than 0.30 mm. Final TRE values after the two-step registration are 1.42 ± 0.70 mm (N = 5). The registration results of an automated dental implantation system using image-guided robotics are reported in this paper. Phantom experiments show that the use of a robot in dental implantation is feasible and that the system accuracy is comparable to other similar systems for dental implantation.
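Point-based registration of this kind typically solves the least-squares rigid transform between fiducial sets and reports the residual as FRE; a generic sketch with synthetic fiducials (not the paper's code):

```python
import numpy as np

def rigid_register(src, dst):
    # Kabsch/Umeyama least-squares rigid transform mapping src -> dst
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def fre(src, dst, R, t):
    # root-mean-square fiducial registration error after alignment
    res = dst - (src @ R.T + t)
    return np.sqrt((res ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 3))
th = np.pi / 6
Rt = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rt.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 1e-3, (5, 3))
R, t = rigid_register(src, dst)
print(fre(src, dst, R, t))   # on the order of the 1e-3 jitter
```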
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G̲ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G̲, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
An adaptive deep Q-learning strategy for handwritten digit recognition.
Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min
2018-02-22
Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, the recognition accuracy and running time still need to be improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning with the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of the original images using an adaptive deep auto-encoder (ADAE), and the extracted features are taken as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
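For readers unfamiliar with the Q-function being maximized, here is the tabular flavour of the update (a toy only; in the paper the "state" is the ADAE feature vector and Q is realized by the deep network):

```python
import numpy as np

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    target = reward + gamma * Q[s_next].max()      # bootstrapped return
    Q[s, a] += alpha * (target - Q[s, a])

Q = np.zeros((10, 10))                 # 10 toy states x 10 digit labels
q_update(Q, s=3, a=7, reward=1.0, s_next=4)
print(Q[3, 7])                         # 0.1 after one rewarded update
```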
Deep learning as a tool to distinguish between high orbital angular momentum optical modes
NASA Astrophysics Data System (ADS)
Knutson, E. M.; Lohani, Sanjaya; Danaci, Onur; Huver, Sean D.; Glasser, Ryan T.
2016-09-01
The generation of light containing large degrees of orbital angular momentum (OAM) has recently been demonstrated in both the classical and quantum regimes. Since there is no fundamental limit to how many quanta of OAM a single photon can carry, optical states with an arbitrarily high difference in this quantum number may, in principle, be entangled. This opens the door to investigations into high-dimensional entanglement shared between states in superpositions of nonzero OAM. Additionally, making use of non-zero OAM states can allow for a dramatic increase in the amount of information carried by a single photon, thus increasing the information capacity of a communication channel. In practice, however, it is difficult to differentiate between states with high OAM numbers with high precision. Here we investigate the ability of deep neural networks to differentiate between states that contain large values of OAM. We show that such networks may be used to differentiate between nearby OAM states that contain realistic amounts of noise, with OAM values of up to 100. Additionally, we examine how the classification accuracy scales with the signal-to-noise ratio of images that are used to train the network, as well as those being tested. Finally, we demonstrate the simultaneous classification of < 100 OAM states with greater than 70% accuracy. We intend to verify our system with experimentally-produced classical OAM states, as well as investigate possibilities that would allow this technique to work in the few-photon quantum regime.
Fast and precise dense grid size measurement method based on coaxial dual optical imaging system
NASA Astrophysics Data System (ADS)
Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei
2015-10-01
Test sieves with dense grid structures are widely used in many fields, and accurate grid size calibration is critical for the success of grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and a shortage of sampled grids, which can lead to quality-judgment risk. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. A scaling ratio between the low- and high-magnification probes can then be obtained from corresponding grids in the captured images. With this, all grid dimensions in the low-magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method can measure test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within only 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In short, the presented method can calibrate the grid size of a test sieve automatically with high efficiency and accuracy, so surface evaluation based on statistical methods can be effectively implemented and quality judgment becomes more reasonable.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors limiting the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed in each iteration could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring process and the simulated process, while experiments indicate that material removal accuracy decreases with long machining times, so removing a small amount of material in each iteration is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and limit the improvement of material removal accuracy. On this account, a measurement-free iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100-mm Zerodur flat is performed, which shows that, in a similar figuring time, three measurement-free iterations improve the material removal accuracy by 62.5% and the surface error convergence rate by 17.6%, respectively, compared with a single iterative process.
Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers
NASA Technical Reports Server (NTRS)
Schiff, Conrad
2006-01-01
This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. Although the process model ("prop-burn-prop") that was studied is very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
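The covariance bookkeeping behind the three methods reduces to whether a process-noise term Q for the burn is injected between the two coast arcs. A 1-D toy (invented state transition matrices and numbers, not the paper's dynamics) makes the comparison concrete:

```python
import numpy as np

Phi1 = np.array([[1.0, 60.0], [0.0, 1.0]])   # 60 s coast, toy position/velocity
Phi2 = Phi1.copy()                           # second coast arc
P0 = np.diag([100.0, 0.01])                  # initial covariance (m^2, m^2/s^2)
Q = np.diag([0.0, 0.5 ** 2])                 # burn execution noise, 0.5 m/s 1-sigma

P_method1 = Phi2 @ (Phi1 @ P0 @ Phi1.T) @ Phi2.T        # no process noise
P_methods23 = Phi2 @ (Phi1 @ P0 @ Phi1.T + Q) @ Phi2.T  # Q injected at the burn
print(np.diag(P_methods23) - np.diag(P_method1))        # the burn's inflation
```

Methods 2 and 3 share this covariance arithmetic and differ only in whether the nominal maneuver is modeled in the state, which is why method 2 can match the covariance while missing the final orbital state.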
NASA Astrophysics Data System (ADS)
Boscheri, Walter; Dumbser, Michael
2014-10-01
In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry taken already into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with stiff relaxation source terms.
Development of machine learning models for diagnosis of glaucoma.
Kim, Seong Jae; Cho, Kyong Jin; Oh, Sejong
2017-01-01
The study aimed to develop machine learning models that have strong prediction power and interpretability for the diagnosis of glaucoma based on retinal nerve fiber layer (RNFL) thickness and visual field (VF). We collected various candidate features from examinations of RNFL thickness and VF, developed synthesized features from the original features, and then selected the features best suited for classification (diagnosis) through feature evaluation. We used 100 cases of data as a test dataset and 399 cases as a training and validation dataset. To develop the glaucoma prediction model, we considered four machine learning algorithms: C5.0, random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN). We repeatedly composed a learning model using the training dataset and evaluated it using the validation dataset, and finally selected the model that produced the highest validation accuracy. We analyzed the quality of the models using several measures. The random forest model shows the best performance, and the C5.0, SVM, and KNN models show similar accuracy. In the random forest model, the classification accuracy is 0.98, sensitivity is 0.983, specificity is 0.975, and AUC is 0.979. The developed prediction models show high accuracy, sensitivity, specificity, and AUC in classifying glaucomatous and healthy eyes, and can be used for predicting glaucoma from unseen examination records. Clinicians may reference the prediction results and be able to make better decisions. Multiple learning models may be combined to increase prediction accuracy. The C5.0 model includes decision rules for prediction and can be used to explain the reasons for specific predictions.
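The train/validate loop described above maps onto a few lines of scikit-learn; the sketch below uses synthetic data as a stand-in for the RNFL/VF feature table (the sample split mirrors the 399/100 design, everything else is assumed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# stand-in for the 499-case table of RNFL/VF features
X, y = make_classification(n_samples=499, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=100, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(roc_auc_score(y_val, rf.predict_proba(X_val)[:, 1]))   # validation AUC
```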
Fencl, Pavel; Belohlavek, Otakar; Harustiak, Tomas; Zemanova, Milada
2016-11-01
The aim of the analysis was to assess the accuracy of various FDG-PET/CT parameters in staging lymph nodes after neoadjuvant chemotherapy. In this prospective study, 74 patients with adenocarcinoma of the esophageal-gastric junction were examined by FDG-PET/CT in the course of neoadjuvant chemotherapy given before surgical treatment. Data from the final FDG-PET/CT examinations were compared with the histology of the surgical specimens (gold standard). Accuracy was calculated for four FDG-PET/CT parameters: (1) hypermetabolic nodes, (2) large nodes, (3) large-and-medium-large nodes, and (4) hypermetabolic or large nodes. In the 74 patients, a total of 1540 lymph nodes were obtained by surgery, and these were grouped into 287 regions according to topographic origin. Five hundred and two nodes were imaged by FDG-PET/CT and grouped into the same regions for comparison. In the analysis, the four parameters identified metastases in particular regions with sensitivities of 11.6%, 2.9%, 21.7%, and 13.0%, respectively; specificity was 98.6%, 94.5%, 74.8%, and 93.6%, respectively. The hypermetabolic-nodes parameter reached the best accuracy, 77.7%. Accuracy decreased to 62.0% when smaller (medium-large) nodes were also taken as indicating metastases. FDG-PET/CT thus showed low sensitivity and high specificity. The low sensitivity reflects the low detection rate (only 32.6% of the surgically obtained nodes were imaged by FDG-PET/CT) and the inability to detect micrometastases. Sensitivity increased when medium-large LNs were also scored as positive, but specificity and accuracy decreased.
NASA Astrophysics Data System (ADS)
Sukawattanavijit, Chanika; Srestasathiern, Panu
2017-10-01
Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique employed on both global and local scales, particularly in urban areas, which have diverse land cover types that are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification of high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (formerly THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper compares object-based and pixel-based approaches to image fusion. In the per-pixel method, support vector machines (SVM) were applied to the fused image based on Principal Component Analysis (PCA). For the object-based classification, a nearest neighbor (NN) classifier was applied to the fused images to separate the land cover classes. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. Object-based fusion of COSMO-SkyMed with THAICHOTE images demonstrated the best classification accuracies, well over 85%. The results show that object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
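One common PCA-based fusion recipe substitutes the first principal component of the optical bands with the statistically matched SAR band; a generic sketch of that recipe with random stand-in rasters (the paper's exact fusion variant may differ):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fuse(optical, sar):
    # swap the first principal component of the optical bands for the
    # mean/std-matched SAR band, then invert the PCA
    h, w, b = optical.shape
    X = optical.reshape(-1, b).astype(float)
    pca = PCA(n_components=b).fit(X)
    scores = pca.transform(X)
    s = sar.ravel().astype(float)
    scores[:, 0] = (s - s.mean()) / s.std() * scores[:, 0].std() + scores[:, 0].mean()
    return pca.inverse_transform(scores).reshape(h, w, b)

rng = np.random.default_rng(0)
fused = pca_fuse(rng.uniform(0, 255, (64, 64, 4)), rng.uniform(0, 255, (64, 64)))
print(fused.shape)   # (64, 64, 4)
```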
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-06-17
We previously proposed a method for detecting vehicle positions and their movements (henceforth "our previous method") using thermal images taken with an infrared thermal camera. Our experiments showed that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Experiments in winter, however, show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth "our new method") that detects vehicles based on the thermal energy reflection of tires. We conducted experiments using three series of thermal images for which the vehicle detection accuracy of our previous method is low. Our new method detects 1,417 (92.8%) of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracy is maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring and show the effectiveness of our proposal.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values through the sheer power of numerical computation, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for opening a new horizon in parameter estimation.
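The symbolic core of the approach is elimination via Gröbner bases; a toy polynomial example with SymPy (the authors' differential-elimination machinery operates on differential ideals, which this sketch does not reproduce):

    from sympy import symbols, groebner

    x, y, k1, k2 = symbols('x y k1 k2')
    # Toy steady-state relations: y = k1*x and y**2 = k2.
    # A lex Groebner basis with y ordered first eliminates y,
    # leaving a relation purely in x, k1, k2 usable for estimation.
    G = groebner([k1*x - y, y**2 - k2], y, x, order='lex')
    print(G)  # the last basis element is y-free (proportional to k1**2*x**2 - k2)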
Observed bulk properties of the Mars moon Phobos
NASA Astrophysics Data System (ADS)
Pätzold, M.; Andert, T. P.; Jacobson, R.; Rosenblatt, P.; Dehant, V.
2013-09-01
The mass of the Mars moon Phobos has been determined by spacecraft close flybys, by solving for the Martian gravity field, and by the analysis of secular orbit perturbations. The absolute value and its accuracy are sensitive to the currency of the Phobos ephemeris, the accuracy of the spacecraft orbit, other perturbing forces acting on the spacecraft, and the resolution of the Martian gravity field, in addition to the measurement accuracy of the radio tracking data. The mass value and its error have improved from spacecraft mission to mission and from modern re-analysis of "old" tracking data, but none of these values can claim to be the final truth. The mass value seems to settle within the range GM_Phobos = (7.11 +/- 0.09)·10^-4 km^3 s^-2 (3σ), which covers almost all mass values from close flybys and "distant" encounters. Using the volume determined from MEX HRSC imaging, the bulk density is (1873 +/- 31) kg/m^3, a low value which suggests that Phobos is either highly porous, composed partially of light material, or both. In view of theories of Phobos' origin, one possibility is that Phobos is not a captured asteroid but accreted from a debris disk in Mars orbit as a second-generation solar system object.
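The bulk figures can be cross-checked from the quoted GM and density alone; a short worked computation (the gravitational constant is the standard value, an input assumed here):

    G = 6.674e-11                  # m^3 kg^-1 s^-2, standard value
    GM = 7.11e-4 * 1e9             # quoted GM, converted from km^3/s^2 to m^3/s^2
    mass = GM / G                  # ~1.07e16 kg
    rho = 1873.0                   # quoted bulk density, kg/m^3
    volume_km3 = mass / rho / 1e9  # ~5.7e3 km^3, the order of the HRSC volume
    print(mass, volume_km3)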
NASA Astrophysics Data System (ADS)
Romero, P.; Pablos, B.; Barderas, G.
2017-07-01
Areostationary satellites are considered a group of satellites of high interest for satisfying the telecommunications needs of foreseen missions to Mars. An areostationary satellite, in an areoequatorial circular orbit with a period of 1 Martian sidereal day, would orbit Mars while remaining at a fixed location over the Martian surface, analogous to a geostationary satellite around the Earth. This work addresses an analysis of the perturbed orbital motion of an areostationary satellite as well as a preliminary analysis of the areostationary orbit estimation accuracy based on Earth tracking observations. First, the models for the perturbations due to the Mars gravitational field, the gravitational attraction of the Sun and the Martian moons, Phobos and Deimos, and solar radiation pressure are described. Then, the observability from Earth, including possible occultations by Mars, of an areostationary satellite in perturbed areosynchronous motion is analyzed. The results show that continuous Earth-based tracking is achievable using observations from the three NASA Deep Space Network complexes in Madrid, Goldstone and Canberra in an occultation-free scenario. Finally, an analysis of the orbit determination accuracy is addressed considering several scenarios, including discontinuous tracking schedules for different epochs and different areostationary satellites. Simulations also allow the areostationary orbit estimation accuracy to be quantified for various tracking series durations and observed orbit arc-lengths.
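The defining radius of an areostationary orbit follows from Kepler's third law with a period of one Martian sidereal day; a short computation (the GM of Mars and the sidereal day length are standard values assumed here, not taken from the paper):

    import math

    GM_mars = 4.2828e13  # m^3/s^2, standard value (assumption)
    T = 88642.66         # s, Martian sidereal day (assumption)
    a = (GM_mars * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    print(a / 1e3)       # ~20,428 km radius, i.e. ~17,000 km above the surface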
NASA Astrophysics Data System (ADS)
Huang, Keke; Li, Ming; Li, Hongmei; Li, Mengwan; Jiang, You; Fang, Xiang
2016-01-01
Ambient ionization (AI) techniques have been widely used in chemistry, medicine, materials science, environmental science, and forensic science. AI takes advantage of direct desorption/ionization of chemicals in raw samples under ambient environmental conditions with minimal or no sample preparation. However, its quantitative accuracy is restricted by matrix effects during the ionization process. To improve the quantitative accuracy of AI, a matrix reference material, which is a particular form of measurement standard, was coupled to an AI technique in this study. Consequently, the analyte concentration in a complex matrix can be easily quantified with high accuracy. As a demonstration, this novel method was applied to the accurate quantification of creatinine in serum by extractive electrospray ionization (EESI) mass spectrometry. Over the concentration range investigated (0.166 ~ 1.617 μg/mL), a calibration curve was obtained with satisfactory linearity (R2 = 0.994) and acceptable relative standard deviations (RSD) of 4.6 ~ 8.0% (n = 6). Finally, the creatinine concentration of a serum sample was determined to be 36.18 ± 1.08 μg/mL, which is in excellent agreement with the certified value of 35.16 ± 0.39 μg/mL.
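Quantification then reduces to an ordinary linear calibration; a minimal sketch with synthetic points spanning the reported range (the signal values are hypothetical, not the published data):

    import numpy as np

    conc = np.array([0.166, 0.5, 0.9, 1.2, 1.617])     # ug/mL standards
    signal = np.array([0.21, 0.63, 1.10, 1.49, 1.98])  # hypothetical EESI-MS response

    slope, intercept = np.polyfit(conc, signal, 1)     # least-squares line
    r2 = np.corrcoef(conc, signal)[0, 1] ** 2
    unknown = (1.30 - intercept) / slope               # invert the fit for a sample
    print(slope, intercept, r2, unknown)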
Prediction of adult height by Tanner-Whitehouse method in young Caucasian male athletes.
Ostojic, S M
2013-04-01
Although the accuracy of final height prediction using skeletal age development has been confirmed in many studies of children treated for congenital primary hypothyroidism, short normal children, and constitutionally tall children, no studies have compared the adult height predicted at a young age with final stature in an athletic population. In this study, the intention was to investigate to what extent the Tanner-Whitehouse (TW) method is adequate for predicting final stature in young Caucasian male athletes. Prospective observational study. Plain radiographs of the left hand and wrist were obtained from 477 athletic children (ranging in age from 8.0 to 17.9 years) who came to the outpatient clinic between 2000 and 2011 for adult height estimation, with no orthopedic trauma suspected. Adult height was estimated using bone age rates according to the TW method. Height was measured both at baseline and at follow-up (at the age of 19 years). No significant difference was found between the estimated adult height (184.9 ± 9.7 cm) and final stature (185.6 ± 9.6 cm) [95% confidence interval (CI) 1.61-3.01, P = 0.55]. The correlation between estimated and final adult height was high (r = 0.96). Bland-Altman analysis confirmed that 95% of the differences between estimated adult height and final stature lie within the limits of agreement (mean ± 2 SD) (-5.84 and 4.52 cm). The TW method is an accurate method of predicting adult height in normal-growing athletic boys.
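The agreement statistics reported here are standard Bland-Altman quantities; a minimal sketch of the computation (the heights below are made-up placeholders, not the study data):

    import numpy as np

    def bland_altman_limits(predicted, final):
        """Mean difference and 95% limits of agreement (mean ± 2 SD)."""
        d = np.asarray(predicted) - np.asarray(final)
        m, s = d.mean(), d.std(ddof=1)
        return m, m - 2 * s, m + 2 * s

    # Hypothetical predicted vs. final heights (cm), for illustration only
    print(bland_altman_limits([183.1, 190.4, 176.8], [184.0, 189.5, 177.9]))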
Neville, R S; Stonham, T J; Glover, R J
2000-01-01
In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high-accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to enable the accuracy of sigma-pi units to be increased by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous-input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0,1]; this is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high-accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that is derived from a bit-stream. In this article we present a new methodology for storing sigma-pi nodes' activations as single values which are averages. In the course of the article we state what we define as a real number, and how we represent real numbers and input continuous values in our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high-accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).
Prediction of beef carcass and meat traits from rearing factors in young bulls and cull cows.
Soulat, J; Picard, B; Léger, S; Monteils, V
2016-04-01
The aim of this study was to predict the beef carcass and LM (thoracis part) characteristics and the sensory properties of the LM from rearing factors applied during the fattening period. Individual data from 995 animals (688 young bulls and 307 cull cows) in 15 experiments were used to establish prediction models. The data concerned rearing factors (13 variables), carcass characteristics (5 variables), LM characteristics (2 variables), and LM sensory properties (3 variables). In this study, 8 prediction models were established: dressing percentage and the proportions of fat tissue and muscle in the carcass to characterize the beef carcass; cross-sectional area of fibers (mean fiber area) and isocitrate dehydrogenase activity to characterize the LM; and, finally, overall tenderness, juiciness, and flavor intensity scores to characterize the LM sensory properties. A random effect was considered in each model: the breed for the prediction models for the carcass and LM characteristics, and the trained taste panel for the prediction of the meat sensory properties. To evaluate the quality of the prediction models, 3 criteria were measured: robustness, accuracy, and precision. A model was considered robust when the root mean square errors of prediction of the calibration and validation sub-data sets were near to one another. Except for the mean fiber area model, the obtained prediction models were robust. The prediction models were considered to have a high accuracy when the mean prediction error (MPE) was ≤0.10 and a high precision when the R² was closest to 1. The prediction of the carcass characteristics from the rearing factors had a high precision (R² > 0.70) and a high prediction accuracy (MPE < 0.10), except for the fat percentage model (R² = 0.67, MPE = 0.16). However, the predictions of the LM characteristics and LM sensory properties from the rearing factors were not sufficiently precise (R² < 0.50) or accurate (MPE > 0.10). Only the beef flavor intensity score could be satisfactorily predicted from the rearing factors, with high precision (R² = 0.72) and accuracy (MPE = 0.10). All the prediction models displayed different effects of the rearing factors according to animal category (young bulls or cull cows). Consequently, these prediction models show that rearing factors must be adapted during the fattening period according to animal category to optimize carcass traits.
NASA Astrophysics Data System (ADS)
Oniga, E.; Chirilă, C.; Stătescu, F.
2017-02-01
Nowadays, Unmanned Aerial Systems (UASs) are a widely used acquisition technique for creating building 3D models, providing a high number of images at very high resolution, or video sequences, in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created using these platforms must be evaluated. As a test case, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" of Iasi, Romania, was chosen: a building of complex shape whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them being used as GCPs and the remaining four as check points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UASs, using image matching algorithms and different software packages: 3DF Zephyr, Visual SfM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated by the above-mentioned software and on GNSS data, respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least squares method and statistical blunder detection. Then, in order to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a TLS mesh for the facades and a GNSS mesh for the roof. Finally, the front facade of the building was created in 3D from its characteristic points using the PhotoModeler Scanner software, resulting in a CAD (Computer Aided Design) model. The results show the high potential of low-cost UASs for building 3D model creation; moreover, if the building 3D model is created from its characteristic points, the accuracy is significantly improved.
Impacts of Satellite Orbit and Clock on Real-Time GPS Point and Relative Positioning.
Shi, Junbo; Wang, Gaojing; Han, Xianquan; Guo, Jiming
2017-06-12
Satellite orbit and clock corrections are always treated as known quantities in GPS positioning models. Therefore, any error in the satellite orbit and clock products will probably cause significant consequences for GPS positioning, especially for real-time applications. Currently three types of satellite products have been made available for real-time positioning, including the broadcast ephemeris, the International GNSS Service (IGS) predicted ultra-rapid product, and the real-time product. In this study, these three predicted/real-time satellite orbit and clock products are first evaluated with respect to the post-mission IGS final product, which demonstrates cm to m level orbit accuracies and sub-ns to ns level clock accuracies. Impacts of real-time satellite orbit and clock products on GPS point and relative positioning are then investigated using the P3 and GAMIT software packages, respectively. Numerical results show that the real-time satellite clock corrections affect the point positioning more significantly than the orbit corrections. On the contrary, only the real-time orbit corrections impact the relative positioning. Compared with the positioning solution using the IGS final product with the nominal orbit accuracy of ~2.5 cm, the real-time broadcast ephemeris with ~2 m orbit accuracy provided <2 cm relative positioning error for baselines no longer than 216 km. As for the baselines ranging from 574 to 2982 km, the cm-dm level positioning error was identified for the relative positioning solution using the broadcast ephemeris. The real-time product could result in <5 mm relative positioning accuracy for baselines within 2982 km, slightly better than the predicted ultra-rapid product.
Influence of a high vacuum on the precise positioning using an ultrasonic linear motor.
Kim, Wan-Soo; Lee, Dong-Jin; Lee, Sun-Kyu
2011-01-01
This paper presents an investigation of an ultrasonic linear motor stage for use in a high vacuum environment. The slider table is driven by a hybrid bolt-clamped Langevin-type ultrasonic linear motor, which is excited at its different natural frequency modes in both the lateral and longitudinal directions. In general, friction behavior in a vacuum environment differs from that at atmospheric pressure, and this difference significantly affects the performance of the ultrasonic linear motor. In this paper, to provide consistently stable and high output power in a high vacuum, frequency matching was conducted. Moreover, to achieve fine control performance in the vacuum environment, a modified nominal characteristic trajectory following control method was adopted. Finally, the stage was operated under high vacuum conditions, and its operating performance was investigated in comparison with that of a conventional PI compensator. As a result, robust positioning with nanometer-level accuracy was accomplished under high vacuum.
Continuous Glucose Monitoring and Trend Accuracy
Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey
2014-01-01
Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy to interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple to use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, performance table and TI metric. PMID:24876437
ADRC for spacecraft attitude and position synchronization in libration point orbits
NASA Astrophysics Data System (ADS)
Gao, Chen; Yuan, Jianping; Zhao, Yakun
2018-04-01
This paper addresses the problem of spacecraft attitude and position synchronization in libration point orbits between a leader and a follower. Using dual quaternions, the dimensionless relative coupled dynamical model is derived with computation efficiency and accuracy in mind. Then a model-independent dimensionless cascade pose-feedback active disturbance rejection controller is designed for the spacecraft attitude and position tracking control problem, considering parameter uncertainties and external disturbances. Numerical simulations for the final approach phase in spacecraft rendezvous and docking and for formation flying are performed, and the results show small tracking errors and satisfactory convergence rates under bounded control torque and force, which validates the proposed approach.
Integrating a Hypernymic Proposition Interpreter into a Semantic Processor for Biomedical Texts
Fiszman, Marcelo; Rindflesch, Thomas C.; Kilicoglu, Halil
2003-01-01
Semantic processing provides the potential for producing high quality results in natural language processing (NLP) applications in the biomedical domain. In this paper, we address a specific semantic phenomenon, the hypernymic proposition, and concentrate on integrating the interpretation of such predications into a more general semantic processor in order to improve overall accuracy. A preliminary evaluation assesses the contribution of hypernymic propositions in providing more specific semantic predications and thus improving effectiveness in retrieving treatment propositions in MEDLINE abstracts. Finally, we discuss the generalization of this methodology to additional semantic propositions as well as other types of biomedical texts. PMID:14728170
Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei
2010-01-01
This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm to provide objective, quantitative analysis of auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Then, statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition via SVM was used to distinguish the statistical feature values of mixed subjects across the sample groups. Finally, the experimental results showed that the classification accuracies were high.
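The feature pipeline maps cleanly onto PyWavelets and scikit-learn; a minimal sketch with synthetic signals standing in for the auscultation recordings (the 'db4' wavelet choice is an assumption, not stated in the abstract):

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wpe_features(signal, wavelet='db4', level=6):
        # Level-6 wavelet packet decomposition -> 64 terminal frequency bands
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        bands = wp.get_level(level, order='natural')
        energy = np.array([np.sum(node.data ** 2) for node in bands])
        return energy / energy.sum()  # normalized Wavelet Packet Energy

    rng = np.random.default_rng(0)
    X_raw = [rng.standard_normal(4096) for _ in range(20)]  # placeholder signals
    y = rng.integers(0, 2, 20)                              # placeholder labels
    X = np.array([wpe_features(s) for s in X_raw])
    clf = SVC(kernel='rbf').fit(X, y)                       # SVM pattern recognition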
NASA Astrophysics Data System (ADS)
Honarmand, M.; Moradi, M.
2018-06-01
In this paper, using the scaled boundary finite element method (SBFM), perfect and cracked nanographene sheets were simulated for the first time. In this analysis, the atomic carbon bonds were modeled by simple bar elements with circular cross-sections. Compared with molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied to relate the critical stress to the crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results reproduce the mechanical behavior with high accuracy compared with experimental and MD results.
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt
2017-09-01
The metaheuristic Harmony Search (HS) algorithm is widely applied to parameter optimization in many areas. HS is a derivative-free real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization to generate new solution vectors, enhancing the performance of the HS algorithm. The performance of MHS and HS is investigated on ten benchmark optimization problems in order to compare the efficiency of MHS in terms of final accuracy, convergence speed and robustness.
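For orientation, baseline HS improvises each new vector from harmony memory with probability HMCR, pitch-adjusts it with probability PAR, and replaces the worst memory member when improved; a minimal sketch of that baseline (the genetic/PSO modifications of MHS are not reproduced here, and all parameter values are illustrative):

    import random

    def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
        lo, hi = bounds
        memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [f(h) for h in memory]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:          # draw component from memory
                    x = random.choice(memory)[d]
                    if random.random() < par:       # pitch adjustment
                        x += random.uniform(-bw, bw)
                else:                               # random consideration
                    x = random.uniform(lo, hi)
                new.append(min(max(x, lo), hi))
            worst = max(range(hms), key=lambda i: scores[i])
            s = f(new)
            if s < scores[worst]:                   # replace worst harmony if better
                memory[worst], scores[worst] = new, s
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    # Sphere benchmark: minimum 0 at the origin
    print(harmony_search(lambda v: sum(x * x for x in v), dim=5, bounds=(-5, 5)))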
A novel adaptive finite time controller for bilateral teleoperation system
NASA Astrophysics Data System (ADS)
Wang, Ziwei; Chen, Zhang; Liang, Bin; Zhang, Bo
2018-03-01
Most bilateral teleoperation research focuses on system stability under time delays. However, practical teleoperation tasks require high performance besides system stability, such as convergence rate and accuracy. This paper investigates bilateral teleoperation controller design with transient performance guarantees. To ensure the transient performance and system stability simultaneously, an adaptive non-singular fast terminal sliding mode controller is proposed to achieve practical finite-time stability in the presence of system uncertainties and time delays. In addition, a novel switching scheme is introduced that avoids the singularity problem of the conventional terminal sliding manifold. Finally, numerical simulations demonstrate the effectiveness and validity of the proposed method.
NASA Astrophysics Data System (ADS)
Liu, W. L.; Li, Y. W.
2017-09-01
Large-scale dimensional metrology usually requires a combination of multiple measurement systems, such as laser tracking, total stations, laser scanning, coordinate measuring arms and video photogrammetry. Often, the results from the different measurement systems must be combined to provide useful results. Coordinate transformation is used to unify the coordinate frames in this combination; however, coordinate transformation uncertainties directly affect the accuracy of the final measurement results. In this paper, a novel method is proposed for improving the accuracy of coordinate transformation, combining the advantages of best-fit least squares and radial basis function (RBF) neural networks. First, the configuration of the coordinate transformation is introduced and a transformation matrix containing seven variables is obtained. Second, the 3D uncertainty of the transformation model and the residual error variable vector are established based on the best-fit least squares. Finally, in order to reduce the uncertainty of the developed seven-variable transformation model, an RBF neural network is used to identify the dynamic, unstructured part of the uncertainty, owing to its great ability to approximate any nonlinear function to the designed accuracy. Intensive experimental studies were conducted to check the validity of the theoretical results. The results show that the mean error of the coordinate transformation decreased from 0.078 mm to 0.054 mm with this method, in contrast with the GUM method.
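The seven-variable model is the classical 3D similarity (Helmert) transformation: three translations, a scale, and three rotations. A minimal linearized least-squares sketch under a small-angle assumption; in the paper's scheme, the residuals of such a fit would then feed the RBF correction:

    import numpy as np

    def fit_helmert_small_angle(src, dst):
        # dst ~ (1+s) * R(rx, ry, rz) @ src + t, with R ~ I + skew([rx, ry, rz]).
        # Unknowns ordered as [tx, ty, tz, s, rx, ry, rz].
        rows, rhs = [], []
        for (x, y, z), (X, Y, Z) in zip(src, dst):
            rows += [[1, 0, 0, x,  0,  z, -y],
                     [0, 1, 0, y, -z,  0,  x],
                     [0, 0, 1, z,  y, -x,  0]]
            rhs += [X - x, Y - y, Z - z]
        p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                                rcond=None)
        return p

    # Toy check: a pure translation is recovered exactly
    src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    print(fit_helmert_small_angle(src, src + [0.1, -0.2, 0.05]))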
CSE database: extended annotations and new recommendations for ECG software testing.
Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie
2017-08-01
Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%, respectively. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. The accuracy quantification of the correct classification is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development. The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to non-OIS camera modules, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: the noise spectral density of the gyroscope, and the optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and filtering based on Euclidean distance are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local gradient-amplitude blocks to obtain threshold values, the average of all calculated thresholds is taken, half of this average is used as the high threshold, and half of the high threshold is used as the low threshold. Experimental results show that this new method can effectively suppress noise, preserve edge information, and improve edge detection accuracy.
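A minimal OpenCV sketch of this threshold scheme (the Sobel operator stands in for Frei-Chen, which OpenCV does not ship, and the 64-pixel block size for the local Otsu step is an assumed choice):

    import cv2
    import numpy as np

    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
    den = cv2.medianBlur(img, 5)                       # median pre-filtering

    # Gradient magnitude (Sobel substitutes for the paper's Frei-Chen operator)
    gx = cv2.Sobel(den, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(den, cv2.CV_64F, 0, 1)
    mag = cv2.convertScaleAbs(np.hypot(gx, gy))

    # Otsu threshold per block; high = half the block average, low = half of high
    blocks = [mag[r:r + 64, c:c + 64] for r in range(0, mag.shape[0], 64)
                                      for c in range(0, mag.shape[1], 64)]
    otsu = [cv2.threshold(b, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[0]
            for b in blocks if b.size]
    high = 0.5 * float(np.mean(otsu))
    low = 0.5 * high
    edges = cv2.Canny(den, low, high)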
Relative Density Anomalies Below 200 km as Observed by Aerodynamic Drag on Orbiting Rocket Bodies
NASA Astrophysics Data System (ADS)
Pilinski, M.; Argrow, B.; Palo, S. E.
2011-12-01
We examine the geomagnetic latitude and local solar time dependence of density anomalies as observed by rocket bodies in highly eccentric orbits. Density anomalies are estimated by analyzing the fitted ballistic coefficients produced by the Air Force Space Command's High Accuracy Satellite Drag Model. Particularly, observations of rocket bodies with very low perigee altitudes allow for the examination of density anomalies between 105 km and 200 km altitudes. We evaluate the ability to extract coherent geophysical signals from this data set. Finally, a statistical comparison is made between the low altitude density anomalies and those observed by the CHAMP and GRACE satellites above 300 km. In particular, we search for density enhancements which may be associated with the dayside cusp region.
Photogrammetry experiments with a model eye.
Rosenthal, A R; Falconer, D G; Pieper, I
1980-01-01
Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139
Yoshikawa, Masayuki; Yasuhara, Ryo; Ohta, Koichi; Chikatsu, Masayuki; Shima, Yoriko; Kohagura, Junko; Sakamoto, Mizuki; Nakashima, Yousuke; Imai, Tsuyoshi; Ichimura, Makoto; Yamada, Ichihiro; Funaba, Hisamichi; Minami, Takashi
2016-11-01
Highly time-resolved electron temperature measurements are useful for fluctuation studies. A multi-pass Thomson scattering (MPTS) system is proposed to increase both the TS signal intensity and the time resolution. The MPTS system in GAMMA 10/PDX has been constructed to enhance the Thomson scattered signals and thereby improve measurement accuracy. The MPTS system has a polarization-based configuration with an image relaying system. We optimized the image relaying optics to improve the multi-pass laser confinement and to obtain stable MPTS signals over more than ten passes. The integrated MPTS signal increased to about five times that of the single-pass system. Finally, time-dependent electron temperatures were obtained at MHz sampling rates.
Fast and accurate face recognition based on image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Blasch, Erik
2017-05-01
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computational demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and a gallery image and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is roughly the time to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
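The mechanism is that a compressor encodes redundancy between the probe and a matching gallery image; a minimal sketch with zlib standing in for the paper's JPEG coder, and with a CCR formula that is one plausible combination of the three ratios rather than the authors' exact definition:

    import zlib
    import numpy as np

    def ratio(buf):
        return len(zlib.compress(buf)) / len(buf)

    def ccr_match(probe, gallery):
        """Return the index of the gallery image whose pixels compress best
        together with the probe (highest composite compression ratio)."""
        p = probe.tobytes()
        scores = []
        for g in gallery:
            mixed = np.concatenate([probe.ravel(), g.ravel()]).tobytes()
            # High when the mixed image compresses much better than its parts
            scores.append((ratio(p) + ratio(g.tobytes())) / ratio(mixed))
        return int(np.argmax(scores))

    rng = np.random.default_rng(0)
    faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
    print(ccr_match(faces[1], faces))  # expected: 1 (probe matches itself)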
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured into one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its confocal pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and its measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full-width at half-maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom made of polydimethylsiloxane resin doped with polystyrene microspheres of different sizes was designed. The PSFs of the CRM measured with different microsphere sizes were compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
Accurate and Standardized Coronary Wave Intensity Analysis.
Rivolo, Simone; Patterson, Tiffany; Asrress, Kaleab N; Marber, Michael; Redwood, Simon; Smith, Nicolas P; Lee, Jack
2017-05-01
Coronary wave intensity analysis (cWIA) has increasingly been applied in the clinical research setting to distinguish between the proximal and distal mechanical influences on coronary blood flow. Recently, a cWIA-derived clinical index demonstrated prognostic value in predicting functional recovery post-myocardial infarction. Nevertheless, the known operator dependence of the cWIA metrics currently hampers their routine application in clinical practice. Specifically, it was recently demonstrated that the cWIA metrics are highly dependent on the Savitzky-Golay filter parameters chosen to smooth the acquired traces. Therefore, a novel method to make cWIA standardized and automatic was proposed and evaluated in vivo. The novel approach combines an adaptive Savitzky-Golay filter with high-order central finite differencing after ensemble-averaging the acquired waveforms. Its accuracy was assessed using in vivo human data. The proposed approach was then modified to automatically perform beatwise cWIA. Finally, the feasibility (accuracy and robustness) of the method was evaluated. The automatic cWIA algorithm provided satisfactory accuracy under a wide range of noise scenarios (≤10% and ≤20% error in the estimation of wave areas and peaks, respectively). These results were confirmed when beat-by-beat cWIA was performed. An accurate, standardized, and automated cWIA was developed, and the feasibility of beatwise cWIA was demonstrated for the first time. The proposed algorithm provides practitioners with a standardized technique that could broaden the application of cWIA in clinical practice by enabling multicenter trials. Furthermore, the demonstrated potential of beatwise cWIA opens the possibility of investigating coronary physiology in real time.
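The filter at the heart of the operator-dependence problem is the stock Savitzky-Golay smoother/differentiator; a minimal sketch showing how the window and order choices enter the derivative estimates on which cWIA metrics are built (the trace is synthetic, and the parameter values are illustrative):

    import numpy as np
    from scipy.signal import savgol_filter

    # Synthetic stand-in for an ensemble-averaged pressure trace
    t = np.linspace(0, 1, 500)
    p = np.sin(2 * np.pi * t) + 0.02 * np.random.randn(t.size)

    # Derivative via Savitzky-Golay; window_length and polyorder are exactly
    # the operator-chosen parameters the paper's adaptive filter removes
    dp_dt = savgol_filter(p, window_length=31, polyorder=3, deriv=1,
                          delta=t[1] - t[0])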
Accuracy and consensus in judgments of trustworthiness from faces: behavioral and neural correlates.
Rule, Nicholas O; Krendl, Anne C; Ivcevic, Zorana; Ambady, Nalini
2013-03-01
Perceivers' inferences about individuals based on their faces often show high interrater consensus and can even accurately predict behavior in some domains. Here we investigated the consensus and accuracy of judgments of trustworthiness. In Study 1, we showed that the type of photo judged makes a significant difference for whether an individual is judged as trustworthy. In Study 2, we found that inferences of trustworthiness made from the faces of corporate criminals did not differ from inferences made from the faces of noncriminal executives. In Study 3, we found that judgments of trustworthiness did not differ between the faces of military criminals and the faces of military heroes. In Study 4, we tempted undergraduates to cheat on a test. Although we found that judgments of intelligence from the students' faces were related to students' scores on the test and that judgments of students' extraversion were correlated with self-reported extraversion, there was no relationship between judgments of trustworthiness from the students' faces and students' cheating behavior. Finally, in Study 5, we examined the neural correlates of the accuracy of judgments of trustworthiness from faces. Replicating previous research, we found that perceptions of trustworthiness from the faces in Study 4 corresponded to participants' amygdala response. However, we found no relationship between the amygdala response and the targets' actual cheating behavior. These data suggest that judgments of trustworthiness may not be accurate but, rather, reflect subjective impressions for which people show high agreement. PsycINFO Database Record (c) 2013 APA, all rights reserved
Gesteme-free context-aware adaptation of robot behavior in human-robot cooperation.
Nessi, Federico; Beretta, Elisa; Gatti, Cecilia; Ferrigno, Giancarlo; De Momi, Elena
2016-11-01
Cooperative robotics is receiving greater acceptance because the typical advantages provided by manipulators are combined with intuitive usage. In particular, hands-on robotics may benefit from adapting the assistant's behavior to the activity currently performed by the user. This requires fast and reliable classification of human activities, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs to observe a large portion of the signal and requires the definition of a rich vocabulary. In this work, a system is presented that recognizes the user's current activity without a vocabulary of gestemes and adapts the manipulator's dynamic behavior accordingly. An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, are modified according to the classified activity using sigmoid-shaped functions. The presented system is validated on a pool of 12 naïve users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted in order to obtain stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability to correctly identify the sequence of activities (sequence accuracy) were evaluated. The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after observing ∼450 ms of signal). Moreover, the ability to recognize the correct sequence of activities, without unwanted transitions, is guaranteed (sequence accuracy ∼90% when computed away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not lead to non-smooth behavior (high smoothness, i.e., normalized jerk score <0.01). The proposed system is able to dynamically assist the operator during cooperation in the presented scenario. Copyright © 2016 Elsevier B.V. All rights reserved.
Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory
NASA Technical Reports Server (NTRS)
Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.
2013-01-01
Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.
Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
2013-01-01
Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
Localization of single biological molecules out of the focal plane
NASA Astrophysics Data System (ADS)
Gardini, L.; Capitanio, M.; Pavone, F. S.
2014-03-01
Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro to live-cell experiments. Following both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images, and high temporal resolution (on the order of milliseconds). We developed an apparatus that combines different microscopy techniques to satisfy all the technical requirements for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy. To achieve optical sectioning of thick samples, we built a HILO (Highly Inclined and Laminated Optical sheet) microscopy system through which we excite the sample in a widefield (WF) configuration with a thin sheet of light; the sheet can follow the molecule up and down along the z axis, spanning the entire thickness of the cell with an SNR much higher than traditional WF microscopy. Since protein dynamics inside a cell involve all three dimensions, we included a method to measure the x, y, and z coordinates with nanometer accuracy, exploiting the properties of the point-spread function of out-of-focus quantum dots bound to the protein of interest. Finally, a feedback system stabilizes the microscope against thermal drifts, ensuring accurate localization for the entire duration of the experiment.
Effect of Heart rate on Basketball Three-Point Shot Accuracy
Ardigò, Luca P.; Kuvacic, Goran; Iacono, Antonio D.; Dascanio, Giacomo; Padulo, Johnny
2018-01-01
The three-point shot (3S) is a fundamental basketball skill used frequently during a game, and is often a main determinant of the final result. The aim of the study was to investigate the effect of different metabolic conditions, in terms of heart rates, on 3S accuracy (3S%) in 24 male (Under 17) basketball players (age 16.3 ± 0.6 yrs). 3S performance was specifically investigated at different heart rates. All sessions consisted of 10 consecutive 3Ss from five different significant field spots just beyond the FIBA three-point line, i.e., about 7 m from the basket (two counter-clockwise “laps”) at different heart rates: rest (0HR), after warm-up (50%HRMAX [50HR]), and heart rate corresponding to 80% of its maximum value (80%HRMAX [80HR]). We found that 50HR does not significantly decrease 3S% (−15%, P = 0.255), while 80HR significantly does when compared to 0HR (−28%, P = 0.007). Given that 50HR does not decrease 3S% compared to 0HR, we believe that no preliminary warm-up is needed before entering a game in order to specifically achieve a high 3S%. Furthermore, 3S training should be performed in conditions of moderate-to-high fatigued state so that a high 3S% can be maintained during game-play. PMID:29467676
High-resolution tree canopy mapping for New York City using LIDAR and object-based image analysis
NASA Astrophysics Data System (ADS)
MacFaden, Sean W.; O'Neil-Dunne, Jarlath P. M.; Royar, Anna R.; Lu, Jacqueline W. T.; Rundle, Andrew G.
2012-01-01
Urban tree canopy is widely believed to have myriad environmental, social, and human-health benefits, but a lack of precise canopy estimates has hindered quantification of these benefits in many municipalities. This problem was addressed for New York City using object-based image analysis (OBIA) to develop a comprehensive land-cover map, including tree canopy to the scale of individual trees. Mapping was performed using a rule-based expert system that relied primarily on high-resolution LIDAR, specifically its capacity for evaluating the height and texture of aboveground features. Multispectral imagery was also used, but shadowing and varying temporal conditions limited its utility. Contextual analysis was a key part of classification, distinguishing trees according to their physical and spectral properties as well as their relationships to adjacent, nonvegetated features. The automated product was extensively reviewed and edited via manual interpretation, and overall per-pixel accuracy of the final map was 96%. Although manual editing had only a marginal effect on accuracy despite requiring a majority of project effort, it maximized aesthetic quality and ensured the capture of small, isolated trees. Converting high-resolution LIDAR and imagery into usable information is a nontrivial exercise, requiring significant processing time and labor, but an expert system-based combination of OBIA and manual review was an effective method for fine-scale canopy mapping in a complex urban environment.
High Frequency SSVEP-BCI With Hardware Stimuli Control and Phase-Synchronized Comb Filter.
Chabuda, Anna; Durka, Piotr; Zygierewicz, Jaroslaw
2018-02-01
We present an efficient implementation of a brain-computer interface (BCI) based on high-frequency steady-state visually evoked potentials (SSVEP). The individual shape of the SSVEP response is extracted by means of a feedforward comb filter, which adds delayed versions of the signal to itself. Rendering of the stimuli is controlled by specialized hardware (BCI Appliance). Out of the 15 participants of the study, nine were able to produce a stable response in at least eight out of ten frequencies from the 30-39 Hz range. They achieved on average 96±4% accuracy and 47±5 bit/min information transfer rate (ITR) for an optimized simple seven-letter speller, while a generic full-alphabet speller allowed in this group for 89±9% accuracy and 36±9 bit/min ITR. These values exceed the performance of high-frequency SSVEP-BCI systems reported to date. The classical approach to SSVEP parameterization, by relative spectral power at the stimulation frequencies, implemented on the same data, resulted in significantly lower performance. This suggests that the specific shape of the response is an important feature for classification. Finally, we discuss the differences in the SSVEP responses of the participants who were able or unable to use the interface, as well as the statistically significant influence of the speller layout on the speed of BCI operation.
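A minimal sketch of the feedforward comb idea on synthetic data (the delay gain, pass count and sampling rate are illustrative choices, not the published parameters):

    import numpy as np

    def feedforward_comb(x, period_samples, alpha=1.0, passes=4):
        """Repeatedly add a delayed copy of the signal to itself: components
        periodic at the stimulation frequency sum coherently, while the
        incoherent background grows more slowly and is attenuated relatively."""
        y = x.astype(float).copy()
        for _ in range(passes):
            d = np.zeros_like(y)
            d[period_samples:] = y[:-period_samples]
            y = y + alpha * d
        return y / (1 + alpha) ** passes  # renormalize the coherent component

    fs, f_stim = 512, 32  # Hz; 32 Hz lies in the paper's 30-39 Hz range
    t = np.arange(fs * 4) / fs
    eeg = 0.2 * np.sin(2 * np.pi * f_stim * t) + np.random.randn(t.size)
    enhanced = feedforward_comb(eeg, period_samples=fs // f_stim)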
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping of large areas abroad or of large volumes of images. In this paper, considering the geometric features of optical satellite images and building on RFM least-squares block adjustment and the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies for multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy, and performance of the developed procedure are presented and analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Y; Chen, H; Chen, J
2016-06-15
Purpose: To design and construct a three-dimensional (3D) anthropomorphic abdominal phantom for evaluating deformable image registration (DIR) accuracy on images and dose deformation in adaptive radiation therapy (ART). Method: Organ moulds, including liver, kidney, spleen, stomach, vertebra and two metastatic tumors, are 3D printed using the contours from an ovarian cancer patient. The organ moulds are filled with deformable gels made of different mixtures of polyvinyl chloride (PVC) and the softener dioctyl terephthalate. Gels of different densities are obtained from a polynomial fitting curve that describes the relation between CT number and PVC-softener blending ratio. The rigid vertebrae are constructed by moulding with white cement. The final abdominal phantom is assembled by arranging all the fabricated organs inside a hollow dummy according to their anatomy and sealing them with a deformable gel whose CT number is the average of muscle and fat. Geometric and dosimetric landmarks are embedded inside the phantom for spatial accuracy and dose accumulation accuracy studies. Three DIR algorithms available in the open-source DIR toolkit DIRART, including the Demons method, the Horn-Schunck and Lucas-Kanade method, and the Level-Set Motion method, are tested using the constructed phantom. Results: Viscoelastic behavior is observed in the constructed deformable gel, which serves as an ideal material for the deformable phantom. The constructed abdominal phantom has highly realistic anatomy, and the fabricated organs inside have CT numbers close to those of the reference patient. DIR accuracy studies conducted on the constructed phantom using the three DIR approaches indicate that the geometric accuracy a DIR algorithm achieves does not guarantee accuracy in dose accumulation. Conclusions: We have designed and constructed an anthropomorphic abdominal deformable phantom with satisfactory elastic properties and realistic organ density and anatomy. This physical phantom is recyclable and can be used for routine validation of DIR geometric accuracy and dose accumulation accuracy in ART. This work is supported in part by a grant from Varian Medical Systems Inc., the National Natural Science Foundation of China (no. 81428019 and no. 81301940), the Guangdong Natural Science Foundation (2015A030313302) and the 2015 Pearl River S&T Nova Program of Guangzhou (201506010096).
Pencil-beam redefinition algorithm dose calculations for electron therapy treatment planning
NASA Astrophysics Data System (ADS)
Boyd, Robert Arthur
2001-08-01
The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident beam model, that the PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of PBRA-calculated dose accuracy in a heterogeneous medium such as that presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of the PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data were estimated to have 1% precision and 2% agreement with an accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil-beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to the PBRA; however, 4% or 2 mm accuracy was not achieved by either PBRA version for all data points. Finally, the PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show the PBRA did not agree with MC to within 4% or 2 mm in a small fraction (<3%) of the irradiated volume. Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given its ease of beam commissioning, documented accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
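The 4%-or-2-mm criterion used throughout can be illustrated with a short sketch. The Python below implements one common reading of the composite test on two 1-D dose profiles (pass on dose difference, else on distance-to-agreement); it is an illustration, not the evaluation code used in the study.

```python
import numpy as np

def pass_4pct_2mm(x, d_ref, d_eval, dose_tol=0.04, dta_mm=2.0):
    """Composite 4%-or-2-mm test on a common grid x (mm): a point passes on
    dose difference (relative to global max) or on distance-to-agreement
    (the reference profile attains the evaluated dose within dta_mm)."""
    dmax = d_ref.max()
    passed = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        if abs(d_eval[i] - d_ref[i]) <= dose_tol * dmax:
            passed[i] = True
            continue
        near = np.abs(x - x[i]) <= dta_mm          # neighborhood for DTA
        diffs = d_ref[near] - d_eval[i]
        passed[i] = diffs.min() <= 0.0 <= diffs.max()  # a dose crossing exists nearby
    return passed

x = np.linspace(0, 50, 251)                  # mm
d_ref = np.exp(-((x - 25) / 10) ** 2)        # synthetic reference profile
d_eval = np.exp(-((x - 25.5) / 10) ** 2)     # evaluation profile, shifted 0.5 mm
print(f"pass rate: {pass_4pct_2mm(x, d_ref, d_eval).mean():.1%}")
```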
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Chen, Charles; Porth, Ilga; El-Kassaby, Yousry A
2015-05-09
Genomic selection (GS) in forestry can substantially reduce the length of the breeding cycle and increase gain per unit time through early selection and greater selection intensity, particularly for traits of low heritability and late expression. Affordable next-generation sequencing technologies have made it possible to genotype large numbers of trees at a reasonable cost. Genotyping-by-sequencing was used to genotype 1,126 Interior spruce trees representing 25 open-pollinated families planted over three sites in British Columbia, Canada. Four imputation algorithms were compared: mean value (MI), singular value decomposition (SVD), expectation maximization (EM), and a newly derived, family-based k-nearest neighbor (kNN-Fam). Trees were phenotyped for several yield and wood attributes. Single- and multi-site GS prediction models were developed using the Ridge Regression Best Linear Unbiased Predictor (RR-BLUP) and Generalized Ridge Regression (GRR) to test different assumptions about trait architecture. Finally, using PCA, multi-trait GS prediction models were developed. The EM and kNN-Fam imputation methods were superior for 30% and 60% missing data, respectively. The RR-BLUP GS prediction model produced better accuracies than the GRR, indicating that the genetic architecture for these traits is complex. GS prediction accuracies for multi-site models were high and better than those of single-site models, while cross-site predictability produced the lowest accuracies, reflecting type-b genetic correlations, and was deemed unreliable. The incorporation of genomic information in quantitative genetics analyses produced more realistic heritability estimates, as the half-sib pedigree tended to inflate the additive genetic variance and consequently both heritability and gain estimates. Principal component scores as representatives of multi-trait GS prediction models produced surprising results, where negatively correlated traits could be concurrently selected for using PCA2 and PCA3. The application of GS to open-pollinated family testing, the simplest form of tree improvement evaluation, was proven to be effective. The prediction accuracies obtained for all traits strongly support the integration of GS in tree breeding. While the within-site GS prediction accuracies were high, the results clearly indicate that the ability of single-site GS models to predict other sites is unreliable, supporting the use of the multi-site approach. Principal component scores provided an opportunity for the concurrent selection of traits with different phenotypic optima.
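Of the four imputation algorithms compared, the mean-value (MI) baseline is simple enough to sketch; the snippet below fills missing genotype calls with per-marker means on a random toy matrix (the EM, SVD, and kNN-Fam variants are substantially more involved).

```python
import numpy as np

def mean_impute(G):
    """Fill missing genotype calls (NaN) with the per-marker column mean,
    the MI baseline against which the other methods were compared."""
    G = G.copy()
    col_mean = np.nanmean(G, axis=0)
    rows, cols = np.where(np.isnan(G))
    G[rows, cols] = col_mean[cols]
    return G

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(8, 10)).astype(float)   # 0/1/2 allele dosages
G[rng.random(G.shape) < 0.3] = np.nan                # 30% missing, as in the study
print(mean_impute(G)[0])
```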
19 CFR 207.68 - Final comments on information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... comment may address the accuracy, reliability, or probative value of such information by reference to... INVESTIGATIONS OF WHETHER INJURY TO DOMESTIC INDUSTRIES RESULTS FROM IMPORTS SOLD AT LESS THAN FAIR VALUE OR FROM...
Highway noise study : final report.
DOT National Transportation Integrated Search
1974-05-01
Three noise level measurement systems have been investigated and made operational: (1) the Graphic Level Recording Method; (2) the Periodic Sampling Method; (3) the Environmental Noise Classifier Method. The relative accuracy of the three systems wer...
NASA Astrophysics Data System (ADS)
Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq
2018-01-01
3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia still relies heavily upon conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV-based automated, image-based open-source software and web services to reconstruct and replicate cultural assets. Using an intricate wooden boat, the Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with 3D laser scanning, which is known for its accuracy, efficiency and high cost. The aim of this study is to compare the visual accuracy of 3D models generated by the CV system with 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The final objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.
Liu, Chenglong; Liu, Jinghong; Song, Yueming; Liang, Huaidan
2017-01-01
This paper provides a system and method for correcting relative angular displacements between an Unmanned Aerial Vehicle (UAV) and its onboard strap-down photoelectric platform to improve localization accuracy. Because these angular displacements affect the final accuracy, a measuring system attached to the platform collects texture images of the platform base bulkhead in real time. Through image registration, the displacement vector of the platform relative to its bulkhead can be calculated to determine the angular displacements. After being decomposed and superposed on the three attitude angles of the UAV, the angular displacement corrections reduce the coordinate transformation errors and thus improve the localization accuracy. Even this relatively simple method can improve the localization accuracy by 14.3%. PMID:28273845
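The abstract does not specify the registration algorithm; as one standard way to recover a displacement vector between two texture images, here is a phase-correlation sketch (integer-pixel shifts only, synthetic data), not the authors' method.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel translation between two same-size images
    via phase correlation: whiten the cross-power spectrum and locate the
    correlation peak."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                       # keep phase only
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image back to negative values.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(2)
img = rng.random((128, 128))
shifted = np.roll(np.roll(img, 3, axis=0), -5, axis=1)
print(phase_correlation_shift(shifted, img))     # expect (3, -5)
```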
Remote sensing of on-road vehicle emissions: Mechanism, applications and a case study from Hong Kong
NASA Astrophysics Data System (ADS)
Huang, Yuhan; Organ, Bruce; Zhou, John L.; Surawski, Nic C.; Hong, Guang; Chan, Edward F. C.; Yam, Yat Shing
2018-06-01
Vehicle emissions are a major contributor to air pollution in cities and have serious health impacts on their inhabitants. On-road remote sensing is an effective and economical tool to monitor and control vehicle emissions. In this review, the mechanism, accuracy, advantages and limitations of remote sensing are first introduced. Then the applications and major findings of remote sensing are critically reviewed. The emission distribution of on-road vehicles was revealed to be highly skewed, with the dirtiest 10% of vehicles accounting for over half of the total fleet emissions. Such findings highlight the importance and effectiveness of using remote sensing for in-situ identification of high-emitting vehicles for further inspection and maintenance programs. However, the accuracy and the number of vehicles affected by screening programs depend greatly on the screening criteria. Remote sensing studies showed that the emissions of gasoline and diesel vehicles have been significantly reduced in recent years, with the exception of NOx emissions from diesel vehicles, in spite of greatly tightened automotive emission regulations. Thirdly, the experience and issues of using remote sensing to identify high-emitting vehicles in Hong Kong (where remote sensing is a legislative instrument for enforcement purposes) are reported, followed by the first identification and discussion of the issue of frequent false detection of diesel high-emitters by remote sensing. Finally, the challenges and future research directions of on-road remote sensing are elaborated.
NASA Astrophysics Data System (ADS)
Han, Xiaopeng; Huang, Xin; Li, Jiayi; Li, Yansheng; Yang, Michael Ying; Gong, Jianya
2018-04-01
In recent years, the availability of high-resolution imagery has enabled more detailed observation of the Earth. However, it is imperative to simultaneously achieve accurate interpretation and preserve spatial detail in the classification of such high-resolution data. To this end, we propose the edge-preservation multi-classifier relearning framework (EMRF). This multi-classifier framework is made up of support vector machine (SVM), random forest (RF), and sparse multinomial logistic regression via variable splitting and augmented Lagrangian (LORSAL) classifiers, chosen for their complementary characteristics. To better characterize complex scenes in remote sensing images, relearning based on landscape metrics is proposed, which iteratively quantifies both the landscape composition and the spatial configuration using the initial classification results. In addition, a novel tri-training strategy is proposed to counter the over-smoothing effect of relearning by automatically selecting training samples with low classification certainty, which are generally distributed in or near edge areas. Finally, EMRF flexibly combines the strengths of relearning and tri-training via the classification certainties calculated from the probabilistic outputs of the respective classifiers. It should be noted that, in order to achieve an unbiased evaluation, we assessed the classification accuracy of the proposed framework using both edge and non-edge test samples. The experimental results obtained with four multispectral high-resolution images confirm the efficacy of the proposed framework in terms of both edge and non-edge accuracy.
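The final certainty-based combination step can be sketched compactly: each classifier contributes its probabilistic output and, per sample, the most certain model decides. The snippet below illustrates that idea on synthetic data; the relearning and tri-training stages are omitted, and plain logistic regression stands in for LORSAL.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Certainty-based fusion of three probabilistic classifiers on toy data.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

models = [SVC(probability=True, random_state=0),
          RandomForestClassifier(random_state=0),
          LogisticRegression(max_iter=2000)]        # stand-in for LORSAL
probs = np.stack([m.fit(Xtr, ytr).predict_proba(Xte) for m in models])

certainty = probs.max(axis=2)            # each model's confidence per sample
winner = certainty.argmax(axis=0)        # most certain model per sample
pred = probs[winner, np.arange(Xte.shape[0])].argmax(axis=1)
print(f"combined accuracy: {(pred == yte).mean():.2f}")
```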
A Novel Arc Fault Detector for Early Detection of Electrical Fires
Yang, Kai; Zhang, Rencheng; Yang, Jianhong; Liu, Canhua; Chen, Shouhong; Zhang, Fujiang
2016-01-01
Arc faults can produce very high temperatures and can easily ignite combustible materials; thus, they represent one of the most important causes of electrical fires. The application of arc fault detection, as an emerging early fire detection technology, is required by the National Electrical Code to reduce the occurrence of electrical fires. However, the concealment, randomness and diversity of arc faults make them difficult to detect. To improve the accuracy of arc fault detection, a novel arc fault detector (AFD) is developed in this study. First, an experimental arc fault platform is built to study electrical fires. A high-frequency transducer and a current transducer are used to measure typical load signals of arc faults and normal states. After the common features of these signals are studied, high-frequency energy and current variations are extracted as an input feature vector for use by an arc fault detection algorithm. Then, the detection algorithm, based on a weighted least squares support vector machine, is designed and successfully implemented in a microprocessor. Finally, an AFD is developed. The test results show that the AFD can detect arc faults in a timely manner and interrupt the circuit power supply before electrical fires can occur. The AFD is not influenced by cross talk or transient processes, and the detection accuracy is very high. Hence, the AFD can be installed in low-voltage circuits to monitor circuit states in real time to facilitate the early detection of electrical fires. PMID:27070618
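A least-squares SVM reduces training to a single linear solve, which is what makes it attractive for a microprocessor implementation. Below is a minimal unweighted LS-SVM sketch on a synthetic two-feature problem standing in for the paper's high-frequency-energy and current-variation features; the published detector uses a weighted variant.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Least-squares SVM classifier (regression form on +/-1 labels):
    solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y], then predict
    sign(sum_i alpha_i K(x, x_i) + b). One linear solve, no QP."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))                    # RBF kernel
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Z):
        d2z = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.sign(np.exp(-d2z / (2 * sigma ** 2)) @ alpha + b)
    return predict

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])  # [HF energy, dI]
y = np.concatenate([-np.ones(30), np.ones(30)])   # -1 normal load, +1 arc fault
predict = lssvm_train(X, y)
print((predict(X) == y).mean())                   # training accuracy
```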
NASA Technical Reports Server (NTRS)
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm is based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating the high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.
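The per-fiber estimator at the core of the tracker can be sketched as a constant-velocity Kalman filter run over slice index; the snippet below tracks one fiber centroid through synthetic cross sections (the global-nearest-neighbor association, stitching, smoothing, and overlap removal are omitted).

```python
import numpy as np

def track_fiber(observations, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over slice index.
    State is [x, y, vx, vy]; each observation is a noisy (x, y) centroid."""
    F = np.eye(4); F[0, 2] = F[1, 3] = 1.0
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    Q = q * np.eye(4); R = r * np.eye(2)
    x = np.array([*observations[0], 0.0, 0.0]); P = np.eye(4)
    track = [x[:2].copy()]
    for z in observations[1:]:
        x = F @ x; P = F @ P @ F.T + Q                       # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
        x = x + K @ (z - H @ x)                              # update
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

rng = np.random.default_rng(4)
slices = [(10 + 0.3 * k + rng.normal(0, 0.2),    # fiber drifting across slices
           5 - 0.1 * k + rng.normal(0, 0.2)) for k in range(20)]
print(track_fiber(slices)[-1])                   # filtered centroid, last slice
```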
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail; Sinitsyn, Alexey
2017-04-01
Shortwave radiation is an important component of the surface heat budget over sea and land. Estimating it requires accurate observations of cloud conditions, including total cloud cover and the spatial and temporal cloud structure. While cloud cover is widely observed visually, building accurate SW radiation parameterizations requires that cloud structure also be quantified with precise instrumental measurements. Several state-of-the-art land-based cloud cameras already satisfy researchers' needs, but their major disadvantages are associated with inaccuracies in all-sky image processing algorithms, which typically result in uncertainties of 2-4 okta in cloud cover estimates, with a resulting true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determine cloud types. We developed an approach for estimating cloud cover and structure that provides much more accurate estimates and also allows additional characteristics to be measured. The method is based on a synthetic controlling index, the "grayness rate index", which we introduced in 2014. Since then, this index has demonstrated high efficiency when used together with a "background sunburn effect suppression" technique to detect thin clouds. This made it possible to significantly increase the accuracy of total cloud cover estimation for various sky image states using this extension of the routine algorithm. Errors in the cloud cover estimates decreased significantly, down to a mean squared error of about 1.5 okta, and the resulting true-scoring accuracy exceeds 38%. The main source of uncertainty in this approach is the determination of the solar disk state. While a deep neural network approach lets us estimate the solar disk state with 94% accuracy, the final total cloud cover estimate was still not satisfactory. To solve this problem completely, we applied a set of machine learning algorithms directly to the problem of total cloud cover estimation. The accuracy of this approach varies with the choice of algorithm; deep neural networks demonstrated the best accuracy, at more than 96%. We will demonstrate some approaches and the most influential statistical features of all-sky images that let the algorithm reach such high accuracy. With our new optical package, a set of over 480,000 samples was collected during several sea missions in 2014-2016, along with concurrent standard human observations and instrumentally recorded meteorological parameters. We will present the results of the field measurements and discuss some remaining problems and the potential for further development of the machine learning approach.
Final Report: Ionization chemistry of high temperature molecular fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L E
2007-02-26
With the advent of coupled chemical/hydrodynamic reactive flow models for high explosives, understanding detonation chemistry is of increasing importance to DNT. The accuracy of first-principles detonation codes, such as CHEETAH, is dependent on an accurate representation of the species present under detonation conditions. Ionic species and non-molecular phases are not currently included in coupled chemistry/hydrodynamic simulations. This LDRD will determine the prevalence of such species during high-explosive detonations by carrying out experimental and computational investigations of common detonation products under extreme conditions. We are studying the phase diagram of detonation products such as H₂O or NH₃ and their mixtures under conditions of extreme pressure (P > 1 GPa) and temperature (T > 1000 K). Under these conditions, the neutral molecular form of matter transforms to a phase dominated by ions. The phase boundaries of such a region are unknown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradel, Lauren; Endert, Alexander; Koch, Kristen
2013-08-01
Large, high-resolution vertical displays carry the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional textual intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the space management strategies of users partitioned by the type of tool philosophy followed (visualization- or text-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with information on the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between pairs of users. Finally, we offer design suggestions for building future co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
NASA Astrophysics Data System (ADS)
Beck, A.; Frederiksen, J. T.; Dérouillat, J.
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue, as well as milestones towards a modern, accurate, high-performance PIC code for high-energy particle acceleration.
Be-7 as a tracer for short-term soil surface changes - opportunities and limitations
NASA Astrophysics Data System (ADS)
Baumgart, Philipp
2013-04-01
Within the last 20 years, the cosmogenic nuclide Beryllium-7 has been successfully established as a suitable tracer element for detecting soil surface changes with high accuracy. Soil erosion rates from single precipitation events, in particular, are the focus of various studies, owing to the short radioactive half-life of the Be-7 isotope. High sorption to topmost soil particles and immobility at typical pH values enable fine-scaled erosion modelling down to 2 mm increments. However, several challenging limitations require particular attention, from sampling up to the final data evaluation: for example, the realisation of fine-increment soil collection, the limited number of measurable samples per campaign due to the short radioactive half-life, and the specific requirements of the detector measurements. Both the high potential and the challenging limitations are presented, as well as future perspectives of this tracer method.
Conser, Christiana; Seebacher, Lizbeth; Fujino, David W.; Reichard, Sarah; DiTomaso, Joseph M.
2015-01-01
Weed Risk Assessment (WRA) methods for evaluating invasiveness in plants have evolved rapidly in the last two decades. Many WRA tools exist, but none were specifically designed to screen ornamental plants prior to being released into the environment. To be accepted as a tool to evaluate ornamental plants for the nursery industry, it is critical that a WRA tool accurately predicts non-invasiveness without falsely categorizing them as invasive. We developed a new Plant Risk Evaluation (PRE) tool for ornamental plants. The 19 questions in the final PRE tool were narrowed down from 56 original questions from existing WRA tools. We evaluated the 56 WRA questions by screening 21 known invasive and 14 known non-invasive ornamental plants. After statistically comparing the predictability of each question and the frequency the question could be answered for both invasive and non-invasive species, we eliminated questions that provided no predictive power, were irrelevant in our current model, or could not be answered reliably at a high enough percentage. We also combined many similar questions. The final 19 remaining PRE questions were further tested for accuracy using 56 additional known invasive plants and 36 known non-invasive ornamental species. The resulting evaluation demonstrated that when “needs further evaluation” classifications were not included, the accuracy of the model was 100% for both predicting invasiveness and non-invasiveness. When “needs further evaluation” classifications were included as either false positive or false negative, the model was still 93% accurate in predicting invasiveness and 97% accurate in predicting non-invasiveness, with an overall accuracy of 95%. We conclude that the PRE tool should not only provide growers with a method to accurately screen their current stock and potential new introductions, but also increase the probability of the tool being accepted for use by the industry as the basis for a nursery certification program. PMID:25803830
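The reported percentages follow directly from the validation counts in the abstract; the arithmetic below assumes an illustrative split of correct calls consistent with the rounded figures.

```python
# Sanity check of the reported accuracies from the validation set described
# in the abstract: 56 known invasive and 36 known non-invasive species.
# The exact counts of correct calls are assumed to match the rounded figures.
correct_invasive = 52       # 52/56 ~ 93% correctly flagged as invasive
correct_noninvasive = 35    # 35/36 ~ 97% correctly passed as non-invasive

acc_inv = correct_invasive / 56
acc_non = correct_noninvasive / 36
overall = (correct_invasive + correct_noninvasive) / (56 + 36)
print(f"invasive: {acc_inv:.0%}, non-invasive: {acc_non:.0%}, overall: {overall:.0%}")
# -> invasive: 93%, non-invasive: 97%, overall: 95%
```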
Hoffman, John; Young, Stefano; Noo, Frédéric; McNitt-Gray, Michael
2016-03-01
With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown. This usually requires extensive access to the scanner on which the data were acquired, and the scanner workflow is not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for the GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values and with sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software is described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated based on acquisitions of the ACR CT accreditation phantom under four different flying-focal-spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast-to-noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. The average reconstruction time evaluated under all flying-focal-spot configurations was found to be 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom were found to meet all CT Accreditation Program criteria, including the CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with the original phantom values to within 0.0001 mm²/g (0.01%). FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT available under the GNU GPL. It shows good performance with both clinical and simulated data.
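The CNR test is the easiest of the ACR-style checks to show in code; the sketch below uses one common definition (ROI-minus-background mean over background standard deviation) on a synthetic image. Definitions vary slightly between protocols.

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio, one common form:
    (ROI mean - background mean) / background standard deviation."""
    return (image[roi_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()

rng = np.random.default_rng(5)
img = rng.normal(0.0, 5.0, (256, 256))           # synthetic noisy background
img[100:140, 100:140] += 6.0                     # low-contrast insert

roi = np.zeros(img.shape, bool); roi[105:135, 105:135] = True
bg = np.zeros(img.shape, bool); bg[:50, :50] = True
print(f"CNR = {cnr(img, roi, bg):.2f}")
```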
2015-01-01
Highly charged metal ions act as catalytic centers and structural elements in a broad range of chemical complexes. The nonbonded model for metal ions is extensively used in molecular simulations due to its simple form, computational speed, and transferability. We have proposed and parametrized a 12-6-4 LJ (Lennard-Jones)-type nonbonded model for divalent metal ions in previous work, which showed a marked improvement over the 12-6 LJ nonbonded model. In the present study, by treating the experimental hydration free energies and ion–oxygen distances of the first solvation shell as targets for our parametrization, we evaluated 12-6 LJ parameters for 18 M(III) and 6 M(IV) metal ions for three widely used water models (TIP3P, SPC/E, and TIP4PEW). As expected, the interaction energy underestimation of the 12-6 LJ nonbonded model increases dramatically for the highly charged metal ions. We then parametrized the 12-6-4 LJ-type nonbonded model for these metal ions with the three water models. The final parameters reproduced the target values with good accuracy, which is consistent with our previous experience using this potential. Finally, tests were performed on a protein system, and the obtained results validate the transferability of these nonbonded model parameters. PMID:25145273
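The functional form of the 12-6-4 model is compact enough to state directly; the snippet below evaluates it with placeholder coefficients (the published ion parameters are not reproduced here). The added r⁻⁴ term captures the charge-induced dipole interaction that the plain 12-6 form misses, which is why the improvement matters most for highly charged ions.

```python
import numpy as np

def u_12_6_4(r, c12, c6, c4):
    """12-6-4 LJ-type nonbonded potential: the usual 12-6 Lennard-Jones
    repulsion/dispersion terms plus an r^-4 charge-induced dipole term.
    Coefficients here are placeholders, not published ion parameters."""
    return c12 / r**12 - c6 / r**6 - c4 / r**4

r = np.linspace(1.5, 6.0, 5)     # ion-oxygen distance, angstroms
print(u_12_6_4(r, c12=1e5, c6=1e2, c4=5e1))
```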
NASA Astrophysics Data System (ADS)
Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.
2016-10-01
A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. The guidance law has been successfully validated in nonlinear six-degree-of-freedom simulation studies, including the design of an inner-loop autopilot, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to be robust for perturbed cases as well.
Passive athermalization: required accuracy of the thermo-optical coefficients
NASA Astrophysics Data System (ADS)
Rogers, John R.
2014-12-01
Passive athermalization requires that the materials (both optical and mechanical) and optical powers be carefully selected in order for the image to stay adequately in focus at the plane of the detector as the various materials change in physical dimension and refractive index. For a large operational temperature range, the accuracy of the thermo-optical coefficients (dn/dT coefficients and the Coefficients of Thermal Expansion) can limit the performance of the final system. Based on an example lens designed to be passively athermalized over a 200°C temperature range, and using a Monte Carlo analysis technique, we examine the accuracy to which the expansion coefficients and dn/dT coefficients of the system must be known.
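The Monte Carlo technique described can be sketched generically: draw each thermo-optical coefficient from its assumed uncertainty and propagate to a focus shift. The sensitivities and tolerances below are hypothetical stand-ins, not values from the example lens in the paper.

```python
import numpy as np

# Monte Carlo tolerance analysis sketch: perturb dn/dT and CTE values within
# assumed catalog uncertainties and observe the spread of the focus shift.
# A linearized sensitivity model is assumed; all numbers are hypothetical.
rng = np.random.default_rng(6)
n_trials = 10_000
dT = 200.0                                    # operational temperature range, degC

sens_dndt = np.array([1.2e3, -0.8e3])         # mm of defocus per unit dn/dT error
sens_cte  = np.array([-0.5e2, 0.3e2])         # mm of defocus per unit CTE error

sigma_dndt = 0.05 * np.array([7e-6, 4e-6])    # assume 5% coefficient uncertainty
sigma_cte  = 0.05 * np.array([23e-6, 9e-6])

err_dndt = rng.normal(0.0, sigma_dndt, (n_trials, 2))
err_cte  = rng.normal(0.0, sigma_cte, (n_trials, 2))
defocus = dT * (err_dndt @ sens_dndt + err_cte @ sens_cte)
print(f"focus-shift spread (1 sigma): {defocus.std():.4f} mm")
```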
A calibration method of infrared LVF based spectroradiometer
NASA Astrophysics Data System (ADS)
Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin
2017-10-01
In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral calibration and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used for spectral calibration validation, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.
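The spectral-calibration chain (theoretical step-to-wavelength law, polynomial non-linearity correction, laser-line validation) can be sketched as below; all step counts and wavelengths are hypothetical except the two validation lines named in the abstract.

```python
import numpy as np

# Sketch: calibrate motor step -> wavelength with a low-order polynomial
# (linear law plus non-linearity term), then check against reference lasers.
steps = np.array([0, 500, 1000, 1500, 2000])               # hypothetical
measured_wl = np.array([3.00, 4.95, 6.85, 8.80, 10.70])    # um, hypothetical lines

step_to_wl = np.poly1d(np.polyfit(steps, measured_wl, deg=2))

for laser_wl in (3.39, 10.69):          # validation lines cited in the abstract
    grid = np.arange(0, 2001)
    step = int(np.argmin(np.abs(step_to_wl(grid) - laser_wl)))
    err = abs(step_to_wl(step) - laser_wl) / laser_wl
    print(f"{laser_wl} um -> step {step}, relative error {err:.2%}")
```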
Vibration modes interference in the MEMS resonant pressure sensor
NASA Astrophysics Data System (ADS)
Zhang, Fangfang; Li, Anlin; Bu, Zhenxiang; Wang, Lingyun; Sun, Daoheng; Du, Xiaohui; Gu, Dandan
2017-11-01
A new type of coupled balanced-mass double-ended tuning fork resonator (CBDETF) pressure sensor is fabricated and tested. However, the initial accuracy of the CBDETF pressure sensor was unsatisfactory. Based on systematic analysis and tests, the coupling effect between the operational mode and an interference mode is considered to be the main cause of the sensor's poor accuracy. To solve this problem, the stiffness of the serpentine beams is increased to raise the resonant frequency of the interfering mode and separate it well away from the operational mode. Finally, the accuracy of the CBDETF pressure sensor is improved from ±0.5% to better than ±0.03% of full scale (F.S.).
Accuracy of binary black hole waveform models for aligned-spin binaries
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Chu, Tony; Fong, Heather; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-05-01
Coalescing binary black holes are among the primary science targets for second-generation ground-based gravitational wave detectors. Reliable gravitational waveform models are central to detection of such systems and subsequent parameter estimation. This paper performs a comprehensive analysis of the accuracy of recent waveform models for binary black holes with aligned spins, utilizing a new set of 84 high-accuracy numerical relativity simulations. Our analysis covers comparable-mass binaries (mass ratio 1 ≤ q ≤ 3), and samples independently both black hole spins up to a dimensionless spin magnitude of 0.9 for equal-mass binaries and 0.85 for unequal-mass binaries. Furthermore, we focus on the high-mass regime (total mass ≳ 50 M⊙). The two most recent waveform models considered (PhenomD and SEOBNRv2) both perform very well for signal detection, losing less than 0.5% of the recoverable signal-to-noise ratio ρ, except that SEOBNRv2's efficiency drops slightly when both black hole spins are aligned at large magnitude. For parameter estimation, modeling inaccuracies of the SEOBNRv2 model are found to be smaller than systematic uncertainties for moderately strong GW events up to roughly ρ ≲ 15. PhenomD's modeling errors are found to be smaller than SEOBNRv2's, and are generally irrelevant for ρ ≲ 20. Both models' accuracy deteriorates with increased mass ratio, and when at least one black hole spin is large and aligned. The SEOBNRv2 model shows a pronounced disagreement with the numerical relativity simulations in the merger phase for unequal masses when both black hole spins are simultaneously very large and aligned. Two older waveform models (PhenomC and SEOBNRv1) are found to be distinctly less accurate than the more recent PhenomD and SEOBNRv2 models. Finally, we quantify the bias expected from all four waveform models during parameter estimation for several recovered binary parameters: chirp mass, mass ratio, and effective spin.
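Statements about lost signal-to-noise ratio rest on the waveform match (overlap maximized over time shift). The sketch below computes a white-noise-weighted match for two toy chirps; production analyses weight the inner product by the detector noise power spectral density, which is omitted here.

```python
import numpy as np

def match(h1, h2):
    """Overlap between two waveforms maximized over (circular) time shift,
    with a flat (white) noise weighting; PSD weighting is omitted."""
    H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
    corr = np.fft.irfft(H1 * np.conj(H2), n=h1.size)   # overlap vs. time shift
    norm = np.sqrt((h1 @ h1) * (h2 @ h2))
    return np.abs(corr).max() / norm

t = np.linspace(0, 1, 4096)
h1 = np.sin(2 * np.pi * (30 * t + 40 * t**2)) * np.exp(-2 * t)   # toy chirp
h2 = np.roll(h1, 37)                                             # time-shifted copy
print(f"match = {match(h1, h2):.6f}")    # ~1: a pure time shift loses no SNR
```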
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guennou, L.; /Northwestern U. /Marseille, Lab. Astrophys.; Adami, C.
2010-08-01
As a contribution to the understanding of the dark energy concept, the Dark energy American French Team (DAFT, in French FADA) has started a large project to characterize statistically high-redshift galaxy clusters, infer cosmological constraints from weak lensing tomography, and understand biases relevant for constraining dark energy and cluster physics in future cluster and cosmological experiments. Aims. The purpose of this paper is to establish the basis of reference for the photo-z determination used in all our subsequent papers, including weak lensing tomography studies. This project is based on a sample of 91 high-redshift (z ≥ 0.4), massive (≳ 3 × 10¹⁴ M⊙) clusters with existing HST imaging, for which we are presently performing complementary multi-wavelength imaging. This allows us in particular to estimate spectral types and determine accurate photometric redshifts for galaxies along the lines of sight to the first ten clusters for which all the required data are available, down to a limit of I_AB = 24/24.5, with the LePhare software. The accuracy in redshift is of the order of 0.05 for the range 0.2 ≤ z ≤ 1.5. We verified that the technique applied to obtain photometric redshifts works well by comparing our results with previous works. In clusters, photo-z accuracy is degraded for bright absolute magnitudes and for the latest and earliest type galaxies. The photo-z accuracy also varies only slightly as a function of spectral type for field galaxies. As a consequence, we find evidence for an environmental dependence of the photo-z accuracy, interpreted as the standard Spectral Energy Distributions used being not very well suited to cluster galaxies. Finally, we modeled the LCDCS 0504 mass with the strong arcs detected along this line of sight.
Parametric boundary reconstruction algorithm for industrial CT metrology application.
Yin, Zhye; Khare, Kedar; De Man, Bruno
2009-01-01
High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, on the other hand, generate a sinogram, which is transformed mathematically into pixel-based images. The dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm is presented to reconstruct the boundaries of an object with uniform material composition and uniform density. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by boundary parameters instead of image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and processing time can be improved. Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of the iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental data from an industrial CT system.
Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue
2017-01-01
The classification of ankle movements from non-invasive brain recordings can be applied in a brain-computer interface (BCI) to control exoskeletons, prostheses, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs were classified from cortical current sources estimated by a hierarchical variational Bayesian method using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities in an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well known to contribute to movement control. A sparse logistic regression method was applied for nine-class classification (eight active tasks and a resting control task), obtaining a mean accuracy of 65.64% for time series of current sources estimated from the EEG and fMRI signals using the variational Bayesian method, versus a mean accuracy of 22.19% for classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained at the same vertex for different motor tasks. Since the inverse filter for current source estimation can be computed offline with the present method, the method is applicable to real-time BCIs. Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns for designing BCIs that control an affected limb in patients with stroke, or BCIs driven by motor imagery in patients with spinal cord injury.
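The classification stage is standard enough to sketch: L1-penalized (sparse) multinomial logistic regression, here via scikit-learn on synthetic stand-ins for the current-source features; the hierarchical source estimation itself is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sparse (L1) multinomial logistic regression on synthetic 9-class data,
# standing in for the study's current-source features. Data are placeholders.
rng = np.random.default_rng(7)
n_per_class, n_features, n_classes = 40, 120, 9
X = rng.normal(size=(n_per_class * n_classes, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X += 1.5 * rng.normal(size=(n_classes, n_features))[y]   # class-dependent shifts

clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(X, y)
sparsity = (clf.coef_ == 0).mean()       # fraction of weights zeroed by the L1 penalty
print(f"train accuracy {clf.score(X, y):.2f}, zeroed weights {sparsity:.0%}")
```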
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy, high-speed processing of wide-swath high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. First, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, a dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated using the coordinate mapping relationship established between the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that registration between the panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is also corrected efficiently.
High-speed railway real-time localization auxiliary method based on deep neural network
NASA Astrophysics Data System (ADS)
Chen, Dongjie; Zhang, Wensheng; Yang, Yang
2017-11-01
The high-speed railway intelligent monitoring and management system is composed of schedule integration, geographic information, location services, and data mining technologies for the integration of time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practical applications, the general approach is to capture image sequences of trackside components with a high-definition camera and apply digital image processing, target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method for high-speed railway pillar plate numbers based on a deep CNN called YOLO-toc. Unlike other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework that achieves state-of-the-art real-time performance, at nearly 50 fps on a GPU (GTX960). Finally, we realize a real-time yet high-accuracy pillar plate number recognition system by integrating natural-scene OCR into a dedicated YOLO-toc classification model.
Michalet, X.; Siegmund, O.H.W.; Vallerga, J.V.; Jelinsky, P.; Millaud, J.E.; Weiss, S.
2017-01-01
We have recently developed a wide-field photon-counting detector with high temporal and spatial resolution that is capable of high throughput (the H33D detector). Its design is based on a 25 mm diameter multi-alkali photocathode producing one photoelectron per detected photon; the photoelectrons are then multiplied up to 10⁷ times by a 3-microchannel-plate stack. The resulting electron cloud is proximity-focused onto a cross delay line anode, which allows the incident photon position to be determined with high accuracy. The imaging and fluorescence lifetime measurement performance of the H33D detector installed on a standard epifluorescence microscope is presented. We compare it to that of standard single-molecule detectors, such as single-photon avalanche photodiodes (SPADs) or electron-multiplying cameras, using model samples (fluorescent beads, quantum dots, and live cells). Finally, we discuss the design and applications of future generations of H33D detectors for single-molecule imaging and high-throughput study of biomolecular interactions. PMID:29479130
A new processing scheme for ultra-high resolution direct infusion mass spectrometry data
NASA Astrophysics Data System (ADS)
Zielinski, Arthur T.; Kourtchev, Ivan; Bortolini, Claudio; Fuller, Stephen J.; Giorio, Chiara; Popoola, Olalekan A. M.; Bogialli, Sara; Tapparo, Andrea; Jones, Roderic L.; Kalberer, Markus
2018-04-01
High-resolution, high-accuracy mass spectrometry is widely used to characterise environmental or biological samples with highly complex composition, enabling the identification of the chemical composition of often unknown compounds. Despite instrumental advancements, the accurate molecular assignment of compounds acquired in high-resolution mass spectra remains time consuming and requires automated algorithms, especially for samples covering a wide mass range and large numbers of compounds. A new processing scheme is introduced that implements filtering methods based on element assignment, instrumental error, and blank subtraction. Optional post-processing incorporates common-ion selection across replicate measurements and shoulder-ion removal. The scheme handles both positive and negative direct infusion electrospray ionisation (ESI) and atmospheric pressure photoionisation (APPI) acquisition with the same programs. An example application to atmospheric organic aerosol samples using an Orbitrap mass spectrometer is reported for both ionisation techniques, resulting in final spectra with 0.8% and 8.4% of the peaks retained from the raw spectra for APPI positive and ESI negative acquisition, respectively.
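Two of the scheme's filters, instrumental (ppm) error matching and blank subtraction, can be combined in a few lines; the thresholds below (3 ppm window, 10x blank intensity) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def filter_peaks(mz, intensity, blank_mz, blank_int, ppm_tol=3.0, blank_ratio=10.0):
    """Discard sample peaks whose m/z matches a blank peak within ppm_tol,
    unless the sample intensity exceeds the blank by blank_ratio.
    Thresholds are illustrative, not the published settings."""
    keep = np.ones(mz.size, dtype=bool)
    for i, m in enumerate(mz):
        window = np.abs(blank_mz - m) / m * 1e6 <= ppm_tol
        if window.any() and intensity[i] < blank_ratio * blank_int[window].max():
            keep[i] = False
    return keep

mz = np.array([150.0001, 200.1010, 250.0505])
inten = np.array([1e6, 5e4, 2e6])
blank_mz = np.array([200.1012])        # one contaminant peak in the blank
blank_int = np.array([4e4])
print(filter_peaks(mz, inten, blank_mz, blank_int))   # middle peak removed
```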
NASA Astrophysics Data System (ADS)
He, Li; Song, Xuan
2018-03-01
In recent years, ceramic fabrication using stereolithography (SLA) has gained in popularity because of the high accuracy and density that can be achieved in the final part. One of the key challenges in ceramic SLA is that support structures are required for building overhanging features, and removing these support structures without damaging the component is difficult. In this research, a suspension-enclosing projection stereolithography process is developed to overcome this challenge. This process uses a high-yield-stress ceramic slurry as the feedstock material and exploits the elastic force of the material to support overhanging features without the need to build additional support structures. Ceramic slurries with different solid loadings are studied to identify the rheological properties most suitable for supporting overhanging features. An analytical model of a double doctor-blade module is established to obtain uniform, thin recoating layers from a high-yield-stress slurry. Several test cases highlight the feasibility of using a high-yield-stress slurry to support overhanging features in SLA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, W.
2013-01-01
Final technical progress report of SunShot Incubator Solaflect Energy. The project succeeded in demonstrating that the Solaflect Suspension Heliostat design is viable for large-scale CSP installations. Canting accuracy is acceptable and is continually improving as Solaflect improves its understanding of this design. Cost reduction initiatives were successful, and there are still many opportunities for further development and further cost reduction.
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to address this question. Randomized controlled trials (RCTs) and cohort studies comparing RA and FH published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected; after the full-text assessment, 45 clinical trials were excluded, and the final meta-analysis included 10 articles. The accuracy of pedicle screw placement in the RA group was significantly greater than in the FH group ("perfect accuracy": odds ratio with 95% confidence interval 1.38-2.07, P < .01; "clinically acceptable": odds ratio with 95% confidence interval 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery, and the RA technique was demonstrated to be superior to the conventional method in terms of the accuracy of pedicle screw placement.
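The pooling step behind the reported odds ratios can be sketched with an inverse-variance fixed-effect combination of per-study 2x2 tables; the counts below are hypothetical, and the original meta-analysis may have used a different estimator (e.g., Mantel-Haenszel or a random-effects model).

```python
import numpy as np

def pooled_or_fixed(tables):
    """Inverse-variance fixed-effect pooled odds ratio with a 95% CI.
    Each table is (a, b, c, d): accurate/inaccurate screws in the RA arm,
    then accurate/inaccurate screws in the FH arm. Counts are hypothetical."""
    log_ors, weights = [], []
    for a, b, c, d in tables:
        log_or = np.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d          # Woolf variance of log OR
        log_ors.append(log_or)
        weights.append(1.0 / var)
    log_ors, weights = np.array(log_ors), np.array(weights)
    pooled = (weights * log_ors).sum() / weights.sum()
    se = 1.0 / np.sqrt(weights.sum())
    return np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])

studies = [(90, 10, 80, 20), (45, 5, 40, 10), (60, 8, 55, 15)]
or_, lo, hi = pooled_or_fixed(studies)
print(f"pooled OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```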