NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. Compared with an interferometer test, the experimental result shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.
Effect of recent popularity on heat-conduction based recommendation models
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Dong, Qiang; Shi, Yang-Bo; Fu, Yan; He, Jia-Lin
2017-05-01
Accuracy and diversity are two important measures in evaluating the performance of recommender systems. It has been demonstrated that the recommendation model inspired by the heat conduction process has high diversity yet low accuracy. Many variants have been introduced to improve the accuracy while keeping high diversity, most of which regard the current node-degree of an item as its popularity. However in this way, a few outdated items of large degree may be recommended to an enormous number of users. In this paper, we take the recent popularity (recently increased item degrees) into account in the heat-conduction based methods, and propose accordingly the improved recommendation models. Experimental results on two benchmark data sets show that the accuracy can be largely improved while keeping the high diversity compared with the original models.
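For illustration, the sketch below implements a HeatS-style heat-conduction recommendation in which the item degree used for normalization is replaced by the recent item degree, mirroring the modification the abstract describes. The matrix shapes, the definition of the recent window, and the exact place where the recent degree enters are assumptions for this sketch, not the authors' precise formulation.

```python
# Sketch of a heat-conduction recommender where the item degree used for
# normalization is the *recent* degree (interactions inside a time window).
import numpy as np

def heat_conduction_scores(A, A_recent, user):
    """A: full user-item matrix (n_users x n_items), binary.
    A_recent: interactions within the recent window only (assumed given)."""
    k_user = np.maximum(A.sum(axis=1), 1)               # user degrees
    k_item_recent = np.maximum(A_recent.sum(axis=0), 1) # recent item degrees
    # HeatS-like item-to-item transfer, normalized by recent item degree:
    # W[i, j] = (1 / k_recent[i]) * sum_l A[l, i] * A[l, j] / k_user[l]
    W = (A / k_user[:, None]).T @ A / k_item_recent[:, None]
    scores = W @ A[user]                 # diffuse the user's profile
    scores[A[user] > 0] = -np.inf        # do not re-recommend known items
    return scores

rng = np.random.default_rng(0)
A = (rng.random((50, 30)) < 0.1).astype(float)
A_recent = A * (rng.random((50, 30)) < 0.5)   # pretend half are recent
print(np.argsort(heat_conduction_scores(A, A_recent, user=0))[-5:])
```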
Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services
ERIC Educational Resources Information Center
Wang, Guoquan
2013-01-01
High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…
High accuracy-nationwide differential global positioning system test and analysis : phase II report
DOT National Transportation Integrated Search
2005-07-01
The High Accuracy-Nationwide Differential Global Positioning System (HA-NDGPS) program focused on the development of compression and broadcast techniques to provide users over a large area with very accurate radio navigation solutions. The goal was ac...
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame-averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than that of classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data can be extracted from the full field. This new capability will further improve metrology data collection speed to support the need for large volumes of metrology data for OPC model calibration of next generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current flow from metrology data collection and data processing to model calibration, CD-SEM throughput is a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume of data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantities of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage, with up to a 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy at a faster model turn-around time.
Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope
NASA Astrophysics Data System (ADS)
Miyamura, Norihide
2017-09-01
For small satellite remote sensing missions, a large-aperture telescope of more than 400 mm is required to realize observations with a ground sample distance (GSD) of less than 1 m. However, it is difficult or expensive to realize such a large aperture using a monolithic primary mirror with high surface accuracy, so a segmented mirror telescope should be studied, especially for small satellite missions. Generally, large aperture telescopes require not only high accuracy of the optical surfaces but also high accuracy of the optical alignment; for segmented mirror telescopes, the alignment is more difficult and more important. In conventional systems, the optical alignment is adjusted before launch to achieve the desired imaging performance. However, it is difficult to adjust the alignment of large optics with high accuracy, and the thermal environment in orbit and vibration in the launch vehicle cause misalignment of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an earth observing remote sensing sensor. An image-based adaptive optics system compensates the misalignments and wavefront aberrations of optical elements using the deformable mirror by feedback of observed images. We propose a control algorithm for the deformable mirror of a segmented mirror telescope based on observed images. Numerical simulation and experimental results show that misalignment and wavefront aberration of the segmented mirror telescope are corrected and image quality is improved.
Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting
2017-01-01
Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
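As background for the error analysis above, a minimal deterministic fingerprinting estimator (kNN in RSSI space) might look like the sketch below; the radio map, coordinates, and test fingerprint are synthetic assumptions.

```python
# Minimal sketch of deterministic Wi-Fi fingerprinting (kNN on RSSI vectors),
# the setting in which the abstract analyses large positioning errors.
import numpy as np

def knn_position(radio_map, coords, rss, k=3):
    """radio_map: (n_ref, n_aps) RSSI fingerprints; coords: (n_ref, 2)."""
    d = np.linalg.norm(radio_map - rss, axis=1)   # signal-space distance
    nearest = np.argsort(d)[:k]
    return coords[nearest].mean(axis=0)           # centroid of k references

radio_map = np.array([[-40., -70., -60.], [-45., -65., -62.],
                      [-70., -40., -55.], [-72., -42., -50.]])
coords = np.array([[0., 0.], [1., 0.], [5., 4.], [6., 4.]])
print(knn_position(radio_map, coords, rss=np.array([-44., -68., -61.])))
```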
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
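The positional Burrows-Wheeler transform underlying Eagle2 can be sketched compactly; the toy haplotypes below are made up, and the code shows only the site-by-site reordering (Durbin's Algorithm 1), not Eagle2 itself.

```python
# Sketch of the positional Burrows-Wheeler transform (PBWT) ordering: at each
# site, haplotypes are stably partitioned by allele, so haplotypes with long
# matching prefixes end up adjacent in the order.
def pbwt_orders(haps):
    """haps: list of equal-length 0/1 haplotype strings.
    Returns the permutation of haplotype indices after each site."""
    order = list(range(len(haps)))
    orders = []
    for site in range(len(haps[0])):
        zeros = [i for i in order if haps[i][site] == '0']
        ones = [i for i in order if haps[i][site] == '1']
        order = zeros + ones          # stable partition keeps prefix sorting
        orders.append(order)
    return orders

haps = ["0100", "0011", "1101", "0010"]
for site, order in enumerate(pbwt_orders(haps)):
    print(site, order)
```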
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
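A minimal way to see the reproducibility issue raised above is to fit LDA twice with different random seeds and compare the inferred topics; the tiny corpus and tolerance below are placeholder assumptions.

```python
# Sketch illustrating the reproducibility concern: fit LDA with two seeds and
# check whether the inferred topic-word distributions agree (up to label swap).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["apple banana fruit", "banana fruit smoothie", "cpu gpu hardware",
        "gpu hardware compute", "fruit apple smoothie", "cpu compute hardware"]
X = CountVectorizer().fit_transform(docs)

topics = []
for seed in (0, 1):
    lda = LatentDirichletAllocation(n_components=2, random_state=seed).fit(X)
    topics.append(lda.components_ / lda.components_.sum(axis=1, keepdims=True))

print(np.allclose(topics[0], topics[1], atol=0.1),
      np.allclose(topics[0], topics[1][::-1], atol=0.1))  # allow label swap
```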
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Krause, Jean C; Tessler, Morgan P
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
Nela, Luca; Tang, Jianshi; Cao, Qing; Tulevski, George; Han, Shu-Jen
2018-03-14
Artificial "electronic skin" is of great interest for mimicking the functionality of human skin, such as tactile pressure sensing. Several important performance metrics include mechanical flexibility, operation voltage, sensitivity, and accuracy, as well as response speed. In this Letter, we demonstrate a large-area high-performance flexible pressure sensor built on an active matrix of 16 × 16 carbon nanotube thin-film transistors (CNT TFTs). Made from highly purified solution tubes, the active matrix exhibits superior flexible TFT performance with high mobility and large current density, along with a high device yield of nearly 99% over 4 inch sample area. The fully integrated flexible pressure sensor operates within a small voltage range of 3 V and shows superb performance featuring high spatial resolution of 4 mm, faster response than human skin (<30 ms), and excellent accuracy in sensing complex objects on both flat and curved surfaces. This work may pave the road for future integration of high-performance electronic skin in smart robotics and prosthetic solutions.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Accuracy Analysis on Large Blocks of High Resolution Images
NASA Technical Reports Server (NTRS)
Passini, Richardo M.
2007-01-01
Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. They are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of high-resolution satellite imagery (HRSI), position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident in the increase of accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points. Systematic errors are mostly freely distributed, and their effects propagate all over the block.
Prototypic Development and Evaluation of a Medium Format Metric Camera
NASA Astrophysics Data System (ADS)
Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.
2018-05-01
Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular for large volume applications, the availability of a metric camera would have several advantages: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. In this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, is presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.
Fencl, Pavel; Belohlavek, Otakar; Harustiak, Tomas; Zemanova, Milada
2016-11-01
The aim of the analysis was to assess the accuracy of various FDG-PET/CT parameters in staging lymph nodes after neoadjuvant chemotherapy. In this prospective study, 74 patients with adenocarcinoma of the esophageal-gastric junction were examined by FDG-PET/CT in the course of their neoadjuvant chemotherapy given before surgical treatment. Data from the final FDG-PET/CT examinations were compared with the histology from the surgical specimens (gold standard). The accuracy was calculated for four FDG-PET/CT parameters: (1) hypermetabolic nodes, (2) large nodes, (3) large-and-medium large nodes, and (4) hypermetabolic or large nodes. In 74 patients, a total of 1540 lymph nodes were obtained by surgery, and these were grouped into 287 regions according to topographic origin. Five hundred and two nodes were imaged by FDG-PET/CT and were grouped into these same regions for comparison. In the analysis, the four parameters identified metastases in particular regions with sensitivities of 11.6%, 2.9%, 21.7%, and 13.0%, respectively; specificity was 98.6%, 94.5%, 74.8%, and 93.6%, respectively. The parameter of hypermetabolic nodes reached the best accuracy, 77.7%. Accuracy decreased to 62.0% when smaller (medium-large) nodes were also counted toward metastases. FDG-PET/CT thus showed low sensitivity and high specificity. The low sensitivity stemmed from a low detection rate (32.6%) when nodes imaged by FDG-PET/CT were compared with nodes found by surgery, and from an inability to detect micrometastases. Sensitivity increased when medium-large lymph nodes were also taken as positive, but specificity and accuracy decreased.
Integration deficiencies associated with continuous limb movement sequences in Parkinson's disease.
Park, Jin-Hoon; Stelmach, George E
2009-11-01
The present study examined the extent to which Parkinson's disease (PD) influences the integration of continuous limb movement sequences. Eight patients with idiopathic PD and 8 age-matched normal subjects were instructed to perform repetitive sequential aiming movements to specified targets under three accuracy constraints: 1) low accuracy (W = 7 cm) - minimal accuracy constraint, 2) high accuracy (W = 0.64 cm) - maximum accuracy constraint, and 3) mixed accuracy constraint - one target of high accuracy and another target of low accuracy. In both groups, sequential movements were mostly cyclical in the low accuracy condition and discrete in the high accuracy condition. When the accuracy constraint was mixed, the sequential movements were executed by assembling discrete and cyclical movements in both groups, suggesting that for PD patients the capability to combine discrete and cyclical movements to meet a task requirement appears to be intact. However, such functional linkage was not as pronounced as it was in normal subjects. Close examination of movement in the mixed accuracy condition revealed marked movement hesitations in the vicinity of the large target in PD patients, resulting in a bias toward discrete movement. These results suggest that PD patients may have deficits in ongoing planning and organizing processes during movement execution when the task requires assembling various accuracy requirements into more complex movement sequences.
Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method
NASA Astrophysics Data System (ADS)
Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu
2017-10-01
Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the error of high-precision large-range lever-type stylus profilometry. The errors are corrected by the Nelder-Mead simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometry in large-scale measurement.
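A hedged sketch of the correction idea: fit assumed lever-error parameters with scipy's Nelder-Mead simplex so that a measurement of a known sphere matches its nominal profile. The two-parameter error model below is an illustrative assumption, not the paper's actual model.

```python
# Fit lever-geometry error parameters (gain + quadratic term, assumed model)
# by Nelder-Mead so a simulated sphere measurement matches the nominal sag.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-20.0, 20.0, 201)              # scan positions, mm
R = 100.0                                      # nominal sphere radius, mm
true_profile = R - np.sqrt(R**2 - x**2)        # ideal sphere sag

# Simulated measurement with lever distortion: gain error + quadratic term
measured = 1.002 * true_profile + 1e-4 * x**2

def residual(p):
    gain, quad = p
    corrected = measured / gain - quad * x**2  # invert the assumed error model
    return np.sum((corrected - true_profile) ** 2)

fit = minimize(residual, x0=[1.0, 0.0], method="Nelder-Mead")
print(fit.x)   # recovered parameters, approximately [1.002, 1e-4]
```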
Gao, Kai; Huang, Lianjie
2017-08-31
The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
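A minimal simulation of one of these methods, large periodic high-frequency dithering, is sketched below: a fast periodic dither spanning many converter levels is added before quantization with mismatched levels and then removed by low-pass averaging, which spreads each sample across many levels and averages out the mismatch. The mismatch magnitude, dither shape, and filter are illustrative assumptions.

```python
# Simulation sketch of large high-frequency dithering for DAC linearization.
import numpy as np

rng = np.random.default_rng(1)
levels = np.arange(256) + rng.normal(0, 0.3, 256)   # mismatched output levels

def dac(code):
    return levels[np.clip(code, 0, 255)]

t = np.arange(20000)
signal = 100 + 40 * np.sin(2 * np.pi * t / 5000)    # slow signal, in LSB
dither = 16 * ((t % 64) / 64.0 - 0.5) * 2           # fast periodic dither

plain = dac(np.round(signal).astype(int))
dithered = dac(np.round(signal + dither).astype(int))
kernel = np.ones(64) / 64                           # moving average removes
recovered = np.convolve(dithered, kernel, mode="same")  # the dither component
# Error spread is typically smaller after dithering + filtering:
print(np.std(plain - signal), np.std(recovered - signal))
```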
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude to changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists, wearing passive markers on hands and fingers, who performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated the most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by the Canada Research Chairs program, NIMH R01 45764.]
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.
Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.
Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil
2005-03-01
We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
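A minimal sketch of score normalization and sum-rule fusion, the kind of combination evaluated above, is given below; the matcher scores, equal weights, and threshold are synthetic assumptions.

```python
# Min-max normalization followed by sum-rule fusion of two matchers' scores.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

finger = np.array([0.91, 0.12, 0.55, 0.08])   # matcher A raw scores
face = np.array([310., 120., 270., 90.])      # matcher B, different scale

fused = 0.5 * min_max(finger) + 0.5 * min_max(face)   # equal-weight sum rule
print(fused, fused > 0.5)   # accept/reject at an illustrative threshold
```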
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
NASA Astrophysics Data System (ADS)
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide large multiplex capability when positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs, and these compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both in simulation and experimentally, and provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
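The first-moment centroid referred to above is straightforward to sketch; the grid size, spot width, and noise level below are assumptions for illustration.

```python
# First-moment (center-of-mass) centroid of a sampled Gaussian spot.
import numpy as np

def centroid(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

y, x = np.mgrid[0:32, 0:32]
spot = np.exp(-((x - 15.3) ** 2 + (y - 16.7) ** 2) / (2 * 3.0 ** 2))
noisy = spot + np.random.default_rng(2).normal(0, 0.01, spot.shape)
print(centroid(spot), centroid(noisy))   # true center: (15.3, 16.7)
```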
Absolute flux density calibrations of radio sources: 2.3 GHz
NASA Technical Reports Server (NTRS)
Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.
1977-01-01
A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large aperture antennas is reported. The program is considered unique in at least three ways. First, absolute gain calibrations of high quality suppressed-sidelobe dual-mode horns provide a high accuracy foundation for the program. Second, a very careful transfer calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, quoted at the familiar 1 sigma confidence level. The follow-on considerations to apply these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within + or - 0.30 to 0.40 db are possible, depending primarily on the repeatability (scatter) in the field data from Deep Space Network user stations.
NASA Astrophysics Data System (ADS)
Stavros, E.; Abatzoglou, J. T.; Larkin, N.; McKenzie, D.; Steel, A.
2012-12-01
Across the western United States, the largest wildfires account for a major proportion of the area burned and substantially affect mountain forests and their associated ecosystem services, among which is pristine air quality. These fires commandeer national attention and significant fire suppression resources. Despite efforts to understand the influence of fuel loading, climate, and weather on annual area burned, few studies have focused on understanding what abiotic factors enable and drive the very largest wildfires. We investigated the correlation of both antecedent climate and in-situ biophysical variables with very large (>20,000 ha) fires in the western United States from 1984 to 2009. We built logistic regression models, at the spatial scale of the national Geographic Area Coordination Centers (GACCs), to estimate the probability that a given day is conducive to a very large wildfire. Models vary in accuracy and in which variables are the best predictors. In a case study of the conditions of the High Park Fire, which burned near Fort Collins, Colorado, in early summer 2012, we evaluate the predictive accuracy of the Rocky Mountain model.
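A minimal sketch of such a model, a logistic regression for the daily probability of a very-large-fire day, is shown below with synthetic covariates; the feature names and coefficients are assumptions, not the study's fitted values.

```python
# Logistic regression estimating the probability that a day is conducive to a
# very large fire, fitted to synthetic climate covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.normal(0, 1, n),   # fuel dryness anomaly (assumed covariate)
    rng.normal(0, 1, n),   # wind speed (assumed covariate)
    rng.normal(0, 1, n),   # antecedent precipitation (assumed covariate)
])
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # rare "conducive" days

model = LogisticRegression().fit(X, y)
print(model.coef_, model.predict_proba(X[:3])[:, 1])
```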
Midbond basis functions for weakly bound complexes
NASA Astrophysics Data System (ADS)
Shaw, Robert A.; Hill, J. Grant
2018-06-01
Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.
Accuracy of frozen section in the diagnosis of ovarian tumours.
Toneva, F; Wright, H; Razvi, K
2012-07-01
The purpose of our retrospective study was to assess the accuracy of intraoperative frozen section diagnosis compared to final paraffin diagnosis in ovarian tumours at a gynaecological oncology centre in the UK. We analysed 66 cases and observed that frozen section consultation agreed with final paraffin diagnosis in 59 cases, which provided an accuracy of 89.4%. The overall sensitivity and specificity for all tumours were 85.4% and 100%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) were 100% and 89.4%, respectively. Of the seven cases with discordant results, the majority were large, mucinous tumours, which is in line with previous studies. Our study demonstrated that despite its limitations, intraoperative frozen section has a high accuracy and sensitivity for assessing ovarian tumours; however, care needs to be taken with large, mucinous tumours.
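For reference, the reported quantities follow from a standard 2×2 confusion table, as in the sketch below; the counts used are illustrative, not the study's raw data.

```python
# Standard diagnostic accuracy metrics from a 2x2 confusion table.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only (not the study's data):
print(diagnostics(tp=40, fp=0, fn=5, tn=21))
```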
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs, along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4, is also demonstrated.
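A small sketch of the dyadic approximation idea behind such fixed-point families: an irrational factorization constant is approximated by k/2^bits, with the bit depth setting the accuracy tier. The chosen constant and bit depths are illustrative.

```python
# Dyadic (k / 2^bits) approximation of an irrational DCT factorization
# constant at several precisions; higher bit depth -> smaller error.
c = 0.5411961  # sqrt(2)*cos(3*pi/8), a Loeffler-factorization constant
for bits in (5, 8, 16):
    dyadic = round(c * (1 << bits)) / (1 << bits)
    print(bits, dyadic, abs(dyadic - c))
```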
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples, much smaller than the number of known miRNAs; consequently, the prediction accuracy of these predictors on large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity; these optimization strategies may introduce serious limitations in applications. Moreover, to meet continuously rising expectations for these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor is also shown to be less dependent on datasets, and to achieve a refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
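A compact sketch of the described meta-strategy follows: base predictor scores are passed through a non-linear (here logit) transform and fed to a small neural network. The base scores, transform choice, and network size are placeholder assumptions, not mirMeta's actual configuration.

```python
# Meta-prediction sketch: non-linear transform of base predictor outputs,
# then a small neural network combines them.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
base_scores = rng.random((500, 5))            # 5 base predictor outputs
labels = (base_scores.mean(axis=1) + rng.normal(0, 0.1, 500)) > 0.5

clipped = np.clip(base_scores, 1e-6, 1 - 1e-6)
transformed = np.log(clipped / (1 - clipped))   # logit transform (assumed)
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(transformed, labels)
print(meta.score(transformed, labels))
```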
Classification of large-scale fundus image data sets: a cloud-computing framework.
Roychowdhury, Sohini
2016-08-01
Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performance of automated screening systems.
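The feature-ranking step can be sketched as ranking by ensemble feature importance and retraining on the top subset; the synthetic features and the choice of scikit-learn's gradient boosting (standing in for the Azure ML classifiers named above) are assumptions.

```python
# Rank candidate features by boosted-ensemble importance, keep the top subset,
# and retrain the classifier on that subset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(600, 30))                  # 30 candidate region features
y = (X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 0.5, 600)) > 0   # toy labels

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
ranked = np.argsort(clf.feature_importances_)[::-1]
top = ranked[:10]                               # keep the 10 best features
clf_small = GradientBoostingClassifier(random_state=0).fit(X[:, top], y)
print(top[:5], clf_small.score(X[:, top], y))
```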
Ma, Zhiyuan; Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-03-01
Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have produced fruitful results in recent years. Although the accuracy of recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain adaptation based methods into their online learning versions, which allow the recognition models to adapt to the changes of sensor responses in a time-efficient manner without losing high accuracy. Experimental results using three different settings confirm that the proposed methods save large amounts of processing time when compared with their offline versions, and outperform other drift compensation methods in recognition accuracy.
Qian, Shinan
2011-01-01
Nanoradian Surface Profilers (SPs) are required for state-of-the-art synchrotron radiation optics and high-precision optical measurements. Nano-radian accuracy must be maintained in the large-angle test range. However, the beams' notable lateral motions during tests of most operating profilers, combined with the insufficiencies of their optical components, generate significant errors of ∼1 μrad rms in the measurements. The solution to nano-radian accuracy for the new generation of surface profilers in this range is to apply a scanning optical head, combined with a nontilted reference beam. I describe here my comparison of different scan modes and discuss some test results.
Bin recycling strategy for improving the histogram precision on GPU
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Rodríguez-Vázquez, Juan José; Vega-Rodríguez, Miguel A.
2016-07-01
A histogram is an easily comprehensible way to present data and analyses. In the current scientific context, with access to large volumes of data, the processing time for building histograms has dramatically increased. For this reason, parallel construction is necessary to alleviate the impact of the processing time on analysis activities. In this scenario, GPU computing is becoming widely used to reduce the processing time of histogram construction to affordable levels. Alongside the growth in processing time, implementations are also stressed on bin-count accuracy, yet accuracy aspects due to the particularities of the implementations are not usually taken into consideration when building histograms with very large data sets. In this work, a bin recycling strategy to create an accuracy-aware implementation for building histograms on GPU is presented. To evaluate the approach, this strategy was applied to the computation of the three-point angular correlation function, which is a relevant function in cosmology for the study of the Large Scale Structure of the Universe. As a consequence of the study, a high-accuracy implementation for histogram construction on GPU is proposed.
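One hedged reading of an accuracy-aware, recycling-style histogram is sketched below: small per-chunk counters are flushed ("recycled") into 64-bit totals before they can saturate, so no count is lost. The chunk size, 16-bit counters, and CPU/NumPy stand-in for the GPU kernel are assumptions, not the paper's implementation.

```python
# Accuracy-aware chunked histogram: small counters are flushed into 64-bit
# totals before they can overflow.
import numpy as np

def chunked_histogram(data, nbins, chunk=1 << 14):
    total = np.zeros(nbins, dtype=np.int64)
    for start in range(0, data.size, chunk):
        part = data[start:start + chunk]
        local = np.zeros(nbins, dtype=np.uint16)   # small, fast counters
        np.add.at(local, part, 1)                  # chunk fits: no overflow
        total += local                             # flush ("recycle") bins
    return total

rng = np.random.default_rng(5)
data = rng.integers(0, 64, size=1_000_000)
assert np.array_equal(chunked_histogram(data, 64), np.bincount(data, minlength=64))
print("chunked histogram matches exact count")
```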
Ranging performance of satellite laser altimeters
NASA Technical Reports Server (NTRS)
Gardner, Chester S.
1992-01-01
Topographic mapping of the earth, moon and planets can be accomplished with high resolution and accuracy using satellite laser altimeters. These systems employ nanosecond laser pulses and microradian beam divergences to achieve submeter vertical range resolution from orbital altitudes of several hundred kilometers. Here, we develop detailed expressions for the range and pulse width measurement accuracies and use the results to evaluate the ranging performance of several satellite laser altimeters currently under development by NASA for launch during the next decade. Our analysis includes the effects of the target surface characteristics, spacecraft pointing jitter and waveform digitizer characteristics. The results show that ranging accuracy is critically dependent on the pointing accuracy and stability of the altimeter, especially over high relief terrain where surface slopes are large. At typical orbital altitudes of several hundred kilometers, single-shot accuracies of a few centimeters can be achieved only when the pointing jitter is on the order of 10 μrad or less.
Machine learning of molecular properties: Locality and active learning
NASA Astrophysics Data System (ADS)
Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.
2018-06-01
In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. The high computational speed on the one hand and accuracy comparable to that of density functional theory on the other hand make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy and also show large errors for the so-called outliers: out-of-sample molecules not well represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions providing high accuracy when trained on relatively small training sets, and an active learning algorithm of optimally choosing the training set that significantly reduces the errors for the outliers. We compare our model to the other state-of-the-art algorithms from the literature on the widely used benchmark tests.
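The active-learning component can be sketched as uncertainty-driven selection: repeatedly add the candidate on which an ensemble disagrees most to the training set. The ensemble-variance criterion, surrogate property, and toy features below are assumptions, not the paper's algorithm.

```python
# Uncertainty-driven active learning: query the pool point where an ensemble
# of regressors disagrees most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
X_pool = rng.normal(size=(500, 10))
y_pool = X_pool[:, 0] ** 2 + np.sin(X_pool[:, 1])   # surrogate "property"

train = list(range(10))                              # small initial set
for _ in range(20):
    model = RandomForestRegressor(n_estimators=30, random_state=0)
    model.fit(X_pool[train], y_pool[train])
    preds = np.stack([t.predict(X_pool) for t in model.estimators_])
    uncertainty = preds.std(axis=0)                  # tree disagreement
    uncertainty[train] = -1                          # never re-pick
    train.append(int(uncertainty.argmax()))          # query most uncertain
print(len(train), "training points selected")
```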
Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian
2018-04-03
Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
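The key idea stated above, encoding nearest-neighbour positions as feature vectors and matching them across frames, can be sketched as below; the neighbour count and nearest-feature matching criterion are simplified assumptions (the paper adds outlier removal and iterative warping on top).

```python
# Match particles across frames by comparing relative-neighbour feature vectors.
import numpy as np
from scipy.spatial import cKDTree

def neighbour_features(pts, k=4):
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)           # first hit is the point itself
    vecs = pts[idx[:, 1:]] - pts[:, None, :]    # relative neighbour positions
    return vecs.reshape(len(pts), -1)

rng = np.random.default_rng(6)
frame0 = rng.random((200, 2)) * 100
frame1 = frame0 + np.array([5.0, 2.0])          # large uniform displacement

f0, f1 = neighbour_features(frame0), neighbour_features(frame1)
tree = cKDTree(f1)
_, match = tree.query(f0)                        # nearest feature vector
print(np.mean(match == np.arange(len(frame0))))  # fraction correctly tracked
```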
Research on precision grinding technology of large scale and ultra thin optics
NASA Astrophysics Data System (ADS)
Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua
2018-03-01
The flatness and parallelism errors of large scale and ultra thin optics have an important influence on the subsequent polishing efficiency and accuracy. In order to realize high precision grinding of those ductile elements, a low-deformation vacuum chuck was designed first, to clamp the optics with high supporting rigidity over the full aperture. The optics was then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optics was measured on-machine using a displacement sensor after elastic restitution. The flatness was converged to high accuracy by compensation machining, whose trajectories were generated from the measurement result. To obtain high parallelism, the optics was turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on large scale and ultra thin fused silica optics with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optics was below 3 μm, and parallelism was below 3″. This machining technique has been applied in batch grinding of large scale and ultra thin optics.
NASA Astrophysics Data System (ADS)
Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.
2010-07-01
The European Extremely Large Telescope (E-ELT) primary mirror is based on 984 segments; to achieve the required optical performance, they must be positioned relative to adjacent segments with nanometer accuracy. CESA designed the M1 Position Actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACT are located under each segment, controlling the three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series. The first stage, based on a Voice Coil Actuator (VCA), achieves high accuracies over very short travel ranges, while the second stage, based on a Brushless DC Motor (BLDC), provides large stroke ranges and positions the first stage closer to the demanded position. A BLDC motor achieves continuous, smooth movement, in contrast to the sudden jumps of a stepper. A gearbox attached to the motor allows a large reduction of power consumption but poses a great sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance for PACT, sensors have been included in both stages: a rotary encoder in the BLDC stage closes the position/velocity control loop, and an incremental optical encoder measures the PACT travel range with relative nanometer accuracy and closes the position loop of the whole actuator movement. For this purpose, four different optical sensors with different gratings will be evaluated. The control strategy combines different internal closed loops that work together to achieve the required performance.
NASA Astrophysics Data System (ADS)
Li, Ping; Jin, Tan; Guo, Zongfu; Lu, Ange; Qu, Meina
2016-10-01
High efficiency machining of large precision optical surfaces is a challenging task for researchers and engineers worldwide. Higher form accuracy and lower subsurface damage help to significantly reduce the cycle time of the subsequent polishing process, save production cost, and provide a strong enabling technology to support large telescope and laser fusion energy projects. In this paper, employing an infeed grinding (IG) mode with a rotary table and a cup wheel, a multi-stage grinding process chain, and precision compensation technology, a Φ300 mm diameter plano mirror is ground on the Schneider Surfacing Center SCG 600, which delivers a new level of quality and accuracy when grinding such large flats. Results show a PV form error of Pt < 2 μm, surface roughness Ra < 30 nm and Rz < 180 nm, subsurface damage < 20 μm, and material removal rates of up to 383.2 mm³/s.
High precision predictions for exclusive VH production at the LHC
Li, Ye; Liu, Xiaohui
2014-06-04
We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the tt¯ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H, which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.
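For reference, the schematic all-order structure of these vetoed-cross-section logarithms can be written as follows; the normalization σ₀ and coefficients c_nm are notational assumptions for illustration, not taken from the paper.

```latex
\sigma\left(p_T^{\text{veto}}\right) \sim \sigma_0 \sum_{n \ge 1} \alpha_s^n
  \sum_{m \le 2n} c_{nm} \log^m\!\left(\frac{Q}{p_T^{\text{veto}}}\right),
\qquad Q \sim m_V + m_H .
```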
Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data
NASA Technical Reports Server (NTRS)
Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan
2013-01-01
High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the increased interest in the development and application of micro/nanosatellites, small high-accuracy satellite attitude determination systems are needed: the star trackers widely used on large satellites are too large and heavy for installation on micro/nanosatellites. A sun sensor combined with a magnetometer is proven to be a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture sun sensor composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS detector placed at a fixed distance below the mask. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector, and the sun angles are derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image reaches 0.01 pixels without increasing weight or power consumption, even when missing apertures and bad pixels appear on the detector due to device aging and operation in a harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
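The abstract does not give the FMMS implementation; below is a minimal single-spot mean-shift centroiding sketch (a plain intensity-weighted mean shift, not the authors' multi-point algorithm), assuming the search window stays inside the image and contains signal.

```python
import numpy as np

def mean_shift_centroid(img, start, win=5, iters=20, tol=1e-4):
    # Iteratively move a window to the intensity-weighted mean of the pixels
    # it covers; converges to a local spot centroid (x, y).
    c = np.asarray(start, dtype=float)
    for _ in range(iters):
        x0, y0 = np.round(c).astype(int)
        ys, xs = np.mgrid[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1]
        w = img[ys, xs].astype(float)
        new = np.array([np.sum(w * xs), np.sum(w * ys)]) / np.sum(w)
        if np.linalg.norm(new - c) < tol:
            break
        c = new
    return c

# Synthetic Gaussian spot centered at (30.3, 25.7).
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 30.3) ** 2 + (yy - 25.7) ** 2) / 8.0)
print(mean_shift_centroid(img, (28, 27)))   # close to (30.3, 25.7)
```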
Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-01-01
Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have achieved fruitful results in recent years. Although accuracy for recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain-adaptation-based methods into their online learning versions, which allows the recognition models to adapt to changes in sensor responses in a time-efficient manner without losing high accuracy. Experimental results using three different settings confirm that the proposed methods save substantial processing time compared with their offline versions and outperform other drift compensation methods in recognition accuracy. PMID:29494543
Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E
2013-07-01
The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNPs, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater than the GBLUP accuracies by an average of 0.03, particularly for traits in which mutations of moderate to large effect are known to be segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genome-wide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNPs independently associated with each trait.
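As a rough illustration of the GBLUP side of the comparison, here is a minimal SNP-BLUP ridge regression sketch (equivalent to GBLUP under standard assumptions); the shrinkage parameter derived from an assumed heritability is illustrative only, and real analyses scale by allele frequencies.

```python
import numpy as np

def snp_blup(Z, y, h2=0.3):
    # Ridge regression on centered SNP genotypes (SNP-BLUP); lambda from an
    # assumed heritability h2 -- a crude stand-in for proper variance components.
    n, m = Z.shape
    lam = m * (1.0 - h2) / h2
    beta = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
    return Z @ beta                  # GEBV for the training animals

rng = np.random.default_rng(1)
Z = rng.integers(0, 3, size=(500, 1000)).astype(float)   # genotype codes 0/1/2
Z -= Z.mean(0)                                           # center each marker
y = Z[:, :50] @ rng.normal(size=50) + rng.normal(size=500)
gebv = snp_blup(Z, y)
print(np.corrcoef(gebv, y)[0, 1])
```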
Semi-Lagrangian particle methods for high-dimensional Vlasov-Poisson systems
NASA Astrophysics Data System (ADS)
Cottet, Georges-Henri
2018-07-01
This paper deals with the implementation of high-order semi-Lagrangian particle methods to handle high-dimensional Vlasov-Poisson systems. It is based on recent developments in the numerical analysis of particle methods, and focuses on specific algorithmic features to handle large dimensions. The methods are tested with uniform particle distributions, in particular against a recent multi-resolution wavelet-based method, on a 4D plasma instability case and a 6D gravitational case. Conservation properties, accuracy, and computational costs are monitored. The excellent accuracy/cost trade-off shown by the method opens new perspectives for accurate simulations of high-dimensional kinetic equations by particle methods.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. Compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or better than that of memory-efficient algorithms that can process large amounts of RNA-Seq data, and accuracy comparable to or slightly below that of memory-intensive algorithms that can only construct small assemblies. Our divide-and-conquer strategy thus allows memory-intensive de novo transcriptome assembly algorithms to be used to construct large assemblies.
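A schematic of the divide-and-conquer strategy might look as follows; `assemble`, `score`, and `cluster_key` are hypothetical stand-ins for an existing assembler, a transcript quality score, and the similarity clustering used during merging.

```python
def divide_and_conquer_assembly(libraries, assemble, score, cluster_key):
    # Hypothetical orchestration sketch: assemble each small library
    # independently, then merge by keeping the best transcript per cluster.
    best = {}                                  # representative per transcript cluster
    for lib in libraries:                      # each sub-library fits in memory
        for tx in assemble(lib):               # independent small assembly
            key = cluster_key(tx)              # e.g. sequence-similarity cluster id
            if key not in best or score(tx) > score(best[key]):
                best[key] = tx                 # keep the highest-quality copy
    return list(best.values())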
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography has allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes computationally very expensive when applied to large molecules. These demands of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii, etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, choosing parameters within safety margins that largely ensure sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
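A 1D toy example of the two routes to structure factors, direct summation versus FFT of a sampled density, illustrates the grid-step accuracy trade-off discussed above (point atoms and unit scattering factors are simplifying assumptions):

```python
import numpy as np

# Atom positions in fractional coordinates along one cell edge.
x = np.array([0.12, 0.37, 0.81])
N = 64                                       # grid points; finer grids reduce error

# Direct summation: F(h) = sum_j exp(2*pi*i*h*x_j).
F_direct = np.array([np.exp(2j * np.pi * h * x).sum() for h in range(8)])

# FFT route: place point atoms on a grid, transform, match sign convention.
rho = np.zeros(N)
np.add.at(rho, np.round(x * N).astype(int) % N, 1.0)
F_fft = np.fft.fft(rho)[:8].conj()           # conj gives the +2*pi*i convention

# The discrepancy comes from rounding atoms to grid nodes; a finer grid
# (or spreading atoms over several nodes) reduces it, at higher CPU cost.
print(np.abs(F_direct - F_fft).max())
```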
Small-size pedestrian detection in large scene based on fast R-CNN
NASA Astrophysics Data System (ADS)
Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu
2018-04-01
Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success with small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN: the DPM detector generates proposals for accuracy, and a Fast R-CNN style network is trained to jointly optimize small-size pedestrian detection, with skip connections concatenating features from different layers to address the coarseness of the feature maps. Accuracy is thereby improved for small-size pedestrian detection in real large-view scenes.
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline-based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions defined on the SH subaperture array. This approach allows high accuracy, through the possible use of nonlinear basis functions, and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction, while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, leading to improved reconstruction accuracy compared to centroid algorithms using spatial gradients.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
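The ground-plane scale correction reduces to a one-line rescaling once the plane has been estimated; a minimal sketch (the mounting height value is an assumption for illustration):

```python
def correct_scale(poses, h_est, h_true=1.7):
    # If the estimated ground plane lies at height h_est in the SFM frame
    # while the camera is actually mounted h_true metres above the road,
    # rescale the trajectory; rotations are scale-free, only translations change.
    s = h_true / h_est
    return [(R, s * t) for R, t in poses]
```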
High Accuracy, Two-Dimensional Read-Out in Multiwire Proportional Chambers
DOE R&D Accomplishments Database
Charpak, G.; Sauli, F.
1973-02-14
In most applications of proportional chambers, especially in high-energy physics, separate chambers are used for measuring different coordinates. In general, one coordinate is obtained by recording the pulses from the anode wires around which avalanches have grown. Several methods have been devised for obtaining the position of an avalanche along a wire. In this article a method is proposed which leads to the same range of accuracies and may be preferred in some cases. The problem of accurate measurements for large-size chambers is also discussed.
Mizuno, Yosuke; Nakamura, Kentaro
2010-12-01
We investigated the dependences of the Brillouin frequency shift (BFS) on strain and temperature in a perfluorinated graded-index polymer optical fiber (PFGI-POF) at 1.55 μm wavelength. Both showed negative dependences, with coefficients of -121.8 MHz/% and -4.09 MHz/K, respectively, which are -0.2 and -3.5 times as large as those in silica fibers. These unique BFS dependences indicate that Brillouin scattering in PFGI-POFs has great potential for strain-insensitive high-accuracy temperature sensing.
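Using the quoted temperature coefficient, converting a measured BFS change into a temperature change is a one-liner:

```python
def delta_T_from_bfs(delta_bfs_mhz, coeff_mhz_per_K=-4.09):
    # Temperature change inferred from a Brillouin frequency shift change,
    # using the paper's PFGI-POF coefficient of -4.09 MHz/K.
    return delta_bfs_mhz / coeff_mhz_per_K

print(delta_T_from_bfs(-8.18))   # -> 2.0 K of warming
```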
Double Resummation for Higgs Production
NASA Astrophysics Data System (ADS)
Bonvini, Marco; Marzani, Simone
2018-05-01
We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x, and in the high-energy limit, i.e., small x. Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading order. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV, and its impact reaches 10% at future circular colliders at 100 TeV.
Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P
2014-04-16
Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate the accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNPs in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross-validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (imputation accuracy R² = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R² = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and selection candidates results in lower accuracy of genomic evaluation.
Structural concepts for large solar concentrators
NASA Technical Reports Server (NTRS)
Hedgepeth, John M.; Miller, Richard K.
1987-01-01
The Sunflower large solar concentrator, developed in the early 1970s, is a salient example of a high-efficiency concentrator. The newly emphasized needs for solar dynamic power on the Space Station and for large, lightweight thermal sources are outlined. Existing concepts for high-efficiency reflector surfaces are examined, with attention to the accuracy needed for concentration ratios of 1000 to 3000. Concepts using stiff reflector panels are deemed most likely to exhibit the long-term consistent accuracy necessary for low-orbit operation, particularly at the higher concentration ratios. Quantitative results show the effects of surface errors for various concentration and focal-length-to-diameter ratios. Cost effectiveness is discussed. Principal sources of high cost include the need for various dished panels for paraboloidal reflectors and the expense of ground testing and adjustment. A new configuration is presented addressing both problems, i.e., a deployable Pactruss backup structure with identical panels installed on the structure after deployment in space. Analytical results show that with reasonable pointing errors, this new concept is capable of concentration ratios greater than 2000.
NASA Astrophysics Data System (ADS)
Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.
2018-03-01
We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
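At the heart of the SQ method is Clenshaw-Curtis quadrature; below is a self-contained sketch of the classical nodes and weights on [-1, 1] (the scalar 1D rule, not SQDFT's spatially localized operator version):

```python
import numpy as np

def clenshaw_curtis(n):
    # Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1],
    # n even, via the classical explicit weight formula.
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)
    j = np.arange(1, n // 2 + 1)
    b = np.where(j == n // 2, 1.0, 2.0)
    c = np.where((k == 0) | (k == n), 1.0, 2.0)
    s = (b / (4.0 * j**2 - 1.0)) @ np.cos(2.0 * np.pi * np.outer(j, k) / n)
    w = (c / n) * (1.0 - s)
    return x, w

x, w = clenshaw_curtis(16)
print(w @ np.exp(x), np.exp(1) - np.exp(-1))   # quadrature vs exact integral
```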
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
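As a flavor of the k-mer counting stage, here is a toy shared-k-mer distance between two sequences (a simplified set-based variant, not MUSCLE's exact formula):

```python
def kmer_distance(a, b, k=4):
    # Fraction of shared k-mers as a fast, alignment-free distance estimate.
    A = {a[i:i + k] for i in range(len(a) - k + 1)}
    B = {b[i:i + k] for i in range(len(b) - k + 1)}
    return 1.0 - len(A & B) / min(len(A), len(B))

print(kmer_distance("MKVLAAGILTV", "MKVLSAGILSV"))
```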
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of the spatial variation. The accuracy of the best model including spatial effects was 90 Mg.ha-1 at the plot scale, but coarse-graining up to 2-km resolution allowed AGB to be mapped with errors below 50 Mg.ha-1. Whatever the resolution, no agreement was found with available pan-tropical reference maps. We conclude that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.
High throughput single cell counting in droplet-based microfluidics.
Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie
2017-05-02
Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count a large number of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at a high-throughput, used to characterize cell encapsulation and cell viability during incubation in droplets.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
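The paper's hierarchical Bayes model is not reproduced here, but a simple shrinkage estimator toward a diagonal target shows the same regularization principle at work in the p >> n regime (the shrinkage weight is an arbitrary assumption):

```python
import numpy as np

def shrunk_covariance(X, alpha=0.2):
    # Convex combination of the sample covariance and its diagonal; this is
    # plain shrinkage, not the authors' Bayesian hierarchical estimator.
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    return (1.0 - alpha) * S + alpha * target

X = np.random.default_rng(0).normal(size=(30, 200))      # p >> n regime
# The raw sample covariance is singular here; the shrunk one is positive definite.
print(np.linalg.eigvalsh(shrunk_covariance(X)).min() > 0)
```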
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
Gain-Compensating Circuit For NDE and Ultrasonics
NASA Technical Reports Server (NTRS)
Kushnick, Peter W.
1987-01-01
High-frequency gain-compensating circuit designed for general use in nondestructive evaluation and ultrasonic measurements. Controls gain of ultrasonic receiver as function of time to aid in measuring attenuation of samples with high losses; for example, human skin and graphite/epoxy composites. Features high signal-to-noise ratio, large signal bandwidth and large dynamic range. Control bandwidth of 5 MHz ensures accuracy of control signal. Currently being used for retrieval of more information from ultrasonic signals sent through composite materials that have high losses, and to measure skin-burn depth in humans.
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurement and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for half, and about 78% for 19/20, of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
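Haar-like features are cheap because of the integral image: any box sum costs four lookups regardless of box size. A minimal sketch of a two-rectangle feature (the layout and indexing conventions here are assumptions):

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img over the inclusive box [0..r, 0..c].
    return img.cumsum(0).cumsum(1)

def box_sum(ii, r0, c0, r1, c1):
    # Sum over the inclusive box [r0..r1, c0..c1] via four lookups.
    s = ii[r1, c1]
    if r0 > 0: s -= ii[r0 - 1, c1]
    if c0 > 0: s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def haar_two_rect(ii, r, c, h, w):
    # Left-minus-right two-rectangle Haar-like feature.
    mid = c + w // 2
    return (box_sum(ii, r, c, r + h - 1, mid - 1)
            - box_sum(ii, r, mid, r + h - 1, c + w - 1))

img = np.arange(36.0).reshape(6, 6)
print(haar_two_rect(integral_image(img), 1, 1, h=3, w=4))
```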
Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator
NASA Astrophysics Data System (ADS)
Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin
2016-06-01
The rhombic piezoelectric actuator (RPA), which employs a rhombic mechanism to amplify the small stroke of a PZT stack, has been widely used in many micro-positioning machines due to remarkable properties such as high displacement resolution and compact structure. To achieve a large actuation range along with high accuracy, maximizing the stroke and compensating for hysteresis are the two main concerns in the use of an RPA. However, existing maximization methods based on theoretical models can hardly predict the maximum stroke of an RPA accurately, because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is tedious and time-consuming, since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of the RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model are identified from only a small set of experimental data, using a combination of the discrete Preisach model (DPM) and a particle swarm optimization (PSO) algorithm. The proposed hybrid modeling method not only models the hysteresis with considerable accuracy but also significantly simplifies the modeling procedure. Finally, hysteresis inversion is introduced to compensate for the hysteresis nonlinearity of the RPA, yielding a pseudo-linear system.
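A minimal particle swarm optimizer of the kind used for such parameter identification is sketched below; it fits a toy three-parameter response rather than a genuine discrete Preisach operator, and all swarm hyperparameters are conventional assumed values.

```python
import numpy as np

def pso(loss, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    # Textbook global-best PSO minimizing `loss` over a box [lo, hi]^dim.
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

u = np.linspace(0, 1, 50)
y_meas = 2.0 * u + 0.5 * np.sin(3.0 * u)            # stand-in measured response
model = lambda p: p[0] * u + p[1] * np.sin(p[2] * u)
print(pso(lambda p: np.sum((model(p) - y_meas) ** 2), dim=3))  # near (2.0, 0.5, 3.0)
```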
Reversing the Signaled Magnitude Effect in Delayed Matching to Sample: Delay-Specific Remembering?
ERIC Educational Resources Information Center
White, K. Geoffrey; Brown, Glenn S.
2011-01-01
Pigeons performed a delayed matching-to-sample task in which large or small reinforcers for correct remembering were signaled during the retention interval. Accuracy was low when small reinforcers were signaled, and high when large reinforcers were signaled (the signaled magnitude effect). When the reinforcer-size cue was switched from small to…
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite images without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large foreign areas or with large numbers of images. In this paper, addressing the geometric characteristics of optical satellite images, and based on the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-deficiency problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy, and performance of the developed procedure are presented and studied.
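For reference, the generic scaled-form ADMM iteration for minimizing f(x) + g(z) subject to Ax + Bz = c is the standard scheme below; the paper applies this family of updates to the RFM block adjustment problem.

```latex
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2 \\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2 \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c
\end{aligned}
```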
Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star, for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few 100 ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Theoretical prediction of welding distortion in large and complex structures
NASA Astrophysics Data System (ADS)
Deng, De-An
2010-06-01
Welding technology is widely used to assemble large thin-plate structures such as ships, automobiles, and passenger trains because of its high productivity. However, it is impossible to avoid welding-induced distortion during the assembly process. Welding distortion not only reduces the fabrication accuracy of a weldment, but also decreases productivity due to correction work. If welding distortion can be predicted beforehand using a practical method, the prediction will be useful for taking appropriate measures to keep the dimensional accuracy within an acceptable limit. In this study, a two-step computational approach, combining a thermo-elastic-plastic finite element method (FEM) and an elastic finite element analysis accounting for large deformation, is developed to estimate welding distortion in large and complex welded structures. Welding distortions in several representative large complex structures, which are often used in shipbuilding, are simulated using the proposed method. By comparing the predictions and the measurements, the effectiveness of the two-step computational approach is verified.
Fully Integrated, Miniature, High-Frequency Flow Probe Utilizing MEMS Leadless SOI Technology
NASA Technical Reports Server (NTRS)
Ned, Alex; Kurtz, Anthony; Shang, Tonghuo; Goodman, Scott; Giemette, Gerald
2013-01-01
This work focused on developing, fabricating, and fully calibrating a flow-angle probe for aeronautics research by utilizing the latest microelectromechanical systems (MEMS) leadless silicon-on-insulator (SOI) sensor technology. While the concept of angle probes is not new, traditional devices had been relatively large due to fabrication constraints; often too large to resolve the flow structures necessary for modern aeropropulsion measurements such as inlet flow distortions and vortices, secondary flows, etc. Measurements of this kind demanded a new approach to probe design to achieve sizes on the order of 0.1 in. (2.5 mm) diameter or smaller, capable of meeting demanding requirements for accuracy and ruggedness. This approach invoked the use of state-of-the-art processing techniques to install SOI sensor chips directly onto the probe body, thus eliminating the redundancy in sensor packaging and probe installation that has historically forced larger probe sizes. This also facilitated a better thermal match between the chip and its mount, improving stability and accuracy. Further, the leadless sensor technology with which the SOI sensing element is fabricated allows direct mounting and electrical interconnection of the sensor to the probe body. This leadless technology allowed a rugged wire-out approach performed at the sensor length scale, thus achieving substantial sensor size reductions. The technology is inherently capable of high-frequency and high-accuracy performance at high temperatures and in harsh environments.
Method of Lines Transpose: An Implicit Vlasov-Maxwell Solver for Plasmas
2015-04-17
Excerpted fragments: "…boundary crossings should be rare. Numerical results for the Bennett pinch are given in Figure 9. In order to resolve large gradients near the center of the [beam] …" "… contributing to the large error at the center of the beam due to large gradients there, and with the finite beam cut-off radius and the outflow boundary …" "… the usable time step size can be limited by the numerical accuracy of the method when there are large gradients (high-frequency content) in the solution."
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
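A minimal VLAD aggregation sketch with the usual power and L2 normalizations (the codebook is assumed pre-trained, e.g. by k-means on local descriptors):

```python
import numpy as np

def vlad(descriptors, codebook):
    # Assign each local descriptor to its nearest codeword, accumulate the
    # residuals per codeword, then apply power and L2 normalization.
    assign = np.argmin(
        ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    K, D = codebook.shape
    v = np.zeros((K, D))
    for k in range(K):
        sel = descriptors[assign == k]
        if len(sel):
            v[k] = (sel - codebook[k]).sum(0)    # residual accumulation
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))          # signed-square-root (power) norm
    return v / (np.linalg.norm(v) + 1e-12)       # L2 normalization
```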
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods are gaining increasing importance within the CFD community as they combine arbitrarily high order of accuracy in complex geometries with parallel efficiency. In particular, the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results obtained using a high-order finite difference method.
Accuracy Assessment of Coastal Topography Derived from Uav Images
NASA Astrophysics Data System (ADS)
Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.
2016-06-01
To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces Digital Surface Models (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric process (Structure from Motion algorithm), a DSM and an orthomosaic were produced. DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Finally, in this particular environment, the presence of very small water surfaces on the sand bank did not allow accuracy to be improved when the spatial resolution of the images was refined.
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
We sought to develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high-flip-angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure the precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large-flip-angle measurements for accuracy and precision. Simulations show similar PDFF errors for small-flip-angle measurements and corrected large-flip-angle measurements as long as T1 estimates were within one standard deviation of the true value. Compared to low-flip-angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements when images were acquired at a high flip angle and T1 bias was corrected using T1 estimates and flip angle mapping. In conclusion, T1 bias correction of large-flip-angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low-flip-angle acquisitions.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire a robot's actual parameters and control its absolute pose with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degree-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and the robot kinematic error model are constructed; then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the pose error through the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is sufficiently enhanced.
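The differential correction at the core of such schemes maps the measured pose error through the Jacobian pseudoinverse; a minimal sketch (the `jacobian` callable and the damping factor are assumptions for illustration):

```python
import numpy as np

def correct_joints(q, pose_error, jacobian, damping=0.1):
    # One correction step: pose_error is the measured 6-vector (position and
    # orientation) error; `jacobian` returns the 6 x n Jacobian at joints q.
    J = jacobian(q)
    dq = np.linalg.pinv(J) @ pose_error    # least-squares joint correction
    return q + damping * dq                # damped update for stability
```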
Chiang, Mao-Hsiung
2010-01-01
This study aims to develop an X-Y dual-axial intelligent servo pneumatic-piezoelectric hybrid actuator for position control with high response, large stroke (250 mm, 200 mm) and nanometer accuracy (20 nm). In each axis, a rodless pneumatic actuator provides coarse positioning over the long stroke and a piezoelectric actuator compensates over a fine stroke. Thus, the overall control system of each axis becomes a dual-input single-output (DISO) system. Although the rodless pneumatic actuator has a relatively large friction force, its mechanism has the advantage of supporting multi-axial development. Thus, the X-Y dual-axial positioning system is developed based on the servo pneumatic-piezoelectric hybrid actuator. In addition, decoupling self-organizing fuzzy sliding mode control is developed as the intelligent control strategy. Finally, the proposed novel intelligent X-Y dual-axial servo pneumatic-piezoelectric hybrid actuators are implemented and verified experimentally. PMID:22319266
Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2010-07-01
Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortions at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate an adapted-rate RF pulse design allowing for SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large-flip-angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, and is of effective guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite imagery without GCPs. We first pre-processed the BA data, adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of techniques were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank-deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and computation burden of the high-order modified normal equations. Finally, we used the conjugate gradient method to improve the speed of solving these high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
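Sparse storage plus conjugate gradients is the standard recipe for such systems; a minimal SciPy sketch on a random symmetric positive-definite stand-in matrix (not the actual adjustment equations):

```python
import numpy as np
from scipy.sparse import eye, random as sparse_random
from scipy.sparse.linalg import cg

# Build a sparse SPD stand-in for the high-order normal equations.
A = sparse_random(2000, 2000, density=1e-3, format="csr", random_state=0)
A = A @ A.T + 10.0 * eye(2000)          # symmetrize and shift to make SPD
b = np.ones(2000)

x, info = cg(A, b)                      # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))  # small residual norm
```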
Investigation of Portevin-Le Chatelier effect in 5456 Al-based alloy using digital image correlation
NASA Astrophysics Data System (ADS)
Cheng, Teng; Xu, Xiaohai; Cai, Yulong; Fu, Shihua; Gao, Yue; Su, Yong; Zhang, Yong; Zhang, Qingchuan
2015-02-01
A variety of experimental methods have been proposed for studying the Portevin-Le Chatelier (PLC) effect. They have mainly focused on in-plane deformation. In order to achieve high-accuracy measurement, three-dimensional digital image correlation (3D-DIC) was employed in this work to investigate the PLC effect in a 5456 Al-based alloy. The temporal and spatial evolutions of deformation over the full field of the specimen surface were observed. The large deformation of localized necking was determined experimentally. The distributions of out-of-plane displacement over the loading procedure were also obtained. Furthermore, a comparison of measurement accuracy between two-dimensional digital image correlation (2D-DIC) and 3D-DIC was performed. Due to its theoretical restrictions, the measurement accuracy of 2D-DIC decreases as deformation increases. A maximum discrepancy of about 20% relative to 3D-DIC was observed in this work. Therefore, 3D-DIC is essential for high-accuracy investigation of the PLC effect.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
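A minimal sketch of the reconstruction being discussed: estimate a gradient at a point from scattered neighbors by least squares, with the inverse-distance weighting that the paper finds restores accuracy for vertex-based discretizations. The stencil geometry here is illustrative; for a linear field both variants are exact, and the weighting matters on curved, highly stretched stencils.

```python
# Least-squares gradient reconstruction with optional inverse-distance weights.
import numpy as np

def ls_gradient(x0, u0, xs, us, weighted=True):
    """Least-squares estimate of grad(u) at x0 from neighbors xs, values us."""
    dX = xs - x0                                  # neighbor offsets (n, 2)
    du = us - u0                                  # value differences (n,)
    if weighted:
        w = 1.0 / np.linalg.norm(dX, axis=1)      # inverse-distance weights
        dX, du = dX * w[:, None], du * w
    grad, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return grad

# Highly stretched stencil sampling the linear field u = 1 + 2x + 3y.
x0, u0 = np.array([0.0, 0.0]), 1.0
xs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.3, 0.01], [0.5, -0.008]])
us = 1.0 + 2.0 * xs[:, 0] + 3.0 * xs[:, 1]
print(ls_gradient(x0, u0, xs, us))                # ~[2. 3.]
```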
NASA Technical Reports Server (NTRS)
Korb, C. L.; Gentry, Bruce M.
1995-01-01
The goal of the Army Research Office (ARO) Geosciences Program is to measure the three-dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50-meter spatial resolution and with measurement accuracies of the order of 20 cm/s. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high-accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high-resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy, and which will provide a unique capability for atmospheric wind studies.
Variable curvature mirror having variable thickness: design and fabrication
NASA Astrophysics Data System (ADS)
Zhao, Hui; Xie, Xiaopeng; Xu, Liang; Ding, Jiaoteng; Shen, Le; Gong, Jie
2017-10-01
A variable curvature mirror (VCM) can change its curvature radius dynamically and is usually used to correct the defocus and spherical aberration caused by the thermal lens effect, improving the output beam quality of high-power solid-state lasers. Recently, the possible application of VCMs to non-moving-element optical zoom imaging in the visible band has received much attention. The basic requirement for a VCM is that it provide a large enough sagitta variation while maintaining high surface figure accuracy at the same time. Therefore, in this manuscript, by combining pressurization-based actuation with a variable-thickness mirror design, the purpose of obtaining large sagitta variation while maintaining quite good surface figure accuracy is achieved. A prototype zoom mirror with a diameter of 120 mm and central thickness of 8 mm is designed, fabricated and tested. Experimental results demonstrate that the zoom mirror, having an initial surface figure accuracy superior to 1/80λ, can provide more than 36 μm of sagitta variation, and after the curvature variation its surface figure accuracy remains superior to 1/40λ with the spherical aberration removed, which proves the effectiveness of the theoretical design.
Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre
2015-01-01
Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects, and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. The accuracy of the best model including spatial effects was 90 Mg.ha-1 at the plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with an accuracy better than 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combination of weak autocorrelation and weak environmental effects limits the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate “wall-to-wall” remote sensing signals provide reliable AGB predictions. Until then, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution. PMID:26402522
Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy
2018-03-31
In simultaneous EEG-fMRI, identification of the period of the ballistocardiogram (BCG) artifact in EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared with the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each of the estimated cycles. The algorithm is shown to achieve a high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The achieved high cycle detection accuracy of our algorithm without using the ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
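The cycle-estimation step lends itself to a compact illustration. The sketch below uses a synthetic signal; the sampling rate, filter order, and bandwidth are assumptions, not the paper's settings. It locates the fundamental frequency from the energy spectrum, band-pass filters around it, and picks one peak per estimated cycle.

```python
# Fundamental-frequency estimation and per-cycle peak picking on a
# synthetic BCG-like component (all parameters are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
bcg = np.cos(2 * np.pi * 1.2 * t) ** 3 + 0.3 * rng.normal(size=t.size)

# Fundamental frequency from the energy spectral density (0.5-3 Hz band).
f = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(bcg)) ** 2
band = (f > 0.5) & (f < 3.0)
f0 = f[band][np.argmax(psd[band])]

# Band-pass around f0, then select one peak per estimated cycle.
b, a = butter(2, [0.7 * f0 / (fs / 2), 1.3 * f0 / (fs / 2)], btype="band")
smooth = filtfilt(b, a, bcg)
peaks, _ = find_peaks(smooth, distance=int(0.7 * fs / f0))
print(f"f0 = {f0:.2f} Hz, {peaks.size} cycles detected in 60 s")
```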
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Technique for Very High Order Nonlinear Simulation and Validation
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2001-01-01
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness exponentially decays unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.
Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China
NASA Astrophysics Data System (ADS)
Yang, Bo
2016-06-01
Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-line-array stereo scenes (a total of 26406 images) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is regarded as the adjustment unit to be organized. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC models of the images as weighted observations and add them into the adjustment model to refine the adjustment. We used 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in plane and elevation are 3.6 m and 4.2 m respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within a pixel; thus, a seamless mosaic is possible. This method achieves the goal of mapping without ground control at an accuracy better than 5 m for the whole of China from ZY3 satellite images.
Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei
2017-06-01
To evaluate the diagnostic accuracy of routine blood examinations and the Cerebrospinal Fluid (CSF) lactate level for Post-neurosurgical Bacterial Meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and the CSF lactate level in distinguishing between PAM and PBM were evaluated with the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC) by retrospectively analyzing the datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low (AUC-ROC < 0.7). The CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC = 0.891; 95% CI, 0.852-0.922). The variables of patient age, operation duration, surgical diagnosis and postoperative days (the interval between the neurosurgery and the examinations) were shown to affect the diagnostic accuracy of these examinations. These variables were integrated with the routine blood examinations and the CSF lactate level by Fisher discriminant analysis to improve their diagnostic accuracy. As a result, the diagnostic accuracy of the blood examinations and the CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (95% CI, 0.737-0.782) and 0.921 (95% CI, 0.887-0.948) respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of the CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking the effects of these variables into account significantly improves the diagnostic accuracies of routine blood examinations and the CSF lactate level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node-localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and are connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set positions toward the original positions, and correspondingly the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remains almost constant despite the increase in network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
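A minimal sketch of the spring-model idea follows (single blind node, four localized neighbors, noise-free ranges, and an illustrative step size; not the paper's implementation): the node iteratively moves under virtual spring forces until its position is consistent with the measured distances.

```python
# Spring-model localization of one blind node from ranges to four anchors.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([4.0, 6.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)   # measured ranges

pos = rng.uniform(0, 10, 2)                          # random initial position
for _ in range(500):
    diff = pos - anchors                             # anchor -> node vectors
    cur = np.linalg.norm(diff, axis=1)
    # Spring force proportional to range error, directed along each link.
    force = ((dists - cur) / cur)[:, None] * diff
    pos += 0.1 * force.sum(axis=0)                   # small relaxation step
print("estimated:", pos, "true:", true_pos)
```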
Smith, Eric E; Kent, David M; Bulsara, Ketan R; Leung, Lester Y; Lichtman, Judith H; Reeves, Mathew J; Towfighi, Amytis; Whiteley, William N; Zahuranec, Darin B
2018-03-01
Endovascular thrombectomy is a highly efficacious treatment for large vessel occlusion (LVO). LVO prediction instruments, based on stroke signs and symptoms, have been proposed to identify stroke patients with LVO for rapid transport to endovascular thrombectomy-capable hospitals. This evidence review committee was commissioned by the American Heart Association/American Stroke Association to systematically review evidence for the accuracy of LVO prediction instruments. Medline, Embase, and Cochrane databases were searched on October 27, 2016. Study quality was assessed with the Quality Assessment of Diagnostic Accuracy-2 tool. Thirty-six relevant studies were identified. Most studies (21 of 36) recruited patients with ischemic stroke, with few studies in the prehospital setting (4 of 36) and in populations that included hemorrhagic stroke or stroke mimics (12 of 36). The most frequently studied prediction instrument was the National Institutes of Health Stroke Scale. Most studies had either some risk of bias or unclear risk of bias. Reported discrimination of LVO mostly ranged from 0.70 to 0.85, as measured by the C statistic. In meta-analysis, sensitivity was as high as 87% and specificity was as high as 90%, but no threshold on any instruments predicted LVO with both high sensitivity and specificity. With a positive LVO prediction test, the probability of LVO could be 50% to 60% (depending on the LVO prevalence in the population), but the probability of LVO with a negative test could still be ≥10%. No scale predicted LVO with both high sensitivity and high specificity. Systems that use LVO prediction instruments for triage will miss some patients with LVO and milder stroke. More prospective studies are needed to assess the accuracy of LVO prediction instruments in the prehospital setting in all patients with suspected stroke, including patients with hemorrhagic stroke and stroke mimics. © 2018 American Heart Association, Inc.
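The post-test probabilities quoted above follow directly from Bayes' rule. A small worked example (the sensitivity, specificity, and prevalence values are illustrative, chosen only to reproduce the ballpark figures in the abstract, not taken from any specific scale):

```python
# Post-test probability of LVO from test characteristics and prevalence.
def post_test(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    p_lvo_given_neg = ((1 - sens) * prev /
                       ((1 - sens) * prev + spec * (1 - prev)))
    return ppv, p_lvo_given_neg

ppv, p_neg = post_test(sens=0.60, spec=0.90, prev=0.20)
print(f"P(LVO | positive) = {ppv:.2f}, P(LVO | negative) = {p_neg:.2f}")
# -> 0.60 and 0.10: a positive test raises the probability to ~60%,
#    yet a negative test still leaves a ~10% probability of LVO.
```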
NASA Astrophysics Data System (ADS)
Fappani, Denis; IDE, Monique
2017-05-01
Many high-power laser facilities are in operation around the world and include various critical optical components such as large focusing lenses. Such lenses generally exhibit long focal lengths, which raises issues for their optical testing during manufacturing and inspection. Indeed, their transmitted wavefronts need to be very accurate, and interferometric testing is the baseline to achieve that. However, it is always a challenge to manage simultaneously long testing distances and fine accuracy in such interferometric testing. Taking the example of the large focusing lenses produced for the Orion experiment at AWE (UK), the presentation will describe the testing method that has been developed to demonstrate good performance with sufficient repeatability and absolute accuracy. Special emphasis will be placed on the optical manufacturing issues and interferometric testing solutions. Some ZEMAX results presenting the test set-up and the calibration method will be presented as well. The presentation will conclude with a brief overview of the existing "state of the art" at Thales SESO for these technologies.
NASA Astrophysics Data System (ADS)
Zhou, X.; Wang, G.; Yan, B.; Kearns, T.
2016-12-01
Terrestrial laser scanning (TLS) techniques have been proven to be efficient tools for collecting three-dimensional, high-density and high-accuracy point clouds for coastal research and resource management. However, processing and presenting massive TLS data is always a challenge when targeting a large area at high resolution. This article introduces a workflow using shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), the Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas, in May 2015 were used for the case study. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth DEMs, and quantifying beach and sand dune features (shoreline, cross-dune section, dune ridge, toe, and volume) are presented in this article. According to this investigation, the accuracy of the laser measurements (distance from the scanner to the targets) is within a couple of centimeters. However, the positional accuracy of TLS points with respect to a global coordinate system is about 5 cm, dominated by the accuracy of the GPS solutions for the positions of the scanner and reflector. The accuracy of the TLS-derived bare-earth DEM is primarily determined by the size of the grid cells and the roughness of the terrain surface for this case study. A DEM with grid cells of 4 m x 1 m (shoreline by cross-shore) provides a suitable spatial resolution and accuracy for deriving major beach and dune features.
Vertical Accuracy Evaluation of Aster GDEM2 Over a Mountainous Area Based on Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Guo, D.; Cui, T.
2018-05-01
Global digital elevation models (GDEMs) provide elementary information on the heights of the Earth's surface and objects on the ground. GDEMs have become an important data source for a range of applications, and the vertical accuracy of a GDEM is critical for these applications. Nowadays, UAVs are widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient and more cost-effective, and it produces DEMs of the survey area with high accuracy and high spatial resolution. As a result, DEMs from UAV photogrammetry can be used for a more detailed and accurate evaluation of GDEM products. This study investigates the vertical accuracy (in terms of elevation accuracy and systematic errors) of the ASTER GDEM Version 2 dataset over complex terrain based on UAV photogrammetry. Experimental results show that the elevation errors of ASTER GDEM2 are normally distributed and the systematic error is quite small. The accuracy of the ASTER GDEM2 coincides well with that reported by the ASTER validation team. The accuracy in the research area is negatively correlated with both the slope of the terrain and the number of stereo observations. This study also evaluates the vertical accuracy of the up-sampled ASTER GDEM2. Experimental results show that the accuracy of the up-sampled ASTER GDEM2 data in the research area is not significantly reduced by the complexity of the terrain. The fine-grained accuracy evaluation of the ASTER GDEM2 is informative for GDEM-supported UAV photogrammetric applications.
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters, such as size, attitude and position, therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly have a single function, require station moving, and have other shortcomings. A laser tracker needs to work with a cooperative target, and it can hardly meet the requirements of measurement in extreme environments. A total station is mainly used for outdoor surveying and mapping, and it is hard for it to achieve the accuracy demanded in industrial measurement. A photogrammetry system can achieve multi-point measurement over a wide range, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning along the measurement path and by tracking a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure quality and performance throughout the process of manufacturing and improve the manufacturing capability of large-scale and high-end equipment.
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes with single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) for the final machine-human coded dataset. The best pairing of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB(single-word) = NB(bi-gram) = SVM (filtering on agreement among all three) had very high performance (0.93 overall sensitivity/positive predictive value, with high sensitivity and positive predictive values across both large and small categories), leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporating methods based on human-machine pairings such as we have done here, utilizing readily available off-the-shelf machine learning techniques and leaving only a fraction of narratives that require manual review. Human-machine ensemble methods are likely to improve performance over fully manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
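A minimal sketch of the prediction-strength filtering idea (toy narratives and features; the 30% review fraction mirrors the study's cutoff, everything else is illustrative): auto-accept the classifier's most confident predictions and route the weakest to manual review.

```python
# Prediction-strength filtering for human-machine coding of narratives.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

narratives = ["worker fell from ladder", "hand cut by saw blade",
              "slipped on wet floor", "finger caught in press",
              "fell off scaffold onto ground", "laceration from box cutter"]
labels = ["fall", "cut", "fall", "caught", "fall", "cut"]

X = TfidfVectorizer().fit_transform(narratives)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

proba = clf.predict_proba(X)            # new narratives in practice
strength = proba.max(axis=1)            # prediction strength per record
cutoff = np.quantile(strength, 0.30)    # weakest 30% -> manual review
auto = strength > cutoff
print(f"{auto.sum()} auto-coded, {(~auto).sum()} sent for manual review")
```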
Padé approximant for normal stress differences in large-amplitude oscillatory shear flow
NASA Astrophysics Data System (ADS)
Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.
2018-04-01
Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
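For readers who want to experiment, a Padé approximant is easy to build from truncated series coefficients. A minimal sketch using SciPy follows; the series here is exp(x), purely for illustration, not a LAOS stress expansion.

```python
# [3,4] Padé approximant from the first eight Taylor coefficients of exp(x).
import numpy as np
from math import factorial
from scipy.interpolate import pade

coeffs = [1.0 / factorial(k) for k in range(8)]   # exp(x) through x^7
p, q = pade(coeffs, 4)                            # [3,4]: cubic over quartic

x = 2.5
series = sum(c * x**k for k, c in enumerate(coeffs))
print(f"series: {series:.4f}  Padé [3,4]: {p(x) / q(x):.4f}  "
      f"exact: {np.exp(x):.4f}")
# The rational form stays accurate farther from the expansion point
# than the truncated series built from the same coefficients.
```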
Design and experimental validation of novel 3D optical scanner with zoom lens unit
NASA Astrophysics Data System (ADS)
Huang, Jyun-Cheng; Liu, Chien-Sheng; Chiang, Pei-Ju; Hsu, Wei-Yan; Liu, Jian-Liang; Huang, Bai-Hao; Lin, Shao-Ru
2017-10-01
Optical scanners play a key role in many three-dimensional (3D) printing and CAD/CAM applications. However, existing optical scanners are generally designed to provide either a wide scanning area or a high 3D reconstruction accuracy from a lens with a fixed focal length. In the former case, the scanning area is increased at the expense of the reconstruction accuracy, while in the latter case, the reconstruction performance is improved at the expense of a more limited scanning range. In other words, existing optical scanners compromise between the scanning area and the reconstruction accuracy. Accordingly, the present study proposes a new scanning system including a zoom-lens unit, which combines both a wide scanning area and a high 3D reconstruction accuracy. In the proposed approach, the object is scanned initially under a suitable low-magnification setting for the object size (setting 1), resulting in a wide scanning area but a poor reconstruction resolution in complicated regions of the object. The complicated regions of the object are then rescanned under a high-magnification setting (setting 2) in order to improve the accuracy of the original reconstruction results. Finally, the models reconstructed after each scanning pass are combined to obtain the final reconstructed 3D shape of the object. The feasibility of the proposed method is demonstrated experimentally using a laboratory-built prototype. It is shown that the scanner has a high reconstruction accuracy over a large scanning area. In other words, the proposed optical scanner has significant potential for 3D engineering applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Eric J
The ResStock analysis tool is helping states, municipalities, utilities, and manufacturers identify which home upgrades save the most energy and money. Across the country there's a vast diversity in the age, size, construction practices, installed equipment, appliances, and resident behavior of the housing stock, not to mention the range of climates. These variations have hindered the accuracy of predicting savings for existing homes. Researchers at the National Renewable Energy Laboratory (NREL) developed ResStock. It's a versatile tool that takes a new approach to large-scale residential energy analysis by combining: large public and private data sources, statistical sampling, detailed subhourly building simulations, high-performance computing. This combination achieves unprecedented granularity and most importantly - accuracy - in modeling the diversity of the single-family housing stock.
Accurate, Rapid Taxonomic Classification of Fungal Large-Subunit rRNA Genes
Liu, Kuan-Liang; Porras-Alfaro, Andrea; Eichorst, Stephanie A.
2012-01-01
Taxonomic and phylogenetic fingerprinting based on sequence analysis of gene fragments from the large-subunit rRNA (LSU) gene or the internal transcribed spacer (ITS) region is becoming an integral part of fungal classification. The lack of an accurate and robust classification tool trained by a validated sequence database for taxonomic placement of fungal LSU genes is a severe limitation in taxonomic analysis of fungal isolates or large data sets obtained from environmental surveys. Using a hand-curated set of 8,506 fungal LSU gene fragments, we determined the performance characteristics of a naïve Bayesian classifier across multiple taxonomic levels and compared the classifier performance to that of a sequence similarity-based (BLASTN) approach. The naïve Bayesian classifier was computationally more rapid (>460-fold with our system) than the BLASTN approach, and it provided equal or superior classification accuracy. Classifier accuracies were compared using sequence fragments of 100 bp and 400 bp and two different PCR primer anchor points to mimic sequence read lengths commonly obtained using current high-throughput sequencing technologies. Accuracy was higher with 400-bp sequence reads than with 100-bp reads. It was also significantly affected by sequence location across the 1,400-bp test region. The highest accuracy was obtained across either the D1 or D2 variable region. The naïve Bayesian classifier provides an effective and rapid means to classify fungal LSU sequences from large environmental surveys. The training set and tool are publicly available through the Ribosomal Database Project (http://rdp.cme.msu.edu/classifier/classifier.jsp). PMID:22194300
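The word-based Bayesian idea can be sketched compactly: represent each sequence by its k-mer counts and train a multinomial naive Bayes model. The RDP classifier uses 8-base words; the toy below uses 3-mers and synthetic sequences to keep the feature space small, so it illustrates the mechanism rather than the published tool.

```python
# k-mer naive Bayes classification of synthetic sequences (illustrative).
from itertools import product
import numpy as np
from sklearn.naive_bayes import MultinomialNB

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
IDX = {k: i for i, k in enumerate(KMERS)}

def kmer_counts(seq):
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        v[IDX[seq[i:i + K]]] += 1         # sliding-window word counts
    return v

train = [("ACGTACGTACGGTACG", "TaxonA"), ("TTGACCTTGACCTTGA", "TaxonB"),
         ("ACGTTCGTACGATACG", "TaxonA"), ("TTGCCCTTGAGCTTGA", "TaxonB")]
X = np.array([kmer_counts(s) for s, _ in train])
y = [t for _, t in train]
clf = MultinomialNB().fit(X, y)
print(clf.predict([kmer_counts("ACGTACGGTACGATCG")]))   # -> ['TaxonA']
```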
A Brief Description of the Kokkos implementation of the SNAP potential in ExaMiniMD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Aidan P.; Trott, Christian Robert
2017-11-01
Within the EXAALT project, the SNAP [1] approach is being used to develop high accuracy potentials for use in large-scale long-time molecular dynamics simulations of materials behavior. In particular, we have developed a new SNAP potential that is suitable for describing the interplay between helium atoms and vacancies in high-temperature tungsten [2]. This model is now being used to study plasma-surface interactions in nuclear fusion reactors for energy production. The high accuracy of SNAP potentials comes at the price of increased computational cost per atom and increased computational complexity. The increased cost is mitigated by improvements in strong scaling that can be achieved using advanced algorithms [3].
NASA Astrophysics Data System (ADS)
Geelen, Christopher D.; Wijnhoven, Rob G. J.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
This research considers gender classification in surveillance environments, typically involving low-resolution images and a large amount of viewpoint variations and occlusions. Gender classification is inherently difficult due to the large intra-class variation and inter-class correlation. We have developed a gender classification system, which is successfully evaluated on two novel datasets, which realistically consider the above conditions, typical for surveillance. The system reaches a mean accuracy of up to 90% and approaches our human baseline of 92.6%, proving a high-quality gender classification system. We also present an in-depth discussion of the fundamental differences between SVM and RF classifiers. We conclude that balancing the degree of randomization in any classifier is required for the highest classification accuracy. For our problem, an RF-SVM hybrid classifier exploiting the combination of HSV and LBP features results in the highest classification accuracy of 89.9 ± 0.2%, while the classification computation time is negligible compared to the detection time of pedestrians.
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error; high-precision time interval data cannot be obtained with these methods. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change in the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method can achieve an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
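A minimal sketch of the fit-and-invert idea follows. The RC constant, sampling rate, and noise level are assumptions, and real implementations include calibration that the sketch omits: simulate coarse voltage samples of a capacitor that stops charging at an unknown instant, then recover that instant, with sub-sample resolution, by fitting the charging model.

```python
# Recover a time interval by fitting an RC charging curve to coarse samples.
import numpy as np
from scipy.optimize import curve_fit

tau, v_full = 50e-9, 1.0            # assumed RC constant and full-scale voltage
t_true = 123.4e-9                   # unknown interval: charging stops here

def model(t, t_stop):
    # Capacitor charges toward v_full until t_stop, then holds its voltage.
    return v_full * (1.0 - np.exp(-np.minimum(t, t_stop) / tau))

rng = np.random.default_rng(1)
t_samp = np.arange(0.0, 300e-9, 10e-9)            # coarse 100 MS/s sampler
v_samp = model(t_samp, t_true) + 2e-4 * rng.normal(size=t_samp.size)

(t_fit,), _ = curve_fit(model, t_samp, v_samp, p0=[100e-9])
print(f"recovered: {t_fit * 1e9:.3f} ns (true {t_true * 1e9:.1f} ns)")
```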
Accuracy and safety of ward based pleural ultrasound in the Australian healthcare system.
Hammerschlag, Gary; Denton, Matthew; Wallbridge, Peter; Irving, Louis; Hew, Mark; Steinfort, Daniel
2017-04-01
Ultrasound has been shown to improve the accuracy and safety of pleural procedures. Studies to date have been performed in large, specialized units, where pleural procedures are performed by a small number of highly specialized physicians. There are no studies examining the safety and accuracy of ultrasound in the Australian healthcare system, where procedures are performed by junior doctors with a high staff turnover. We performed a retrospective review of the ultrasound database in the Respiratory Department at the Royal Melbourne Hospital to determine the accuracy of, and complications associated with, pleural procedures. A total of 357 ultrasounds were performed between October 2010 and June 2013. Accuracy of pleural procedures was 350 of 356 (98.3%). Aspiration of pleural fluid was successful in 121 of 126 (96%) patients. Two (0.9%) patients required chest tube insertion for management of pneumothorax. There were no recorded pleural infections, haemorrhage or viscera puncture. Ward-based ultrasound for pleural procedures is safe and accurate when performed by appropriately trained and supported junior medical officers. Our findings support this model of pleural service care in the Australian healthcare system. © 2016 Asian Pacific Society of Respirology.
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter-level regional geoids and centimeter-level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, and a procedure for a large GPS network such as the state-level HRAN project. The research also developed an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level (10-20 cm) to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
NASA Astrophysics Data System (ADS)
Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong
2017-05-01
Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because the normal of the receiver always deviates from that of the transmitter in application, the positioning systems which require that the normal of the receiver be aligned with that of the transmitter have large positioning errors. Some algorithms take the angular vibrations into account; nevertheless, these positioning algorithms cannot meet the requirement of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. The angle information from the accelerometer or other angle acquisition devices is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
Compact and Hybrid Feature Description for Building Extraction
NASA Astrophysics Data System (ADS)
Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.
2017-05-01
Building extraction from aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, quite a large number of samples is required to train a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we put forward a compact and hybrid feature description method in order to guarantee desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from the binary description and making full use of the color channels, this descriptor is not only computationally frugal but also more accurate than SURF for building extraction.
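A BRIEF-style sketch of a descriptor built from sets of binary intensity tests follows; the test layout, patch size, and four-channel split are assumptions for illustration, not the paper's exact construction. Each test compares the intensities at two pixel locations, and the resulting bits concatenate into a compact binary descriptor matched by Hamming distance.

```python
# Binary-intensity-test descriptor over a four-channel image patch.
import numpy as np

rng = np.random.default_rng(42)
PATCH = 32
N_TESTS = 256
# Four independent sets of test-point pairs, e.g. one per channel.
tests = [rng.integers(0, PATCH, size=(N_TESTS, 4)) for _ in range(4)]

def describe(patch_channels):
    """patch_channels: list of 4 arrays, each of shape (PATCH, PATCH)."""
    bits = []
    for ch, T in zip(patch_channels, tests):
        # Bit = 1 where the first sampled pixel is darker than the second.
        bits.append(ch[T[:, 0], T[:, 1]] < ch[T[:, 2], T[:, 3]])
    return np.concatenate(bits)              # 1024-bit hybrid descriptor

desc = describe([rng.random((PATCH, PATCH)) for _ in range(4)])
other = describe([rng.random((PATCH, PATCH)) for _ in range(4)])
print("Hamming distance:", np.count_nonzero(desc != other))
```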
Genome sequencing in microfabricated high-density picolitre reactors.
Margulies, Marcel; Egholm, Michael; Altman, William E; Attiya, Said; Bader, Joel S; Bemben, Lisa A; Berka, Jan; Braverman, Michael S; Chen, Yi-Ju; Chen, Zhoutao; Dewell, Scott B; Du, Lei; Fierro, Joseph M; Gomes, Xavier V; Godwin, Brian C; He, Wen; Helgesen, Scott; Ho, Chun Heen; Ho, Chun He; Irzyk, Gerard P; Jando, Szilveszter C; Alenquer, Maria L I; Jarvie, Thomas P; Jirage, Kshama B; Kim, Jong-Bum; Knight, James R; Lanza, Janna R; Leamon, John H; Lefkowitz, Steven M; Lei, Ming; Li, Jing; Lohman, Kenton L; Lu, Hong; Makhijani, Vinod B; McDade, Keith E; McKenna, Michael P; Myers, Eugene W; Nickerson, Elizabeth; Nobile, John R; Plant, Ramona; Puc, Bernard P; Ronan, Michael T; Roth, George T; Sarkis, Gary J; Simons, Jan Fredrik; Simpson, John W; Srinivasan, Maithreyan; Tartaro, Karrie R; Tomasz, Alexander; Vogt, Kari A; Volkmer, Greg A; Wang, Shally H; Wang, Yong; Weiner, Michael P; Yu, Pengguang; Begley, Richard F; Rothberg, Jonathan M
2005-09-15
The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.
Rossini, Paolo M; Buscema, Massimo; Capriotti, Massimiliano; Grossi, Enzo; Rodriguez, Guido; Del Percio, Claudio; Babiloni, Claudio
2008-07-01
It has been shown that a new procedure (implicit function as squashing time, IFAST), based on artificial neural networks (ANNs), is able to compress eyes-closed resting electroencephalographic (EEG) data into spatial invariants of the instant voltage distributions for automatic classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) subjects, with classification accuracy for individual subjects higher than 92%. Here we tested the hypothesis that this is also the case for the classification of individual normal elderly (Nold) vs. MCI subjects, an important issue for the screening of large populations at high risk of AD. Eyes-closed resting EEG data (10-20 electrode montage) were recorded in 171 Nold and 115 amnesic MCI subjects. The data inputs for the classification by IFAST were the weights of the connections within a nonlinear auto-associative ANN trained to generate the instant voltage distributions of 60-s artifact-free EEG data. The most relevant features were selected and, at the same time, the dataset was split into two halves for the final binary classification (training and testing) performed by a supervised ANN. The classification of the individual Nold and MCI subjects reached 95.87% sensitivity and 91.06% specificity (93.46% accuracy). These results indicate that IFAST can reliably distinguish eyes-closed resting EEG in individual Nold and MCI subjects. IFAST may be used for large-scale periodic screening of large populations at risk of AD and personalized care.
Audit of accuracy of clinical coding in oral surgery.
Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R
2014-10-01
We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day-case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for health resource groupings (HRGs). The financial impact of coding Subjectivity, Variability and Error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact of this was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness among clinicians of coding issues, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by the use of an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.
Design, fabrication, and testing of duralumin zoom mirror with variable thickness
NASA Astrophysics Data System (ADS)
Hui, Zhao; Xie, Xiaopeng; Xu, Liang; Ding, Jiaoteng; Shen, Le; Liu, Meiying; Gong, Jie
2016-10-01
A zoom mirror is a kind of active optical component that can change its curvature radius dynamically. Normally, a zoom mirror is used to correct the defocus and spherical aberration caused by the thermal lens effect, improving the beam quality of high-power solid-state lasers. Recently, the possible application of zoom mirrors to non-moving-element optical zoom imaging in the visible band has received much attention. With the help of the optical leveraging effect, the slightly changed local optical power caused by the curvature variation of a zoom mirror can be amplified to generate a great alteration of the system focal length without any moving elements involved; but in this application, the shorter working wavelength and higher surface figure accuracy requirement make the design and fabrication of such a zoom mirror more difficult. Therefore, the key to realizing non-moving-element optical zoom imaging in the visible band lies in a zoom mirror that can provide a large enough sagitta variation while still maintaining a high enough surface figure accuracy. Although annular-force-based actuation can deform a super-thin mirror of constant thickness to generate curvature variation, it is quite difficult to maintain a high enough surface figure accuracy, and this problem becomes even worse as the diameter and the radius-thickness ratio become larger. In this manuscript, by combining pressurization-based actuation with a variable-thickness mirror design, the purpose of obtaining large sagitta variation while maintaining quite good surface figure accuracy is achieved. A prototype zoom mirror with a diameter of 120 mm and central thickness of 8 mm is designed, fabricated and tested. Experimental results demonstrate that the zoom mirror, having an initial surface figure accuracy superior to 1/50λ, can provide at least 21 μm of sagitta variation, and after the curvature variation its surface figure accuracy remains superior to 1/20λ, which proves the effectiveness of the theoretical design.
NASA Astrophysics Data System (ADS)
Bramhe, V. S.; Ghosh, S. K.; Garg, P. K.
2018-04-01
With rapid globalization, the extent of built-up areas is continuously increasing. The extraction of more robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have utilized spatial information along with spectral features to enhance classification accuracy; still, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. On the other hand, recently introduced Deep Learning (DL) techniques require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, Sentinel-2 imagery has been used in this study for built-up area extraction. In this work, pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception v3 and VGGNet, are employed for transfer learning. Since these networks are trained on the generic images of the ImageNet dataset, which have very different characteristics from satellite images, the weights of the networks are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, i.e. Gaussian Support Vector Machine (SVM) and Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give 84.31% and 82.86% overall accuracy respectively. Fine-tuning the ConvNets gives 89.43% overall accuracy with VGGNet and 92.10% with Inception-v3. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
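A minimal sketch of this transfer-learning setup in PyTorch/torchvision follows; the framework choice is an assumption (the abstract does not state one), and the 3-channel input and binary head are simplifications of the paper's 4-channel setup. The pretrained backbone is frozen and only a replaced classifier head is trained.

```python
# Fine-tuning an ImageNet-pretrained Inception v3 head on patch labels.
import torch
import torch.nn as nn
from torchvision import models

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.aux_logits = False
model.AuxLogits = None                 # drop the auxiliary classifier

# Freeze the pretrained feature extractor; replace and train only the head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # built-up vs. non-built-up

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 299, 299)        # stand-in patch batch
y = torch.tensor([0, 1, 1, 0])
model.train()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("one fine-tuning step, loss =", float(loss))
```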
NASA Technical Reports Server (NTRS)
Farmer, Jeffrey T.; Wahls, Deborah M.; Wright, Robert L.
1990-01-01
The global change technology initiative calls for a geostationary platform for Earth science monitoring. One of the major science instruments is the high-frequency microwave sounder (HFMS), which uses a large-diameter, high-resolution, high-frequency microwave antenna. This antenna's size and required accuracy dictate the need for a segmented reflector, and on-orbit disturbances may be a significant factor in its design. A study was performed to examine the effects of the geosynchronous thermal environment on the performance of the strongback structure for a proposed antenna concept for this application. The study included definition of the strongback and a corresponding numerical model to be used in the thermal and structural analyses, definition of the thermal environment, determination of structural element temperatures throughout potential orbits, estimation of the resulting thermal distortions, and assessment of the structure's capability to meet surface accuracy requirements. Analyses show that shadows produced by the antenna reflector surface play a major role in increasing thermal distortions. Through customization of surface coating and element expansion characteristics, the segmented reflector concept can meet the tight surface accuracy requirements.
Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions.
Zhang, Jingji; Pavlov, Michael Y; Ehrenberg, Måns
2018-02-16
We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome.
Fu, Yong-Bi
2014-01-01
Genotyping by sequencing (GBS) has recently emerged as a promising genomic approach for assessing genetic diversity on a genome-wide scale. However, concerns remain about the uniquely large proportion of missing observations in GBS genotype data. Although genotype imputation methods have been proposed to infer missing observations, little is known about the reliability of a genetic diversity analysis of GBS data with up to 90% of observations missing. Here we performed an empirical assessment of the accuracy of genetic diversity analysis of highly incomplete single nucleotide polymorphism genotypes with imputation. Three large single-nucleotide polymorphism genotype data sets for corn, wheat, and rice were acquired; missing data with up to 90% missing observations were randomly generated, and the missing genotypes were then imputed with three map-independent imputation methods. Estimating heterozygosity and the inbreeding coefficient from the original, missing, and imputed data revealed variable patterns of bias across the assessed levels of missingness and genotype imputation, but the estimation biases were smaller for missing data without genotype imputation. The estimates of genetic differentiation were rather robust up to 90% missing observations but became substantially biased when missing genotypes were imputed. The estimates of topology accuracy for four representative samples of groups of interest generally decreased with increased levels of missing genotypes. Imputation based on probabilistic principal component analysis performed better in terms of topology accuracy than analysis of missing data without genotype imputation. These findings are not only significant for understanding the reliability of genetic diversity analysis in the presence of large amounts of missing data and genotype imputation, but are also instructive for performing a proper genetic diversity analysis of highly incomplete GBS or other genotype data. PMID:24626289
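The random-masking experiment can be sketched as follows; this is a toy illustration in which mean imputation stands in for the three map-independent methods evaluated in the paper, and expected heterozygosity stands in for the full suite of diversity statistics.

```python
# Sketch: mask genotypes at a given missingness level, impute, and compare
# a simple diversity statistic (expected heterozygosity) across versions.
# Assumptions: genotypes coded 0/1/2 in a (samples x SNPs) array of toy data.
import numpy as np

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(200, 5000)).astype(float)

def mask(geno, rate):
    missing = rng.random(geno.shape) < rate
    out = geno.copy()
    out[missing] = np.nan
    return out

def impute_mean(geno):
    col_mean = np.nanmean(geno, axis=0)          # per-SNP mean genotype
    idx = np.where(np.isnan(geno))
    out = geno.copy()
    out[idx] = np.take(col_mean, idx[1])
    return out

def expected_het(geno):
    p = np.nanmean(geno, axis=0) / 2.0           # allele frequency per SNP
    return np.mean(2 * p * (1 - p))

for rate in (0.3, 0.6, 0.9):
    masked = mask(geno, rate)
    print(rate, expected_het(geno), expected_het(masked),
          expected_het(impute_mean(masked)))
```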
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Rotation-invariant convolutional neural networks for galaxy morphology prediction
NASA Astrophysics Data System (ADS)
Dieleman, Sander; Willett, Kyle W.; Dambre, Joni
2015-06-01
Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time consuming and does not scale to large (≳10^4) numbers of images. Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images. We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project. For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy (>99 per cent) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts' workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the Large Synoptic Survey Telescope.
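A rough flavor of exploiting rotational symmetry is test-time averaging over rotated copies of each image; the authors' network goes further and shares features across rotated viewpoints internally. In the sketch below, `model` is an assumed stand-in with a Keras-style `predict` method.

```python
# Sketch: average a CNN's predictions over rotated copies of a galaxy
# image (test-time augmentation), exploiting the fact that galaxy
# morphology is invariant under rotation.
import numpy as np
from scipy.ndimage import rotate

def rotation_averaged_prediction(model, image,
                                 angles=(0, 45, 90, 135, 180, 225, 270, 315)):
    preds = []
    for angle in angles:
        view = rotate(image, angle, reshape=False, mode="nearest")
        preds.append(model.predict(view[np.newaxis]))  # add batch axis
    return np.mean(preds, axis=0)   # consensus over rotated viewpoints
```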
Large-baseline InSAR for precise topographic mapping: a framework for TanDEM-X large-baseline data
NASA Astrophysics Data System (ADS)
Pinheiro, Muriel; Reigber, Andreas; Moreira, Alberto
2017-09-01
The global Digital Elevation Model (DEM) resulting from the TanDEM-X mission provides information about the world topography with outstanding precision. In fact, performance analyses carried out with the already available data have shown that the global product is well within the requirements of 10 m absolute vertical accuracy and 2 m relative vertical accuracy for flat to moderate terrain. The mission's science phase took place from October 2014 to December 2015. During this phase, bistatic acquisitions with across-track separations between the two satellites of up to 3.6 km at the equator were commanded. Since the relative vertical accuracy of InSAR-derived elevation models is, in principle, inversely proportional to the system baseline, the TanDEM-X science phase opened the door to the generation of elevation models with improved quality with respect to the standard product. However, the interferometric processing of the large-baseline data is troublesome due to the increased volume decorrelation and the very high frequency of the phase variations. Hence, in order to fully profit from the increased baseline, sophisticated algorithms for the interferometric processing, and, in particular, for the phase unwrapping, have to be considered. This paper proposes a novel dual-baseline region-growing framework for the phase unwrapping of large-baseline interferograms. Results from two experiments with data from the TanDEM-X science phase are discussed, corroborating the expected increased level of detail of the large-baseline DEMs.
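A one-dimensional toy illustrates why larger baselines make phase unwrapping harder: the same topography produces faster fringes, and once the wrapped phase changes by more than half a cycle per sample, simple unwrapping fails. The fringe rates below are illustrative assumptions, and numpy's unwrapper merely stands in for the dual-baseline region-growing method proposed in the paper.

```python
# Sketch: wrapped-phase recovery at two height-to-phase conversion rates,
# mimicking a small vs. a large interferometric baseline.
import numpy as np

height = np.linspace(0.0, 200.0, 101)         # toy terrain, 2 m sampling
for k_phase in (0.1, 2.0):                    # rad/m: moderate vs. large baseline
    true_phase = k_phase * height
    wrapped = np.angle(np.exp(1j * true_phase))   # wrapped to (-pi, pi]
    recovered = np.unwrap(wrapped)
    err = np.max(np.abs((recovered - recovered[0])
                        - (true_phase - true_phase[0])))
    print(k_phase, err)   # small error at 0.1; unwrapping fails at 2.0
```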
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
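A bare-bones version of such ridge-regression genomic prediction with cross-validated accuracy can be sketched as follows; the toy data shapes and the use of scikit-learn are assumptions, and a real analysis would model relatedness between CV sets and genotype-by-environment interactions explicitly, as the abstract stresses.

```python
# Sketch: genomic prediction via ridge regression with k-fold CV,
# reporting accuracy as the correlation r between predicted and
# observed phenotypes. Toy data stand in for the trees x SNPs panel.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(500, 2000)).astype(float)   # SNP genotypes 0/1/2
beta = rng.normal(0, 0.05, size=2000)                    # toy marker effects
y = X @ beta + rng.normal(0, 1.0, size=500)              # phenotype = G + E

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = RidgeCV(alphas=np.logspace(0, 4, 9)).fit(X[train], y[train])
    accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
print("mean prediction accuracy r =", np.mean(accs))
```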
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm, and works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
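The FFT-based cross-correlation step that such shift estimators build on can be sketched as follows; this recovers only integer-pixel shifts and omits the ACC algorithm's iterative sub-pixel refinement and the failsafe criterion introduced in the paper.

```python
# Sketch: integer-pixel shift between two image cells from the peak of
# their FFT-based cross-correlation (Fourier shift theorem).
import numpy as np

def coarse_shift(ref, img):
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape),
                    dtype=float)
    for axis, size in enumerate(ref.shape):
        if peak[axis] > size // 2:      # wrap-around means a negative shift
            peak[axis] -= size
    return -peak                        # shift of img relative to ref (dy, dx)

rng = np.random.default_rng(2)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -2), axis=(0, 1))
print(coarse_shift(scene, shifted))     # ~[ 3. -2.]
```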
Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains
USDA-ARS?s Scientific Manuscript database
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from North American Land Data Assimilation System (NLDAS). This data provides large-scale spatial representation of ETref, which i...
Localization of multiple defects using the compact phased array (CPA) method
NASA Astrophysics Data System (ADS)
Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.
2018-01-01
Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
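The total focusing method underlying this approach is, at its core, delay-and-sum imaging over all transmit-receive pairs. The sketch below shows that core idea under assumed geometry, wave speed, and signal layout; it does not include the wave-propagation-model updates that the paper adds to TFM.

```python
# Sketch: total focusing method (TFM) for a small transducer array.
# For each image pixel, the signal from every transmit-receive pair is
# sampled at that pair's round-trip travel time and summed; defects
# appear as coherent peaks. `signals` is an assumed (pairs x samples)
# array ordered like `pairs`; elems holds (x, y) element positions.
import numpy as np

def tfm_image(signals, elems, grid_x, grid_y, c=5000.0, fs=1e6):
    pairs = [(i, j) for i in range(len(elems)) for j in range(len(elems))]
    img = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            acc = 0.0
            for p, (i, j) in enumerate(pairs):
                d = (np.hypot(x - elems[i][0], y - elems[i][1]) +
                     np.hypot(x - elems[j][0], y - elems[j][1]))
                t = int(round(d / c * fs))       # travel time in samples
                if t < signals.shape[1]:
                    acc += signals[p, t]
            img[iy, ix] = abs(acc)
    return img
```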
Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana
2018-05-24
In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to improve the quality of life of epileptic patients. The accuracy of prediction algorithms in real-world applications is largely limited because brain signals are inherently uncertain and affected by various factors, such as environment, age, and drug intake, in addition to the internal artefacts that occur during the recording of the brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data to be labelled by an expert from a pool of large ambiguous datasets is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve prediction accuracy with a minimum number of labelled data. The core technique of our framework is to employ the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples that have the most ambiguity to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data, and also improves the prediction accuracy even under noisy conditions.
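The ambiguity-driven sample selection can be sketched as follows; scikit-learn's GaussianMixture is used here as a stand-in for the paper's Bernoulli-Gaussian mixture model, and the ambiguity measure and query size are assumptions.

```python
# Sketch: active learning that sends the most ambiguous feature samples
# to an expert for labelling. Ambiguity is taken as closeness of the
# two mixture-component responsibilities.
import numpy as np
from sklearn.mixture import GaussianMixture

def most_ambiguous(features, n_query=10):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
    resp = gmm.predict_proba(features)           # (n_samples, 2)
    ambiguity = 1.0 - np.abs(resp[:, 0] - resp[:, 1])
    return np.argsort(ambiguity)[-n_query:]      # indices to send to expert

# Usage: query expert labels for the selected windows, add them to the
# training set, retrain the seizure classifier, and repeat.
```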
Hesford, Andrew J.; Astheimer, Jeffrey P.; Greengard, Leslie F.; Waag, Robert C.
2010-01-01
A multiple-scattering approach is presented to compute the solution of the Helmholtz equation when a number of spherical scatterers are nested in the interior of an acoustically large enclosing sphere. The solution is represented in terms of partial-wave expansions, and a linear system of equations is derived to enforce continuity of pressure and normal particle velocity across all material interfaces. This approach yields high-order accuracy and avoids some of the difficulties encountered when using integral equations that apply to surfaces of arbitrary shape. Calculations are accelerated by using diagonal translation operators to compute the interactions between spheres when the operators are numerically stable. Numerical results are presented to demonstrate the accuracy and efficiency of the method. PMID:20136208
Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method
NASA Astrophysics Data System (ADS)
Xin, L.
2018-04-01
Utilizing high-resolution remote sensing images for Earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have considerable advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly extract objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning methods are superior. This paper presents research on a deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction through experiments.
NASA Astrophysics Data System (ADS)
Snavely, Rachel A.
Focusing on the semi-arid and highly disturbed landscape of San Clemente Island, California, this research tests the effectiveness of incorporating a hierarchical object-based image analysis (OBIA) approach with high-spatial-resolution imagery and light detection and ranging (LiDAR) derived canopy height surfaces for mapping vegetation communities. The study is part of a large-scale research effort conducted by researchers at San Diego State University's (SDSU) Center for Earth Systems Analysis Research (CESAR) and Soil Ecology and Restoration Group (SERG) to develop an updated vegetation community map that will support both conservation and management decisions on Naval Auxiliary Landing Field (NALF) San Clemente Island. Trimble's eCognition Developer software was used to develop and generate vegetation community maps for two study sites, with and without vegetation height data as input. Overall and class-specific accuracies were calculated and compared across the two classifications. The highest overall accuracy (approximately 80%) was observed for the classification integrating airborne visible and near-infrared imagery of very high spatial resolution with a LiDAR-derived canopy height model. Accuracies for individual vegetation classes differed between the two classification methods, but were highest when incorporating the LiDAR digital surface data. The addition of a canopy height model, however, yielded little difference in classification accuracy for areas of very dense shrub cover. Overall, the results show the utility of the OBIA approach for mapping vegetation with high-spatial-resolution imagery, and emphasize the advantage of both multi-scale analysis and digital surface data for accurately characterizing highly disturbed landscapes. The integrated imagery and digital canopy height model approach presented both advantages and limitations, which have to be considered prior to its operational use in mapping vegetation communities.
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of laser stripe center extraction, based on image evaluation of Gaussian-fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments on a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
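The Gaussian fitting of the stripe's gray-level profile, on which the evaluation above builds, looks roughly like this; the profile input and initial guesses are assumptions, and the paper's structural-similarity threshold and compensation step are not reproduced.

```python
# Sketch: sub-pixel laser stripe center via a Gaussian fit of the
# cross-section intensity profile in one image column.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def stripe_center(profile):
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
          2.0, float(profile.min())]            # rough initial guesses
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return popt[1]                               # fitted center (sub-pixel)
```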
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^{-1/3} (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (≈10^5) orders of magnitude faster than with competing methods.
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify many procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to its local and general accuracy. A specific scanning protocol was tested for scanning original tooth surfaces from dental impressions, and different model materials were verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision for full-arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with an even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with a high local and general accuracy, can be used to assess changes from single teeth or restorations up to full-arch changes. PMID:24836007
Accuracy and Reliability of the Klales et al. (2012) Morphoscopic Pelvic Sexing Method.
Lesciotto, Kate M; Doershuk, Lily J
2018-01-01
Klales et al. (2012) devised an ordinal scoring system for the morphoscopic pelvic traits described by Phenice (1969) and used for sex estimation of skeletal remains. The aim of this study was to test the accuracy and reliability of the Klales method using a large sample from the Hamann-Todd collection (n = 279). Two observers were blinded to sex, ancestry, and age and used the Klales et al. method to estimate the sex of each individual. Sex was correctly estimated for females with over 95% accuracy; however, the male allocation accuracy was approximately 50%. Weighted Cohen's kappa and intraclass correlation coefficient analysis for evaluating intra- and interobserver error showed moderate to substantial agreement for all traits. Although each trait can be reliably scored using the Klales method, low accuracy rates and high sex bias indicate better trait descriptions and visual guides are necessary to more accurately reflect the range of morphological variation. © 2017 American Academy of Forensic Sciences.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-09-09
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, according to the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused result of each radar's estimation is fed to the extended Kalman filter (EKF) to perform the first filtering. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is promoted dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
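For reference, the F-measure used as the benchmark statistic in this abstract is the harmonic mean of sensitivity (SEN) and positive predictive value (PPV); a worked form, with TP, FN, and FP counting correctly predicted, missed, and spuriously predicted base pairs:

```latex
F = \frac{2\,\mathrm{SEN}\cdot\mathrm{PPV}}{\mathrm{SEN}+\mathrm{PPV}},
\qquad
\mathrm{SEN} = \frac{TP}{TP+FN},
\qquad
\mathrm{PPV} = \frac{TP}{TP+FP}
```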
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential for handling the large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit-allocation technique using the cosine transform, developed during the last few years, has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at a compression ratio as high as 20:1.
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. that the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
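The virtual-patient experiment can be sketched with an assumed toy bistable ODE standing in for the paper's immune-response models; the dynamics, parameter distribution, and feature choice below are all illustrative assumptions.

```python
# Sketch: 'virtual patients' drawn by randomizing a parameter of a toy
# bistable ODE, then outcome prediction from early time points with
# logistic regression (binary here; the paper uses one-vs-all for
# multiple outcomes).
import numpy as np
from scipy.integrate import odeint
from sklearn.linear_model import LogisticRegression

def pathogen_ode(p, t, a):
    # Allee-type dynamics: clears (p -> 0) if a > p0, persists (p -> 1) if a < p0.
    return p * (p - a) * (1.0 - p)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 500.0, 101)
v = 0.1                                   # coefficient of variation
X, y = [], []
for _ in range(400):
    a = rng.normal(0.3, v * 0.3)          # patient-to-patient variability
    traj = odeint(pathogen_ode, 0.3, t, args=(a,)).ravel()
    X.append(traj[:10])                   # early time points as features
    y.append(int(traj[-1] > 0.5))         # steady-state outcome label
X, y = np.array(X), np.array(y)
clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("prediction accuracy:", clf.score(X[300:], y[300:]))
```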
An onboard data analysis method to track the seasonal polar caps on Mars
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Castano, Rebecca; Chien, Steve; Ivanov, Anton B.; Pounders, Erik; Titus, Timothy N.
2005-01-01
In this paper, we evaluate our method on uncalibrated THEMIS data and find 1) agreement with manual cap edge identifications to within 28.2 km, and 2) high accuracy even with a reduced context window, yielding large reductions in memory requirements.
Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of derived wetland maps were limited or often unsatisfactory largely due to the relatively coarse spatial resolution of conventional satellite imagery. This re...
Retrieval of high-spectral-resolution lidar for atmospheric aerosol optical properties profiling
NASA Astrophysics Data System (ADS)
Liu, Dong; Luo, Jing; Yang, Yongying; Cheng, Zhongtao; Zhang, Yupeng; Zhou, Yudi; Duan, Lulin; Su, Lin
2015-10-01
High-spectral-resolution lidars (HSRLs) are increasingly being developed for atmospheric aerosol remote sensing applications because they allow straightforward and independent retrieval of aerosol optical properties without relying on assumptions about the lidar ratio. In the HSRL technique, spectral discrimination between scattering from molecules and from aerosol particles is one of the most critical processes, and it is accomplished by means of a narrowband spectroscopic filter. To ensure a high retrieval accuracy of an HSRL system, its spectral discrimination filter must be carefully designed. This paper reviews the available algorithms proposed for HSRLs and makes a general accuracy analysis of the HSRL technique focused on spectral discrimination, in order to provide heuristic guidelines for the design of the spectral discrimination filter. We introduce a theoretical model for retrieval error evaluation of an HSRL instrument with a general three-channel configuration. Monte Carlo (MC) simulations are performed to validate the correctness of the theoretical model. Results from the model and the MC simulations agree very well, and they illustrate one important, though not widely appreciated, fact: a large molecular transmittance and a large spectral discrimination ratio (SDR, i.e., the ratio of the molecular transmittance to the aerosol transmittance) are beneficial to the retrieval accuracy. The application of these conclusions to the design of a new type of spectroscopic filter, the field-widened Michelson interferometer, is illustrated in detail. These results have a certain universality and are expected to be useful guidelines for the HSRL community, especially when choosing or designing the spectral discrimination filter.
Quality Analysis of Open Street Map Data
NASA Astrophysics Data System (ADS)
Wang, M.; Li, Q.; Hu, Q.; Zhou, M.
2013-05-01
Crowdsourced geographic data are open-source geographic data contributed by large numbers of non-professionals and provided to the public. Typical crowdsourced geographic data include GPS track data as in OpenStreetMap, collaborative map data as in Wikimapia, posts from social websites such as Twitter and Facebook, POIs checked in by Jiepang users, and so on. After processing, these data can provide canonical geographic information to the public. Compared with conventional geographic data collection and updating methods, crowdsourced geographic data from non-professionals have the advantages of large data volume, high currency, abundant information, and low cost, and have become a research hotspot in international geographic information science in recent years. Large-volume crowdsourced geographic data with high currency provide a new solution for geospatial database updating, but the quality problem of data obtained from non-professionals must first be solved. In this paper, a quality analysis model for OpenStreetMap (OSM) crowdsourced geographic data is proposed. First, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Second, a quality assessment model for OSM data is presented, based on three quality elements: completeness, thematic accuracy, and positional accuracy. Finally, taking the OSM data of Wuhan as an example, the paper analyses and assesses their quality against a 2011 navigation map as reference. The results show that the high-level roads and urban traffic network of the OSM data have high positional accuracy and completeness, so that these OSM data can be used to update an urban road network database.
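Of the three quality elements, positional accuracy and completeness have simple quantitative forms once OSM features have been matched to reference features; a minimal sketch under the assumption of pre-matched inputs:

```python
# Sketch: two of the quality elements for matched OSM/reference features.
# Positional accuracy as RMSE of coordinate offsets (projected, metres);
# completeness as the share of reference features with an OSM match.
import numpy as np

def positional_rmse(osm_xy, ref_xy):
    d = np.linalg.norm(osm_xy - ref_xy, axis=1)   # per-feature offset (m)
    return np.sqrt(np.mean(d ** 2))

def completeness(n_matched_ref, n_ref_total):
    return n_matched_ref / n_ref_total
```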
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
The Power of Ground User in Recommender Systems
Zhou, Yanbo; Lü, Linyuan; Liu, Weiping; Zhang, Jianlin
2013-01-01
Accuracy and diversity are two important aspects for evaluating the performance of recommender systems. Two diffusion-based methods were proposed, inspired respectively by the mass diffusion (MD) and heat conduction (HC) processes on networks. It has been pointed out that MD has high recommendation accuracy yet low diversity, while HC succeeds in seeking out novel or niche items but with relatively low accuracy. The accuracy-diversity dilemma is a long-term challenge in recommender systems. To solve this problem, we introduced a background temperature by adding a ground user who connects to all the items in the user-item bipartite network. Performing the HC algorithm on the network with the ground user (GHC) shows that the accuracy can be largely improved while keeping the diversity. Furthermore, we proposed a weighted form of the ground user (WGHC) by assigning weights to the newly added links between the ground user and the items. By tuning the weight as a free parameter, an optimal value subject to the highest accuracy is obtained. Experimental results on three benchmark data sets showed that the WGHC outperforms the state-of-the-art method MD in both accuracy and diversity. PMID:23936380
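A minimal sketch of heat conduction with a ground user (GHC/WGHC) on a toy user-item adjacency matrix is given below; the two-step averaging follows the usual HC formulation, and the `weight` argument plays the role of the WGHC link weight.

```python
# Sketch: heat conduction (HC) on a user-item bipartite network with an
# added 'ground user' linked to every item. Scores for a target user
# start from their collected items and diffuse by repeated averaging;
# items the user already collected would be excluded before ranking.
import numpy as np

def ghc_scores(A, target_user, weight=1.0):
    # Append the ground user: a row of `weight` links to all items (WGHC).
    A = np.vstack([A, weight * np.ones(A.shape[1])])
    f = A[target_user].astype(float)           # initial resource on items
    user_deg = A.sum(axis=1)                   # items per user
    item_deg = A.sum(axis=0)                   # users per item
    user_temp = (A @ f) / user_deg             # average over each user's items
    return (A.T @ user_temp) / item_deg        # average over each item's users

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
print(ghc_scores(A, target_user=0))
```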
NASA Astrophysics Data System (ADS)
Lemarié, F.; Debreu, L.
2016-02-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high-order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
Manufacture of ultra high precision aerostatic bearings based on glass guide
NASA Astrophysics Data System (ADS)
Guo, Meng; Dai, Yifan; Peng, Xiaoqiang; Tie, Guipeng; Lai, Tao
2017-10-01
The aerostatic guides in traditional three-coordinate measuring machines and profilometers generally use metal or ceramic materials. Limited by the guide processing precision, the measurement accuracy of these traditional instruments is around the micrometer level. By selecting optical materials as the guide material, optical processing methods and laser interference measurement can be introduced into the traditional aerostatic bearing manufacturing field. Using large-aperture wavefront interference measuring equipment, the shape and position errors of a glass guide can be obtained with high accuracy, and the guide can then be processed to 0.1 μm or better with the aid of Magnetorheological Finishing (MRF), Computer Controlled Optical Surfacing (CCOS), and other modern optical processing methods, so the accuracy of aerostatic bearings can be fundamentally improved and ultra-high-precision coordinate measuring can be achieved. This paper introduces the fabrication and measurement process of a K9 glass guide with a 300 mm measuring range. Its working surface accuracy reaches 0.1 μm PV, the perpendicularity and parallelism errors between the two guide rail faces are better than 2 μm, and the straightness of the aerostatic bearings based on this K9 glass guide reaches 40 nm after error compensation.
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-06-22
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships among candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) was employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids), whose gravity fields are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing the repeated elements of each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
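The core numerical ingredient, 3D Gauss-Legendre quadrature over a tesseroid, can be sketched as follows; the integrand is a generic placeholder rather than the actual gravity kernel, and the node counts are assumptions (the paper's adaptive discretization and kernel-matrix reuse are not reproduced).

```python
# Sketch: 3D Gauss-Legendre quadrature of f(r, lat, lon) over a tesseroid
# [r1,r2] x [lat1,lat2] x [lon1,lon2] (radians), including the spherical
# volume element r^2 cos(lat). f is a placeholder integrand.
import numpy as np

def glq_tesseroid(f, r_lim, lat_lim, lon_lim, n=(3, 3, 3)):
    axes = []
    for (a, b), m in zip((r_lim, lat_lim, lon_lim), n):
        x, w = np.polynomial.legendre.leggauss(m)   # nodes/weights on [-1,1]
        axes.append((0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w))
    (rs, wr), (las, wla), (los, wlo) = axes
    total = 0.0
    for r, w1 in zip(rs, wr):
        for lat, w2 in zip(las, wla):
            for lon, w3 in zip(los, wlo):
                total += w1 * w2 * w3 * f(r, lat, lon) * r**2 * np.cos(lat)
    return total

# Example: tesseroid volume (f = 1); with 3 nodes per axis the r^2
# factor is integrated exactly.
vol = glq_tesseroid(lambda r, lat, lon: 1.0,
                    (6361e3, 6371e3), (0.0, 0.1), (0.0, 0.1))
print(vol)
```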
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom™ Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Bank gully extraction from DEMs utilizing the geomorphologic features of a loess hilly area in China
NASA Astrophysics Data System (ADS)
Yang, Xin; Na, Jiaming; Tang, Guoan; Wang, Tingting; Zhu, Axing
2018-04-01
As one of the most active gully types in the Chinese Loess Plateau, bank gullies generally indicate soil loss and land degradation. This study addresses the lack of detailed, large-scale monitoring of bank gullies and proposes a semi-automatic method for extracting bank gullies from 5 m resolution DEMs based on their typical topographic features. First, channel networks, including bank gullies, are extracted through an iterative channel burn-in algorithm. Second, gully heads are correctly positioned based on the spatial relationship between gully heads and their corresponding gully shoulder lines. Third, bank gullies are distinguished from other gullies using the newly proposed topographic measurement of "relative gully depth (RGD)." The experimental results from the loess hilly area of the Linjiajian watershed in the Chinese Loess Plateau show that the producer accuracy reaches 87.5%. The accuracy is affected by the DEM resolution and the RGD parameters, as well as by the accuracy of the gully shoulder line. An application in the Madigou watershed with a higher DEM resolution validated the replicability of this method in other areas. The overall performance shows that bank gullies can be extracted with acceptable accuracy over a large area, which provides essential information for research on soil erosion, geomorphology, and environmental ecology.
Accuracy of binary black hole waveform models for aligned-spin binaries
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Chu, Tony; Fong, Heather; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-05-01
Coalescing binary black holes are among the primary science targets for second generation ground-based gravitational wave detectors. Reliable gravitational waveform models are central to detection of such systems and subsequent parameter estimation. This paper performs a comprehensive analysis of the accuracy of recent waveform models for binary black holes with aligned spins, utilizing a new set of 84 high-accuracy numerical relativity simulations. Our analysis covers comparable mass binaries (mass ratio 1 ≤ q ≤ 3), and samples independently both black hole spins up to a dimensionless spin magnitude of 0.9 for equal-mass binaries and 0.85 for unequal-mass binaries. Furthermore, we focus on the high-mass regime (total mass ≳ 50 M⊙). The two most recent waveform models considered (PhenomD and SEOBNRv2) both perform very well for signal detection, losing less than 0.5% of the recoverable signal-to-noise ratio ρ, except that SEOBNRv2's efficiency drops slightly when both black hole spins are aligned at large magnitude. For parameter estimation, modeling inaccuracies of the SEOBNRv2 model are found to be smaller than systematic uncertainties for moderately strong GW events up to roughly ρ ≲ 15. PhenomD's modeling errors are found to be smaller than SEOBNRv2's, and are generally irrelevant for ρ ≲ 20. Both models' accuracy deteriorates with increased mass ratio, and when at least one black hole spin is large and aligned. The SEOBNRv2 model shows a pronounced disagreement with the numerical relativity simulation in the merger phase for unequal masses with both black hole spins simultaneously very large and aligned. Two older waveform models (PhenomC and SEOBNRv1) are found to be distinctly less accurate than the more recent PhenomD and SEOBNRv2 models. Finally, we quantify the bias expected from all four waveform models during parameter estimation for several recovered binary parameters: chirp mass, mass ratio, and effective spin.
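The "loss of recoverable signal-to-noise ratio" is quantified by the noise-weighted overlap between a model waveform and the numerical relativity signal. Below is a toy sketch of that statistic under a white-noise assumption (a real analysis weights by the detector power spectral density and also maximizes over phase); the example waveforms are made up:

    import numpy as np

    def match(h1, h2):
        """Maximum normalized overlap of two real time series over circular time shifts."""
        h1 = h1 / np.sqrt(np.sum(h1**2))
        h2 = h2 / np.sqrt(np.sum(h2**2))
        corr = np.fft.irfft(np.fft.rfft(h1) * np.conj(np.fft.rfft(h2)))
        return np.max(corr)

    t = np.linspace(0.0, 1.0, 4096)
    model = np.sin(2 * np.pi * 30.0 * t**2) * np.exp(-(t - 0.9)**2 / 0.01)
    signal = np.sin(2 * np.pi * 30.3 * t**2) * np.exp(-(t - 0.9)**2 / 0.01)
    m = match(model, signal)
    print("match: %.4f, fractional SNR loss: %.4f" % (m, 1.0 - m))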
Optimization of pencil beam f-theta lens for high-accuracy metrology
NASA Astrophysics Data System (ADS)
Peng, Chuanqian; He, Yumei; Wang, Jie
2018-01-01
Pencil-beam deflectometric profilers are common instruments for high-accuracy surface slope metrology of x-ray mirrors in synchrotron facilities. An f-theta optical system is a key component of these profilers and performs the linear angle-to-position conversion. Traditional optimization procedures for f-theta systems are not directly tied to the angle-to-position conversion relation and are performed with large stops and a fixed working distance, which means they may not be suitable for the design of f-theta systems working with a small-diameter pencil beam over a range of working distances for ultra-high-accuracy metrology. If an f-theta system is not well designed, its aberrations will introduce systematic errors into the measurement. A least-squares fitting procedure was used to optimize the configuration parameters of an f-theta system. Simulations using ZEMAX software showed that the optimized f-theta system significantly suppressed the angle-to-position conversion errors caused by aberrations. Any pencil-beam f-theta optical system can be optimized with the help of this optimization method.
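The essence of the least-squares step is to choose configuration parameters so that the traced spot position stays close to the ideal linear mapping y = f·θ. A minimal sketch follows, with a cubic-distortion surrogate standing in for real ray tracing (the paper uses ZEMAX); all parameter names and values are illustrative assumptions:

    import numpy as np
    from scipy.optimize import least_squares

    F = 200.0                                       # target focal length, mm (assumed)
    theta = np.deg2rad(np.linspace(-1.0, 1.0, 41))  # scan angles, rad

    def traced_spot(theta, p):
        # Stand-in for ray tracing: p are fictitious configuration parameters.
        f, a3 = p
        return f * theta + a3 * theta**3            # linear term plus cubic distortion

    def residuals(p):
        return traced_spot(theta, p) - F * theta    # deviation from ideal y = F*theta

    sol = least_squares(residuals, x0=[195.0, 5.0])
    print("optimized parameters (f, a3):", sol.x)   # -> approximately [200, 0]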
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in spatial augmented reality (SAR), and structured light is a simple and rapid way to reconstruct objects. To improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift structured light, using a camera and a projector that casts the structured light patterns onto the objects. In this system, we use a single camera to take photos on the left and right sides of the object, respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so camera calibration can be omitted and the camera positions are no longer constrained. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of these measures make the reconstruction more precise. We performed experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
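For reference, here is a minimal sketch of the standard Gray-code plus four-step phase-shift decoding that such systems rely on (not the authors' implementation): the phase-shifted images give a wrapped phase, and the decoded Gray-code bits give the fringe order that unwraps it.

    import numpy as np

    def absolute_phase(I1, I2, I3, I4, fringe_order):
        """I1..I4: four phase-shift images captured at 90-degree steps (numpy arrays).
        fringe_order: integer fringe index per pixel, decoded from the Gray-code images."""
        wrapped = np.arctan2(I4 - I2, I1 - I3)          # wrapped phase in (-pi, pi]
        return wrapped + 2.0 * np.pi * fringe_order     # absolute phase per pixel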
Geoid determination by airborne gravimetry - principles and applications
NASA Astrophysics Data System (ADS)
Forsberg, R.; Olesen, A. V.
2009-12-01
The operational development of long-range airborne gravimetry means that large areas can be covered in a short time frame with high-quality medium-wavelength gravity field data, perfectly matching the needs of geoid determination. A geoid computed from a combination of surface, airborne and satellite data is not only able to cover the remaining large data voids on the earth, notably Antarctica and tropical jungle regions, but also provides seamless coverage across the coastal zone and ties in older marine and land gravity data. Airborne gravity can therefore provide essential data for GPS applications both on land and at sea, e.g. for marine construction projects such as bridges, wind farms etc. Current operational accuracies with the DTU-Space/UiB airborne system are in the 1-2 mGal range, which translates into geoid accuracies of 5-10 cm, depending on track spacing. In the paper we outline the current accuracy of airborne gravity and geoid determination, and show examples from recent international airborne gravity campaigns aimed at either providing national survey infrastructure or scientific applications, e.g. for oceanography or sea-ice thickness determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashida, Misa; Malac, Marek; Egerton, Ray F.
Electron tomography is a method whereby a three-dimensional reconstruction of a nanoscale object is obtained from a series of projected images measured in a transmission electron microscope. We developed an electron-diffraction method to measure the tilt and azimuth angles, with Kikuchi lines used to align a series of diffraction patterns obtained with each image of the tilt series. Since it is based on electron diffraction, the method is not affected by sample drift and is not sensitive to sample thickness, whereas tilt angle measurement and alignment using fiducial-marker methods are affected by both. The accuracy of the diffraction method benefits reconstructions with a large number of voxels, where both high spatial resolution and a large field of view are desired. The diffraction method allows both the tilt and azimuth angles to be measured, while fiducial-marker methods typically treat them as unknown parameters. The diffraction method can also be used to estimate the accuracy of the fiducial-marker method and the sample-stage accuracy. A nano-dot fiducial-marker measurement differs from a diffraction measurement by no more than ±1°.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold-standard approach to confirm the accuracy of database identifications, but is extremely time-intensive. To reduce the increasing time required to manually validate large proteomic datasets, we provide computer-aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics.
Analysis on the dynamic error for optoelectronic scanning coordinate measurement network
NASA Astrophysics Data System (ADS)
Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie
2018-01-01
Large-scale dynamic three-dimensional coordinate measurement is in high demand in equipment manufacturing. Noted for its high accuracy, scale expandability and capacity for parallel multitask measurement, the optoelectronic scanning measurement network has attracted close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks focuses on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative network that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed, and simulations based on it quantify the dynamic error, revealing its volatility and periodicity and showing its characteristics in detail. These results lay the foundation for further accuracy improvement.
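As a back-of-envelope illustration of the synchronization part of such an error budget (not the paper's model), a timing offset δt between stations of a rotating-laser network maps into an angle error δθ = ω·δt; the spin rate and offset below are assumed values:

    import numpy as np

    omega = 2 * np.pi * 50.0        # transmitter spin rate, rad/s (assumed 50 Hz)
    dt = 1e-6                       # assumed 1 microsecond synchronization error
    dtheta = omega * dt             # resulting angle error, rad
    print("angle error: %.1f microrad" % (dtheta * 1e6))
    print("position error at 10 m range: %.2f mm" % (dtheta * 10.0 * 1e3))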
Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions
Zhang, Jingji
2018-01-01
We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome. PMID:29267976
Thailand national programme of the Earth Resources Technology Satellite
NASA Technical Reports Server (NTRS)
Sabhasri, S. (Principal Investigator)
1976-01-01
The author has identified the following significant results. The study on locating hill tribe villages from LANDSAT imagery was successful and exceeded the initial expectations. Results of the study on land use and forest mapping using Skylab data demonstrated the capability and feasibility of large scale mapping with high accuracy.
Skinfold Assessment: Accuracy and Application
ERIC Educational Resources Information Center
Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.
2006-01-01
Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…
Protocol for emergency EPR dosimetry in fingernails
USDA-ARS?s Scientific Manuscript database
There is an increased need for after-the-fact dosimetry because of the high risk of radiation exposures due to terrorism or accidents. In case of such an event, a method is needed to make measurements of dose in a large number of individuals rapidly and with sufficient accuracy to facilitate effect...
Accuracy assessment of NOAA's daily reference evapotranspiration maps for the Texas High Plains
USDA-ARS?s Scientific Manuscript database
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference ET for the continental U.S. using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of reference ET, which is essential for regional scal...
Greenhouse, Bryan; Dokomajilar, Christian; Hubbard, Alan; Rosenthal, Philip J; Dorsey, Grant
2007-09-01
Antimalarial clinical trials use genotyping techniques to distinguish new infection from recrudescence. In areas of high transmission, the accuracy of genotyping may be compromised due to the high number of infecting parasite strains. We compared the accuracies of genotyping methods, using up to six genotyping markers, to assign outcomes for two large antimalarial trials performed in areas of Africa with different transmission intensities. We then estimated the probability of genotyping misclassification and its effect on trial results. At a moderate-transmission site, three genotyping markers were sufficient to generate accurate estimates of treatment failure. At a high-transmission site, even with six markers, estimates of treatment failure were 20% for amodiaquine plus artesunate and 17% for artemether-lumefantrine, regimens expected to be highly efficacious. Of the observed treatment failures for these two regimens, we estimated that at least 45% and 35%, respectively, were new infections misclassified as recrudescences. Increasing the number of genotyping markers improved the ability to distinguish new infection from recrudescence at a moderate-transmission site, but using six markers appeared inadequate at a high-transmission site. Genotyping-adjusted estimates of treatment failure from high-transmission sites may represent substantial overestimates of the true risk of treatment failure.
High-throughput accurate-wavelength lens-based visible spectrometer.
Bell, Ronald E; Scotti, Filippo
2010-10-01
A scanning visible spectrometer has been prototyped to complement fixed-wavelength transmission grating spectrometers for charge exchange recombination spectroscopy. Fast f/1.8 200 mm commercial lenses are used with a large 2160 mm⁻¹ grating for high throughput. A stepping-motor controlled sine drive positions the grating, which is mounted on a precision rotary table. A high-resolution optical encoder on the grating stage allows the grating angle to be measured with an absolute accuracy of 0.075 arc sec, corresponding to a wavelength error ≤0.005 Å. At this precision, changes in grating groove density due to thermal expansion and variations in the refractive index of air are important. An automated calibration procedure determines all the relevant spectrometer parameters to high accuracy. Changes in bulk grating temperature, atmospheric temperature, and pressure are monitored between the time of calibration and the time of measurement to ensure a persistent wavelength calibration.
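The quoted angle-to-wavelength conversion is easy to check. Assuming a near-Littrow configuration (m·λ = 2d·sin θ; an assumption made only for this estimate), the encoder accuracy propagates to wavelength as dλ = (2d/m)·cos θ·dθ:

    import numpy as np

    d = 1e7 / 2160.0                                  # groove spacing in Angstrom (2160/mm)
    wavelength = 5000.0                               # Angstrom, assumed visible line
    m = 1                                             # diffraction order
    theta = np.arcsin(m * wavelength / (2.0 * d))     # Littrow grating angle
    d_theta = 0.075 / 3600.0 * np.pi / 180.0          # 0.075 arc sec in radians
    d_lambda = (2.0 * d / m) * np.cos(theta) * d_theta
    print("wavelength error: %.4f Angstrom" % d_lambda)   # ~0.003, consistent with <=0.005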
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but also that the model uses few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for the assessment of stability it is most important that a measure contains a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
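A hedged sketch of the recommended measure (not the authors' code): encode each resampling run's feature selection as a 0/1 vector and average the pairwise Pearson correlations, which inherently corrects for chance agreement; the example selections are made up:

    import numpy as np
    from itertools import combinations

    def pearson_stability(selections):
        """selections: list of 0/1 arrays (one per resampling run), same length."""
        pairs = combinations(selections, 2)
        return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

    p = 100                                    # total number of features
    runs = [np.zeros(p) for _ in range(5)]
    for i, r in enumerate(runs):
        r[[0, 1, 2, (3 + i) % p]] = 1.0        # mostly overlapping selections
    print("Pearson stability: %.3f" % pearson_stability(runs))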
High accuracy transit photometry of the planet OGLE-TR-113b with a new deconvolution-based method
NASA Astrophysics Data System (ADS)
Gillon, M.; Pont, F.; Moutou, C.; Bouchy, F.; Courbin, F.; Sohy, S.; Magain, P.
2006-11-01
A high accuracy photometry algorithm is needed to take full advantage of the potential of the transit method for the characterization of exoplanets, especially in deep crowded fields. It has to reduce the negative influence of systematic effects on the photometric accuracy to the lowest possible level. It should also be able to cope with a high level of crowding and with large-scale variations of the spatial resolution from one image to another. A recent deconvolution-based photometry algorithm fulfills all these requirements, and it also increases the resolution of astronomical images, which is an important advantage for the detection of blends and the discrimination of false positives in transit photometry. We made some changes to this algorithm to optimize it for transit photometry and used it to reduce NTT/SUSI2 observations of two transits of OGLE-TR-113b. This reduction has led to two very high precision transit light curves with a low level of systematic residuals, used together with former photometric and spectroscopic measurements to derive new stellar and planetary parameters in excellent agreement with previous ones, but significantly more precise.
Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.
2015-01-01
Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541
Fast RBF OGr for solving PDEs on arbitrary surfaces
NASA Astrophysics Data System (ADS)
Piret, Cécile; Dunn, Jarrett
2016-10-01
The Radial Basis Functions Orthogonal Gradients method (RBF-OGr) was introduced in [1] to discretize differential operators defined on arbitrary manifolds represented only by a point cloud. We take advantage of the meshfree character of RBFs, which gives high accuracy and the flexibility to represent complex geometries in any spatial dimension. A major limitation of the RBF-OGr method was its high computational complexity, which greatly restricted the size of the point cloud. In this paper, we apply the RBF-Finite Difference (RBF-FD) technique to the RBF-OGr method for building sparse differentiation matrices discretizing continuous differential operators such as the Laplace-Beltrami operator. This method can be applied to solving PDEs on arbitrary surfaces embedded in R³. We illustrate the accuracy of our new method by solving the heat equation on the unit sphere.
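To fix ideas, here is a toy version of the RBF-FD step (not tied to the OGr machinery): compute local stencil weights w such that Σ_j w_j f(x_j) approximates the Laplacian of f at a center node, using a Gaussian RBF; production codes typically augment the basis with polynomial terms, omitted here for brevity:

    import numpy as np

    def rbf_fd_laplacian_weights(nodes, center, eps=3.0):
        """nodes: (m, 2) scattered stencil points; center: (2,) evaluation point."""
        r2 = lambda a, b: np.sum((a - b)**2, axis=-1)
        phi = lambda d2: np.exp(-(eps**2) * d2)                       # Gaussian RBF
        # 2-D Laplacian of the Gaussian, as a function of squared distance:
        lap_phi = lambda d2: (4*eps**4*d2 - 4*eps**2) * np.exp(-(eps**2) * d2)
        A = phi(r2(nodes[:, None, :], nodes[None, :, :]))             # interpolation matrix
        b = lap_phi(r2(nodes, center))                                # operator at the center
        return np.linalg.solve(A, b)                                  # stencil weights

    nodes = np.array([[0.0, 0.0], [0.3, 0.0], [-0.25, 0.1],
                      [0.0, 0.3], [0.1, -0.3], [0.25, 0.25]])
    w = rbf_fd_laplacian_weights(nodes, nodes[0])
    print("stencil weights:", w)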
NASA Astrophysics Data System (ADS)
Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing
2017-01-01
This paper outlines a low-cost, user-friendly photogrammetric technique for obtaining digital sequence images of excavation sites with non-metric cameras, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, together with a number of global control points at the excavation site, to reconstruct high-precision three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes that hamper image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual quality and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes, but yields lower accuracy when reconstructing small scenes at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for the accurate location of cultural relics, archaeological excavation, investigation, and site protection planning. The proposed method has comprehensive application value.
Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1991-01-01
A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.
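As a one-dimensional illustration of the third-order upwind-biased interpolation at the heart of such schemes (without the universal limiter, the adaptive stencil expansion, or the multidimensional cross-terms that are the paper's actual subject), consider QUICK-type face values in a conservative update of the linear advection equation; the grid, profile, and Courant number are arbitrary choices:

    import numpy as np

    def rhs(phi, c):
        """Conservative increment for d(phi)/dt = -u d(phi)/dx; c = u*dt/dx > 0."""
        U = np.roll(phi, 1)                      # far-upwind neighbor (periodic BCs)
        A = np.roll(phi, -1)                     # downwind neighbor
        face = (6.0*phi + 3.0*A - U) / 8.0       # third-order upwind-biased face value
        return -c * (face - np.roll(face, 1))    # flux difference per control volume

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    phi = np.exp(-200.0 * (x - 0.3)**2)          # smooth initial profile
    c = 0.2                                      # Courant number
    for _ in range(500):                         # Heun (RK2) keeps the update stable
        phi_star = phi + rhs(phi, c)
        phi = phi + 0.5 * (rhs(phi, c) + rhs(phi_star, c))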
Progress in ion figuring large optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, L.N.
1995-12-31
Ion figuring is an optical fabrication method that provides deterministic surface figure error correction of previously polished surfaces by using a directed, inert, neutralized ion beam to physically sputter material from the optic surface. Considerable process development has been completed and numerous large optical elements have been successfully final-figured using this process. The process has been demonstrated to be highly deterministic, capable of completing complex-shaped optical element configurations in only a few process iterations, and capable of achieving high-quality surface figure accuracies. A review of the neutral ion beam figuring process is provided, along with a discussion of processing results for several large optics. Most notably, the processing of Keck 10 meter telescope primary mirror segments, and the correction of one other large optic for which a convergence ratio greater than 50 was demonstrated during the past year, are discussed. The process has also been demonstrated on various optical materials, including fused silica, ULE, Zerodur, silicon and chemically vapor deposited (CVD) silicon carbide. Where available, results of surface finish changes caused by the ion bombardment are discussed. Most data have shown only limited degradation of the optic surface finish, generally a function of the quality of mechanical polishing prior to ion figuring. Removals of 5 to 10 μm on some materials are acceptable without adversely altering the surface finish specularity.
High-density marker imputation accuracy in sixteen French cattle breeds.
Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal
2013-09-03
Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777,609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Five thousand one hundred and fifty three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend to discard these in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high-density. No improvement was observed when multi-breed imputation was performed. In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations.
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separates the GPS trajectory into segments, finds the shortest path for each segment, and ultimately generates a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm is very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
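The similarity scoring can be sketched directly. Below is a minimal, hypothetical illustration of a longest-common-subsequence score between a trajectory and a candidate path, with both reduced to sequences of discretized cell IDs (the paper's actual scoring details may differ):

    def lcs_length(a, b):
        """Classic O(len(a)*len(b)) dynamic program for the LCS length."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
        return dp[-1][-1]

    def similarity(traj_cells, path_cells):
        # Normalize by the trajectory length so a perfect match scores 1.0.
        return lcs_length(traj_cells, path_cells) / max(len(traj_cells), 1)

    print(similarity(["a", "b", "c", "d"], ["a", "b", "x", "c", "d"]))  # -> 1.0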
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (an order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
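The recurrence idea is easy to demonstrate on a toy equation. The sketch below (not SNAP's implementation) integrates y' = -y: the recurrence y_{k+1} = -y_k/(k+1) generates Taylor coefficients to arbitrary order, allowing very large steps at high accuracy:

    import numpy as np

    def taylor_step(y0, h, order=20):
        """One Taylor step for y' = -y, coefficients built by recurrence."""
        coeffs = [y0]
        for k in range(order):
            coeffs.append(-coeffs[k] / (k + 1))       # y_{k+1} = -y_k / (k+1)
        return sum(c * h**k for k, c in enumerate(coeffs))

    y, t, h = 1.0, 0.0, 2.0                           # very large step, high order
    for _ in range(3):
        y, t = taylor_step(y, h), t + h
    print(y, np.exp(-t))                              # agrees closely with exp(-t)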
Flat-Sky Pseudo-Cls Analysis for Weak Gravitational Lensing
NASA Astrophysics Data System (ADS)
Asgari, Marika; Taylor, Andy; Joachimi, Benjamin; Kitching, Thomas D.
2018-05-01
We investigate the use of estimators of weak lensing power spectra based on a flat-sky implementation of the 'Pseudo-Cl' (PCl) technique, where the masked shear field is transformed without regard for masked regions of sky. This masking mixes power, and 'E'-convergence and 'B'-modes. To study the accuracy of forward-modelling and full-sky power spectrum recovery we consider both large-area survey geometries, and small-scale masking due to stars and a checkerboard model for field-of-view gaps. The power spectrum for the large-area survey geometry is sparsely-sampled and highly oscillatory, which makes modelling problematic. Instead, we derive an overall calibration for large-area mask bias using simulated fields. The effects of small-area star masks can be accurately corrected for, while the checkerboard mask has oscillatory and spiky behaviour which leads to percent biases. Apodisation of the masked fields leads to increased biases and a loss of information. We find that we can construct an unbiased forward-model of the raw PCls, and recover the full-sky convergence power to within a few percent accuracy for both Gaussian and lognormal-distributed shear fields. Propagating this through to cosmological parameters using a Fisher-Matrix formalism, we find we can make unbiased estimates of parameters for surveys up to 1,200 deg² with 30 galaxies per arcmin², beyond which the percent biases become larger than the statistical accuracy. This implies a flat-sky PCl analysis is accurate for current surveys but a Euclid-like survey will require higher accuracy.
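The raw PCl step is conceptually simple. A bare-bones flat-sky sketch follows, omitting the normalization conventions, the E/B decomposition, and the mode-mixing correction that are the paper's actual concern: mask the field, FFT, and bin |F(k)|² in annuli of |k|; the field and mask are stand-ins:

    import numpy as np

    def pseudo_cl(field, mask, nbins=20):
        """Azimuthally binned power spectrum of a masked square field."""
        F = np.fft.fftshift(np.fft.fft2(field * mask))
        power = np.abs(F)**2 / field.size
        n = field.shape[0]
        ky, kx = np.indices(field.shape) - n // 2
        k = np.hypot(kx, ky).ravel()
        edges = np.linspace(1.0, k.max(), nbins + 1)
        idx = np.digitize(k, edges)
        return np.array([power.ravel()[idx == i].mean() for i in range(1, nbins + 1)])

    rng = np.random.default_rng(0)
    gauss = rng.normal(size=(256, 256))              # stand-in Gaussian field
    mask = np.ones((256, 256)); mask[60:80, :] = 0.0 # a masked stripe
    print(pseudo_cl(gauss, mask)[:5])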
A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs.
Wang, Siying; Pohl, Antje; Jaeschke, Timo; Czaplik, Michael; Köny, Marcus; Leonhardt, Steffen; Pohl, Nils
2015-01-01
In this paper an ultra-wideband 80 GHz FMCW radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO₂ measurements as reference. The novel FMCW radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate-frequency and phase evaluation. The concept is validated with a radar system simulation, and experimental measurements are performed with different radar sensor positions and orientations.
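The submillimeter claim follows from simple arithmetic: the 10 GHz sweep gives a coarse range resolution of c/(2B), while the phase of the ~80 GHz carrier resolves displacement as Δd = λ·Δφ/(4π). A worked sketch with assumed numbers:

    import numpy as np

    c = 3.0e8                                   # speed of light, m/s
    B = 10e9                                    # sweep bandwidth, Hz
    f_carrier = 80e9                            # carrier frequency, Hz

    range_resolution = c / (2.0 * B)            # ~15 mm from the beat frequency
    lam = c / f_carrier                         # ~3.75 mm wavelength
    dphi = np.deg2rad(5.0)                      # an assumed 5-degree phase change
    dd = lam * dphi / (4.0 * np.pi)             # two-way path -> factor 4*pi
    print("coarse range resolution: %.1f mm" % (range_resolution * 1e3))
    print("phase-derived displacement: %.0f micrometers" % (dd * 1e6))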
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors, and the influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length, and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects for two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising the trigrams and pentagrams, respectively. In the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high contrast, using a black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Letter sequence recognition accuracy was consistently higher for trigrams compared to pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string-length conditions, the effect was larger for pentagrams and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements.
Cone, Jamie A; Martin, Thomas M; Marcellin-Little, Denis J; Harrysson, Ola L A; Griffith, Emily H
2017-08-01
OBJECTIVE To assess the repeatability and accuracy of polymer replicas of small, medium, and large long bones of small animals fabricated by use of 2 low-end and 2 high-end 3-D printers. SAMPLE Polymer replicas of a cat femur, dog radius, and dog tibia were fabricated in triplicate by use of each of four 3-D printing methods. PROCEDURES 3-D renderings of the 3 bones reconstructed from CT images were prepared, and length, width of the proximal aspect, and width of the distal aspect of each CT image were measured in triplicate. Polymer replicas were fabricated by use of a high-end system that relied on jetting of curable liquid photopolymer, a high-end system that relied on polymer extrusion, a triple-nozzle polymer extrusion low-end system, and a dual-nozzle polymer extrusion low-end system. Polymer replicas were scanned by use of a laser-based coordinate measurement machine. Length, width of the proximal aspect, and width of the distal aspect of the scans of replicas were measured and compared with measurements for the 3-D renderings. RESULTS 129 measurements were collected for 34 replicas (fabrication of 1 large long-bone replica was unsuccessful on each of the 2 low-end printers). Replicas were highly repeatable for all 3-D printers. The 3-D printers overestimated dimensions of large replicas by approximately 1%. CONCLUSIONS AND CLINICAL RELEVANCE Low-end and high-end 3-D printers fabricated CT-derived replicas of bones of small animals with high repeatability. Replicas were slightly larger than the original bones.
High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.
Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok
2016-01-26
The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type uses digital buffers as the row driver. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers. The other type uses a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it requires a large number of operational amplifiers to drive the rows of the sensor array, which severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show a maximum error of only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.
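A quick model of the error source being removed (an illustration, not the paper's analysis): if a digital buffer with on-resistance R_on drives the selected row, the cell current is set by R_sensor + R_on rather than R_sensor alone, so the relative read-out error is roughly R_on/(R_sensor + R_on); the values below are assumptions:

    # Relative read-out error from buffer on-resistance (assumed values).
    r_on = 300.0                              # buffer on-resistance, ohms
    for r_sensor in (1e3, 1e4, 1e5):          # sensor resistances, ohms
        err = r_on / (r_sensor + r_on)
        print("R_sensor=%7.0f ohm -> error %.1f%%" % (r_sensor, 100.0 * err))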
Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers
NASA Astrophysics Data System (ADS)
Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi
2018-03-01
Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better, when compared to post-flyby reconstructions, than that of solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Non-overlap subaperture interferometric testing for large optics
NASA Astrophysics Data System (ADS)
Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian
2017-08-01
It has been shown that the number of subapertures and the amount of overlap have a significant influence on stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining the accuracy of reconstruction. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. Square Zernike polynomials are employed to fit such wavefronts. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment errors, systematic power error, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).
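The wavefront-fitting step can be sketched with ordinary least squares. Below, a few low-order polynomial terms stand in for true square Zernike polynomials (which are an orthogonalized version of such terms over the square aperture); the sampled wavefront is synthetic:

    import numpy as np

    y, x = np.mgrid[-1:1:64j, -1:1:64j]                 # square sampling grid
    basis = np.column_stack([
        np.ones(x.size), x.ravel(), y.ravel(),          # piston, x-tilt, y-tilt
        (2*(x**2 + y**2) - 1).ravel(),                  # defocus-like term
        (x**2 - y**2).ravel(), (2*x*y).ravel()])        # astigmatism terms
    wavefront = 0.3*x + 0.1*(x**2 - y**2) + 0.01*np.random.randn(*x.shape)
    coef, *_ = np.linalg.lstsq(basis, wavefront.ravel(), rcond=None)
    print(np.round(coef, 3))    # recovers approximately [0, 0.3, 0, 0, 0.1, 0]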
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley, with the aim of predicting breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines tested at two locations, each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. The phenotypic measurements considered were seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy, as did predicting across full- and half-sib families. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng
2014-01-01
The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators with a large number of assets, and introduce a high-confidence-level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance whether or not the factors are known, which is slightly smaller than that of the sample-covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
NASA Astrophysics Data System (ADS)
Zhang, Pengsong; Jiang, Shanping; Yang, Linhua; Zhang, Bolun
2018-01-01
In order to meet the requirement of high-precision thermal distortion measurement for a Φ4.2 m deployable satellite mesh antenna in a vacuum and cryogenic environment, a large-scale antenna distortion measurement system was developed in this paper, based on digital close-range photogrammetry and spacecraft space-environment test technology. The antenna distortion measurement system (ADMS) is the first domestically and independently developed thermal distortion measurement system for large antennas, and it solves the problem of non-contact high-precision distortion measurement of large spacecraft structures under vacuum and cryogenic conditions. The measurement accuracy of the ADMS is better than 50 μm/5 m, reaching an internationally advanced level. The experimental results show that the measurement system has great advantages for large structural measurements of spacecraft, as well as broad application prospects in space and other related fields.
NASA Astrophysics Data System (ADS)
Estes, L. D.; Debats, S. R.; Caylor, K. K.; Evans, T. P.; Gower, D.; McRitchie, D.; Searchinger, T.; Thompson, D. R.; Wood, E. F.; Zeng, L.
2016-12-01
In the coming decades, large areas of new cropland will be created to meet the world's rapidly growing food demands. Much of this new cropland will be in sub-Saharan Africa, where food needs will increase most and the area of remaining potential farmland is greatest. If we are to understand the impacts of global change, it is critical to accurately identify Africa's existing croplands and how they are changing. Yet the continent's smallholder-dominated agricultural systems are unusually challenging for remote sensing analyses, making accurate area estimates difficult to obtain, let alone important details related to field size and geometry. Fortunately, the rapidly growing archives of moderate to high-resolution satellite imagery hosted on open servers now offer an unprecedented opportunity to improve landcover maps. We present a system that integrates two critical components needed to capitalize on this opportunity: 1) human image interpretation and 2) machine learning (ML). Human judgment is needed to accurately delineate training sites within noisy imagery and a highly variable cover type, while ML provides the ability to scale and to interpret large feature spaces that defy human comprehension. Because large amounts of training data are needed (a major impediment for analysts), we use a crowdsourcing platform that connects amazon.com's Mechanical Turk service to satellite imagery hosted on open image servers. Workers map visible fields at pre-assigned sites, and are paid according to their mapping accuracy. Initial tests show overall high map accuracy and mapping rates >1800 km²/hour. The ML classifier uses random forests and randomized quasi-exhaustive feature selection, and is highly effective in classifying diverse agricultural types in southern Africa (AUC > 0.9). We connect the ML and crowdsourcing components to make an interactive learning framework. The ML algorithm performs an initial classification using a first batch of crowd-sourced maps, using thresholds of posterior probabilities to segregate sub-images classified with high or low confidence. Workers are then directed to collect new training data in low confidence sub-images, after which classification is repeated and re-assessed, and the entire process iterated until maximum possible accuracy is realized.
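The interactive learning loop described here follows a standard active-learning pattern. A skeleton sketch with scikit-learn, in which a stand-in oracle function replaces the Mechanical Turk step; the function names, estimator settings, and confidence threshold are assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def active_learning_round(X_train, y_train, X_pool, oracle, threshold=0.7):
        """One round: fit, flag low-confidence pool samples, request new labels."""
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X_train, y_train)
        conf = rf.predict_proba(X_pool).max(axis=1)     # per-sample posterior confidence
        uncertain = np.where(conf < threshold)[0]       # low-confidence sub-images
        X_new, y_new = X_pool[uncertain], oracle(uncertain)  # crowd-labeling stand-in
        return np.vstack([X_train, X_new]), np.concatenate([y_train, y_new])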
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is less than 3.5 m CE90 without ground control, which can be used for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, this paper applied two image matching algorithms for DSM extraction: LQM and SGM. Finally, this paper compared the accuracy of the point clouds generated by the two image matching methods against reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs can satisfy the requirements of the Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS error of elevation for bare ground areas is 0.45 m, while for buildings the accuracy almost reaches 1 m.
Reliable positioning in a sparse GPS network, eastern Ontario
NASA Astrophysics Data System (ADS)
Samadi Alinia, H.; Tiampo, K.; Atkinson, G. M.
2013-12-01
Canada hosts two regions that are prone to large earthquakes: western British Columbia, and the St. Lawrence River region in eastern Canada. Although eastern Ontario is not as seismically active as other areas of eastern Canada, such as the Charlevoix/Ottawa Valley seismic zone, it experiences ongoing moderate seismicity. In historic times, potentially damaging events have occurred in New York State (Attica, 1929, M=5.7; Plattsburg, 2002, M=5.0), north-central Ontario (Temiskaming, 1935, M=6.2; North Bay, 2000, M=5.0), eastern Ontario (Cornwall, 1944, M=5.8), Georgian Bay (2005, MN=4.3), and western Quebec (Val-des-Bois, 2010, M=5.0, MN=5.8). In eastern Canada, the analysis of detailed, high-precision measurements of surface deformation is a key component in our efforts to better characterize the associated seismic hazard. The data from precise, continuous GPS stations are necessary to adequately characterize surface velocities, from which patterns and rates of stress accumulation on faults can be estimated (Mazzotti and Adams, 2005; Mazzotti et al., 2005). Monitoring of these displacements requires employing high-accuracy GPS positioning techniques. Detailed strain measurements can determine whether the regional strain everywhere is commensurate with a large event occurring every few hundred years anywhere within this general area or whether large earthquakes are limited to specific areas (Adams and Halchuck, 2003; Mazzotti and Adams, 2005). In many parts of southeastern Ontario and western Québec, GPS stations are distributed quite sparsely, with spacings of approximately 100 km or more. The challenge is to provide accurate solutions for these sparse networks with an approach that is capable of achieving high-accuracy positioning. Here, various reduction techniques are applied to a sparse network installed with the Southern Ontario Seismic Network in eastern Ontario. Recent developments include the implementation of precise point positioning processing on the acquired raw GPS data, based on precise GPS orbit and clock data products with centimeter accuracy computed beforehand. Here, the analysis of 1 Hz GPS data is conducted in order to find the most reliable regional network from eight stations (STCO, TYNO, ACTO, INUQ, IVKQ, KLBO, MATQ and ALGO) that cover the study area in eastern Ontario. The estimated parameters are the total number of ambiguities and of resolved ambiguities, the a posteriori RMS of each baseline, and the coordinates of each station together with their differences from the known coordinates. The positioning accuracy, the corrections and the accuracy of interpolated corrections, and the initialization time required for precise positioning are presented for the various applications.
AVHRR channel selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Mapping land cover of large regions often requires processing of satellite images collected from several time periods at many spectral wavelength channels. However, manipulating and processing large amounts of image data increases the complexity and time, and hence the cost, that it takes to produce a land cover map. Very few studies have evaluated the importance of individual Advanced Very High Resolution Radiometer (AVHRR) channels for discriminating cover types, especially the thermal channels (channels 3, 4 and 5). Studies rarely perform a multi-year analysis to determine the impact of inter-annual variability on the classification results. We evaluated 5 years of AVHRR data using combinations of the original AVHRR spectral channels (1-5) to determine which channels are most important for cover type discrimination, yet stabilize inter-annual variability. Particular attention was placed on the channels in the thermal portion of the spectrum. Fourteen cover types over the entire state of Colorado were evaluated using a supervised classification approach on all two-, three-, four- and five-channel combinations for seven AVHRR biweekly composite datasets covering the entire growing season for each of 5 years. Results show that all three of the major portions of the electromagnetic spectrum represented by the AVHRR sensor are required to discriminate cover types effectively and stabilize inter-annual variability. Of the two-channel combinations, channels 1 (red visible) and 2 (near-infrared) had, by far, the highest average overall accuracy (72.2%), yet the inter-annual classification accuracies were highly variable. Including a thermal channel (channel 4) significantly increased the average overall classification accuracy by 5.5% and stabilized inter-annual variability. Each of the thermal channels gave similar classification accuracies; however, because of the problems in consistently interpreting channel 3 data, either channel 4 or 5 was found to be a more appropriate choice. Substituting the thermal channel with a single elevation layer resulted in equivalent classification accuracies and inter-annual variability.
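The exhaustive channel-combination search described above is easy to express in code. A minimal sketch, assuming placeholder pixel data and a generic classifier standing in for the study's supervised procedure:

```python
# Evaluate a classifier on every 2- to 5-channel subset of AVHRR channels 1-5.
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 5))        # placeholder: pixels x AVHRR channels 1-5
y = rng.integers(0, 14, 1000)    # placeholder: 14 cover-type labels

results = {}
for k in range(2, 6):
    for combo in combinations(range(5), k):
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, list(combo)], y, cv=5).mean()
        results[combo] = acc

best = max(results, key=results.get)
print("best channel subset (0-indexed):", best, "mean accuracy:", results[best])
```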
Gillian, Jeffrey K.; Karl, Jason W.; Elaksher, Ahmed; Duniway, Michael C.
2017-01-01
Structure-from-motion (SfM) photogrammetry from unmanned aerial system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods and at reduced costs and finer spatial resolution compared to airborne laser scanning. Accuracy and precision of high-resolution digital terrain models (DTMs) derived from UAS imagery have been explored in many studies, typically by comparing image coordinates to surveyed check points or LiDAR datasets. In addition to traditional check points, this study compared 5 cm resolution DTMs derived from fixed-wing UAS imagery with a traditional ground-based method of measuring soil surface change called erosion bridges. We assessed accuracy by comparing the elevation values between DTMs and erosion bridges along thirty topographic transects, each 6.1 m long. Comparisons occurred at two points in time (June 2014, February 2015), which enabled us to assess vertical accuracy with 3314 data points and vertical precision (i.e., repeatability) with 1657 data points. We found strong vertical agreement (accuracy) between the methods (RMSE 2.9 and 3.2 cm in June 2014 and February 2015, respectively) and high vertical precision for the DTMs (RMSE 2.8 cm). Our results from comparing SfM-generated DTMs to check points, and the strong agreement with erosion bridge measurements, suggest that repeat UAS imagery and SfM processing could replace erosion bridges for a more synoptic landscape assessment of shifting soil surfaces for some studies. However, while collecting the UAS imagery and generating the SfM DTMs for this study was faster than collecting erosion bridge measurements, technical challenges related to the need for ground control networks and image processing requirements must be addressed before this technique could be applied effectively to large landscapes.
Because Trucks Aren't Bicycles: Orthographic Complexity as an Important Variable in Reading Research
ERIC Educational Resources Information Center
Galletly, Susan A.; Knight, Bruce Allen
2013-01-01
Severe enduring reading- and writing-accuracy difficulties seem a phenomenon largely restricted to nations using complex orthographies, notably Anglophone nations, given English's highly complex orthography (Geva and Siegel, "Read Writ" 12:1-30, 2000; Landerl et al., "Cognition" 63:315-334, 1997; Share, "Psychol Bull"…
A Comparative Study of Teaching Typing Skills on Microcomputers.
ERIC Educational Resources Information Center
Lindsay, Robert M.
A 4-week experimental study was conducted with 105 high school students in 4 introductory typewriting classes of a large urban school in British Columbia during the 1981 spring semester. The purpose of the study was to compare the effectiveness of teaching the skill-building components of typewriting speed and accuracy using either the…
Human Information Processing and Supervisory Control.
1980-05-01
[OCR fragments from the report; recoverable contents entries: Interpretation of information; Sampling strategies; Speed-accuracy tradeoff. The surviving body text notes that the operator is usually highly trained and largely controls the tasks, being allowed to use what strategies he will, and that risk is incurred in ways which can make his search less than optimally effective; these matters of tactics and strategy are discussed below.]
NASA Astrophysics Data System (ADS)
Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.
2014-03-01
A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m-diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.
NASA Astrophysics Data System (ADS)
Khawaja, U. Al; Al-Refai, M.; Shchedrin, Gavriil; Carr, Lincoln D.
2018-06-01
Fractional nonlinear differential equations present an interplay between two common and important effective descriptions used to simplify high dimensional or more complicated theories: nonlinearity and fractional derivatives. These effective descriptions thus appear commonly in physical and mathematical modeling. We present a new series method providing systematic controlled accuracy for solutions of fractional nonlinear differential equations, including the fractional nonlinear Schrödinger equation and the fractional nonlinear diffusion equation. The method relies on spatially iterative use of power series expansions. Our approach permits an arbitrarily large radius of convergence and thus solves the typical divergence problem endemic to power series approaches. In the specific case of the fractional nonlinear Schrödinger equation we find fractional generalizations of cnoidal waves of Jacobi elliptic functions as well as a fractional bright soliton. For the fractional nonlinear diffusion equation we find the combination of fractional and nonlinear effects results in a more strongly localized solution which nevertheless still exhibits power law tails, albeit at a much lower density.
Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar
Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le
2016-01-01
Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, by the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and the radar measurement model are built. Secondly, the fused estimate from each radar is fed to an extended Kalman filter (EKF) to complete the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, thereby achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058
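The first filtering stage is a standard EKF predict/update cycle. A minimal, generic sketch follows; the state layout, matrices, and measurement functions are illustrative assumptions, not the paper's exact motion or measurement models:

```python
# Generic EKF step: x, P = state and covariance; z = measurement;
# F, Q = transition matrix and process noise; h, H_jac = measurement
# function and its Jacobian; R = measurement noise covariance.
import numpy as np

def ekf_step(x, P, z, F, Q, h, H_jac, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    H = H_jac(x_pred)                    # Jacobian of h at the prediction
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```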
NASA Astrophysics Data System (ADS)
Tancredi, U.; Renga, A.; Grassi, M.
2013-05-01
This paper describes a carrier-phase differential GPS approach for real-time relative navigation of LEO satellites flying in formation with large separations. Such applications are indeed characterized by a highly varying number of GPS satellites in common view and by large ionospheric differential errors, which significantly impact relative navigation performance and robustness. To achieve high relative positioning accuracy, a navigation algorithm is proposed which processes double-difference code and carrier measurements on two frequencies, to fully exploit the integer nature of the related ambiguities. Specifically, a closed-loop scheme is proposed in which fixed estimates of the baseline and integer ambiguities, produced by means of a partial integer fixing step, are fed back to an Extended Kalman Filter to improve the float estimate at successive time instants. The approach also benefits from the inclusion in the filter state of the differential ionospheric delay, in terms of the Vertical Total Electron Content of each satellite. The navigation algorithm's performance is tested on actual flight data from the GRACE mission. Results demonstrate the effectiveness of the proposed approach in managing integer unknowns in conjunction with Extended Kalman Filtering, and show that centimeter-level accuracy can be achieved in real time even with large separations.
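For reference, the double-difference carrier-phase observable (in cycles) that such filters process has the textbook form below, for receivers A and B and satellites i and j; this is an assumed standard formulation, not notation reproduced from the paper:

```latex
% lambda: carrier wavelength; rho: geometric range; N: integer ambiguity;
% I: ionospheric delay (opposite sign relative to code measurements);
% receiver and satellite clock errors cancel by construction.
\nabla\Delta\phi^{ij}_{AB}
  = \frac{1}{\lambda}\,\nabla\Delta\rho^{ij}_{AB}
  + \nabla\Delta N^{ij}_{AB}
  - \frac{1}{\lambda}\,\nabla\Delta I^{ij}_{AB}
  + \varepsilon_{\phi}
```

The partial integer fixing step pins the ambiguity terms to integers, while the geometric and ionospheric terms remain in the filter's float state.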
Goulart, Alessandra Carvalho; Oliveira, Ilka Regina Souza de; Alencar, Airlane Pereira; Santos, Maira Solange Camara dos; Santos, Itamar Souza; Martines, Brenda Margatho Ramos; Meireles, Danilo Peron; Martines, João Augusto dos Santos; Misciagna, Giovanni; Benseñor, Isabela Martins; Lotufo, Paulo Andrade
2015-01-01
Noninvasive strategies for evaluating non-alcoholic fatty liver disease (NAFLD) have been investigated over the last few decades. Our aim was to evaluate the diagnostic accuracy of a new hepatic ultrasound score for NAFLD in the ELSA-Brasil study. Diagnostic accuracy study conducted in the ELSA center, in the hospital of a public university. Among the 15,105 participants of the ELSA study who were evaluated for NAFLD, 195 individuals were included in this sub-study. Hepatic ultrasound was performed (deep beam attenuation, hepatorenal index and anteroposterior diameter of the right hepatic lobe) and compared with the hepatic steatosis findings from 64-channel high-resolution computed tomography (CT). We also evaluated two clinical indices relating to NAFLD: the fatty liver index (FLI) and the hepatic steatosis index (HSI). Among the 195 participants, the NAFLD frequency was 34.4%. High body mass index, high waist circumference, diabetes and hypertriglyceridemia were associated with high hepatic attenuation and large anteroposterior diameter of the right hepatic lobe, but not with the hepatorenal index. The hepatic ultrasound score, based on hepatic attenuation and the anteroposterior diameter of the right hepatic lobe, presented the best performance for NAFLD screening at the cutoff point ≥ 1 point: sensitivity, 85.1%; specificity, 73.4%; accuracy, 79.3%; and area under the curve (AUC 0.85; 95% confidence interval, CI: 0.78-0.91). FLI and HSI presented lower performance (AUC 0.76; 95% CI: 0.69-0.83) than the ultrasound score, with CT as the reference. The hepatic ultrasound score based on hepatic attenuation and the anteroposterior diameter of the right hepatic lobe has good reproducibility and accuracy for NAFLD screening.
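The reported screening metrics follow directly from a 2x2 comparison against the CT reference. A minimal sketch with synthetic placeholder data (the score distribution and reference labels are assumptions for illustration only):

```python
# Sensitivity, specificity, accuracy and AUC at the cutoff "score >= 1".
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
score = rng.integers(0, 3, 195)               # placeholder ultrasound score (0-2 points)
nafld = rng.integers(0, 2, 195).astype(bool)  # placeholder CT reference standard

pred = score >= 1                             # the cutoff evaluated in the study
sens = (pred & nafld).sum() / nafld.sum()
spec = (~pred & ~nafld).sum() / (~nafld).sum()
acc = (pred == nafld).mean()
auc = roc_auc_score(nafld, score)
print(f"sens={sens:.3f} spec={spec:.3f} acc={acc:.3f} AUC={auc:.3f}")
```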
Jiemy, William Febry; Heeringa, Peter; Kamps, Jan A A M; van der Laken, Conny J; Slart, Riemer H J A; Brouwer, Elisabeth
2018-05-03
Macrophages are key players in the pathogenesis of large-vessel vasculitis (LVV) and may serve as a target for diagnostic imaging of LVV. The radiotracer 18F-FDG has proven to be useful in the diagnosis of giant cell arteritis (GCA), a form of LVV. Although uptake of 18F-FDG is high in activated macrophages, it is not a specific radiotracer, as its uptake is high in any proliferating cell and other activated immune cells, resulting in high non-specific background radioactivity, especially in aging and atherosclerotic vessels, which dramatically lowers the diagnostic accuracy. Evidence also exists that the sensitivity of 18F-FDG PET drops in patients upon glucocorticoid treatment. Therefore, there is a clinical need for more specific radiotracers in imaging GCA to improve diagnostic accuracy. Numerous clinically established and newly developed macrophage-targeted radiotracers for oncological and inflammatory diseases can potentially be utilized for LVV imaging. These tracers are more target specific and therefore may provide lower background radioactivity, higher diagnostic accuracy and the ability to assess treatment effectiveness. However, current knowledge regarding macrophage subsets in LVV lesions is limited. Further understanding regarding macrophage subsets in vasculitis lesions is needed for better selection of tracers and new targets for tracer development. This review summarizes the development of macrophage-targeted tracers in the last decade and the potential application of macrophage-targeted tracers currently used in other inflammatory diseases in imaging LVV. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
[Advances in the research of application of artificial intelligence in burn field].
Li, H H; Bao, Z X; Liu, X B; Zhu, S H
2018-04-20
Artificial intelligence can, to some extent, automatically learn from and make judgments on large-scale data. Based on databases containing large amounts of burn data and on deep learning, artificial intelligence can assist burn surgeons in evaluating burn surface area, diagnosing burn depth, guiding fluid resuscitation during the shock stage, and predicting prognosis, with high accuracy. With the development of technology, artificial intelligence can provide more accurate information for burn surgeons to make clinical diagnosis and treatment strategies.
Weber, K L; Thallman, R M; Keele, J W; Snelling, W M; Bennett, G L; Smith, T P L; McDaneld, T G; Allan, M F; Van Eenennaam, A L; Kuehn, L A
2012-12-01
Genomic selection involves the assessment of genetic merit through prediction equations that allocate genetic variation with dense marker genotypes. It has the potential to provide accurate breeding values for selection candidates at an early age and facilitate selection for expensive or difficult-to-measure traits. Accurate across-breed prediction would allow genomic selection to be applied on a larger scale in the beef industry, but the limited availability of large populations for the development of prediction equations has delayed researchers from providing genomic predictions that are accurate across multiple beef breeds. In this study, the accuracies of genomic predictions for 6 growth and carcass traits were derived and evaluated using 2 multibreed beef cattle populations: 3,358 crossbred cattle of the U.S. Meat Animal Research Center Germplasm Evaluation Program (USMARC_GPE) and 1,834 high-accuracy bull sires of the 2,000 Bull Project (2000_BULL) representing influential breeds in the U.S. beef cattle industry. The 2000_BULL EPD were deregressed, scaled, and weighted to adjust for between- and within-breed heterogeneous variance before use in training and validation. Molecular breeding values (MBV) trained in each multibreed population and in Angus and Hereford purebred sires of 2000_BULL were derived using the GenSel BayesCπ function (Fernando and Garrick, 2009) and cross-validated. Less than 10% of large-effect loci were shared between prediction equations trained on USMARC_GPE relative to 2000_BULL, although locus effects were moderately to highly correlated for most traits and the traits themselves were highly correlated between populations. MBV prediction accuracy was low and variable between populations. For growth traits, MBV accounted for up to 18% of genetic variation in a pooled, multibreed analysis and up to 28% in single breeds. For carcass traits, MBV explained up to 8% of genetic variation in a pooled, multibreed analysis and up to 42% in single breeds. Prediction equations trained in multibreed populations were more accurate for the Angus and Hereford subpopulations because those were the breeds most highly represented in the training populations. Accuracies were lower for prediction equations trained in a single breed due to the smaller number of records derived from a single breed in the training populations.
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focuses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
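A minimal sketch of the sampling idea, under the convenient assumption (not taken from the paper) that each line deposits n equal-opacity packets, with n scaled to line strength. A Voigt-distributed frequency offset can be drawn as a Gaussian draw plus a Cauchy draw, since the Voigt profile is the convolution of the two:

```python
import numpy as np

def sample_line(grid, nu0, strength, sigma, gamma, n, rng):
    """Deposit n equal packets of one line's opacity onto a frequency grid."""
    nu = nu0 + sigma * rng.standard_normal(n) + gamma * rng.standard_cauchy(n)
    hist, _ = np.histogram(nu, bins=grid)
    # integrated opacity equals the line strength, up to packets falling off the grid
    return strength * hist / n

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 100.0, 10001)       # placeholder wavenumber grid
opacity = np.zeros(grid.size - 1)
for nu0, S in [(30.0, 1.0), (60.0, 1e-3)]:  # one strong line, one weak line
    n = max(10, int(1e4 * S))               # sample count follows line strength
    opacity += sample_line(grid, nu0, S, sigma=0.05, gamma=0.02, n=n, rng=rng)
```

Strong lines receive many packets and hence smooth wings; weak lines receive a handful, which is noisy per line but still preserves their summed continuum contribution.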
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to a learned relevance function. However, the process of learning and ranking is usually done offline, without being integrated with the keyword queries, and the users have to provide a large number of training documents to obtain reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time. PMID:20406504
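RankSVM itself reduces ranking to classification on pairwise feature differences. A minimal sketch of that reduction (placeholder features and relevance levels; RefMed's in-DBMS coupling and parameter selection are not shown):

```python
# Herbrich/Joachims-style reduction: if doc i is preferred over doc j,
# train a linear SVM to score the difference vector (x_i - x_j) positive.
import numpy as np
from sklearn.svm import LinearSVC

def ranksvm_fit(X, relevance):
    diffs, signs = [], []
    for i in range(len(X)):
        for j in range(len(X)):
            if relevance[i] > relevance[j]:   # i preferred over j
                diffs.append(X[i] - X[j])
                signs.append(1)
                diffs.append(X[j] - X[i])     # balanced negative pair
                signs.append(-1)
    clf = LinearSVC(fit_intercept=False)
    clf.fit(np.array(diffs), np.array(signs))
    return clf.coef_.ravel()                  # weight vector w; rank docs by X @ w

w = ranksvm_fit(np.random.rand(30, 5), np.random.randint(0, 3, 30))
```

Multi-level feedback enters naturally here: any pair of documents with different feedback levels yields a training constraint, which is why graded feedback is more informative than binary relevance.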
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
NASA Astrophysics Data System (ADS)
Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2014-11-01
Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and with a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. Differences in the bucking method used also caused some large errors. In addition, tree quality factors strongly affected the bucking accuracy, especially for pulpwood volume.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated, depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
Amen, Daniel G; Raji, Cyrus A; Willeumier, Kristen; Taylor, Derek; Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A
2015-01-01
Background Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Methods Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). Results For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. Conclusions This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. This modality may offer a clinical option for aiding diagnosis and treatment of these conditions. PMID:26132293
Kassamali, Rahil Hussein; Hoey, Edward T D; Ganeshan, Arul; Littlehales, Tracey
2013-01-01
This feasibility study aimed to obtain initial data to assess the performance of a novel noncontrast flow-spoiled magnetic resonance (MR) angiography technique (fresh-blood imaging [FBI]) compared to gadolinium-enhanced MR (Gd-MR) angiography for evaluation of the aorto-iliac and lower extremity arteries. Thirteen patients with suspected lower extremity arterial disease who had undergone Gd-MR angiography and FBI at the same session were randomly included in the study. FBI was performed using an ECG-gated flow-spoiled T2-weighted half-Fourier fast spin-echo sequence. For analysis, the aortoiliac and lower limb arteries were divided into 18 anatomical segments. Two blinded readers individually graded the image quality of FBI and also assessed the presence and severity of any stenotic lesions. A similar analysis was performed for the Gd-MR angiography images. A total of 385 arterial segments were analyzed; 34 segments were excluded due to degraded image quality (1.3% of Gd-MR vs. 8% of FBI-MR angiography images). FBI-MR angiography had comparable accuracy to Gd-MR angiography for assessment of the above-knee vessels, with high kappa statistics (large arteries, 0.91; small arteries, 0.86) and high sensitivity (large arteries, 98.1%; small arteries, 88.6%) and specificity (large arteries, 97.2%; small arteries, 97.6%) using Gd-MR angiography as the gold standard. Initial results show good agreement between FBI-MR angiography and Gd-MR angiography in the diagnosis of peripheral arterial disease, making FBI a potential alternative in patients with renal impairment. FBI showed highest accuracy in the above-knee vessels. Technological refinements are required to improve accuracy for assessing the calf and pedal vessels.
Electrohydraulic Synchronizing Servo Control of a Robotic Arm
NASA Astrophysics Data System (ADS)
Li, S.; Ruan, J.; Pei, X.; Yu, Z. Q.; Zhu, F. M.
2006-10-01
The large robotic arm is usually driven by an electrohydraulic synchronizing control system. The electrohydraulic synchronizing system is designed with a digital valve to eliminate the effect of nonlinearities such as hysteresis, saturation, and finite resolution. The working principle of the electrohydraulic synchronizing control system is introduced and its mathematical model is established through construction of the flow rate equation, continuity equation, force equilibrium equation, etc. To obtain high accuracy, PID control is introduced into the system. Simulation analysis shows that the dynamic performance of the synchronizing system is good and its steady-state error is very small. To validate these results, an experimental set-up of the synchronizing system was built. The experiment makes it clear that the control system has high accuracy. The synchronizing system can be applied widely in practice.
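A minimal discrete PID sketch for such a synchronizing loop, in which the position error between the two actuators drives a correction to one digital valve; the gains and sampling period are illustrative assumptions:

```python
# Discrete PID: u(t) = Kp*e + Ki*integral(e) + Kd*de/dt, sampled at period dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=8.0, ki=2.0, kd=0.05, dt=0.001)
# each control cycle: u = pid.update(x_master - x_slave)  -> command to the valve
```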
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population. PMID:23598917
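A minimal sketch of the kind of Markov predictor evaluated above, with an optional blend of population-level transition probabilities; the blending weight and data layout are assumptions for illustration:

```python
# First-order Markov chain over merchant visits, smoothed with population counts.
from collections import Counter, defaultdict

def fit_transitions(visits):
    counts = defaultdict(Counter)
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current, pop_counts=None, alpha=0.8):
    scores = Counter()
    for loc, c in counts.get(current, Counter()).items():
        scores[loc] += alpha * c / sum(counts[current].values())
    if pop_counts is not None and current in pop_counts:
        total = sum(pop_counts[current].values())
        for loc, c in pop_counts[current].items():
            scores[loc] += (1 - alpha) * c / total
    return scores.most_common(1)[0][0] if scores else None

person = fit_transitions(["cafe", "grocery", "cafe", "gym", "cafe", "grocery"])
print(predict_next(person, "cafe"))  # most likely next merchant for this person
```

The population term matters exactly where the paper finds it helps: when an individual's own history is too sparse to estimate a transition row reliably.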
Simple chained guide trees give high-quality protein multiple sequence alignments
Boyce, Kieran; Sievers, Fabian; Higgins, Desmond G.
2014-01-01
Guide trees are used to decide the order of sequence alignment in the progressive multiple sequence alignment heuristic. These guide trees are often the limiting factor in making large alignments, and considerable effort has been expended over the years in making these quickly or accurately. In this article we show that, at least for protein families with large numbers of sequences that can be benchmarked with known structures, simple chained guide trees give the most accurate alignments. These also happen to be the fastest and simplest guide trees to construct, computationally. Such guide trees have a striking effect on the accuracy of alignments produced by some of the most widely used alignment packages. There is a marked increase in accuracy and a marked decrease in computational time, once the number of sequences goes much above a few hundred. This is true, even if the order of sequences in the guide tree is random. PMID:25002495
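A chained (ladder) guide tree is trivial to construct; a minimal sketch emitting Newick format, with placeholder sequence names:

```python
# Build a chained guide tree: ((((s1,s2),s3),s4)...; each new sequence
# joins the growing chain, fixing the order in which profiles are aligned.
def chained_guide_tree(names):
    tree = names[0]
    for name in names[1:]:
        tree = f"({tree},{name})"
    return tree + ";"

print(chained_guide_tree(["s1", "s2", "s3", "s4"]))  # (((s1,s2),s3),s4);
```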
Testing the accuracy of timing reports in visual timing tasks with a consumer-grade digital camera.
Smyth, Rachael E; Oram Cardy, Janis; Purcell, David
2017-06-01
This study tested the accuracy of a visual timing task using a readily available and relatively inexpensive consumer-grade digital camera. A visual inspection time task was recorded in short high-speed video clips, and the timing reported by the task's program was compared to the timing recorded in the video clips. Discrepancies between the two timing reports were investigated further and, based on the display refresh rate, a decision was made as to whether each discrepancy was large enough to affect the results reported by the task. In this particular study, the timing errors were not large enough to impact the results. The procedure presented in this article offers an alternative method for performing a timing test, which uses readily available hardware and can be used to test the timing of any software program on any operating system and display.
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r²), and the consistency of SNP gametic phases was correlated using the signed square root of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R². Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined. High concordance rate (0.968-0.995) and allelic R² (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
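A minimal sketch of the LD statistic, computing the signed r between two SNPs from allele dosages (dosage correlation is used here as a common stand-in for haplotype-based r; the genotype matrix is a placeholder):

```python
# Pairwise LD: r is the correlation of 0/1/2 allele dosages at two loci;
# r**2 is the LD measure, and the sign of r is what phase-consistency
# comparisons across breeds rely on.
import numpy as np

def ld_r(genotypes, i, j):
    g = np.asarray(genotypes, dtype=float)
    return np.corrcoef(g[:, i], g[:, j])[0, 1]

G = np.random.randint(0, 3, size=(500, 2))  # placeholder: 500 animals, 2 SNPs
r = ld_r(G, 0, 1)
print("r =", r, " r^2 =", r**2)
```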
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American states (Colorado, Michigan, Nevada, Utah) and one Italian region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphologies, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSM accuracy has been evaluated through computation of standard statistical parameters, both at global scale (considering the whole state/region) and as a function of the terrain morphology using several slope classes. The geometric accuracy, in terms of standard deviation and NMAD, ranges for SRTM from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the analysis shows a better accuracy for SRTM in flat areas, whereas the ASTER GDEM is more reliable in steep areas, where the slopes increase. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
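The robust statistics used here are short to compute. A minimal sketch of NMAD alongside the standard measures, on placeholder elevation differences:

```python
# NMAD = 1.4826 * median(|dh - median(dh)|): a median-based stand-in for the
# standard deviation that resists the outliers common in DSM differencing.
import numpy as np

def dsm_stats(dh):
    """dh = DSM elevation minus reference elevation, per pixel."""
    dh = np.asarray(dh, dtype=float)
    nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))
    return {"mean": dh.mean(), "std": dh.std(), "NMAD": nmad,
            "RMSE": np.sqrt(np.mean(dh**2))}

# heavy-tailed synthetic errors: std inflates, NMAD stays close to the bulk spread
print(dsm_stats(np.random.standard_t(df=3, size=10000) * 5))
```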
Using pan-sharpened high resolution satellite data to improve impervious surfaces estimation
NASA Astrophysics Data System (ADS)
Xu, Ru; Zhang, Hongsheng; Wang, Ting; Lin, Hui
2017-05-01
Impervious surface is an important environmental and socio-economic indicator for numerous urban studies. While a large number of studies have been conducted to estimate the area and distribution of impervious surface from satellite data, the accuracy of impervious surface estimation (ISE) is insufficient due to the high diversity of urban land cover types. This study evaluated the use of the panchromatic (PAN) data in very high resolution satellite imagery for improving the accuracy of ISE with various pan-sharpening approaches, with a further comprehensive analysis of the scale effects. Three benchmark pan-sharpening approaches, Gram-Schmidt (GS), PANSHARP and principal component analysis (PCA), were applied to WorldView-2 imagery in three spots of Hong Kong. On-screen digitization was carried out based on Google Map and the results were treated as the reference impervious surfaces. The reference impervious surfaces and the ISE results were then re-scaled to various spatial resolutions to obtain the percentage of impervious surface. The correlation coefficient (CC) and root mean square error (RMSE) were adopted as the quantitative accuracy indicators. The accuracy differences among the three research areas were further explained by the average local variance (ALV), which was used for landscape pattern analysis. The experimental results suggested that 1) the three research regions have different landscape patterns; 2) the ISE accuracy from pan-sharpened data was better than the ISE from the original multispectral (MS) data; and 3) this improvement has noticeable scale effects at various resolutions. The improvement was reduced slightly as the resolution became coarser.
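Of the three benchmark methods, PCA pan-sharpening is the simplest to sketch: replace the first principal component of the MS bands with a variance-matched PAN band, then invert the transform. The arrays below are placeholders; real data would first require resampling the MS bands to the PAN grid:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_pansharpen(ms, pan):
    """ms: (H, W, bands) multispectral; pan: (H, W) panchromatic, same grid."""
    h, w, b = ms.shape
    pca = PCA(n_components=b)
    pcs = pca.fit_transform(ms.reshape(-1, b))
    pan_flat = pan.reshape(-1)
    pc1 = pcs[:, 0]
    # match PAN to PC1's mean and standard deviation before substitution
    pcs[:, 0] = (pan_flat - pan_flat.mean()) / pan_flat.std() * pc1.std() + pc1.mean()
    return pca.inverse_transform(pcs).reshape(h, w, b)

sharp = pca_pansharpen(np.random.rand(64, 64, 4), np.random.rand(64, 64))
```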
Joint genomic evaluation of French dairy cattle breeds using multiple-trait models.
Karoui, Sofiene; Carabaño, María Jesús; Díaz, Clara; Legarra, Andrés
2012-12-07
Using a multi-breed reference population might be a way of increasing the accuracy of genomic breeding values in small breeds. Models involving mixed-breed data do not take into account the fact that marker effects may differ among breeds. This study was aimed at investigating the impact on accuracy of increasing the number of genotyped candidates in the training set by using a multi-breed reference population, in contrast to single-breed genomic evaluations. Three traits (milk production, fat content and female fertility) were analyzed by genomic mixed linear models and Bayesian methodology. Three breeds of French dairy cattle were used: Holstein, Montbéliarde and Normande, with 2976, 950 and 970 bulls in the training population, respectively, and 964, 222 and 248 bulls in the validation population, respectively. All animals were genotyped with the Illumina Bovine SNP50 array. Accuracy of genomic breeding values was evaluated under three scenarios for the correlation of genomic breeding values between breeds (rg): uncorrelated (1), rg = 0; estimated rg (2); high, rg = 0.95 (3). Accuracy and bias of predictions obtained in the validation population with the multi-breed training set were assessed by the coefficient of determination (R²) and by the regression coefficient of daughter yield deviations of validation bulls on their predicted genomic breeding values, respectively. The genetic variation captured by the markers for each trait was similar to that estimated for routine pedigree-based genetic evaluation. Posterior means for rg ranged from -0.01 for fertility between Montbéliarde and Normande to 0.79 for milk yield between Montbéliarde and Holstein. Differences in R² between the three scenarios were notable only for fat content in the Montbéliarde breed: from 0.27 in scenario (1) to 0.33 in scenarios (2) and (3). Accuracies for fertility were lower than for other traits. Using a multi-breed reference population resulted in small or no increases in accuracy. Only the breed with a small data set and large genetic correlation with the breed with a large data set showed increased accuracy for the traits with moderate (milk) to high (fat content) heritability. No benefit was observed for fertility, a lowly heritable trait. PMID:23216664
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
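The core idea of "multiplication by digital convolution" can be shown numerically: when two numbers are encoded as digit sequences, their product is the convolution of those sequences followed by carry resolution. A minimal sketch, with ordinary NumPy standing in for the optical hardware:

    import numpy as np

    def digits(n, base=10):
        """Digit sequence of n, least-significant digit first."""
        out = []
        while n:
            out.append(n % base)
            n //= base
        return out or [0]

    def multiply_by_convolution(a, b, base=10):
        """Product via convolution of digit sequences; carries are resolved
        by weighting each convolution coefficient with its place value."""
        conv = np.convolve(digits(a, base), digits(b, base))
        return sum(int(c) * base ** k for k, c in enumerate(conv))

    assert multiply_by_convolution(1234, 5678) == 1234 * 5678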
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, potentially allowing for large-scale EEG experiments in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of the more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieu, Richard
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5-minute topographic map, which was photointerpreted, digitized and rasterized. For the Taylor's Island map, comparing the MSS and TM three-band (2, 3, 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1% correct, due primarily to small improvements of 1% and 3% in areas such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories which cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
Gravity compensation in a Strapdown Inertial Navigation System to improve the attitude accuracy
NASA Astrophysics Data System (ADS)
Zhu, Jing; Wang, Jun; Wang, Xingshu; Yang, Shuai
2017-10-01
Attitude errors in a strapdown inertial navigation system due to gravity disturbances and system noises can be relatively large, although they are bound within the Schuler and the Earth rotation periods. The principal objective of the investigation is to determine to what extent accurate gravity data can improve the attitude accuracy. The ways in which gravity disturbances affect the attitude were analyzed and compared with system noises, using both the analytic solution and simulation. Gravity disturbances affect the attitude accuracy by introducing an initial attitude error and an equivalent accelerometer bias. With the development of high-precision inertial devices and the application of rotation modulation technology, gravity disturbances can no longer be neglected. Gravity compensation was performed using the EGM2008 model, and simulations with and without accurate gravity compensation under varying navigation conditions were carried out. The results show that gravity compensation evidently improves the horizontal components of attitude accuracy, while the yaw angle is badly affected by the uncompensated gyro bias in the vertical channel.
Accurate force field for molybdenum by machine learning large materials data
NASA Astrophysics Data System (ADS)
Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping
2017-09-01
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
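The differential-evolution step mentioned in the abstract can be illustrated on a toy problem: tuning a single regularization hyperparameter so that validation error is minimized, with ridge regression standing in for the actual SNAP fit. The data, bounds, and objective below are invented for the sketch.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=200)
    Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

    def val_error(params):
        """Validation MSE of a ridge fit with regularization 10**params[0]."""
        alpha = 10.0 ** params[0]
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(20), Xtr.T @ ytr)
        return np.mean((Xva @ w - yva) ** 2)

    best = differential_evolution(val_error, bounds=[(-6.0, 2.0)], seed=0)
    print(best.x, best.fun)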
Acoustic localization at large scales: a promising method for grey wolf monitoring.
Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle
2018-01-01
The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with similar acoustic properties to howls. This sound was broadcast at several sites. Then, localization estimates and the accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, thereby demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy decreased to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of using acoustic methods to localize wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and so will contribute to enhance conservation and management plans.
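A minimal sketch of the kind of localization involved, assuming time-difference-of-arrival (TDOA) multilateration over four or more recorders; the coordinates and timings are invented, and the paper's actual solver is not reproduced here.

    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0  # speed of sound in air, m/s

    def localize(mics, arrival_times):
        """Estimate a 2-D source position from arrival times at >= 4 microphones."""
        tdoa = arrival_times - arrival_times[0]
        def residuals(p):
            d = np.linalg.norm(mics - p, axis=1)
            return (d - d[0]) / C - tdoa
        return least_squares(residuals, x0=mics.mean(axis=0)).x

    mics = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
    source = np.array([420.0, 610.0])
    times = np.linalg.norm(mics - source, axis=1) / C
    print(localize(mics, times))  # recovers approximately [420, 610]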
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
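For reference, the Gaussian maximum-likelihood decision rule named above assigns each pixel to the class whose multivariate normal model gives the highest log-likelihood. A compact sketch over hypothetical per-class training pixels (not the Image-100 implementation):

    import numpy as np

    def train(samples_by_class):
        """samples_by_class: class name -> (n_pixels, n_bands) training array."""
        stats = {}
        for c, X in samples_by_class.items():
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            stats[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return stats

    def classify(x, stats):
        """Assign pixel x to the class with maximum Gaussian log-likelihood."""
        def loglik(mu, icov, logdet):
            d = x - mu
            return -0.5 * (d @ icov @ d + logdet)
        return max(stats, key=lambda c: loglik(*stats[c]))

    rng = np.random.default_rng(1)
    stats = train({"wheat": rng.normal(0, 1, (50, 4)),
                   "soil": rng.normal(3, 1, (50, 4))})
    print(classify(np.full(4, 2.8), stats))  # -> "soil"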
NASA Astrophysics Data System (ADS)
Shi, C.; Gebert, F.; Gorges, C.; Kaufmann, S.; Nörtershäuser, W.; Sahoo, B. K.; Surzhykov, A.; Yerokhin, V. A.; Berengut, J. C.; Wolf, F.; Heip, J. C.; Schmidt, P. O.
2017-01-01
We measured the isotope shift in the ^2S_{1/2} → ^2P_{3/2} (D2) transition in singly ionized calcium ions using photon recoil spectroscopy. The high accuracy of the technique enables us to compare the difference between the isotope shifts of this transition to the previously measured isotopic shifts of the ^2S_{1/2} → ^2P_{1/2} (D1) line. This so-called splitting isotope shift is extracted and exhibits a clear signature of field shift contributions. From the data, we were able to extract the small difference of the field shift coefficient and mass shifts between the two transitions with high accuracy. This J-dependence is of relativistic origin and can be used to benchmark atomic structure calculations. As a first step, we use several ab initio atomic structure calculation methods to provide more accurate values for the field shift constants and their ratio. Remarkably, the high-accuracy value for the ratio of the field shift constants extracted from the experimental data is larger than all available theoretical predictions.
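For orientation, the standard decomposition behind these quantities (textbook form, not taken from the paper) writes the isotope shift of a transition between isotopes A and A' as a mass-shift term plus a field-shift term, and the splitting isotope shift as the difference between the D2 and D1 lines:

    \delta\nu^{AA'} = K\,\mu^{AA'} + F\,\delta\langle r^2\rangle^{AA'},
    \qquad \mu^{AA'} = \frac{1}{m_A} - \frac{1}{m_{A'}}
    % splitting isotope shift: difference between the D2 and D1 transitions
    \delta\nu^{AA'}_{\mathrm{SIS}}
      = \delta\nu^{AA'}_{D2} - \delta\nu^{AA'}_{D1}
      = (K_{D2} - K_{D1})\,\mu^{AA'}
      + (F_{D2} - F_{D1})\,\delta\langle r^2\rangle^{AA'}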
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and the performance of remote sensors, high-resolution satellites are continuously launched by countries around the world. Due to its high efficiency, large coverage and freedom from spatial regulation, satellite imagery has become one of the important means of acquiring geospatial information. This paper explores geometric processing using satellite imagery without ground control points (GCPs). The outcome of spatial triangulation is introduced for geo-positioning as repeated observation. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with the repeated observation. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observation without GCPs is superior to conventional forward intersection and even approximates that of conventional block adjustment with GCPs. The conclusion is drawn that taking the existing oriented imagery as repeated observation enhances the effective utilization of previous spatial triangulation results, enabling repeated observation to improve accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with the repeated observation will be carried out in follow-up research.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving the accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
Capers, Patrice L.; Brown, Andrew W.; Dawson, John A.; Allison, David B.
2015-01-01
Background: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. Objective: To evaluate the use of double sampling combined with multiple imputation (DS + MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple consolidated standards of reporting trials guidelines for titles and abstracts. Methods: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT, human, abstract available, and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. Results: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS + MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050–1.174 vs. DS + MI 1.082–1.151). As evidence of improved accuracy, DS + MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. Conclusion: Our results support our hypothesis that DS + MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of literature. PMID:25988135
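A toy numerical sketch of the double-sampling idea, with invented data; it is not the authors' imputation model, which also conditioned on study covariates. Cheap noisy labels are observed for the full corpus, rigorous labels only for a subsample, and the subsample calibrates the imputation.

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_sub, m_draws = 10_000, 500, 20
    truth = rng.random(n) < 0.6                           # unobserved ground truth
    cheap = np.where(rng.random(n) < 0.9, truth, ~truth)  # RLOTHI heuristic labels
    sub = rng.choice(n, n_sub, replace=False)
    rigorous = truth[sub]                                 # RHITLO ratings, subsample only

    # calibrate P(rigorous label | cheap label) on the subsample
    p_given = {c: rigorous[cheap[sub] == c].mean() for c in (False, True)}

    estimates = []
    for _ in range(m_draws):                              # multiple imputation draws
        p = np.where(cheap, p_given[True], p_given[False])
        imputed = rng.random(n) < p
        imputed[sub] = rigorous                           # keep observed values
        estimates.append(imputed.mean())
    print(np.mean(estimates), np.std(estimates))          # pooled estimate and spread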
Illusory expectations can affect retrieval-monitoring accuracy.
McDonough, Ian M; Gallo, David A
2012-03-01
The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process.
The utility of low-density genotyping for imputation in the Thoroughbred horse
2014-01-01
Background Despite the dramatic reduction in the cost of high-density genotyping that has occurred over the last decade, it remains one of the limiting factors for obtaining the large datasets required for genomic studies of disease in the horse. In this study, we investigated the potential for low-density genotyping and subsequent imputation to address this problem. Results Using the haplotype phasing and imputation program, BEAGLE, it is possible to impute genotypes from low- to high-density (50K) in the Thoroughbred horse with reasonable to high accuracy. Analysis of the sources of variation in imputation accuracy revealed dependence both on the minor allele frequency of the single nucleotide polymorphisms (SNPs) being imputed and on the underlying linkage disequilibrium structure. Whereas equidistant spacing of the SNPs on the low-density panel worked well, optimising SNP selection to increase their minor allele frequency was advantageous, even when the panel was subsequently used in a population of different geographical origin. Replacing base pair position with linkage disequilibrium map distance reduced the variation in imputation accuracy across SNPs. Whereas a 1K SNP panel was generally sufficient to ensure that more than 80% of genotypes were correctly imputed, other studies suggest that a 2K to 3K panel is more efficient to minimize the subsequent loss of accuracy in genomic prediction analyses. The relationship between accuracy and genotyping costs for the different low-density panels, suggests that a 2K SNP panel would represent good value for money. Conclusions Low-density genotyping with a 2K SNP panel followed by imputation provides a compromise between cost and accuracy that could promote more widespread genotyping, and hence the use of genomic information in horses. In addition to offering a low cost alternative to high-density genotyping, imputation provides a means to combine datasets from different genotyping platforms, which is becoming necessary since researchers are starting to use the recently developed equine 70K SNP chip. However, more work is needed to evaluate the impact of between-breed differences on imputation accuracy. PMID:24495673
Error-proneness as a handicap signal.
De Jaegher, Kris
2003-09-21
This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to assure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal. The total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap of sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this causes an indirect handicap of sending a high-quality signal that still stops the low-quality signaller from sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that a signaller who does not detect a signal adopts a response that is bad for the signaller. In both our models, we thus obtain the result that signal accuracy should not lie above a certain level in order for honest signalling to be possible. Moreover, we show that the maximal accuracy that can be achieved is higher the lower the degree of conflict between signaller and receiver. We also show that it may be the conditions for honest signalling that constrain signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Large high index particle holograms have near degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
High-Accuracy Measurements of the Centre of Gravity of Avalanches in Proportional Chambers
DOE R&D Accomplishments Database
Charpak, G.; Jeavons, A.; Sauli, F.; Stubbs, R.
1973-09-24
In a multiwire proportional chamber the avalanches occur close to the anode wires. The motion of the positive ions in the large electric fields at the vicinity of the wires induces fast-rising positive pulses on the surrounding electrodes. Different methods have been developed in order to determine the position of the centre of the avalanches. In the method we describe, the centre of gravity of the pulse distribution is measured directly. It seems to lead to an accuracy which is limited only by the stability of the spatial distribution of the avalanches generated by the process being measured.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradel, Lauren; Endert, Alexander; Koch, Kristen
2013-08-01
Large, high-resolution vertical displays carry the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional textual intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the space management strategies of users partitioned by type of tool philosophy followed (visualization- or text-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with information on the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between the pairs of users. Finally, we offer design suggestions for building future co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of a Hamiltonian matrix used in calculations of electronic states. In the electronic states calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure of deriving a reduced Hamiltonian constructed using a small number of low-energy bases by renormalizing high-energy bases. We demonstrate numerically that the significant speedup of eigenstates evaluation is achieved without losing accuracy.
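The renormalization of high-energy bases described above is in the spirit of Löwdin partitioning (an assumption for this sketch; the paper's exact scheme may differ), where the high-energy block is folded into an energy-dependent effective Hamiltonian on the low-energy block:

    import numpy as np

    def downfold(H, low_idx, energy):
        """Effective Hamiltonian on the low-energy bases:
        H_eff(E) = H_ll + H_lh (E - H_hh)^{-1} H_hl  (Loewdin partitioning)."""
        n = H.shape[0]
        high_idx = np.setdiff1d(np.arange(n), low_idx)
        Hll = H[np.ix_(low_idx, low_idx)]
        Hlh = H[np.ix_(low_idx, high_idx)]
        Hhl = H[np.ix_(high_idx, low_idx)]
        Hhh = H[np.ix_(high_idx, high_idx)]
        shifted = energy * np.eye(len(high_idx)) - Hhh
        return Hll + Hlh @ np.linalg.solve(shifted, Hhl)

    # toy check on a random symmetric matrix, keeping the 4 lowest bases
    rng = np.random.default_rng(2)
    A = rng.normal(size=(10, 10))
    H = (A + A.T) / 2 + np.diag(np.arange(10.0))
    print(downfold(H, np.arange(4), energy=0.0).shape)  # (4, 4)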
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
2004-01-01
A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.
Forest type mapping of the Interior West
Bonnie Ruefenacht; Gretchen G. Moisen; Jock A. Blackard
2004-01-01
This paper develops techniques for the mapping of forest types in Arizona, New Mexico, and Wyoming. The methods involve regression-tree modeling using a variety of remote sensing and GIS layers along with Forest Inventory Analysis (FIA) point data. Regression-tree modeling is a fast and efficient technique of estimating variables for large data sets with high accuracy...
iXora: exact haplotype inferencing and trait association.
Utro, Filippo; Haiminen, Niina; Livingstone, Donald; Cornejo, Omar E; Royaert, Stefan; Schnell, Raymond J; Motamayor, Juan Carlos; Kuhn, David N; Parida, Laxmi
2013-06-06
We address the task of extracting accurate haplotypes from genotype data of individuals of large F1 populations for mapping studies. While methods for inferring parental haplotype assignments on large F1 populations exist in theory, these approaches do not work in practice at high levels of accuracy. We have designed iXora (Identifying crossovers and recombining alleles), a robust method for extracting reliable haplotypes of a mapping population, as well as parental haplotypes, that runs in linear time. Each allele in the progeny is assigned not just to a parent, but more precisely to a haplotype inherited from the parent. iXora shows an improvement of at least 15% in accuracy over similar systems in literature. Furthermore, iXora provides an easy-to-use, comprehensive environment for association studies and hypothesis checking in populations of related individuals. iXora provides detailed resolution in parental inheritance, along with the capability of handling very large populations, which allows for accurate haplotype extraction and trait association. iXora is available for non-commercial use from http://researcher.ibm.com/project/3430.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
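A minimal PyTorch sketch of the "traffic as images" idea; the layer sizes and shapes below are invented, and the paper's architecture details are not reproduced. The time-space speed matrix enters as a one-channel image and the head regresses network-wide speeds.

    import torch
    import torch.nn as nn

    class TrafficCNN(nn.Module):
        """Toy CNN over a time-space speed matrix (1 channel: times x segments)."""
        def __init__(self, n_segments, n_times, horizon_out):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * (n_times // 4) * (n_segments // 4),
                                  horizon_out * n_segments)

        def forward(self, x):            # x: (batch, 1, n_times, n_segments)
            z = self.features(x).flatten(1)
            return self.head(z)          # predicted speeds, flattened

    model = TrafficCNN(n_segments=64, n_times=32, horizon_out=1)
    out = model(torch.randn(8, 1, 32, 64))  # -> shape (8, 64)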
Image Tiling for Profiling Large Objects
NASA Technical Reports Server (NTRS)
Venkataraman, Ajit; Schock, Harold; Mercer, Carolyn R.
1992-01-01
Three dimensional surface measurements of large objects are required in a variety of industrial processes. The nature of these measurements is changing as optical instruments are beginning to replace conventional contact probes scanned over the objects. A common characteristic of the optical surface profilers is the trade-off between measurement accuracy and field of view. In order to measure a large object with high accuracy, multiple views are required. An accurate transformation between the different views is needed to bring about their registration. In this paper, we demonstrate how the transformation parameters can be obtained precisely by choosing control points which lie in the overlapping regions of the images. A good starting point for the transformation parameters is obtained by having a knowledge of the scanner position. The selection of the control points is independent of the object geometry. By successively recording multiple views and obtaining transformations with respect to a single coordinate system, a complete physical model of an object can be obtained. Since all data are in the same coordinate system, they can thus be used for building automatic models of free form surfaces.
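One standard way to obtain the view-to-view transformation from matched control points in the overlap (a sketch of the general technique, not necessarily the paper's specific algorithm) is a least-squares rigid fit, e.g. the Kabsch/Procrustes solution:

    import numpy as np

    def rigid_transform(P, Q):
        """Rotation R and translation t minimizing ||(R P_i + t) - Q_i||^2
        for matched 3-D control points P, Q of shape (n, 3)."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t

    # quick self-check with a known rotation about z and an offset
    rng = np.random.default_rng(3)
    P = rng.normal(size=(20, 3))
    theta = 0.3
    R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    Q = P @ R0.T + np.array([1.0, 2.0, 3.0])
    R, t = rigid_transform(P, Q)
    print(np.allclose(R, R0), np.allclose(t, [1.0, 2.0, 3.0]))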
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ¹⁷O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., to differentiate equilibrium from kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ¹⁷O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging-time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ²H, δ¹⁸O and δ¹⁷O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging-time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ²H, δ¹⁸O and δ¹⁷O measurements were high. Across all vapor concentrations, the accuracy of δ²H, δ¹⁸O and δ¹⁷O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed under moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although the effect was not as large as the sensitivity to concentration. The precision was much less sensitive to averaging-time than to the concentration and delta range. The accuracy and precision performance of the T-WVIA depend on concentration but depend less on the delta value and averaging-time. The instrument can simultaneously and continuously measure δ²H, δ¹⁸O and δ¹⁷O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle
2018-01-01
Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of eventual technical drifts. RP values greater than 10^6 and mass measurement accuracy better than 1 ppm were obtained using broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative ion mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell for performing large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.
Remote Sensing Applications with High Reliability in Changjiang Water Resource Management
NASA Astrophysics Data System (ADS)
Ma, L.; Gao, S.; Yang, A.
2018-04-01
Remote sensing technology has been widely used in many fields, but most applications cannot obtain information with high reliability and high accuracy at large scale, especially applications using automatic interpretation methods. We have designed an application-oriented technology system (PIR) composed of a series of accurate interpretation techniques, which can achieve over 85% correctness in water resource management from the viewpoints of photogrammetry and expert knowledge. The techniques comprise spatial positioning techniques from the viewpoint of photogrammetry, feature interpretation techniques from the viewpoint of expert knowledge, and rationality analysis techniques from the viewpoint of data mining. Each interpreted polygon is accurate enough to be applied to accuracy-sensitive projects, such as the Three Gorges Project and the South-to-North Water Diversion Project. In this paper, we present several remote sensing applications with high reliability in Changjiang Water Resource Management, including water pollution investigation, illegal construction inspection, and water conservation monitoring.
FPGA-Based Smart Sensor for Online Displacement Measurements Using a Heterodyne Interferometer
Vera-Salas, Luis Alberto; Moreno-Tapia, Sandra Veronica; Garcia-Perez, Arturo; de Jesus Romero-Troncoso, Rene; Osornio-Rios, Roque Alfredo; Serroukh, Ibrahim; Cabal-Yepez, Eduardo
2011-01-01
The measurement of small displacements on the nanometric scale demands metrological systems of high accuracy and precision. In this context, interferometer-based displacement measurements have become the main tools used for traceable dimensional metrology. The different industrial applications in which small displacement measurements are employed requires the use of online measurements, high speed processes, open architecture control systems, as well as good adaptability to specific process conditions. The main contribution of this work is the development of a smart sensor for large displacement measurement based on phase measurement which achieves high accuracy and resolution, designed to be used with a commercial heterodyne interferometer. The system is based on a low-cost Field Programmable Gate Array (FPGA) allowing the integration of several functions in a single portable device. This system is optimal for high speed applications where online measurement is needed and the reconfigurability feature allows the addition of different modules for error compensation, as might be required by a specific application. PMID:22164040
Further experiments for mean velocity profile of pipe flow at high Reynolds number
NASA Astrophysics Data System (ADS)
Furuichi, N.; Terao, Y.; Wada, Y.; Tsuji, Y.
2018-05-01
This paper reports further experimental results obtained in a high Reynolds number actual flow facility in Japan. The experiments were performed in a pipe flow with water, and the friction Reynolds number was varied up to Reτ = 5.3 × 10^4. This high Reynolds number was achieved by using water as the working fluid and adopting a large-diameter pipe (387 mm), while controlling the flow rate and temperature with high accuracy and precision. The streamwise velocity was measured by laser Doppler velocimetry close to the wall, with particular focus on the mean velocity profile, the so-called log-law profile U+ = (1/κ) ln(y+) + B. After careful verification of the mean velocity profiles in terms of the flow rate accuracy, and an evaluation of the consistency of the present results with previous measurements in a smaller pipe (100 mm), it was found that the value of κ asymptotically approaches a constant value of κ = 0.384.
NASA Astrophysics Data System (ADS)
Liu, Chao; Yang, Guigeng; Zhang, Yiqun
2015-01-01
The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs considering the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electrical field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic field finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying uniform prestress to the membrane design shape and optimizing the voltages, in which the optimal voltages are computed by a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.
Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung
2015-01-01
Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands. PMID:26184214
Diagnostic Accuracy of Obstructive Airway Adult Test for Diagnosis of Obstructive Sleep Apnea.
Gasparini, Giulio; Vicini, Claudio; De Benedetto, Michele; Salamanca, Fabrizio; Sorrenti, Giovanni; Romandini, Mario; Bosi, Marcello; Saponaro, Gianmarco; Foresta, Enrico; Laforì, Andreina; Meccariello, Giuseppe; Bianchi, Alessandro; Toraldo, Domenico Maurizio; Campanini, Aldo; Montevecchi, Filippo; Rizzotto, Grazia; Cervelli, Daniele; Moro, Alessandro; Arigliani, Michele; Gobbi, Riccardo; Pelo, Sandro
2015-01-01
The gold standard for the diagnosis of Obstructive Sleep Apnea (OSA) is polysomnography, whose access is however reduced by costs and limited availability, so that additional diagnostic tests are needed. To analyze the diagnostic accuracy of the Obstructive Airway Adult Test (OAAT) compared to polysomnography for the diagnosis of OSA in adult patients. Ninety patients affected by OSA verified with polysomnography (AHI ≥ 5) and ten healthy patients, randomly selected, were included and all were interviewed by one blind examiner with OAAT questions. The Spearman rho, evaluated to measure the correlation between OAAT and polysomnography, was 0.72 (p < 0.01). The area under the ROC curve (95% CI) was the parameter to evaluate the accuracy of the OAAT: it was 0.91 (0.81-1.00) for the diagnosis of OSA (AHI ≥ 5), 0.90 (0.82-0.98) for moderate OSA (AHI ≥ 15), and 0.84 (0.76-0.92) for severe OSA (AHI ≥ 30). The OAAT has shown a high correlation with polysomnography and also a high diagnostic accuracy for the diagnosis of OSA. It has also been shown to be able to discriminate among the different degrees of severity of OSA. Additional large studies aiming to validate this questionnaire as a screening or diagnostic test are needed.
Inventory and analysis of rangeland resources of the state land block on Parker Mountain, Utah
NASA Technical Reports Server (NTRS)
Jaynes, R. A. (Principal Investigator)
1983-01-01
High altitude color infrared (CIR) photography was interpreted to provide a 1:24,000 overlay to U.S.G.S. topographic maps. The inventory and analysis of rangeland resources was augmented by the digital analysis of LANDSAT MSS data. Available geology, soils, and precipitation maps were used to sort out areas of confusion on the CIR photography. The map overlay from photo interpretation was also prepared with reference to print maps developed from LANDSAT MSS data. The resulting map overlay has a high degree of interpretive and spatial accuracy. An unacceptable level of confusion between the several sagebrush types in the MSS mapping was largely corrected by introducing ancillary data. Boundaries from geology, soils, and precipitation maps, as well as field observations, were digitized and pixel classes were adjusted according to the location of pixels with particular spectral signatures with respect to such boundaries. The resulting map, with six major cover classes, has an overall accuracy of 89%. Overall accuracy was 74% when these six classes were expanded to 20 classes.
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has large potential in modern optical shop testing.
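The final extraction step, expressing the figure error in Zernike terms, reduces to a least-squares fit of the measured wavefront against Zernike polynomials sampled on the pupil. A generic sketch follows (the ROR reconstruction itself is not reproduced; the sample points and coefficients are invented):

    import numpy as np

    def fit_zernike(wavefront, basis):
        """Least-squares Zernike coefficients.
        wavefront: (n_points,) OPD samples inside the pupil.
        basis:     (n_points, n_terms) Zernike polynomials evaluated
                   at the same pupil coordinates."""
        coeffs, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
        return coeffs

    # toy pupil with three low-order terms: piston, tilt-x, defocus-like r^2
    rng = np.random.default_rng(4)
    x, y = rng.uniform(-1, 1, (2, 500))
    basis = np.column_stack([np.ones_like(x), x, 2 * (x**2 + y**2) - 1])
    wavefront = basis @ np.array([0.1, 0.02, 0.3])
    print(fit_zernike(wavefront, basis))  # recovers ~ [0.1, 0.02, 0.3]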
Statistical algorithms improve accuracy of gene fusion detection
Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro
2017-01-01
Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529
Elevation correction factor for absolute pressure measurements
NASA Technical Reports Server (NTRS)
Panek, Joseph W.; Sorrells, Mark R.
1996-01-01
With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
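The correction described can be computed directly from the hydrostatic relation ΔP = ρgΔh, with the air density taken from the ideal gas law. A small sketch, assuming nominal sea-level conditions (the function name and values are illustrative):

    def elevation_correction(p_pa, temp_k, dh_m, g=9.80665, R=287.05):
        """Pressure correction for a column of air of height dh between a
        pressure tap and the sensing element: dP = rho * g * dh, with
        rho from the ideal gas law (rho = p / (R T))."""
        rho = p_pa / (R * temp_k)
        return rho * g * dh_m

    # roughly 11.8 Pa per metre of elevation difference at sea-level conditions
    print(elevation_correction(101325.0, 293.15, 1.0))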
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays
Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok
2016-01-01
The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type is a row driver using digital buffers. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers used in the row driver. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array, which severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high complexity found in these previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of the previous counterpart. PMID:26821029
Wright, A; McCoy, A; Henkin, S; Flaherty, M; Sittig, D
2013-01-01
In a prior study, we developed methods for automatically identifying associations between medications and problems using association rule mining on a large clinical data warehouse and validated these methods at a single site, which used a self-developed electronic health record. Here, we demonstrate the generalizability of these methods by validating them at an external site. We received data on medications and problems for 263,597 patients from the University of Texas Health Science Center at Houston Faculty Practice, an ambulatory practice that uses the Allscripts Enterprise commercial electronic health record product. We then conducted association rule mining to identify associated pairs of medications and problems and characterized these associations with five measures of interestingness: support, confidence, chi-square, interest, and conviction, and compared the top-ranked pairs to a gold standard. 25,088 medication-problem pairs were identified that exceeded our confidence and support thresholds. An analysis of the top 500 pairs according to each measure of interestingness showed a high degree of accuracy for highly ranked pairs. The same technique was successfully employed at the University of Texas, and accuracy was comparable to our previous results. Top associations included many medications that are highly specific for a particular problem, as well as a large number of common, accurate medication-problem pairs that reflect practice patterns.
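The five interestingness measures named above have standard definitions over a 2×2 medication-problem contingency table. A hedged Python sketch follows; the function name and example counts are ours, not the study's, and the study's actual mining pipeline is not shown:

```python
# Standard definitions of the five measures over one medication-problem pair.
import math

def interestingness(n_both, n_med, n_prob, n_total):
    """n_both: patients with the medication AND the problem; n_med / n_prob:
    patients with the medication / the problem; n_total: all patients."""
    support = n_both / n_total
    confidence = n_both / n_med                    # P(problem | medication)
    p_prob = n_prob / n_total
    interest = confidence / p_prob                 # a.k.a. lift
    # conviction: expected error rate under independence vs. actual error rate
    conviction = (1 - p_prob) / (1 - confidence) if confidence < 1 else math.inf
    # chi-square over the full 2x2 table (observed vs. expected cell counts)
    a, b = n_both, n_med - n_both
    c = n_prob - n_both
    d = n_total - a - b - c
    chi_square = 0.0
    for obs, row, col in ((a, n_med, n_prob), (b, n_med, n_total - n_prob),
                          (c, n_total - n_med, n_prob),
                          (d, n_total - n_med, n_total - n_prob)):
        exp = row * col / n_total
        chi_square += (obs - exp) ** 2 / exp
    return support, confidence, chi_square, interest, conviction

# Made-up counts: a drug given to 1,000 of 263,597 patients, 900 of whom
# carry a problem that 5,000 patients have overall.
print(interestingness(n_both=900, n_med=1000, n_prob=5000, n_total=263597))
```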
2016-09-01
Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan. As expected, accuracy generally tended to decline as the large-scale input data aged, but appeared to improve slightly at the largest data ages examined; minimum and maximum mean RMDs were reported for each WRF time (or GFS data age) category.
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images has become significant for the management and estimation of agriculture. Because features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the standard of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN with a MATLAB deep-learning toolbox, crop classification reached a correct rate of 99.66% after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the application of CNN provides a reference value for the field of remote sensing in PA.
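As a rough illustration of the kind of classifier described above: the paper used a MATLAB deep-learning toolbox, so the following PyTorch sketch is only a stand-in, with assumed patch size, layer widths, and class count (none of these are the paper's values):

```python
# Illustrative CNN for single-band (panchromatic) image patches.
import torch
import torch.nn as nn

class CropCNN(nn.Module):
    def __init__(self, n_classes=5):          # n_classes is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # for 32x32 patches

    def forward(self, x):                      # x: (batch, 1, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = CropCNN()
logits = model(torch.randn(4, 1, 32, 32))      # four dummy image patches
print(logits.shape)                            # torch.Size([4, 5])
```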
Stenz, Ulrich; Hartmann, Jens; Paffenholz, Jens-André; Neumann, Ingo
2017-08-16
Terrestrial laser scanning (TLS) is an efficient solution for collecting large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogeneous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color, reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. The main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy, and the provision of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context, superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than that of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the targeted accuracy lies in a range of a few mm or less.
Demodulation algorithm for optical fiber F-P sensor.
Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan
2017-09-10
The demodulation algorithm is very important for improving the measurement accuracy of a sensing system. In this paper, a variable-step-size hill-climbing search method is applied for the first time to the demodulation of optical fiber Fabry-Perot (F-P) sensors. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by changing the step size of each climb, which achieves nano-scale resolution, high measurement accuracy, high demodulation rates, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on a micro-electro-mechanical system (MEMS) was fabricated to carry out the experiment, and the results show that the resolution of the algorithm can reach the nano-scale level and that the sensor's sensitivity is about 2.5 nm/kPa, close to the theoretical value; the sensor also shows good reproducibility.
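A minimal sketch of variable-step-size hill climbing of the kind the abstract describes, assuming a 1-D search over candidate cavity length and a purely illustrative objective (the real demodulator would score the match between measured and modeled F-P interference spectra):

```python
# Climb toward the maximum of an objective, halving the step whenever a move
# stops improving, instead of scanning a fixed fine grid.
def hill_climb(f, x0, step=100.0, min_step=1e-3):
    """Maximize f over a 1-D search variable (e.g., candidate cavity length
    in nm); min_step sets the final resolution of the search."""
    x, fx = x0, f(x0)
    while step >= min_step:
        improved = False
        for cand in (x + step, x - step):
            fc = f(cand)
            if fc > fx:
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5                      # no gain: halve the step and retry
    return x

# Toy objective with a peak at 12345.678 "nm".
peak = 12345.678
best = hill_climb(lambda L: -(L - peak) ** 2, x0=12000.0)
print(round(best, 3))                        # converges to ~12345.678
```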
NASA Technical Reports Server (NTRS)
Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir
2006-01-01
Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. Maintaining losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open-loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of a fine-pointing strategy (body-fixed/spacecraft pointing, reflector pointing, or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.
Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y
2013-12-01
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision machine-tool bearings under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing that applied non-contact, precisely controlled axial forces. Apart from the axial loading and the rotation constraint, the five remaining degrees of freedom were completely unconstrained and uninfluenced by the instrument's structure. Two capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled separation of the spindle's error motion from the measurement results of the test bearing, which were obtained using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings.
Pembleton, Luke W; Inch, Courtney; Baillie, Rebecca C; Drayton, Michelle C; Thakur, Preeti; Ogaji, Yvonne O; Spangenberg, German C; Forster, John W; Daetwyler, Hans D; Cogan, Noel O I
2018-06-02
Exploitation of data from a ryegrass breeding program has enabled rapid development and implementation of genomic selection for sward-based biomass yield with a twofold-to-threefold increase in genetic gain. Genomic selection, which uses genome-wide sequence polymorphism data and quantitative genetics techniques to predict plant performance, has large potential for the improvement of pasture plants. Major factors influencing the accuracy of genomic selection include the size of reference populations, trait heritability values and the genetic diversity of breeding populations. Global diversity of the important forage species perennial ryegrass is high and so would require a large reference population in order to achieve moderate accuracies of genomic selection. However, diversity of germplasm within a breeding program is likely to be lower. In addition, de novo construction and characterisation of reference populations is a logistically complex process. Consequently, historical phenotypic records for seasonal biomass yield and heading date over an 18-year period within a commercial perennial ryegrass breeding program have been accessed, and target populations have been characterised with a high-density transcriptome-based genotyping-by-sequencing assay. The ability to predict observed phenotypic performance in each successive year was assessed by using all synthetic populations from previous years as a reference population. Moderate and high accuracies were achieved for the two traits, respectively, consistent with broad-sense heritability values. The present study represents the first demonstration and validation of genomic selection for seasonal biomass yield within a diverse commercial breeding program across multiple years. These results, supported by previous simulation studies, demonstrate the ability to predict sward-based phenotypic performance early in the process of individual plant selection, so shortening the breeding cycle, increasing the rate of genetic gain and allowing rapid adoption in ryegrass improvement programs.
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering diversity in both direction and spatial distribution, and were compared with true leveling values using both Ordinary Least Squares (OLS) regression and measurement-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved with the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
An information-theoretic approach to designing the plane spacing for multifocal plane microscopy
Tahmasbi, Amir; Ram, Sripad; Chao, Jerry; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2015-01-01
Multifocal plane microscopy (MUM) is a 3D imaging modality which enables the localization and tracking of single molecules at high spatial and temporal resolution by simultaneously imaging distinct focal planes within the sample. MUM overcomes the depth discrimination problem of conventional microscopy and allows high accuracy localization of a single molecule in 3D along the z-axis. An important question in the design of MUM experiments concerns the appropriate number of focal planes and their spacings to achieve the best possible 3D localization accuracy along the z-axis. Ideally, it is desired to obtain a 3D localization accuracy that is uniform over a large depth and has small numerical values, which guarantee that the single molecule is continuously detectable. Here, we address this concern by developing a plane spacing design strategy based on the Fisher information. In particular, we analyze the Fisher information matrix for the 3D localization problem along the z-axis and propose spacing scenarios termed the strong coupling and the weak coupling spacings, which provide appropriate 3D localization accuracies. Using these spacing scenarios, we investigate the detectability of the single molecule along the z-axis and study the effect of changing the number of focal planes on the 3D localization accuracy. We further review a software module we recently introduced, the MUMDesignTool, that helps to design the plane spacings for a MUM setup. PMID:26113764
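In the notation of estimation theory (ours, not necessarily the paper's), the design criterion above can be restated as a Cramér-Rao-type bound:

```latex
% For an unbiased estimator of the axial position z_0, the Cram\'er--Rao bound
% ties the best attainable localization accuracy to the Fisher information:
\[
  \operatorname{Var}(\hat{z}_0) \;\ge\; [\mathbf{I}(z_0)]^{-1},
  \qquad
  \delta z_0 \;:=\; \sqrt{[\mathbf{I}(z_0)]^{-1}},
\]
% so a plane-spacing design seeks spacings d_1, ..., d_{K-1} between the K
% focal planes that keep \delta z_0 small and nearly constant over depth:
\[
  \min_{d_1,\dots,d_{K-1}} \; \max_{z_0 \in [z_{\min},\, z_{\max}]} \delta z_0 .
\]
```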
Precise determination of neutron binding energy of 64Cu
NASA Astrophysics Data System (ADS)
Telezhnikov, S. A.; Granja, C.; Honzatko, J.; Pospisil, S.; Tomandl, I.
2016-05-01
The neutron binding energy in 64Cu has been accurately measured in thermal neutron capture. A composite target of natural Cu and NaCl was used on a high-flux neutron beam with a long measuring time. The γ-ray spectrum emitted in the (n, γ) reaction was measured with an HPGe detector with high statistics (up to 10^6 events per channel). Intrinsic limitations of HPGe detectors, which restrict the accuracy of energy calibration, were determined. The neutron binding energy B_n of 64Cu was determined as 7915.867(24) keV.
Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers
NASA Technical Reports Server (NTRS)
Kennedy, Christopher A.; Carpenter, Mark H.
1997-01-01
An investigation is conducted of several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar, variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + aU_x = αU_xx. Accuracy is also verified on the nonlinear problem U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high-wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.
Zhao, Y; Mette, M F; Gowda, M; Longin, C F H; Reif, J C
2014-01-01
Based on data from field trials with a large collection of 135 elite winter wheat inbred lines and 1604 F1 hybrids derived from them, we compared the accuracy of prediction of marker-assisted selection and current genomic selection approaches for the model traits heading time and plant height in a cross-validation approach. For heading time, the high accuracy seen with marker-assisted selection severely dropped with genomic selection approaches RR-BLUP (ridge regression best linear unbiased prediction) and BayesCπ, whereas for plant height, accuracy was low with marker-assisted selection as well as RR-BLUP and BayesCπ. Differences in the linkage disequilibrium structure of the functional and single-nucleotide polymorphism markers relevant for the two traits were identified in a simulation study as a likely explanation for the different trends in accuracies of prediction. A new genomic selection approach, weighted best linear unbiased prediction (W-BLUP), designed to treat the effects of known functional markers more appropriately, proved to increase the accuracy of prediction for both traits and thus closes the gap between marker-assisted and genomic selection. PMID:24518889
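For readers unfamiliar with the RR-BLUP baseline named above, a hedged numpy sketch follows; the data shapes and ridge penalty are toy values, not the study's wheat data, and the weighting of known functional markers in W-BLUP is not reproduced:

```python
# Ridge regression BLUP: all marker effects shrunk by a common penalty
# lambda = sigma_e^2 / sigma_m^2 (here set by hand for illustration).
import numpy as np

def rr_blup_effects(Z, y, lam):
    """Z: (n_lines, n_markers) centered genotype matrix; y: phenotypes.
    Solves (Z'Z + lam*I) u = Z'(y - mean(y)) for marker effects u."""
    n_markers = Z.shape[1]
    lhs = Z.T @ Z + lam * np.eye(n_markers)
    return np.linalg.solve(lhs, Z.T @ (y - y.mean()))

rng = np.random.default_rng(0)
Z = rng.integers(0, 3, size=(135, 500)).astype(float)   # toy SNP calls 0/1/2
Z -= Z.mean(axis=0)                                     # center by marker
true_u = rng.normal(0, 0.05, 500)
y = Z @ true_u + rng.normal(0, 1.0, 135)
u_hat = rr_blup_effects(Z, y, lam=50.0)
gebv = Z @ u_hat                                        # predicted breeding values
print(np.corrcoef(gebv, Z @ true_u)[0, 1])              # accuracy on this toy set
```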
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
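The generic subspace pursuit recovery step underlying SPIGH (without the MEG-specific hierarchical machinery) can be sketched as follows; the dimensions, data, and stopping rule are toy choices for illustration:

```python
# Recover a K-sparse x from y = Phi @ x: keep a working support of size K,
# expand it with the K columns most correlated with the residual, re-fit by
# least squares, prune back to K, and stop when the residual stops shrinking.
import numpy as np

def subspace_pursuit(Phi, y, K, n_iter=20):
    m, n = Phi.shape
    support = np.argsort(np.abs(Phi.T @ y))[-K:]            # initial support
    coef_s = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    residual = y - Phi[:, support] @ coef_s
    for _ in range(n_iter):
        extra = np.argsort(np.abs(Phi.T @ residual))[-K:]   # expand
        cand = np.union1d(support, extra)
        coef = np.linalg.lstsq(Phi[:, cand], y, rcond=None)[0]
        support = cand[np.argsort(np.abs(coef))[-K:]]       # prune back to K
        coef_s = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        new_residual = y - Phi[:, support] @ coef_s
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break                                           # no more progress
        residual = new_residual
    x_hat = np.zeros(n)
    x_hat[support] = coef_s
    return x_hat

rng = np.random.default_rng(1)
Phi = rng.normal(size=(64, 256)) / 8.0
x = np.zeros(256); x[rng.choice(256, 5, replace=False)] = rng.normal(size=5)
print(np.linalg.norm(subspace_pursuit(Phi, Phi @ x, K=5) - x))  # near zero
```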
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm, compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method.
Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams
Bennamoun, Mohammed
2014-01-01
In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses. PMID:24817888
Diode‐based transmission detector for IMRT delivery monitoring: a validation study
Li, Taoran; Wu, Q. Jackie; Matzen, Thomas; Yin, Fang‐Fang
2016-01-01
The purpose of this work was to evaluate the potential of a new transmission detector for real-time quality assurance of dynamic-MLC-based radiotherapy. The accuracy of detecting dose variation and static/dynamic MLC position deviations was measured, as well as the impact of the device on the radiation field (surface dose, transmission). Measured dose variations agreed with the known variations within 0.3%. The measurement of static and dynamic MLC position deviations matched the known deviations with high accuracy (0.7-1.2 mm). The absorption of the device was minimal (∼1%). The increased surface dose was small (1%-9%) but, when added to existing collimator scatter effects, could become significant at large field sizes (≥30×30 cm2). Overall, the accuracy and speed of the device show good potential for real-time quality assurance. PACS number(s): 87.55.Qr PMID:27685115
Multi-look fusion identification: a paradigm shift from quality to quantity in data samples
NASA Astrophysics Data System (ADS)
Wong, S.
2009-05-01
A multi-look identification method known as score-level fusion is found to be capable of achieving very high identification accuracy, even when low-quality target signatures are used. Analysis using measured ground-vehicle radar signatures has shown that a 97% correct identification rate can be achieved using this multi-look fusion method; in contrast, only a 37% accuracy rate is obtained when a single target signature is used as input. The results suggest that quantity can be used to replace quality of the target data in improving identification accuracy. With advances in sensor technology, a large number of target signatures of marginal quality can be captured routinely. This quantity-over-quality approach allows maximum exploitation of the available data to improve target identification performance, and it has the potential to develop into a disruptive technology.
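Score-level fusion itself is simple to state: average the per-class scores across looks, then decide. A minimal sketch with made-up class names and score values (the paper's radar scoring function is not shown):

```python
import numpy as np

def fuse_and_identify(score_matrix, class_names):
    """score_matrix: (n_looks, n_classes) classifier scores, one row per look."""
    fused = score_matrix.mean(axis=0)          # score-level fusion across looks
    return class_names[int(np.argmax(fused))], fused

# Three noisy single-look score vectors over three vehicle classes: no single
# look is decisive, but the fused scores favor the correct class.
scores = np.array([[0.40, 0.35, 0.25],
                   [0.30, 0.45, 0.25],
                   [0.55, 0.20, 0.25]])
label, fused = fuse_and_identify(scores, ["tank", "truck", "jeep"])
print(label, fused)                            # tank [0.4167 0.3333 0.25 ]
```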
Cleveland, M A; Hickey, J M
2013-08-01
Genomic selection can be implemented in pig breeding at a reduced cost using genotype imputation. The accuracy of imputation and its impact on resulting genomic breeding values (gEBV) were investigated. High-density genotype data were available for 4,763 animals from a single pig line. Three low-density genotype panels were constructed with SNP densities of 450 (L450), 3,071 (L3k) and 5,963 (L6k). Accuracy of imputation was determined using 184 test individuals with no genotyped descendants in the data but with parents and grandparents genotyped using the Illumina PorcineSNP60 Beadchip. Alternative genotyping scenarios were created in which parents, grandparents, and individuals that were not direct ancestors of test animals (Other) were genotyped at high density (S1), grandparents were not genotyped (S2), dams and granddams were not genotyped (S3), and dams and granddams were genotyped at low density (S4). Four additional scenarios were created by excluding Other animal genotypes. Test individuals were always genotyped at low density. Imputation was performed with AlphaImpute. Genomic breeding values were calculated using single-step genomic evaluation. Test animals were evaluated for the information retained in the gEBV, calculated as the correlation between gEBV using imputed genotypes and gEBV using true genotypes. Accuracy of imputation was high for all scenarios but decreased with fewer SNP on the low-density panel (0.995 to 0.965 for S1) and with reduced genotyping of ancestors, where the largest changes were for L450 (0.965 in S1 to 0.914 in S3). Exclusion of genotypes for Other animals resulted in only small accuracy decreases. Imputation accuracy was not consistent across the genome. Information retained in the gEBV was related to the genotyping scenario and thus to imputation accuracy. Reducing the number of SNP on the low-density panel reduced the information retained in the gEBV, with the largest decrease observed from L3k to L450. Excluding Other animal genotypes had little impact on imputation accuracy but caused large decreases in the information retained in the gEBV. These results indicate that the accuracy of gEBV from imputed genotypes depends on the level of genotyping in close relatives and the size of the genotyped dataset. Fewer high-density genotyped individuals are needed to obtain accurate imputation than are needed to obtain accurate gEBV. Strategies to optimize the development of low-density panels can improve both imputation and gEBV accuracy.
High-Accuracy Multisensor Geolocation Technology to Support Geophysical Data Collection at MEC Sites
2012-12-01
Flash LiDAR can capture an image with intensity data in a single step; it can use both basic solutions to emit laser light, either a single pulse with a large aperture ... The report also covers laser sensor developments and a terrestrial laser scanner (TLS). State-of-the-art GPS navigation allows for cm-accurate positioning in open areas where a sufficient number of satellites is in view.
ERIC Educational Resources Information Center
Tsatsanis, Katherine D.; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S.; Klin, Ami; Volkmar, Fred R.
2003-01-01
The concurrent validity of the original and revised versions of the Leiter International Performance Scale was examined with 26 children (ages 4-16) with autism. Although the correlation between the two tests was high (.87), there were significant intra-individual discrepancies present in 10 cases, two of which were both large and clinically…
Iwazawa, J; Ohue, S; Hashimoto, N; Mitani, T
2014-02-01
We compared the accuracy of computer software analysis using three different target-definition protocols to detect tumour feeder vessels for transarterial chemoembolization of hepatocellular carcinoma. C-arm computed tomography (CT) data were analysed for 81 tumours from 57 patients who had undergone chemoembolization using software-assisted detection of tumour feeders. Small, medium, and large-sized targets were manually defined for each tumour. A tumour feeder was verified when the target tumour was enhanced on selective C-arm CT of the investigated vessel during chemoembolization. The sensitivity, specificity, and accuracy of the three protocols were evaluated and compared. One hundred and eight feeder vessels supplying 81 lesions were detected. The sensitivity of the small, medium, and large target protocols was 79.8%, 91.7%, and 96.3%, respectively; specificity was 95%, 88%, and 50%, respectively; and accuracy was 87.5%, 89.9%, and 74%, respectively. Sensitivity was significantly higher for the medium (p = 0.003) and large (p < 0.001) target protocols than for the small target protocol. Specificity and accuracy were higher for the small (p < 0.001 and p < 0.001, respectively) and medium (p < 0.001 and p < 0.001, respectively) target protocols than for the large target protocol. The overall accuracy of software-assisted automated feeder analysis in transarterial chemoembolization for hepatocellular carcinoma is affected by the target definition size. A large target definition increases sensitivity and decreases specificity in detecting tumour feeders. A target size equivalent to the tumour size most accurately predicts tumour feeders.
Developing Large Scale Explosively Driven Flyer Experiments on Sand
NASA Astrophysics Data System (ADS)
Rehagen, Thomas; Kraus, Richard
2017-06-01
Measurements of the dynamic behavior of granular materials are of great importance to a variety of scientific and engineering applications, including planetary science, seismology, and construction and destruction. In addition, high quality data are needed to enhance our understanding of granular physics and improve the computational models used to simulate related physical processes. However, since there is a non-negligible grain size associated with these materials, experiments must be of a relatively large scale in order to capture the continuum response of the material and reduce errors associated with the finite grain size. We will present designs for explosively driven flyer experiments to make high accuracy measurements of the Hugoniot of sand (with a grain size of hundreds of microns). To achieve an accuracy of better than a few percent in density, we are developing a platform to measure the Hugoniot of samples several centimeters in thickness. We will present the target designs as well as coupled designs for the explosively launched flyer system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of wavefield propagation for large time steps.
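A hedged 1-D analogue of the paper's construction, pairing a symplectic (leapfrog/Störmer-Verlet) time update with a Fourier evaluation of the spatial derivative; all constants are toy values and the paper's 2-D/anisotropic formulation is not reproduced:

```python
import numpy as np

n, dx, c, dt = 256, 10.0, 2000.0, 1e-3        # grid points, m, m/s, s
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # spatial wavenumbers

def laplacian_fourier(u):
    # evaluate d2u/dx2 spectrally: multiply by -k^2 in Fourier space
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

x = np.arange(n) * dx
u = np.exp(-((x - x.mean()) / 50.0) ** 2)     # initial pressure pulse
v = np.zeros(n)                               # time derivative of u

for _ in range(500):                          # symplectic kick-drift-kick
    v += 0.5 * dt * c ** 2 * laplacian_fourier(u)
    u += dt * v
    v += 0.5 * dt * c ** 2 * laplacian_fourier(u)

print(float(np.max(np.abs(u))))               # pulse has split and propagated
```

The structure-preserving (symplectic) update is what keeps the discrete energy bounded over long runs, which is the stability property the abstract emphasizes.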
An incremental anomaly detection model for virtual machines
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, as an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Moreover, cloud platforms with large-scale virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing characteristics, which gives the algorithm low accuracy and poor scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large-scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms. PMID:29117245
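The two ingredients the abstract highlights, data-driven initialization and a Weighted Euclidean Distance, can be sketched in a toy SOM as follows; the map size, feature weights, and data are illustrative, not the paper's IISOM:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                   # toy VM metric vectors
feat_w = np.linspace(1.0, 2.0, 8)                # per-feature importance (WED)

# heuristic initialization: seed neurons from data samples instead of noise
nodes = X[rng.choice(len(X), size=16, replace=False)].copy()

def bmu(x):
    # best-matching unit under the weighted Euclidean distance
    d = np.sqrt(((nodes - x) ** 2 * feat_w).sum(axis=1))
    return int(np.argmin(d)), d.min()

for epoch in range(5):                           # simplified online updates
    lr = 0.5 * (0.5 ** epoch)
    for x in X:
        i, _ = bmu(x)
        nodes[i] += lr * (x - nodes[i])          # pull winner toward sample

# anomaly score of a new observation = distance to its best-matching unit
print(bmu(rng.normal(size=8) * 5.0)[1])          # outlying vector -> high score
```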
Deep learning as a tool to distinguish between high orbital angular momentum optical modes
NASA Astrophysics Data System (ADS)
Knutson, E. M.; Lohani, Sanjaya; Danaci, Onur; Huver, Sean D.; Glasser, Ryan T.
2016-09-01
The generation of light containing large degrees of orbital angular momentum (OAM) has recently been demonstrated in both the classical and quantum regimes. Since there is no fundamental limit to how many quanta of OAM a single photon can carry, optical states with an arbitrarily high difference in this quantum number may, in principle, be entangled. This opens the door to investigations into high-dimensional entanglement shared between states in superpositions of nonzero OAM. Additionally, making use of non-zero OAM states can allow for a dramatic increase in the amount of information carried by a single photon, thus increasing the information capacity of a communication channel. In practice, however, it is difficult to differentiate between states with high OAM numbers with high precision. Here we investigate the ability of deep neural networks to differentiate between states that contain large values of OAM. We show that such networks may be used to differentiate between nearby OAM states that contain realistic amounts of noise, with OAM values of up to 100. Additionally, we examine how the classification accuracy scales with the signal-to-noise ratio of the images used to train the network, as well as those being tested. Finally, we demonstrate the simultaneous classification of < 100 OAM states with greater than 70% accuracy. We intend to verify our system with experimentally produced classical OAM states, as well as investigate possibilities that would allow this technique to work in the few-photon quantum regime.
ERIC Educational Resources Information Center
Pfaffel, Andreas; Spiel, Christiane
2016-01-01
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Efficient Ab initio Modeling of Random Multicomponent Alloys
Jiang, Chao; Uberuaga, Blas P.
2016-03-08
In this Letter, we present a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high-entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high-entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.
Limits on the Accuracy of Linking. Research Report. ETS RR-10-22
ERIC Educational Resources Information Center
Haberman, Shelby J.
2010-01-01
Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…
Kronewitter, Scott R; An, Hyun Joo; de Leoz, Maria Lorna; Lebrilla, Carlito B; Miyamoto, Suzanne; Leiserowitz, Gary S
2009-06-01
Annotation of the human serum N-linked glycome is a formidable challenge but is necessary for disease marker discovery. A new theoretical glycan library was constructed and proposed to provide all possible glycan compositions in serum. It was developed based on established glycobiology and retrosynthetic state-transition networks. We find that at least 331 compositions are possible in the serum N-linked glycome. By pairing the theoretical glycan mass library with a high mass accuracy and high-resolution MS, human serum glycans were effectively profiled. Correct isotopic envelope deconvolution to monoisotopic masses and the high mass accuracy instruments drastically reduced the amount of false composition assignments. The high throughput capacity enabled by this library permitted the rapid glycan profiling of large control populations. With the use of the library, a human serum glycan mass profile was developed from 46 healthy individuals. This paper presents a theoretical N-linked glycan mass library that was used for accurate high-throughput human serum glycan profiling. Rapid methods for evaluating a patient's glycome are instrumental for studying glycan-based markers.
NASA Astrophysics Data System (ADS)
Kuhn, C.; Richey, J. E.; Striegl, R. G.; Ward, N.; Sawakuchi, H. O.; Crawford, J.; Loken, L. C.; Stadler, P.; Dornblaser, M.; Butman, D. E.
2017-12-01
More than 93% of the world's river-water volume occurs in basins impacted by large dams, and about 43% of river water discharge is impacted by flow regulation. Human land use also alters nutrient and carbon cycling and the emission of carbon dioxide from inland reservoirs. Increased water residence times and warmer temperatures in reservoirs fundamentally alter the physical settings for biogeochemical processing in large rivers, yet river biogeochemistry for many large systems remains undersampled. Satellite remote sensing holds promise as a methodology for responsive regional and global water resources management. Decades of ocean optics research have laid the foundation for the use of remote sensing reflectance in optical wavelengths (400 - 700 nm) to produce satellite-derived, near-surface estimates of phytoplankton chlorophyll concentration. Significant improvements between successive generations of ocean color sensors have enabled the scientific community to document changes in global ocean productivity (NPP) and estimate ocean biomass with increasing accuracy. Despite large advances in ocean optics, application of optical methods to inland waters has been limited to date due to their optical complexity and small spatial scale. To test this frontier, we present a study evaluating the accuracy and suitability of empirical inversion approaches for estimating chlorophyll-a, turbidity and temperature for the Amazon, Columbia and Mississippi rivers using satellite remote sensing. We demonstrate how riverine biogeochemical measurements collected at high frequencies from underway vessels can be used as in situ matchups to evaluate remotely sensed, near-surface temperature, turbidity, and chlorophyll-a derived from the Landsat 8 (NASA) and Sentinel 2 (ESA) satellites. We investigate the use of remote sensing water reflectance to infer trophic status as well as tributary influences on the optical characteristics of the Amazon, Mississippi and Columbia rivers.
Lenz, Patrick R N; Beaulieu, Jean; Mansfield, Shawn D; Clément, Sébastien; Desponts, Mireille; Bousquet, Jean
2017-04-28
Genomic selection (GS) uses information from genomic signatures consisting of thousands of genetic markers to predict complex traits. As such, GS represents a promising approach to accelerate tree breeding, which is especially relevant for the genetic improvement of boreal conifers characterized by long breeding cycles. In the present study, we tested GS in an advanced-breeding population of the boreal black spruce (Picea mariana [Mill.] BSP) for growth and wood quality traits, and concurrently examined factors affecting GS model accuracy. The study relied on 734 25-year-old trees belonging to 34 full-sib families derived from 27 parents and established on two contrasting sites. Genomic profiles were obtained from 4993 Single Nucleotide Polymorphisms (SNPs) representative of as many gene loci distributed among the 12 linkage groups common to spruce. GS models were obtained for four growth and wood traits. Validation using independent sets of trees showed that GS model accuracy was high, related to trait heritability and equivalent to that of conventional pedigree-based models. In forward selection, gains per unit of time were three times higher with the GS approach than with conventional selection. In addition, models were also accurate across sites, indicating little genotype-by-environment interaction in the area investigated. Using information from half-sibs instead of full-sibs led to a significant reduction in model accuracy, indicating that the inclusion of relatedness in the model contributed to its higher accuracies. About 500 to 1000 markers were sufficient to obtain GS model accuracy almost equivalent to that obtained with all markers, whether they were well spread across the genome or from a single linkage group, further confirming the implication of relatedness and potential long-range linkage disequilibrium (LD) in the high accuracy estimates obtained. Only slightly higher model accuracy was obtained when using marker subsets that were identified to carry large effects, indicating a minor role for short-range LD in this population. This study supports the integration of GS models in advanced-generation tree breeding programs, given that high genomic prediction accuracy was obtained with a relatively small number of markers due to high relatedness and family structure in the population. In boreal spruce breeding programs and similar ones with long breeding cycles, much larger gain per unit of time can be obtained from genomic selection at an early age than by the conventional approach. GS thus appears highly profitable, especially in the context of forward selection in species which are amenable to mass vegetative propagation of selected stock, such as spruces.
Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping
2015-09-01
Existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection occurs at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection in DNA sequencing. Dual-mode sensing, in optical and chemical modes, is realized by integrating a CMOS image sensor (CIS) with the ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. By accurately determining microbead physical locations with the CIS pixels through contact imaging, the dual-mode sensor can correlate the local pH for one DNA slice with one location-determined microbead, which improves pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays in both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold-voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed-pattern-noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is thus proposed and demonstrated with state-of-the-art measured results toward accurate, high-throughput DNA sequencing. The developed dual-mode CMOS ISFET sensor has great potential for future personal genome diagnostics with high accuracy and low cost.
High-resolution phylogenetic microbial community profiling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singer, Esther; Coleman-Derr, Devin; Bowman, Brett
2014-03-17
The representation of bacterial and archaeal genome sequences is strongly biased towards cultivated organisms, which belong to merely four phylogenetic groups. Functional information and inter-phylum level relationships are still largely underexplored for candidate phyla, which are often referred to as microbial dark matter. Furthermore, a large portion of the 16S rRNA gene records in the GenBank database are labeled as environmental samples and unclassified, which is in part due to low read accuracy, potential chimeric sequences produced during PCR amplification, and the low resolution of short amplicons. In order to improve the phylogenetic classification of novel species and advance our knowledge of the ecosystem function of uncultivated microorganisms, high-throughput full-length 16S rRNA gene sequencing methodologies with reduced biases are needed. We evaluated the performance of PacBio single-molecule real-time (SMRT) sequencing in high-resolution phylogenetic microbial community profiling. For this purpose, we compared PacBio and Illumina metagenomic shotgun and 16S rRNA gene sequencing of a mock community as well as of an environmental sample from Sakinaw Lake, British Columbia. Sakinaw Lake is known to contain a large number of microbial species from candidate phyla. Sequencing results show that community structure based on PacBio shotgun and 16S rRNA gene sequences is highly similar in both the mock and the environmental communities. Resolution power and community representation accuracy from SMRT sequencing data appeared to be independent of the GC content of microbial genomes and were higher than for Illumina-based metagenome shotgun and 16S rRNA gene (iTag) sequences; e.g., full-length sequencing resolved all 23 OTUs in the mock community, while iTags did not resolve closely related species. SMRT sequencing hence offers various potential benefits when characterizing uncharted microbial communities.
Modal analysis of circular Bragg fibers with arbitrary index profiles
NASA Astrophysics Data System (ADS)
Horikis, Theodoros P.; Kath, William L.
2006-12-01
A finite-difference approach based upon the immersed interface method is used to analyze the mode structure of Bragg fibers with arbitrary index profiles. The method allows general propagation constants and eigenmodes to be calculated to a high degree of accuracy, while computation times are kept to a minimum by exploiting sparse matrix algebra. The method is well suited to handle complicated structures comprised of a large number of thin layers with high-index contrast and simultaneously determines multiple eigenmodes without modification.
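Reduced to a 1-D slab profile, the sparse eigenmode computation the abstract describes can be sketched as follows; the profile values are toy inputs, and the immersed-interface treatment of layer boundaries is not reproduced:

```python
# Discretize the transverse operator d^2/dx^2 + k0^2 n(x)^2 with finite
# differences and ask a sparse eigensolver for the largest eigenvalues beta^2;
# modes with effective index above the cladding index are guided.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_pts, width = 2000, 20e-6                     # grid points, domain width (m)
dx = width / n_pts
x = (np.arange(n_pts) - n_pts / 2) * dx
k0 = 2 * np.pi / 1.55e-6                       # free-space wavenumber at 1.55 um
n_profile = np.where(np.abs(x) < 3e-6, 1.46, 1.44)  # toy step-index "fiber"

# H = d2/dx2 + k0^2 n^2 as a sparse tridiagonal matrix (Dirichlet boundaries)
main = -2.0 / dx**2 + (k0 * n_profile) ** 2
off = np.ones(n_pts - 1) / dx**2
H = sp.diags([off, main, off], [-1, 0, 1], format="csc")

vals, vecs = spla.eigsh(H, k=4, which="LA")    # largest beta^2
n_eff = np.sqrt(vals) / k0                     # effective indices
print(sorted(n_eff, reverse=True))             # guided modes: 1.44 < n_eff < 1.46
```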
NASA Astrophysics Data System (ADS)
Bae, Young K.
2006-01-01
Formation flying of clusters of micro-, nano- and pico-satellites has been recognized to be more affordable, robust and versatile than building a large monolithic satellite for implementing next-generation space missions requiring large apertures or large sample collection areas and sophisticated earth imaging/monitoring. We propose a propellant-free, and thus contamination-free, method that enables ultrahigh-precision satellite formation flying with intersatellite distance accuracy at the nm (10^-9 m) level at maximum estimated distances on the order of tens of km. The method is based on ultrahigh-precision CW intracavity photon thrusters and tethers. The pushing-out force of the intracavity photon thruster and the pulling-in force of the tether tension between satellites form the basic force structure to stabilize crystalline-like structures of satellites and/or spacecraft with a relative distance accuracy better than a nm. The thrust of the photons can be amplified by up to tens of thousands of times by bouncing them between two mirrors located separately on paired satellites. For example, a 10 W photon thruster, suitable for micro-satellite applications, is theoretically capable of providing thrusts up to the mN level, and its weight and power consumption are estimated to be several kilograms and tens of W, respectively. The dual use of the photon thruster as a precision laser source for the interferometric ranging system further simplifies the system architecture and minimizes the weight and power consumption. The present method does not require propellant, thus providing significant propulsion-system mass savings, and is free from propellant exhaust contamination, making it ideal for missions that require large apertures composed of highly sensitive sensors. The system can be readily scaled down for nano- and pico-satellite applications.
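A back-of-the-envelope check of the quoted figures, in our notation (the abstract itself does not give this formula):

```latex
% A photon beam of power P bounced N times between the paired mirrors
% delivers a thrust
\[
  F \;=\; \frac{2 N P}{c},
\]
% so with P = 10\,\mathrm{W} and an amplification of N = 10^{4} bounces,
\[
  F \;\approx\; \frac{2 \times 10^{4} \times 10\ \mathrm{W}}{3 \times 10^{8}\ \mathrm{m/s}}
  \;\approx\; 0.7\ \mathrm{mN},
\]
% consistent with the mN-level thrust quoted for a 10 W intracavity thruster.
```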
Hu, Haixiang; Zhang, Xin; Ford, Virginia; Luo, Xiao; Qi, Erhui; Zeng, Xuefeng; Zhang, Xuejun
2016-11-14
Edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to weigh the consequences of the following two cases. Operating CCOS in a large overhang condition affects the accuracy of material removal, while in a small overhang condition it achieves a more accurate performance, but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept of the 'heterocercal' tool influence function (TIF). Generated from compound motion equipment, this type of TIF can 'transfer' the material removal from the inner area to the edge, while maintaining the high accuracy and efficiency of CCOS. We call it the 'heterocercal' TIF because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the explosive power. The heterocercal TIF was theoretically analyzed, and physically realized in CCOS facilities. Experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of entire-surface errors in large tool-to-mirror size-ratio conditions. This improvement will greatly help manufacturing efficiency in some extremely large optical system projects, like the tertiary mirror of the Thirty Meter Telescope.
Validation of maternal reports for low birthweight and preterm birth indicators in rural Nepal.
Chang, Karen T; Mullany, Luke C; Khatry, Subarna K; LeClerq, Steven C; Munos, Melinda K; Katz, Joanne
2018-06-01
Tracking progress towards global newborn health targets depends largely on maternal reported data collected through large, nationally representative surveys. We evaluated the validity, across a range of recall period lengths (1 to 24 months post-delivery), of maternal report of birthweight, birth size and length of pregnancy. We compared maternal reports to reference standards of birthweights measured within 72 hours of delivery and gestational age generated from the reported first day of the last menstrual period (LMP) prospectively collected as part of a population-based study (n = 1502). We calculated sensitivity, specificity, area under the receiver operating characteristic curve (AUC) as a measure of individual-level accuracy, and the inflation factor (IF) to quantify population-level bias for each indicator. We assessed whether length of recall period modified accuracy by stratifying measurements across time bins and using a modified Poisson regression with robust error variance to estimate the relative risk (RR) of correctly classifying newborns as low birthweight (LBW) or preterm, adjusting for child sex, place of delivery, maternal age, maternal education, parity, and ethnicity. The LBW indicator using maternally reported birthweight in grams had low individual-level accuracy (AUC = 0.69) and high population-level bias (IF = 0.62). LBW using maternally reported birth size and the preterm birth indicator had lower individual-level accuracy (AUC = 0.58 and 0.56, respectively) and higher population-level bias (IF = 0.28 and 0.35, respectively) up to 24 months following birth. Length of recall time did not affect accuracy of LBW indicators. For the preterm birth indicator, accuracy did not change with length of recall up to 20 months after birth and improved slightly beyond 20 months. The use of maternal reports may underestimate and bias indicators for LBW and preterm birth. In settings with high prevalence of LBW and preterm births, these indicators generated from maternal reports may be more vulnerable to misclassification. In populations where an important proportion of births occur at home or where weight is not routinely measured, mothers perhaps place less importance on remembering size at birth. Further work is needed to explore whether these conclusions on the validity of maternal reports hold in similar rural and low-income settings.
A new algorithm for microwave delay estimation from water vapor radiometer data
NASA Technical Reports Server (NTRS)
Robinson, S. E.
1986-01-01
A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data, which does not require site- and weather-dependent empirical parameters to produce high accuracy. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm has been examined for two-channel WVR data using path delays and simulated observables computed from archived radiosonde data. It is found that annual RMS errors for a wide range of sites are in the range from 1.3 mm to 2.3 mm, in the absence of clouds. This is comparable to the best overall accuracy obtainable from conventional linear algorithms, which must be tailored to site and weather conditions using large radiosonde databases. The new algorithm's accuracy and flexibility are indications that it may be a good candidate for almost all WVR data interpretation.
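As a sketch of the two retrieval philosophies contrasted here (the notation below is assumed for illustration and is not taken from the paper), the conventional linear approach maps the two channel brightness temperatures directly to delay with site- and weather-tuned coefficients, whereas the new algorithm integrates an inferred vapor profile explicitly:

$$\Delta L_{\mathrm{wet}} \approx c_0 + c_1 T_{B,1} + c_2 T_{B,2} \qquad \text{(linear retrieval; } c_i \text{ site/weather-dependent)}$$

$$\Delta L_{\mathrm{wet}} = 10^{-6}\int_0^{\infty} N_{\mathrm{wet}}(z)\,\mathrm{d}z \qquad \text{(explicit integration of the retrieved wet refractivity profile)}$$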
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
The reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems have drift error and large bias due to low-cost inertial sensors and random motions of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by the dead reckoning (DR) method and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integrating approach and can provide the best localization performance. However, since PF uses a large number of particles to reach high performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting with iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study is used to provide enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm can achieve better positioning accuracy than the KF and UKF, comparable performance to the PF, and higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting. We aim to design a localization scheme that can realize high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme can achieve very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
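For illustration, a minimal particle-filter sketch of the DR-plus-fingerprint fusion this system builds on; it is not the paper's SKPF (there is no sigma-point stage), and the particle count, noise levels, and the fingerprint fix are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                                               # particle count (assumed)
particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))  # initial (x, y) hypotheses
weights = np.full(N, 1.0 / N)

def dr_predict(particles, step_len, heading, sigma=0.15):
    """Propagate each particle by one dead-reckoning step plus motion noise."""
    d = step_len + rng.normal(0.0, sigma, len(particles))
    h = heading + rng.normal(0.0, 0.1, len(particles))
    particles[:, 0] += d * np.cos(h)
    particles[:, 1] += d * np.sin(h)
    return particles

def fingerprint_update(particles, weights, z, sigma=1.0):
    """Reweight particles with a Gaussian likelihood around the RSS fix z."""
    dist2 = np.sum((particles - z) ** 2, axis=1)
    weights *= np.exp(-0.5 * dist2 / sigma**2)
    weights += 1e-300                                 # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling back to uniform weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# one DR step followed by one iBeacon fingerprint fix at (1.0, 0.2)
particles = dr_predict(particles, step_len=0.7, heading=0.0)
weights = fingerprint_update(particles, weights, np.array([1.0, 0.2]))
particles, weights = resample(particles, weights)
print("position estimate:", particles.mean(axis=0))
```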
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are affected by noise such as shadows that reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to perfect in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index (NDVI) and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
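A minimal sketch of the NDVI ingredient of such a hybrid probability image, assuming reflectance-calibrated NIR and red bands; the paper's full method also folds in PC images, which are omitted here, and the toy band values are made up:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel."""
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance bands standing in for the MS image
nir = np.array([[0.6, 0.5], [0.4, 0.7]])
red = np.array([[0.2, 0.3], [0.3, 0.1]])

veg = ndvi(nir, red)
# a simple hybrid cue: low NDVI suggests non-vegetated (candidate rooftop) pixels
rooftop_prob = np.clip(1.0 - veg, 0.0, 1.0)
print(rooftop_prob)
```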
Fraley, Stephanie I.; Athamanolap, Pornpat; Masek, Billie J.; Hardick, Justin; Carroll, Karen C.; Hsieh, Yu-Hsiang; Rothman, Richard E.; Gaydos, Charlotte A.; Wang, Tza-Huei; Yang, Samuel
2016-01-01
High Resolution Melt (HRM) is a versatile and rapid post-PCR DNA analysis technique primarily used to differentiate sequence variants among only a few short amplicons. We recently developed a one-vs-one support vector machine algorithm (OVO SVM) that enables the use of HRM for identifying numerous short amplicon sequences automatically and reliably. Herein, we set out to maximize the discriminating power of HRM + SVM for a single genetic locus by testing longer amplicons harboring significantly more sequence information. Using universal primers that amplify the hypervariable bacterial 16S rRNA gene as a model system, we found that long amplicons yield more complex HRM curve shapes. We developed a novel nested OVO SVM approach to take advantage of this feature and achieved 100% accuracy in the identification of 37 clinically relevant bacteria in Leave-One-Out Cross-Validation. A subset of organisms was independently tested. Those from pure culture were identified with high accuracy, while those tested directly from clinical blood bottles displayed more technical variability and reduced accuracy. Our findings demonstrate that long sequences can be accurately and automatically profiled by HRM with a novel nested SVM approach and suggest that clinical sample testing is feasible with further optimization. PMID:26778280
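A toy sketch of one-vs-one SVM classification of melt curves with leave-one-out cross-validation, using synthetic curves in place of real HRM data; scikit-learn's SVC is used purely for illustration and is not the authors' nested OVO SVM:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# synthetic melt curves: 3 "organisms", 8 replicates each, 50 temperature points
X = np.vstack([rng.normal(m, 0.05, size=(8, 50)) for m in (0.2, 0.5, 0.8)])
y = np.repeat([0, 1, 2], 8)

# SVC trains one-vs-one pairwise classifiers internally for multiclass problems
clf = SVC(kernel="linear", decision_function_shape="ovo")
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2f}")
```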
NASA Astrophysics Data System (ADS)
Zhang, Hongming; Baartman, Jantiene E. M.; Yang, Xiaomei; Gai, Lingtong; Geissen, Violette
2017-04-01
Most crops in northern China are irrigated, but the topography affects water use, soil erosion, runoff, and yields. Technologies for collecting high-resolution topographic data are essential for adequately assessing these effects. Ground surveys and light detection and ranging techniques have good accuracy, but data acquisition can be time-consuming and expensive for large catchments. Recent rapid technological development has provided new, flexible, high-resolution methods for collecting topographic data, such as photogrammetry using unmanned aerial vehicles (UAVs). The accuracy of UAV photogrammetry for generating high-resolution digital elevation models (DEMs) and for determining the width of irrigation channels, however, has not been assessed. We used a fixed-wing UAV for collecting high-resolution (0.15 m) topographic data for the Hetao irrigation district, the third largest irrigation district in China. We surveyed 112 ground checkpoints (GCPs) using a real-time kinematic global positioning system to evaluate the accuracy of the DEMs and channel widths. A comparison of manually measured channel widths with the widths derived from the DEMs indicated that the DEM-derived widths had vertical and horizontal root mean square errors of 13.0 and 7.9 cm, respectively. UAV photogrammetric data can thus be used for land surveying, digital mapping, calculating channel capacity, monitoring crops, and predicting yields, with the advantages of economy, speed, and ease.
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
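A sketch of why modal sparsity makes the per-frequency cost linear: in normal mode coordinates the state matrix is block-diagonal, so each evaluation of H(jw) = C (jwI - A)^(-1) B reduces to a sparse solve of O(n) cost. The toy modal data and uniform damping below are assumptions, not the paper's model:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                   # structural states (toy size)
wn = np.linspace(1.0, 100.0, n // 2)      # modal natural frequencies (rad/s)
zeta = 0.02                               # uniform modal damping (assumed)

# block-diagonal modal state matrix: [[0, 1], [-wn^2, -2*zeta*wn]] per mode
blocks = [np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]]) for w in wn]
A = sp.block_diag(blocks, format="csc")
B = np.ones(n)                            # toy input/output influence vectors
C = np.ones(n)
I = sp.identity(n, format="csc")

freqs = np.linspace(0.5, 120.0, 400)
H = np.empty(len(freqs), dtype=complex)
for k, w in enumerate(freqs):
    # sparse LU solve costs O(n) here because (jwI - A) stays block-diagonal
    M = (1j * w * I - A).tocsc()
    H[k] = C @ spla.spsolve(M, B)
print(np.abs(H[:5]))
```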
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
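As a flavor of the symbolic machinery named here, a tiny Gröbner basis elimination in SymPy; this is a generic illustration of elimination by a lex-ordered basis, not the authors' differential elimination pipeline, and the toy polynomial system is made up:

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# a lex-ordered Groebner basis with y first eliminates y from the system,
# leaving a polynomial in x alone among the basis elements
G = groebner([x**2 + y**2 - 1, x - y], y, x, order="lex")
print(G.exprs)   # e.g. contains 2*x**2 - 1, the eliminated relation in x
```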
Limitations and potentials of current motif discovery algorithms
Hu, Jianjun; Li, Bin; Kihara, Daisuke
2005-01-01
Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194
Influence of Network Model Detail on Estimated Health Effects of Drinking Water Contamination Events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Janke, Robert
Network model detail can influence the accuracy of results from analyses of water distribution systems. Some previous work has shown the limitations of skeletonized network models when considering water quality and hydraulic effects. Loss of model detail is potentially less important for aggregated effects such as the systemwide health effects associated with a contamination event, but has received limited attention. The influence of model detail on such effects is examined here by comparing results obtained for contamination events using three large network models and several skeletonized versions of the models. Loss of model detail decreases the accuracy of estimated aggregated adverse effects related to contamination events. It has the potential to have a large negative influence on the results of consequence assessments and the design of contamination warning systems. However, the adverse influence on analysis results can be minimized by restricting attention to high percentile effects (i.e., 95th percentile or higher).
Stitching Type Large Aperture Depolarizer for Gas Monitoring Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Liu, X.; Li, M.; An, N.; Zhang, T.; Cao, G.; Cheng, S.
2018-04-01
To increase the accuracy of radiation measurement for a gas monitoring imaging spectrometer, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred method in space instruments is to introduce a depolarizer into the optical system. It is a combination device of birefringent crystal wedges. Limited by the actual diameter of the crystal, the traditional depolarizer cannot be used in large aperture imaging spectrometers (greater than 100 mm). In this paper, a stitching type depolarizer is presented. The design theory and numerical calculation model for a dual Babinet depolarizer were built. To meet the required radiometric accuracy of the imaging spectrometer with a 250 mm × 46 mm aperture, a stitching type dual Babinet depolarizer was designed in detail. Based on the optimized structural parameters, the tolerances of wedge angle, refractive index, and central thickness were given. The analysis results show that the maximum residual polarization degree of output light from the depolarizer is less than 2%. The design requirements for polarization sensitivity are satisfied.
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to quickly and accurately parse large amounts of sequence data. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualifies it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
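A minimal Python sketch of the kind of streaming header/sequence validation such a library performs; this is illustrative only and is not FastaValidator's Java API, and the IUPAC alphabet regex is an assumption:

```python
import re

# permissive IUPAC nucleotide alphabet plus gap/stop symbols (assumed)
VALID = re.compile(r"^[ACGTUNRYSWKMBDHVacgtunryswkmbdhv*.\-]+$")

def validate_fasta(lines):
    """Yield (header, ok) per record; ok is False for empty or non-IUPAC sequences."""
    header, chunks = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith(">"):
            if header is not None:
                yield header, bool(chunks) and all(VALID.match(c) for c in chunks)
            header, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if header is not None:
        yield header, bool(chunks) and all(VALID.match(c) for c in chunks)

demo = [">seq1", "ACGTACGT", ">seq2", "ACGT!!"]
print(list(validate_fasta(demo)))   # [('seq1', True), ('seq2', False)]
```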
Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R
2017-02-01
Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for the refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m² Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number, and measurement of the dispersion. Ultimately we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 10⁸.
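For reference, the uncorrected Sagnac relation that the paper's dispersion, Goos-Hänchen, and refractive-index corrections modify (standard textbook form, not the modified expression derived in the paper):

$$\delta f = \frac{4A}{\lambda P}\,\hat{n}\cdot\vec{\Omega}$$

where $A$ is the area enclosed by the beam path, $P$ its perimeter, $\lambda$ the optical wavelength, $\hat{n}$ the unit normal of the ring plane, and $\vec{\Omega}$ the Earth rotation vector; the prefactor $4A/(\lambda P)$ is the geometric scale factor whose absolute calibration is at issue.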
Comparison of pulse sequences for R1-based electron paramagnetic resonance oxygen imaging.
Epel, Boris; Halpern, Howard J
2015-05-01
Electron paramagnetic resonance (EPR) spin-lattice relaxation (SLR) oxygen imaging has proven to be an indispensable tool for assessing oxygen partial pressure in live animals. EPR oxygen images show remarkable oxygen accuracy when combined with high precision and spatial resolution. Developing more effective means for obtaining SLR rates is of great practical, biological and medical importance. In this work we compared different pulse EPR imaging protocols and pulse sequences to establish advantages and areas of applicability for each method. Tests were performed using phantoms containing spin probes with oxygen concentrations relevant to in vivo oximetry. We have found that for small animal size objects the inversion recovery sequence combined with the filtered backprojection reconstruction method delivers the best accuracy and precision. For large animals, in which large radio frequency energy deposition might be critical, free induction decay and three-pulse stimulated echo sequences might find better practical usage. Copyright © 2015 Elsevier Inc. All rights reserved.
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
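A compact sketch of the variable-importance (VI) step with a random forest, using synthetic stand-ins for the backscatter-derived predictors; the AVI, KIAVI, Boruta and RRF comparisons in the study are not reproduced here, and all data below are made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))                    # 8 candidate predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # only two are informative

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# out-of-bag accuracy approximates held-out predictive accuracy
print("OOB accuracy:", round(rf.oob_score_, 3))
# impurity-based importances flag predictors 0 and 1 as the useful ones
print("importances:", rf.feature_importances_.round(3))
```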
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with a high spatial resolution and wide coverage can set benchmarks for providing accurate geographical coordinates for the retrieval of land surface temperature. The practice of using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth can help acquire thermal-infrared images of large breadth with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14—China’s first satellite equipped with thermal-infrared cameras of high spatial resolution—China’s Anyang Imaging and Taiyuan Imaging are used to conduct an experiment of geometric calibration and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
NASA Astrophysics Data System (ADS)
Vincenti, Henri; Vay, Jean-Luc
2018-07-01
The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent the modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
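A one-dimensional sketch of the accuracy gap driving this hybrid approach: spectral (FFT-based) differentiation is exact to machine precision for resolved modes, while a finite-difference stencil carries a fixed truncation order. Grid size and test function below are arbitrary choices for illustration:

```python
import numpy as np

n, L = 128, 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(3 * x)

# spectral derivative: multiply Fourier coefficients by i*k
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
df_spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# second-order finite-difference derivative for comparison
df_fd = np.gradient(f, x)

exact = 3 * np.cos(3 * x)
print("spectral max error:", np.abs(df_spec - exact).max())   # ~1e-13
print("FD max error:      ", np.abs(df_fd - exact).max())     # ~1e-2
```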
Detection of convulsive seizures using surface electromyography.
Beniczky, Sándor; Conradsen, Isa; Wolf, Peter
2018-06-01
Bilateral (generalized) tonic-clonic seizures (TCS) increase the risk of sudden unexpected death in epilepsy (SUDEP), especially when patients are unattended. In sleep, TCS often remain unnoticed, which can result in suboptimal treatment decisions. There is a need for automated detection of these major epileptic seizures, using wearable devices. Quantitative surface electromyography (EMG) changes are specific for TCS and characterized by a dynamic evolution of low- and high-frequency signal components. Algorithms targeting increase in high-frequency EMG signals constitute biomarkers of TCS; they can be used both for seizure detection and for differentiating TCS from convulsive nonepileptic seizures. Two large-scale, blinded, prospective studies demonstrated the accuracy of wearable EMG devices for detecting TCS with high sensitivity (76%-100%). The rate of false alarms (0.7-2.5/24 h) needs further improvement. This article summarizes the pathophysiology of muscle activation during convulsive seizures and reviews the published evidence on the accuracy of EMG-based seizure detection. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
NASA Astrophysics Data System (ADS)
DSouza, Adora M.; Abidin, Anas Z.; Leistritz, Lutz; Wismüller, Axel
2017-02-01
We investigate the applicability of large-scale Granger Causality (lsGC) for extracting a measure of multivariate information flow between pairs of regional brain activities from resting-state functional MRI (fMRI) and test the effectiveness of these measures for predicting a disease state. Such pairwise multivariate measures of interaction provide high-dimensional representations of connectivity profiles for each subject and are used in a machine learning task to distinguish between healthy controls and individuals presenting with symptoms of HIV Associated Neurocognitive Disorder (HAND). Cognitive impairment in several domains can occur as a result of HIV infection of the central nervous system. The current paradigm for assessing such impairment is through neuropsychological testing. With fMRI data analysis, we aim at non-invasively capturing differences in brain connectivity patterns between healthy subjects and subjects presenting with symptoms of HAND. To classify the extracted interaction patterns among brain regions, we use a prototype-based learning algorithm called Generalized Matrix Learning Vector Quantization (GMLVQ). Our approach to characterize connectivity using lsGC followed by GMLVQ for subsequent classification yields good prediction results with an accuracy of 87% and an area under the ROC curve (AUC) of up to 0.90. We obtain a statistically significant improvement (p<0.01) over a conventional Granger causality approach (accuracy = 0.76, AUC = 0.74). High accuracy and AUC values using our multivariate method to connectivity analysis suggests that our approach is able to better capture changes in interaction patterns between different brain regions when compared to conventional Granger causality analysis known from the literature.
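A bivariate sketch of the conventional Granger-causality test that lsGC generalizes (lsGC itself is multivariate and is not reproduced here); the synthetic coupling y driven by lagged x is an assumption for illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                   # y is driven by lagged x
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.normal(scale=0.5)

# tests whether the 2nd column (x) Granger-causes the 1st column (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p_lag1 = res[1][0]["ssr_ftest"][1]      # p-value of the F-test at lag 1
print("p(x -> y) at lag 1:", p_lag1)
```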
Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept
Elsaadany, Ahmed; Wen-jun, Yi
2014-01-01
Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion. PMID:25097873
A robust omnifont open-vocabulary Arabic OCR system using pseudo-2D-HMM
NASA Astrophysics Data System (ADS)
Rashwan, Abdullah M.; Rashwan, Mohsen A.; Abdel-Hameed, Ahmed; Abdou, Sherif; Khalil, A. H.
2012-01-01
Recognizing old documents is highly desirable since the demand for quickly searching millions of archived documents has recently increased. Using Hidden Markov Models (HMMs) has been proven to be a good solution to tackle the main problems of recognizing typewritten Arabic characters. Although these attempts achieved remarkable success for omnifont OCR under very favorable conditions, they did not achieve the same performance under practical conditions, i.e. noisy documents. In this paper we present an omnifont, large-vocabulary Arabic OCR system using a Pseudo Two-Dimensional Hidden Markov Model (P2DHMM), which is a generalization of the HMM. The P2DHMM offers a more efficient way to model Arabic characters; such a model offers both minimal dependency on font size/style (omnifont) and a high level of robustness against noise. The evaluation results of this system are very promising compared to a baseline HMM system and the best OCRs available in the market (Sakhr and NovoDynamics). The recognition accuracy of the P2DHMM classifier is measured against the classic HMM classifier; the average word accuracy rates for the P2DHMM and HMM classifiers are 79% and 66%, respectively. The overall system accuracy is measured against the Sakhr and NovoDynamics OCR systems; the average word accuracy rates for P2DHMM, NovoDynamics, and Sakhr are 74%, 71%, and 61%, respectively.
The structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayder, M. E.; Povinelli, Louis A.
1993-01-01
Large-eddy simulation of a supersonic jet is presented with emphasis on capturing the unsteady features of the flow pertinent to sound emission. A high-accuracy numerical scheme is used to solve the filtered, unsteady, compressible Navier-Stokes equations while modelling the subgrid-scale turbulence. For random inflow disturbance, the wave-like feature of the large-scale structure is demonstrated. The large-scale structure was then enhanced by imposing harmonic disturbances to the inflow. The limitation of using the full Navier-Stokes equation to calculate the far-field sound is discussed. Application of Lighthill's acoustic analogy is given with the objective of highlighting the difficulties that arise from the non-compactness of the source term.
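For context, Lighthill's acoustic analogy recasts the exact compressible-flow equations as a wave equation for the density perturbation driven by a quadrupole source (standard form):

$$\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j}, \qquad T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij}$$

where $\rho'$ and $p'$ are density and pressure perturbations, $c_0$ the ambient sound speed, $u_i$ the flow velocity, and $\tau_{ij}$ the viscous stress tensor. The non-compactness difficulty noted above arises because the source $T_{ij}$ is distributed over the extended jet rather than concentrated at a point.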
Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-06-10
In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring has been proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method could be used for fast and accurate focusing analysis of a large photon sieve.
Does filler database size influence identification accuracy?
Bergold, Amanda N; Heaton, Paul
2018-06-01
Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor, filler database size, as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target-present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provide support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and a brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about a model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources.
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly level (CASL progression problem number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D
2015-06-15
Purpose: AAPM radiation therapy committee task group No. 66 (TG-66) published a report which described a general approach to CT simulator QA. The report outlines the testing procedures and specifications for the evaluation of patient dose, radiation safety, electromechanical components, and image quality for a CT simulator. The purpose of this study is to thoroughly evaluate the performance of a second generation Toshiba Aquilion Large Bore CT simulator with 90 cm bore size (Toshiba, Nasu, JP) based on the TG-66 criteria. The testing procedures and results from this study provide baselines for a routine QA program. Methods: Different measurements and analyses were performed, including CTDIvol measurements, alignment and orientation of gantry lasers, orientation of the tabletop with respect to the imaging plane, table movement and indexing accuracy, scanogram location accuracy, high contrast spatial resolution, low contrast resolution, field uniformity, CT number accuracy, mA linearity and mA reproducibility, using a number of different phantoms and measuring devices, such as a CTDI phantom, ACR image quality phantom, TG-66 laser QA phantom, pencil ion chamber (Fluke Victoreen) and electrometer (RTI Solidose 400). Results: The CTDI measurements were within 20% of the console displayed values. The alignment and orientation for both gantry lasers and tabletop, as well as the table movement and indexing and scanogram location accuracy, were within 2 mm as specified in TG-66. The spatial resolution, low contrast resolution, field uniformity and CT number accuracy were all within ACR’s recommended limits. The mA linearity and reproducibility were both well below the TG-66 threshold. Conclusion: The 90 cm bore size second generation Toshiba Aquilion Large Bore CT simulator that comes with a 70 cm true FOV can consistently meet various clinical needs. The results demonstrated that this simulator complies with the TG-66 protocol in all aspects, including the electromechanical, radiation safety, and image quality components.
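For reference, the standard dose-index chain behind the CTDIvol values checked against the console (standard definitions, not specific to TG-66):

$$\mathrm{CTDI}_{100} = \frac{1}{nT}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z, \qquad \mathrm{CTDI}_w = \tfrac{1}{3}\,\mathrm{CTDI}_{100}^{\mathrm{center}} + \tfrac{2}{3}\,\mathrm{CTDI}_{100}^{\mathrm{periphery}}, \qquad \mathrm{CTDI}_{\mathrm{vol}} = \frac{\mathrm{CTDI}_w}{\mathrm{pitch}}$$

where $D(z)$ is the dose profile along the scan axis measured with the 100 mm pencil ion chamber, $n$ the number of detector channels and $T$ the nominal channel width.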
NASA Astrophysics Data System (ADS)
Rupasinghe, P. A.; Markle, C. E.; Marcaccio, J. V.; Chow-Fraser, P.
2017-12-01
Phragmites australis (European common reed) is a relatively recent invader of wetlands and beaches in Ontario. It can establish large homogeneous stands within wetlands and disperse widely throughout the landscape by wind and vehicular traffic. A first step in managing this invasive species includes accurate mapping and quantification of its distribution. This is challenging because Phragmites is distributed over a large spatial extent, which makes mapping more costly and time-consuming. Here, we used freely available multispectral satellite images taken monthly (cloud-free images as available) over the calendar year to determine the optimum phenological state of Phragmites that would allow it to be accurately identified using remote sensing data. We analyzed time-series Landsat-8 OLI and Sentinel-2 images for Big Creek Wildlife Area, ON using image classification (Support Vector Machines), the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). We used field sampling data and high resolution imagery collected using an Unmanned Aerial Vehicle (UAV; 8 cm spatial resolution) as training data and for the validation of the classified images. The accuracy for all land cover classes and for Phragmites alone was low at both the start and end of the calendar year, but reached overall accuracy >85% by mid to late summer. The highest classification accuracies for Landsat-8 OLI were associated with late July and early August imagery. We observed similar trends using the Sentinel-2 images, with higher overall accuracy for all land cover classes and for Phragmites alone from late July to late September. During this period, we found the greatest difference between Phragmites and Typha, commonly confused classes, with respect to near-infrared and shortwave infrared reflectance. Therefore, the unique spectral signature of Phragmites can be attributed to both the level of greenness and factors related to water content in the leaves during late summer. Landsat-8 OLI or Sentinel-2 images acquired in late summer can be used as a cost-effective approach to mapping Phragmites at a large spatial scale without sacrificing accuracy.
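Both indices are standard normalized band differences. The text does not state which NDWI convention was used, so both common variants are noted; given the emphasis on shortwave infrared and leaf water content, the NIR/SWIR form seems the more likely fit:

$$\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}, \qquad \mathrm{NDWI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{SWIR}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{SWIR}}}\ \text{(leaf water)} \quad \text{or} \quad \frac{\rho_{\mathrm{Green}} - \rho_{\mathrm{NIR}}}{\rho_{\mathrm{Green}} + \rho_{\mathrm{NIR}}}\ \text{(open water)}$$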
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using the multivariate imputation by chained equations (MICE) algorithm. We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
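A schematic sketch of the model comparison performed here, with synthetic features standing in for the clinical-plus-echocardiographic inputs and cross-validated AUC comparing a nonlinear random forest against logistic regression; all data, variable counts and the nonlinear signal are made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 12))                      # stand-in predictors
logit = 1.5 * np.tanh(X[:, 0]) * X[:, 1] - X[:, 2]   # nonlinear outcome signal
y = (logit + rng.normal(size=1000) > 0).astype(int)  # binary "survival" label

models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("random forest", RandomForestClassifier(n_estimators=300, random_state=0))]
for name, model in models:
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean 10-fold AUC = {auc:.3f}")
```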
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies of sea target detection in remote sensing images. At present, existing algorithms suffer from low accuracy, low universality and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, with island removal. Firstly, the coastline data are extracted and all land areas are labeled using the geographic information in the large-field remote sensing image. Secondly, three features (local entropy, local texture and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background near the coastline. Based on this multi-Gaussian sea background model, sea and land pixels near the coastline are classified more precisely. Finally, the coarse segmentation result and the fine segmentation result are fused to obtain an accurate sea-land segmentation. Subjective visual comparison and analysis of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability and strong anti-disturbance ability.
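A minimal sketch of the multi-Gaussian background step: a Gaussian mixture is fitted to 3D feature vectors of known sea pixels, and border pixels are then labeled by their likelihood under that model. The feature values, component count, and threshold below are illustrative assumptions, not the paper's settings:

```python
# Sketch: model the sea background near the coastline as a Gaussian
# mixture over 3D feature vectors (entropy, texture, gradient mean),
# then classify border pixels by log-likelihood. Data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
sea_features = rng.normal(0.0, 1.0, size=(5000, 3))     # known sea pixels
border_features = rng.normal(0.5, 1.5, size=(2000, 3))  # pixels to classify

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(sea_features)
log_lik = gmm.score_samples(border_features)

# Label a pixel "sea" if it is at least as likely as the 1st percentile
# of the training sea pixels; the cutoff choice is a placeholder.
threshold = np.percentile(gmm.score_samples(sea_features), 1)
is_sea = log_lik >= threshold
print(f"{is_sea.mean():.1%} of border pixels classified as sea")
```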
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size: combinatorial explosion in experimentation and model building with the number of variables, so that both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
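To make the problem of size concrete: the run count of even the simplest exhaustive experiment, a full two-level factorial design, grows exponentially with the number of design variables (a generic illustration of combinatorial explosion, not a figure taken from the paper):

$$N_{\text{runs}} = 2^{k}, \qquad k = 30 \;\Rightarrow\; N_{\text{runs}} = 2^{30} \approx 1.07 \times 10^{9}.$$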
NASA Astrophysics Data System (ADS)
Wu, Dongxu; Qiao, Zheng; Wang, Bo; Wang, Huiming; Li, Guo
2014-08-01
In this paper, a four-axis ultra-precision lathe for machining large-scale drum moulds with microstructured surfaces is presented. Firstly, because of the large dimension and weight of the drum workpiece, as well as the high requirement on machining accuracy, the design guidelines and component parts of this drum lathe are introduced in detail, including the control system, moving and driving components, position feedback system and so on. Additionally, the weight of the drum workpiece results in structural deformation of the lathe; therefore, this paper analyses the effect of structural deformation on machining accuracy by means of ANSYS. The position change is approximately 16.9 nm in the X-direction (the sensitive direction), which is negligible. Finally, in order to study the impact of bearing parameters on the load characteristics of the aerostatic journal bearing, FLUENT, a well-known computational fluid dynamics (CFD) package, is adopted, and a series of simulations are carried out. The results show that the aerostatic spindle has superior carrying capacity and stiffness; it is possible for this lathe to bear a drum workpiece weighing up to 1000 kg, since there are two aerostatic spindles, one each in the headstock and tailstock.
NASA Astrophysics Data System (ADS)
Schneider, C. A.; Aggett, G. R.; Hattendorf, M. J.
2007-12-01
Better information on evapotranspiration (ET) is essential to better understanding of consumptive use of water by crops. RTi is using NASA Earth-sun System research results and METRIC (Mapping ET at high Resolution with Internalized Calibration) to increase the repeatability and accuracy of consumptive use estimates. METRIC, an image-processing model for calculating ET as a residual of the surface energy balance, utilizes the thermal band on various satellite remote sensors. Calculating actual ET from satellites can avoid many of the assumptions driving other methods of calculating ET over a large area. Because it is physically based and does not rely on explicit knowledge of crop type in the field, a large potential source of error should be eliminated. This paper assesses sources of error in current operational estimates of ET for an area of the South Platte irrigated lands of Colorado, and benchmarks potential improvements in the accuracy of ET estimates gained using METRIC, as well as the processing efficiency of consumptive use demand for large irrigated lands. Examples highlighting how better water planning decisions and water management can be achieved via enhanced monitoring of the temporal and spatial relationships between water demand and water availability are provided.
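METRIC computes ET as the residual of the surface energy balance; in the standard formulation (a textbook relation underlying the model, not a detail specific to this study), the latent heat flux is what remains after subtracting soil and sensible heat fluxes from net radiation:

$$\lambda E = R_n - G - H,$$

where $R_n$ is net radiation, $G$ the soil heat flux, $H$ the sensible heat flux, and $\lambda E$ the latent heat flux, which is then converted to an equivalent depth of evapotranspired water.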
Inelastic Transitions in Slow Collisions of Anti-Hydrogen with Hydrogen Atoms
NASA Astrophysics Data System (ADS)
Harrison, Robert; Krstic, Predrag
2007-06-01
We calculate excited adiabatic states and nonadiabatic coupling matrix elements of a quasimolecular system containing hydrogen and anti-hydrogen atoms, for a range of internuclear distances from 0.2 to 20 Bohrs. High accuracy is achieved by exact diagonalization of the molecular Hamiltonian in a large Gaussian basis. Nonadiabatic dynamics was calculated by solving the MOCC equations. Positronium states are included in the treatment.
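Exact diagonalization in a non-orthogonal Gaussian basis amounts to solving a generalized eigenvalue problem with the Hamiltonian and overlap matrices. A minimal illustration with random placeholder matrices rather than actual molecular integrals:

```python
# Sketch: adiabatic states from exact diagonalization in a
# non-orthogonal basis, i.e., solve H C = S C E. H and S here are
# random symmetric placeholders, not real molecular integrals.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 50
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                 # symmetric "Hamiltonian"
B = rng.normal(size=(n, n))
S = B @ B.T + n * np.eye(n)       # symmetric positive-definite "overlap"

energies, coeffs = eigh(H, S)     # generalized eigenproblem H C = S C E
print(energies[:5])               # lowest adiabatic "states"
```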
Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.
2016-01-01
Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these results, the combination of naturalistic movie stimuli and classification analysis in fMRI experiments may prove to be a sensitive tool for the assessment of changes in natural cognitive processes under experimental manipulation. PMID:27065832
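The best-performing classifier described here, LDA regularized by PCA, corresponds to a simple two-stage pipeline. A sketch using synthetic stand-ins for fMRI volumes and stimulus labels (all sizes and the injected signal are illustrative):

```python
# Sketch: PCA-regularized LDA, evaluated by cross-validated accuracy.
# Synthetic stand-ins for fMRI volumes (rows) and stimulus labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_stimuli, reps, n_voxels = 100, 6, 2000
y = np.repeat(np.arange(n_stimuli), reps)   # stimulus label per volume
X = rng.normal(size=(y.size, n_voxels))
X[:, :50] += 0.8 * y[:, None]               # weak class-dependent signal

clf = make_pipeline(PCA(n_components=80), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=3)
print(f"mean accuracy: {acc.mean():.1%} (chance = {1 / n_stimuli:.1%})")
```

Projecting onto a modest number of principal components before LDA is what regularizes the discriminant when voxels far outnumber training volumes.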
Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes
NASA Astrophysics Data System (ADS)
Liao, H.; Meyer, F. J.
2016-12-01
Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we will first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic have been chosen through cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we will show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also compared the corrected phase data quantitatively to known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a set of system (range bandwidth, temporal and spatial baseline) and processing parameters (e.g., filtering strength and sub-band configuration). A case study in Greenland is attached below.
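Split-spectrum methods exploit the dispersive nature of the ionosphere: interferometric phases from a low and a high range sub-band are combined to isolate the ionospheric component. A sketch of the commonly published split-spectrum relation (the authors' exact multi-sub-band variant may differ; frequencies and phases below are placeholders):

```python
# Sketch: standard split-spectrum separation of the dispersive
# (ionospheric) phase at the carrier f0 from two sub-band
# interferogram phases. All values are placeholders.
import numpy as np

f0, fL, fH = 1.27e9, 1.25e9, 1.29e9   # carrier and sub-band centers (Hz), L-band example

def ionospheric_phase(phi_L, phi_H):
    """Dispersive phase at f0 from unwrapped low/high sub-band phases (rad)."""
    return (fL * fH) / (f0 * (fH**2 - fL**2)) * (phi_L * fH - phi_H * fL)

phi_L = np.array([1.00, 2.00])        # placeholder unwrapped phases
phi_H = np.array([0.98, 1.96])
print(ionospheric_phase(phi_L, phi_H))
```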
Analysis of Radar and Optical Space Borne Data for Large Scale Topographical Mapping
NASA Astrophysics Data System (ADS)
Tampubolon, W.; Reinhardt, W.
2015-03-01
Normally, in order to provide high resolution 3-dimensional (3D) geospatial data, large scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e., security clearance from the military/defense ministry. This often causes additional time delays besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course, geospatial data quality is an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to obtain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term "Rapid Mapping". In this paper we present first results of on-going research to integrate different data sources like space borne radar and optical platforms. Initially, the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e., SPOT-6, has been performed as a process continuous with the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil the geometrical accuracy. In addition, this research aims at providing a suitable processing algorithm for space borne data for large scale topographical mapping, as described in Section 3.2. Recently, radar space borne data have been used for medium scale topographical mapping, e.g., for the 1:50.000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data through different activities, e.g., the integration of different data sources (optical and radar) or the usage of GCPs in both the optical and the radar satellite data processing. Finally, these results will be used in the future as a reference for further geospatial data acquisitions to support topographical mapping at even larger scales, up to the 1:10.000 map scale.
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor based two pole low pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that makes it possible to considerably reduce the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performances of the proposed voltage source have been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
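The 40 dB figure is consistent with the 40 dB/decade roll-off of a two-pole low-pass filter. A quick check of the attenuation (the cutoff value is illustrative, not the component values of the actual prototype):

```python
# Sketch: attenuation of two cascaded identical real poles at 100 mHz.
# A two-pole filter rolls off at 40 dB/decade above cutoff, so a
# cutoff near 10 mHz gives ~40 dB at 100 mHz. Values illustrative.
import numpy as np

fc = 0.01   # cutoff of each pole, Hz (e.g., large R with supercapacitor C)
f = 0.1     # frequency of interest, Hz

h = 1.0 / (1.0 + (f / fc) ** 2)       # combined magnitude of the two poles
atten_db = -20 * np.log10(h)
print(f"attenuation at {f} Hz: {atten_db:.1f} dB")   # ~40 dB
```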
HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.
Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo
2016-03-01
Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Stenz, Ulrich; Neumann, Ingo
2017-01-01
Terrestrial laser scanning (TLS) is an efficient solution to collect large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogeneous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color, reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. Main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy, and the provision of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context, superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than that of the data of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the target accuracy lies in the range of a few mm or less. PMID:28812998
Application of AIS Technology to Forest Mapping
NASA Technical Reports Server (NTRS)
Yool, S. R.; Star, J. L.
1985-01-01
Concerns about environmental effects of large scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques, and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, based on their relatively high spectral resolution.
Gritsenko, Marina A; Xu, Zhe; Liu, Tao; Smith, Richard D
2016-01-01
Comprehensive, quantitative information on abundances of proteins and their posttranslational modifications (PTMs) can potentially provide novel biological insights into diseases pathogenesis and therapeutic intervention. Herein, we introduce a quantitative strategy utilizing isobaric stable isotope-labeling techniques combined with two-dimensional liquid chromatography-tandem mass spectrometry (2D-LC-MS/MS) for large-scale, deep quantitative proteome profiling of biological samples or clinical specimens such as tumor tissues. The workflow includes isobaric labeling of tryptic peptides for multiplexed and accurate quantitative analysis, basic reversed-phase LC fractionation and concatenation for reduced sample complexity, and nano-LC coupled to high resolution and high mass accuracy MS analysis for high confidence identification and quantification of proteins. This proteomic analysis strategy has been successfully applied for in-depth quantitative proteomic analysis of tumor samples and can also be used for integrated proteome and PTM characterization, as well as comprehensive quantitative proteomic analysis across samples from large clinical cohorts.
Structuring intuition with theory: The high-throughput way
NASA Astrophysics Data System (ADS)
Fornari, Marco
2015-03-01
First principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple framework to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies in guiding the discovery process by quantitatively structuring the scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.
Laser microprocessing and nanoengineering of large-area functional micro/nanostructures
NASA Astrophysics Data System (ADS)
Tang, M.; Xie, X. Z.; Yang, J.; Chen, Z. C.; Xu, L.; Choo, Y. S.; Hong, M. H.
2011-12-01
Laser microprocessing and nanoengineering are of great interest to both scientists and engineers, since the inspired properties of functional micro/nanostructures over large areas can lead to numerous unique applications. Currently, laser processing systems combined with high speed automation enable the focused laser beam to process various materials at high throughput and high accuracy over large working areas. UV lasers are widely used in both laser microprocessing and nanoengineering. However, with improved processing methods, a green pulsed laser is capable of replacing UV lasers to make high aspect ratio micro-grooves on fragile and transparent sapphire substrates. Laser micro-texturing can also tune the wetting property of metal surfaces from hydrophilic to super-hydrophobic, with a contact angle of 161°, without chemical coating. A laser microlens array (MLA) can split a laser beam into multiple laser beams and reduce the laser spot size down to the sub-micron scale. It can be applied to fabricate split ring resonator (SRR) meta-materials for THz sensing, surface plasmonic resonance (SPR) structures for NIR, and molding tools for soft lithography. Furthermore, laser interference lithography combined with thermal annealing can produce a large area of sub-50 nm nano-dot clusters for SPR applications.
Monitoring Building Deformation with InSAR: Experiments and Validation
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-01-01
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403
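The validation procedure described (regressing InSAR-derived deformation against leveling, then computing RMSE) reduces to a few lines; the numbers below are placeholders, not the paper's measurements:

```python
# Sketch: compare InSAR deformation estimates against leveling truth
# with OLS regression and RMSE. Values are illustrative placeholders.
import numpy as np

leveling = np.array([-3.1, -2.0, -1.2, 0.4, 1.8, 2.9])   # mm, "truth"
insar = np.array([-2.8, -2.2, -1.0, 0.6, 1.5, 3.1])      # mm, estimates

slope, intercept = np.polyfit(leveling, insar, 1)        # OLS fit
r = np.corrcoef(leveling, insar)[0, 1]                   # correlation
rmse = np.sqrt(np.mean((insar - leveling) ** 2))
print(f"slope={slope:.2f}, r={r:.2f}, RMSE={rmse:.2f} mm")
```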
Epstein, Jeffery N.; Langberg, Joshua M.; Rosen, Paul J.; Graham, Amanda; Narad, Megan E.; Antonini, Tanya N.; Brinkman, William B.; Froehlich, Tanya; Simon, John O.; Altaye, Mekibib
2012-01-01
Objective: The purpose of the research study was to examine the manifestation of variability in reaction times (RT) in children with Attention Deficit Hyperactivity Disorder (ADHD) and to examine whether RT variability presented differently across a variety of neuropsychological tasks, was present across the two most common ADHD subtypes, and whether it was affected by reward and event rate (ER) manipulations. Method: Children with ADHD-Combined Type (n=51), ADHD-Predominantly Inattentive Type (n=53) and 47 controls completed five neuropsychological tasks (Choice Discrimination Task, Child Attentional Network Task, Go/No-Go task, Stop Signal Task, and N-back task), each allowing trial-by-trial assessment of reaction times. Multiple indicators of RT variability, including RT standard deviation, coefficient of variation and ex-Gaussian tau, were used. Results: Children with ADHD demonstrated greater RT variability than controls across all five tasks as measured by the ex-Gaussian indicator tau. There were minimal differences in RT variability across the ADHD subtypes. Children with ADHD also had poorer task accuracy than controls across all tasks except the Choice Discrimination task. Although ER and reward manipulations did affect children's RT variability and task accuracy, these manipulations largely did not differentially affect children with ADHD compared to controls. RT variability and task accuracy were highly correlated across tasks. Removing variance attributable to RT variability from task accuracy did not appreciably affect between-group differences in task accuracy. Conclusions: High RT variability is a ubiquitous and robust phenomenon in children with ADHD. PMID:21463041
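The ex-Gaussian tau indicator used here is the exponential component of an exponentially modified Gaussian fitted to a reaction-time distribution. A sketch using scipy's exponnorm, whose shape parameter K relates to tau as tau = K x scale (the RT data below are synthetic, not from the study):

```python
# Sketch: estimate ex-Gaussian tau from reaction times. scipy's
# exponnorm is the ex-Gaussian with shape K = tau / sigma, so
# tau = K * scale. RTs here are synthetic placeholders (seconds).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rts = rng.normal(0.45, 0.05, size=500) + rng.exponential(0.12, size=500)

K, loc, scale = stats.exponnorm.fit(rts)
tau = K * scale
print(f"mu={loc:.3f} s, sigma={scale:.3f} s, tau={tau:.3f} s")  # tau ~ 0.12
```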
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates image accuracy mostly on the basis of a set of testing points with the same accuracy and reliability. However, such a set of testing points is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, so it is difficult to test and evaluate the horizontal accuracy of the orthophoto image there. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with differing accuracy and reliability, sourced from high-accuracy reference data and field measurement. The new method solves horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
Using Remote-sensing to Survey Topography and Morphologic Change on Large Braided River Beds
NASA Astrophysics Data System (ADS)
Maurice, D.; Hicks, M.; Shankar, U.
2007-12-01
Since 1999 we have made extensive use of a variety of remote-sensing technologies to survey bed topography over reaches of large braided gravel-bed rivers on the east coast of New Zealand's South Island. The motivations have been (i) to collect input and validation data for 2-d hydrodynamic models for quantifying in-stream physical habitat and for predicting flood levels and (ii) to survey spatially-distributed riverbed erosion and deposition in order to estimate bedload fluxes by the 'morphological' method. Typical applications have been to river reaches 3-4 km long and 1 km wide, with grid cells from 1-5 m. We use different techniques to survey dry and wet areas of braided riverbed. For dry areas, we have used digital photogrammetry and infra-red airborne LiDAR. For wetted channels, we have generally used ortho-rectified colour imagery or multi-spectral scanning to map water depth, then we map bed topography by subtracting the water depth from a DEM of the water surface obtained from photogrammetry or LiDAR. The imagery is calibrated to water depth using field measurements on the day of imagery acquisition. Surveys are undertaken during low flows to maximise bed exposure. We use ground-based RTK-GPS and echo-sounding to collect calibration and validation data, and sometimes simply use these methods to survey the wetted areas. Orthoimagery at multiple river flows is used to validate 2-d model results. We have been able to achieve elevation accuracies at interpolated points of the order of 10-15 cm for dry areas. This accuracy typically degrades to 20-30 cm for wetted areas. Our experience has exposed a number of issues relating to survey accuracy and practicality at large river scales. These include: changing geoidal models between surveys; local systematic error with photogrammetric model mosaics; geospatial synchronisation of multi-platform data; time-synchronisation of LiDAR- and imagery-collecting aeroplanes with suitable weather and river conditions; confusion in water depth mapping; and the critical importance of good data at key hydraulic controls for eco-hydrologic applications. We suggest that high resolution bathymetric LiDAR offers the best potential for future surveys of large river reaches. While current bathymetric LiDAR systems do not appear to deliver significantly better accuracy for submerged bed elevations than we have achieved with mixed-technology approaches for dry and wet areas, and their cost remains high, a one-stop package is hard to beat in terms of practicality and data synchronisation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
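Emulation in this sense means interpolating simulation outputs across parameter space from a limited design of runs; Gaussian process regression is a common choice for this. A generic sketch (not the paper's actual emulator design; the toy function stands in for an expensive simulation output):

```python
# Sketch: emulate a scalar observable over an 8-dimensional parameter
# space from a small design of "simulation" runs using Gaussian
# process regression. The toy response stands in for real outputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
design = rng.uniform(size=(26, 8))                  # 26 models, 8 parameters
response = np.sin(design @ np.arange(1, 9) / 8.0)   # toy "simulation" output

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(design, response)
test = rng.uniform(size=(3, 8))
mean, std = emulator.predict(test, return_std=True)
print(mean, std)   # prediction with uncertainty at new parameter points
```

The predictive standard deviation indicates where adding new simulation runs would most improve emulator fidelity, echoing the prescribed-refinement idea above.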
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Mogensen, Kris M; Andrew, Benjamin Y; Corona, Jasmine C; Robinson, Malcolm K
2016-07-01
The Society of Critical Care Medicine (SCCM) and American Society for Parenteral and Enteral Nutrition (ASPEN) recommend that obese, critically ill patients receive 11-14 kcal/kg/d using actual body weight (ABW) or 22-25 kcal/kg/d using ideal body weight (IBW), because feeding these patients 50%-70% maintenance needs while administering high protein may improve outcomes. It is unknown whether these equations achieve this target when validated against indirect calorimetry, perform equally across all degrees of obesity, or compare well with other equations. Measured resting energy expenditure (MREE) was determined in obese (body mass index [BMI] ≥30 kg/m(2)), critically ill patients. Resting energy expenditure was predicted (PREE) using several equations: 12.5 kcal/kg ABW (ASPEN-Actual BW), 23.5 kcal/kg IBW (ASPEN-Ideal BW), Harris-Benedict (adjusted-weight and 1.5 stress-factor), and Ireton-Jones for obesity. Correlation of PREE to 65% MREE, predictive accuracy, precision, bias, and large error incidence were calculated. All equations were significantly correlated with 65% MREE but had poor predictive accuracy, had excessive large error incidence, were imprecise, and were biased in the entire cohort (N = 31). In the obesity cohort (n = 20, BMI 30-50 kg/m(2)), ASPEN-Actual BW had acceptable predictive accuracy and large error incidence, was unbiased, and was nearly precise. In super obesity (n = 11, BMI >50 kg/m(2)), ASPEN-Ideal BW had acceptable predictive accuracy and large error incidence and was precise and unbiased. SCCM/ASPEN-recommended body weight equations are reasonable predictors of 65% MREE depending on the equation and degree of obesity. Assuming that feeding 65% MREE is appropriate, this study suggests that patients with a BMI 30-50 kg/m(2) should receive 11-14 kcal/kg/d using ABW and those with a BMI >50 kg/m(2) should receive 22-25 kcal/kg/d using IBW. © 2015 American Society for Parenteral and Enteral Nutrition.
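The BMI-dependent recommendation that emerges can be expressed as a simple rule; the patient values in this sketch are illustrative, not data from the study:

```python
# Sketch: the two SCCM/ASPEN weight-based feeding targets, applied per
# the BMI cutoffs suggested above. Patient values are illustrative.
def kcal_target(bmi, actual_wt_kg, ideal_wt_kg):
    """Daily kcal range per the BMI-dependent recommendation."""
    if 30 <= bmi <= 50:
        return 11 * actual_wt_kg, 14 * actual_wt_kg   # ABW-based
    elif bmi > 50:
        return 22 * ideal_wt_kg, 25 * ideal_wt_kg     # IBW-based
    raise ValueError("equations studied here apply to BMI >= 30")

print(kcal_target(bmi=38, actual_wt_kg=110, ideal_wt_kg=70))  # (1210, 1540)
print(kcal_target(bmi=55, actual_wt_kg=160, ideal_wt_kg=72))  # (1584, 1800)
```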
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
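Population sparseness in this sense can be summarized as the fraction of recorded neurons responding strongly to each stimulus. A minimal computation on a synthetic response matrix (the data and the "strong response" cutoff are placeholders, not the study's criteria):

```python
# Sketch: population sparseness as the fraction of neurons responding
# strongly per image. Response matrix and threshold are synthetic.
import numpy as np

rng = np.random.default_rng(6)
responses = rng.exponential(1.0, size=(10000, 200))   # neurons x images
threshold = np.percentile(responses, 99.5)            # "strong" cutoff

frac_strong = (responses > threshold).mean(axis=0)    # per image
print(f"mean fraction responding strongly: {frac_strong.mean():.3%}")  # ~0.5%
```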
Daytime sky polarization calibration limitations
NASA Astrophysics Data System (ADS)
Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López
2017-01-01
The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.
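The technique relies on observing regions of high sky polarization, whose location a single-scattering Rayleigh model predicts from the scattering angle $\gamma$ between the Sun and the observed pointing (the standard textbook relation, quoted for orientation; the paper's second-order scattering models refine it):

$$P(\gamma) = P_{\max}\,\frac{\sin^2\gamma}{1+\cos^2\gamma},$$

with the maximum degree of polarization occurring 90° from the Sun, which is where calibration observations are most robust.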
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for the large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is proven that great improvements in the depth of focus and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057
Thermal phase transition with full 2-loop effective potential
NASA Astrophysics Data System (ADS)
Laine, M.; Meyer, M.; Nardini, G.
2017-07-01
Theories with extended Higgs sectors constructed in view of cosmological ramifications (gravitational wave signal, baryogenesis, dark matter) are often faced with conflicting requirements for their couplings; in particular those influencing the strength of a phase transition may be large. Large couplings compromise perturbative studies, as well as the high-temperature expansion that is invoked in dimensionally reduced lattice investigations. With the example of the inert doublet extension of the Standard Model (IDM), we show how a resummed 2-loop effective potential can be computed without a high-T expansion, and use the result to scrutinize its accuracy. With the exception of Tc, which is sensitive to contributions from heavy modes, the high-T expansion is found to perform well. 2-loop corrections weaken the transition in IDM, but they are moderate, whereby a strong transition remains an option.
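For orientation, the high-temperature expansion being scrutinized applies to the standard one-loop bosonic thermal function (a textbook expansion, not a formula taken from the paper):

$$J_B(y^2)=\int_0^\infty dx\,x^2\ln\!\left(1-e^{-\sqrt{x^2+y^2}}\right)\;\simeq\;-\frac{\pi^4}{45}+\frac{\pi^2}{12}\,y^2-\frac{\pi}{6}\,y^3-\frac{y^4}{32}\ln\frac{y^2}{a_b}+\mathcal{O}(y^6),\qquad y=\frac{m}{T},$$

with $a_b = 16\pi^2\exp(3/2-2\gamma_E)$. The expansion is reliable only for $y \lesssim 1$, which is why large couplings (large thermal masses $m/T$) threaten its validity and motivate the full 2-loop evaluation without a high-T expansion.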
Rapid insights from remote sensing in the geosciences
NASA Astrophysics Data System (ADS)
Plaza, Antonio
2015-03-01
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
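At the heart of TSPRT is Wald's sequential probability ratio test: after each sampled packet, a log-likelihood ratio is updated and compared against two thresholds derived from the target error rates. A generic SPRT sketch for deciding between two Bernoulli rates (the truncation rule and the parameter choices of TSPRT-M/TSPRT-T are not reproduced here):

```python
# Sketch: Wald's SPRT deciding whether a flow's share of the sampled
# packet stream is high (p1) or low (p0), with error targets alpha
# (false positive) and beta (false negative). TSPRT adds truncation.
import math, random

p0, p1 = 0.01, 0.05
alpha, beta = 0.01, 0.01
upper = math.log((1 - beta) / alpha)
lower = math.log(beta / (1 - alpha))

def sprt(samples):
    llr = 0.0
    for i, x in enumerate(samples, 1):
        # Bernoulli log-likelihood ratio update for one observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "high-rate", i
        if llr <= lower:
            return "low-rate", i
    return "undecided", len(samples)

random.seed(0)
stream = [random.random() < 0.05 for _ in range(10000)]  # truly high-rate flow
print(sprt(stream))  # typically decides "high-rate" after few samples
```

Early decisions on clearly low-rate flows are what let a sequential test discard them quickly, reducing both memory and identification time as described above.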
Short-arc orbit determination using coherent X-band ranging data
NASA Technical Reports Server (NTRS)
Thurman, S. W.; Mcelrath, T. P.; Pollmeier, V. M.
1992-01-01
The use of X-band frequencies in ground-spacecraft and spacecraft-ground telecommunication links for current and future robotic interplanetary missions makes it possible to perform ranging measurements of greater accuracy than previously obtained. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. The application of high-accuracy S/X-band and X-band ranging to orbit determination with relatively short data arcs is investigated in planetary approach and encounter scenarios. Actual trajectory solutions for the Ulysses spacecraft constructed from S/X-band ranging and Doppler data are presented; error covariance calculations are used to predict the performance of X-band ranging and Doppler data. The Ulysses trajectory solutions indicate that the aim point for the spacecraft's February 1992 Jupiter encounter was predicted to a geocentric accuracy of 0.20 to 0.23 microrad. Explicit modeling of range bias parameters for each station pass is shown to largely remove systematic ground system calibration errors and transmission media effects from the Ulysses range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The Ulysses solutions were found to be reasonably consistent with the theoretical results, which suggest that angular accuracies of 0.08 to 0.1 microrad are achievable with X-band ranging.
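The angular sensitivity of multi-station ranging can be estimated, to order of magnitude, by dividing the differential range accuracy by the inter-station baseline (an illustrative round-number estimate, not the paper's error budget):

$$\delta\theta \approx \frac{\delta\rho}{B} \approx \frac{1\ \text{m}}{10^{7}\ \text{m}} = 0.1\ \mu\text{rad},$$

consistent in scale with the 0.08 to 0.1 microrad figure quoted for X-band ranging over intercontinental station baselines.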
Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan
2012-01-01
It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations for when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging was found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differential one-way ranging (delta-DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
MTO-like reference mask modeling for advanced inverse lithography technology patterns
NASA Astrophysics Data System (ADS)
Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min
2017-07-01
Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.
Matrices pattern using FIB; 'Out-of-the-box' way of thinking.
Fleger, Y; Gotlib-Vainshtein, K; Talyosef, Y
2017-03-01
Focused ion beam (FIB) is an extremely valuable tool for nanopatterning and nanofabrication with potentially high resolution, especially in the case of He ion beam microscopy. The work presented here demonstrates an 'out-of-the-box' method of writing using FIB, which enables the creation of very large matrices, up to the beam-shift limitation, in short times and with an accuracy unachievable by any other writing technique. The new method allows different shapes to be combined at nanometric dimensions and high resolution over wide ranges.
Modeling the Lyα Forest in Collisionless Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Lukić, Zarija
2016-08-11
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
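At heart, the IMS procedure alternates between two statistical constraints: a rank-order remap imposes the target flux PDF, and a Fourier-amplitude rescaling imposes the target power spectrum. A minimal 1D sketch of that loop, assuming the pseudo-flux skewer and the reference hydrodynamic skewer have the same length; function names are ours, not the authors':

```python
import numpy as np

def match_pdf(field, target_sorted):
    """Rank-order remap: keep the ranks of `field`, impose the target values."""
    out = np.empty_like(field)
    out[np.argsort(field)] = target_sorted
    return out

def match_power(field, target_power):
    """Rescale Fourier amplitudes so the 1D power spectrum matches the target."""
    fk = np.fft.rfft(field)
    scale = np.sqrt(target_power / np.maximum(np.abs(fk) ** 2, 1e-30))
    return np.fft.irfft(fk * scale, n=field.size)

def ims(pseudo_flux, target_flux, n_iter=10):
    """Alternately impose the target power spectrum and PDF (same-length skewers)."""
    target_sorted = np.sort(target_flux)
    target_power = np.abs(np.fft.rfft(target_flux)) ** 2
    f = pseudo_flux.astype(float)
    for _ in range(n_iter):
        f = match_power(f, target_power)
        f = match_pdf(f, target_sorted)   # applied last, so the PDF ends up exact
    return f
```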
Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao
2015-12-21
Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.
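Decision forest classification, as used here, combines heterogeneous decision trees trained on disjoint descriptor subsets and pools their predictions. A hedged stand-in using scikit-learn trees; the partitioning rule and consensus details are simplified relative to the published model:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def decision_forest_predict(X_train, y_train, X_test, n_trees=5, seed=0):
    """Sketch of a decision forest: trees on disjoint feature subsets, pooled votes.

    Illustrative only; the published ERa-binding model differs in how feature
    subsets are chosen and how tree outputs are combined.
    """
    rng = np.random.default_rng(seed)
    cols = np.array_split(rng.permutation(X_train.shape[1]), n_trees)
    probs = []
    for c in cols:
        tree = DecisionTreeClassifier(random_state=seed).fit(X_train[:, c], y_train)
        probs.append(tree.predict_proba(X_test[:, c]))
    return np.mean(probs, axis=0).argmax(axis=1)   # consensus binding call
```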
Long-term GPS tracking of ocean sunfish Mola mola offers a new direction in fish monitoring.
Sims, David W; Queiroz, Nuno; Humphries, Nicolas E; Lima, Fernando P; Hays, Graeme C
2009-10-09
Satellite tracking of large pelagic fish provides insights on free-ranging behaviour, distributions and population structuring. Up to now, such fish have been tracked remotely using two principal methods: direct positioning of transmitters by Argos polar-orbiting satellites, and satellite relay of tag-derived light-level data for post hoc track reconstruction. Error fields associated with positions determined by these methods range from hundreds of metres to hundreds of kilometres. However, low spatial accuracy of tracks masks important details, such as foraging patterns. Here we use a fast-acquisition global positioning system (Fastloc GPS) tag with remote data retrieval to track long-term movements, in near real time and with position accuracy of <70 m, of the world's largest bony fish, the ocean sunfish Mola mola. Search-like movements occurred over at least three distinct spatial scales. At fine scales, sunfish spent longer in highly localised areas with faster, straighter excursions between them. These 'stopovers' during long-distance movement appear consistent with finding and exploiting food patches. This demonstrates the feasibility of GPS tagging to provide tracks of unparalleled accuracy for monitoring movements of large pelagic fish, with nearly four times as many locations obtained by the GPS tag as by a conventional Argos transmitter. The results signal the potential of GPS-tagged pelagic fish that surface regularly to be detectors of resource 'hotspots' in the blue ocean and provide a new capability for understanding large pelagic fish behaviour and habitat use that is relevant to ocean management and species conservation.
Accuracy comparison in mapping water bodies using Landsat images and Google Earth Images
NASA Astrophysics Data System (ADS)
Zhou, Z.; Zhou, X.
2016-12-01
Much research has been done on the extraction of water bodies from satellite images. Water indices computed from multi-spectral images are the most widely used methods for water-body extraction. When extracting the area of water bodies from satellite images, accuracy may depend on the spatial resolution of the images and the relative size of the water bodies. To quantify the impact of spatial resolution and size (major and minor lengths) of the water bodies on the accuracy of water area extraction, we use Georgetown Lake, Montana and coalbed methane (CBM) water retention ponds in the Montana Powder River Basin as test sites. Data sources used include Landsat images and Google Earth images covering both large water bodies and small ponds. First, we used water indices to extract water coverage from Landsat images for both the large lake and the small ponds. Second, we used a newly developed visible-index method to extract water coverage from Google Earth images covering both the large lake and the small ponds. Third, we used an image fusion method in which the Google Earth images are fused with multi-spectral Landsat images to obtain multi-spectral images at the same high spatial resolution as the Google Earth images. The actual areas of the lake and ponds were measured using GPS surveys. The results will be compared and the optimal method will be selected for water body extraction.
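One commonly used water index of the kind referred to above is McFeeters' NDWI, computed from the green and near-infrared bands. A short sketch of index thresholding and area estimation; the threshold and the 30 m Landsat pixel size are illustrative:

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters NDWI = (green - NIR) / (green + NIR); positive values flag water."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + eps)

def water_area_m2(green, nir, pixel_size_m=30.0, threshold=0.0):
    """Label water pixels by a scene-dependent threshold (often ~0.0-0.3)
    and convert the pixel count to an area."""
    mask = ndwi(green, nir) > threshold
    return mask.sum() * pixel_size_m ** 2
```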
Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets.
Cole, J B; Newman, S; Foertter, F; Aguilar, I; Coffey, M
2012-03-01
Modern animal breeding data sets are large and getting larger, due in part to recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have similar computational requirements to traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to about 15 min. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.
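The bottleneck named above is the construction and inversion of the genomic relationship matrix G. A toy NumPy version of the standard VanRaden construction shows why the naive cost grows so quickly with the number of genotyped animals; sizes are illustrative:

```python
import numpy as np

def vanraden_G(M):
    """Genomic relationship matrix from an (animals x SNPs) 0/1/2 genotype matrix.

    VanRaden's first method: center columns by twice the allele frequency,
    then divide by 2*sum(p*(1-p)). A sketch; real evaluations use custom
    parallel libraries for the 'many genotypes' case the abstract mentions.
    """
    p = M.mean(axis=0) / 2.0              # allele frequencies per SNP
    Z = M - 2.0 * p                       # centered genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

M = np.random.default_rng(1).integers(0, 3, size=(500, 2000))
G = vanraden_G(M)   # 500 x 500; the naive cost scales as n_animals^2 * n_snps
```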
Heinrich, Mattias P; Blendowski, Max; Oktay, Ozan
2018-05-30
Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also has great promise for improving accuracies in large-scale medical data retrieval.
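A sketch of the core ternarization step, in NumPy for clarity: weights and activations are mapped to {-1, 0, +1}, after which dot products reduce to integer accumulations. The threshold value and the straight-through training trick noted in the comments are generic practice, not necessarily the paper's exact choices:

```python
import numpy as np

def ternarize(w, t=0.05):
    """Map values to {-1, 0, +1} with a dead zone of width 2*t.

    Inference-only sketch: with ternary weights and activations, a dot
    product reduces to sign agreements and population counts, which is
    what removes floating-point multiplications. For training, a common
    trick is the straight-through estimator: quantize on the forward
    pass, pass gradients through unchanged on the backward pass.
    """
    return np.where(w > t, 1, np.where(w < -t, -1, 0)).astype(np.int8)

w = np.random.default_rng(0).normal(scale=0.1, size=(64, 64))
x = np.random.default_rng(1).normal(size=64)
y = ternarize(w) @ ternarize(x).astype(np.int32)   # integer-only accumulate
```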
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy
2015-05-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.
Evaluation of transition year Canadian test sites. [Saskatchewan Province
NASA Technical Reports Server (NTRS)
Payne, R. W. (Principal Investigator)
1980-01-01
The author has identified the following significant results. The spring small grain proportion accuracy in 15 Saskatchewan test sites was found to be comparable to that of the Large Area Crop Inventory Experiment Phase 3 and Transition Year results in the U.S. spring wheat states. Spring small grain labeling accuracy was 94%, and the direct wheat labeling accuracy was 89%, despite the low barley separation accuracy of 30%.
Accuracy assessment in the Large Area Crop Inventory Experiment
NASA Technical Reports Server (NTRS)
Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.
1979-01-01
The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able not only to meet the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.
NASA Astrophysics Data System (ADS)
Coleman, Michael J.
One class of deployable large aperture antenna consists of thin light-weight parabolic reflectors. A reflector of this type is a deployable structure that consists of an inflatable elastic membrane that is supported about its perimeter by a set of elastic tendons and is subjected to a constant hydrostatic pressure. A design may not hold the parabolic shape to within a desired tolerance due to an elastic deformation of the surface, particularly near the rim. We can compute the equilibrium configuration of the reflector system using an optimization-based solution procedure that calculates the total system energy and determines a configuration of minimum energy. Analysis of the equilibrium configuration reveals the behavior of the reflector shape under various loading conditions. The pressure, film strain energy, tendon strain energy, and gravitational energy are all considered in this analysis. The surface accuracy of the antenna reflector is measured by an RMS calculation while the reflector phase error component of the efficiency is determined by computing the power density at boresight. Our error computation methods are tailored for the faceted surface of our model and they are more accurate for this particular problem than the commonly applied Ruze Equation. Previous analytical work on parabolic antennas focused on axisymmetric geometries and loads. Symmetric equilibria are not assumed in our analysis. In addition, this dissertation contains two principal original findings: (1) the typical supporting tendon system tends to flatten a parabolic reflector near its edge. We find that surface accuracy can be significantly improved by fixing the edge of the inflated reflector to a rigid structure; (2) for large membranes assembled from flat sheets of thin material, we demonstrate that the surface accuracy of the resulting inflated membrane reflector can be improved by altering the cutting pattern of the flat components. Our findings demonstrate that the proper choice of design parameters can increase the performance of inflatable antennas, opening up new antenna applications where higher resolution and greater sensitivity are desired. These include space applications involving high data rates and high bandwidths, such as lunar surface wireless local networks and orbiting relay satellites. A light-weight inflatable antenna is also an ideal component in aerostat, airship and free balloon systems that supports communication, surveillance and remote sensing applications.
Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J
2014-01-01
Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees representative of 214 open-pollinated families from 43 natural populations was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets of the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and the validation data sets shared half-sibs, which was on average 90% of the accuracy achieved through traditionally estimated breeding values. The trend was also the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower, about half of the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808
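Genomic selection models of this kind are often fit as a ridge regression on SNP genotypes (RR-BLUP). A minimal sketch under that assumption, with hypothetical variable names; the paper's actual GS models and cross-validation schemes are more involved:

```python
import numpy as np

def gebv_ridge(Z_train, y_train, Z_new, lam=1.0):
    """Ridge-regression (RR-BLUP-style) genomic prediction sketch.

    Z is a centered (individuals x SNPs) genotype matrix and y the
    phenotypes or estimated breeding values; lam plays the role of the
    shrinkage ratio. Illustrative only.
    """
    m = Z_train.shape[1]
    # solve (Z'Z + lam*I) b = Z'y for the SNP effects b
    b = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(m), Z_train.T @ y_train)
    return Z_new @ b    # genomic estimated breeding values for new trees
```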
The selection of the optimal baseline in the front-view monocular vision system
NASA Astrophysics Data System (ADS)
Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. However, at the same time, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth-field solution to fail. A common practice is to use tracking-and-matching methods to improve the matching accuracy between images, but such algorithms are prone to matching drift between images with large intervals, resulting in cumulative matching error, so the accuracy of the solved depth field remains low. In this paper, we propose a depth-field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth-field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments; second, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from these experiments provides guidance for depth-field computation in front-view monocular vision systems.
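The baseline trade-off can be seen in the standard two-view triangulation error model: depth Z = f·B/d for focal length f (in pixels), baseline B, and disparity d, so a matching error σ_d propagates as σ_Z = Z²·σ_d/(f·B). A toy illustration with a made-up model of how matching degrades as the baseline grows; the paper's optimal baseline was found empirically:

```python
import numpy as np

# Longer baselines lower the triangulation error, but only while matching
# still succeeds; here the matching noise sigma_d is (artificially) made
# to grow with baseline to mimic the larger inter-frame differences.
f = 1000.0            # focal length in pixels (illustrative)
Z = 50.0              # true depth in meters (illustrative)
for B in (0.05, 0.2, 0.8):
    sigma_d = 0.25 + 2.0 * B                 # toy degradation model
    sigma_Z = Z**2 / (f * B) * sigma_d       # first-order error propagation
    print(f"baseline {B:4.2f} m -> depth sigma {sigma_Z:6.2f} m")
```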
ERIC Educational Resources Information Center
Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria
2014-01-01
The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…
Predicting risky choices from brain activity patterns
Helfinstein, Sarah M.; Schonberg, Tom; Congdon, Eliza; Karlsgodt, Katherine H.; Mumford, Jeanette A.; Sabb, Fred W.; Cannon, Tyrone D.; London, Edythe D.; Bilder, Robert M.; Poldrack, Russell A.
2014-01-01
Previous research has implicated a large network of brain regions in the processing of risk during decision making. However, it has not yet been determined if activity in these regions is predictive of choices on future risky decisions. Here, we examined functional MRI data from a large sample of healthy subjects performing a naturalistic risk-taking task and used a classification analysis approach to predict whether individuals would choose risky or safe options on upcoming trials. We were able to predict choice category successfully in 71.8% of cases. Searchlight analysis revealed a network of brain regions where activity patterns were reliably predictive of subsequent risk-taking behavior, including a number of regions known to play a role in control processes. Searchlights with significant predictive accuracy were primarily located in regions more active when preparing to avoid a risk than when preparing to engage in one, suggesting that risk taking may be due, in part, to a failure of the control systems necessary to initiate a safe choice. Additional analyses revealed that subject choice can be successfully predicted with minimal decrements in accuracy using highly condensed data, suggesting that information relevant for risky choice behavior is encoded in coarse global patterns of activation as well as within highly local activation within searchlights. PMID:24550270
NASA Astrophysics Data System (ADS)
McClain, Bobbi J.; Porter, William F.
2000-11-01
Satellite imagery is a useful tool for large-scale habitat analysis; however, its limitations need to be tested. We tested these limitations by varying the methods of a habitat evaluation for white-tailed deer (Odocoileus virginianus) in the Adirondack Park, New York, USA, utilizing harvest data to create and validate the assessment models. We used two classified images, one with a large minimum mapping unit but high accuracy and one with no minimum mapping unit but slightly lower accuracy, to test the sensitivity of the evaluation to these differences. We tested the utility of two methods of assessment, habitat suitability index modeling, and pattern recognition modeling. We varied the scale at which the models were applied by using five separate sizes of analysis windows. Results showed that the presence of a large minimum mapping unit eliminates important details of the habitat. Window size is relatively unimportant if the data are averaged to a large resolution (i.e., township), but if the data are used at the smaller resolution, then the window size is an important consideration. In the Adirondacks, the proportion of hardwood and softwood in an area is most important to the spatial dynamics of deer populations. The low occurrence of open area in all parts of the park either limits the effect of this cover type on the population or limits our ability to detect the effect.
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to Courant number, large phase delay, and possibly excess numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost.
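For contrast with the high-order compact schemes the talk proposes, here is the plain second-order Crank-Nicolson treatment of vertical advection on a periodic column: unconditionally stable, which removes the CFL restriction, but with exactly the phase errors at large Courant number that the abstract criticizes. Illustrative only, not the authors' scheme:

```python
import numpy as np

def cn_vertical_advection_step(u, courant):
    """One Crank-Nicolson step of du/dt + w du/dz = 0 on a periodic column.

    courant = w * dt / dz. Stable for any Courant number, but the centered
    second-order stencil shows the phase delay at large Courant number
    that motivates coupled time-space compact schemes.
    """
    n = u.size
    a = courant / 4.0
    idx = np.arange(n)
    D = np.zeros((n, n))
    D[idx, (idx + 1) % n] = 1.0     # centered first-derivative stencil
    D[idx, (idx - 1) % n] = -1.0    # (periodic wrap, dense matrix for clarity)
    A = np.eye(n) + a * D           # implicit half-step
    B = np.eye(n) - a * D           # explicit half-step
    return np.linalg.solve(A, B @ u)
```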
A Fast Projection-Based Algorithm for Clustering Big Data.
Wu, Yun; He, Zhiquan; Lin, Hao; Zheng, Yufei; Zhang, Jingfen; Xu, Dong
2018-06-07
With the fast development of various techniques, more and more data have been accumulated with the unique properties of large size (tall) and high dimension (wide). The era of big data is coming. How to understand and discover new knowledge from these data has attracted more and more scholars' attention and has become the most important task in data mining. As one of the most important techniques in data mining, clustering analysis, a kind of unsupervised learning, can group a data set into clusters that are meaningful, useful, or both. Thus, the technique has played a very important role in knowledge discovery in big data. However, when facing large-sized and high-dimensional data, most current clustering methods exhibit poor computational efficiency and a high demand for computational resources, which prevents us from clarifying the intrinsic properties of the data and discovering the new knowledge behind them. Based on this consideration, we developed a powerful clustering method, called MUFOLD-CL. The principle of the method is to project the data points onto the centroid, and then to measure the similarity between any two points by calculating their projections on the centroid. The proposed method achieves linear time complexity with respect to the sample size. Comparison with the K-Means method on very large data showed that our method could produce better accuracy and required less computational time, demonstrating that MUFOLD-CL can serve as a valuable tool, or at least play a complementary role to other existing methods, for big data clustering. Further comparisons with state-of-the-art clustering methods on smaller datasets showed that our method was the fastest and achieved comparable accuracy. For the convenience of most scholars, a free software package was constructed.
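A minimal sketch of the projection idea described above: each point is reduced to its scalar projection on the centroid direction, and clusters are read off from the 1D projections, here by cutting at the widest gaps. This is our illustration of the principle, not the published MUFOLD-CL implementation:

```python
import numpy as np

def projection_clusters(X, k=3):
    """Cluster via 1D projections onto the centroid direction.

    Each point is compared through its scalar projection on the data
    centroid; splitting the sorted projections at the k-1 largest gaps
    keeps the cost near-linear in the sample size.
    """
    c = X.mean(axis=0)
    proj = X @ c / np.linalg.norm(c)              # scalar projection per point
    order = np.argsort(proj)
    gaps = np.diff(proj[order])
    cuts = np.sort(np.argsort(gaps)[-(k - 1):])   # positions of the widest gaps
    labels = np.empty(X.shape[0], dtype=int)
    for j, seg in enumerate(np.split(order, cuts + 1)):
        labels[seg] = j
    return labels
```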
Telemedicine and Diabetic Retinopathy: Review of Published Screening Programs
Tozer, Kevin; Woodward, Maria A.; Newman-Casey, Paula A.
2016-01-01
Background Diabetic Retinopathy (DR) is a leading cause of blindness worldwide even though successful treatments exist. Improving screening and treatment could avoid many cases of vision loss. However, due to an increasing prevalence of diabetes, traditional in-person screening for DR for every diabetic patient is not feasible. Telemedicine is one viable solution to provide high-quality and efficient screening to large numbers of diabetic patients. Purpose To provide a narrative review of large DR telemedicine screening programs. Methods Articles were identified through a comprehensive search of the English-language literature published between 2000 and 2014. Telemedicine screening programs were included for review if they had published data on at least 150 patients and had available validation studies supporting their model. Screening programs were then categorized according to their American Telemedicine Association Validation Level. Results Seven programs from the US and abroad were identified and included in the review. Three programs were Category 1 programs (Ophdiat, EyePacs, and Digiscope), two were Category 2 programs (Eye Check, NHS Diabetic Eye Screening Program), and two were Category 3 programs (Joslin Vision Network, Alberta Screening Program). No program was identified that claimed category 4 status. Programs ranged from community or city level programs to large nationwide programs including millions of individuals. The programs demonstrated a high level of clinical accuracy in screening for DR. There was no consensus amongst the programs regarding the need for dilation, need for stereoscopic images, or the level of training for approved image graders. Conclusion Telemedicine programs have been clinically validated and successfully implemented across the globe. They can provide a high level of clinical accuracy for screening for DR while improving patient access in a cost-effective and scalable manner. PMID:27430019
Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.
Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang
2017-01-14
In airborne MEMS SINS transfer alignment, the error of MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large error and bad convergence of the Kalman filter. In order to solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm progress of AIKF and its recurrence formulas are presented. The performance and calculation amount of AKF and AIKF are also compared. Finally, a simulation test is designed to verify the accuracy and the rapidity of the AIKF algorithm by comparing it with KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the bias of the gyroscope and the accelerometer, which can meet the accuracy and rapidity requirement of transfer alignment.
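A generic sketch of an innovation-adaptive Kalman step of the kind this work builds on: the measurement-noise covariance R is re-estimated from the innovation sequence so the filter copes with uncertain MEMS IMU statistics. The adaptation rule shown is a common simple choice, not the paper's exact AIKF recursion:

```python
import numpy as np

def adaptive_kf_step(x, P, z, F, H, Q, R, alpha=0.3):
    """One predict/update cycle with innovation-based adaptation of R.

    Generic sketch: R is nudged toward the outer product of the current
    innovation, one common way to cope with uncertain sensor statistics.
    """
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    y = z - H @ x                   # innovation
    R = (1 - alpha) * R + alpha * np.outer(y, y)   # adapt R from innovations
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, R
```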
Estimating the reliability of eyewitness identifications from police lineups
Wixted, John T.; Mickes, Laura; Dunn, John C.; Clark, Steven E.; Wells, William
2016-01-01
Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure. PMID:26699467
Single photon detection and timing in the Lunar Laser Ranging Experiment.
NASA Technical Reports Server (NTRS)
Poultney, S. K.
1972-01-01
The goals of the Lunar Laser Ranging Experiment lead to the need for the measurement of a 2.5 sec time interval to an accuracy of a nanosecond or better. The systems analysis which included practical retroreflector arrays, available laser systems, and large telescopes led to the necessity of single photon detection. Operation under all background illumination conditions required auxiliary range gates and extremely narrow spectral and spatial filters in addition to the effective gate provided by the time resolution. Nanosecond timing precision at relatively high detection efficiency was obtained using the RCA C31000F photomultiplier and Ortec 270 constant fraction of pulse-height timing discriminator. The timing accuracy over the 2.5 sec interval was obtained using a digital interval with analog vernier ends. Both precision and accuracy are currently checked internally using a triggerable, nanosecond light pulser. Future measurements using sub-nanosecond laser pulses will be limited by the time resolution of single photon detectors.
NASA Astrophysics Data System (ADS)
Liu, Peng; Yang, Yong-qing; Li, Zhi-guo; Han, Jun-feng; Wei, Yu; Jing, Feng
2018-02-01
Incremental encoders that establish their zero position by a simple counting process suffer from poor repeatability and poor disturbance rejection. Motivated by their application in a large domestic engineering project, an electromechanical switch was designed to generate the zero and zero-crossing signals. A transformation model between the mechanical zero and the electrical zero coordinates is given, and an adaptive fast zero-return algorithm is proposed to meet the requirements of path optimality, uniqueness, speed, and accuracy; the proposed algorithm effectively resolves the contradiction between zero-return accuracy and zero-return time. A test platform was built to verify the effectiveness and robustness of the proposed algorithm. The experimental data show that the accuracy of the algorithm is not influenced by the zero-return speed, with a zero-return error of only 0.0013, meeting the system's requirements for fast and accurate zeroing, and repeated experiments show that the algorithm has high robustness.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grids in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
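The coarse-element points the basis functions attach to are the Gauss-Lobatto-Legendre nodes: the interval endpoints plus the roots of P'_N. A short NumPy sketch of the node computation only; the multiscale basis functions themselves come from local elliptic solves, which are not reproduced here:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order):
    """Gauss-Lobatto-Legendre nodes on [-1, 1] for a given polynomial order.

    The nodes are x = -1, +1 together with the roots of the derivative of
    the Legendre polynomial P_order, giving order+1 points per element.
    """
    PN = legendre.Legendre.basis(order)
    interior = PN.deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

print(gll_nodes(4))   # 5 nodes for a 4th-order element
```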
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features, and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed an agreement with an area-under-the-curve value of 0.87 between the constructed mesh and raw MRA data sets. Parametric meshing yielded, on average, 36.6% and 21.7% orthogonal and equiangular skew quality improvement over the unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimension space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
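The pipeline above is: score features, keep an informative subset, then run t-SNE down to 2D. A hedged scikit-learn stand-in that substitutes a plain ANOVA F-score for the paper's feature subset score criterion:

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

def fss_tsne(X, y, n_keep=10, random_state=0):
    """Score features, keep an informative subset, embed with t-SNE.

    Stand-in pipeline: the paper defines its own feature-subset score,
    while this sketch uses an ANOVA F-score to the same end, dropping
    irrelevant features before the 2-D visualization.
    """
    scores, _ = f_classif(X, y)
    keep = np.argsort(scores)[-n_keep:]
    return TSNE(n_components=2, random_state=random_state).fit_transform(X[:, keep])
```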
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
NASA Astrophysics Data System (ADS)
Foroutan, M.; Zimbelman, J. R.
2017-09-01
The increased use of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as those of climate-related changes, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
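A minimal NumPy Self-Organizing Map of the kind applied above: competitive learning in which the best-matching unit and its grid neighborhood are pulled toward each sample, with a decaying learning rate and neighborhood radius. Grid size and schedules are illustrative:

```python
import numpy as np

def train_som(X, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Self-Organizing Map trained by competitive learning.

    Toy version of the SOM idea; production use on large satellite
    scenes would rely on an optimized implementation.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))            # codebook vectors
    gy, gx = np.divmod(np.arange(h * w), w)             # unit positions on the grid
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                     # random training sample
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-2
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2  # grid distance to BMU
        nbh = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood weights
        W += lr * nbh[:, None] * (x - W)                # pull neighborhood toward x
    return W.reshape(h, w, -1)
```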
Thanh Noi, Phan; Kappas, Martin
2017-01-01
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
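A compact scikit-learn sketch of the three-classifier comparison, assuming X holds per-pixel band values and y the land-cover labels; the hyperparameters are placeholders, and the study's sample-size experiments would correspond to subsampling X per class before scoring:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# The three classifiers compared in the study, with illustrative settings.
classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
}

def compare(X, y):
    """Mean 5-fold cross-validated overall accuracy per classifier."""
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in classifiers.items()}
```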
High-accuracy microassembly by intelligent vision systems and smart sensor integration
NASA Astrophysics Data System (ADS)
Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael
2003-10-01
Innovative production processes and strategies, from batch production to high volume scale, play a decisive role in producing microsystems economically. Assembly processes in particular are crucial operations during the production of microsystems. For large batch sizes, many microsystems can be produced economically with conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small and medium sized batch production for which common automated solutions are rarely profitable. For assembly at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators like grippers, dispensers or other process tools can easily be attached thanks to a special tool changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators like high-accuracy robots or linear motors. A fiber optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispense needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes like soldering.
Correction of Near-infrared High-resolution Spectra for Telluric Absorption at 0.90–1.35 μm
NASA Astrophysics Data System (ADS)
Sameshima, Hiroaki; Matsunaga, Noriyuki; Kobayashi, Naoto; Kawakita, Hideyo; Hamano, Satoshi; Ikeda, Yuji; Kondo, Sohei; Fukue, Kei; Taniguchi, Daisuke; Mizumoto, Misaki; Arai, Akira; Otsubo, Shogo; Takenaka, Keiichi; Watase, Ayaka; Asano, Akira; Yasui, Chikako; Izumi, Natsuko; Yoshikawa, Tomohiro
2018-07-01
We report a method of correcting a near-infrared (0.90–1.35 μm) high-resolution (λ/Δλ ∼ 28,000) spectrum for telluric absorption using the corresponding spectrum of a telluric standard star. The proposed method uses an A0 V star or its analog as a standard star, from which on the order of 100 intrinsic stellar lines are carefully removed with the help of a reference synthetic telluric spectrum. We find that this method can also be applied to feature-rich objects having spectra with heavily blended intrinsic stellar and telluric lines, and present an application to a G-type giant using this approach. We also develop a new diagnostic method for evaluating the accuracy of telluric correction and use it to demonstrate that our method achieves an accuracy better than 2% for spectral parts where the atmospheric transmittance is as low as ∼20%, provided telluric standard stars are observed under the following conditions: (1) the difference in airmass between the target and the standard is ≲ 0.05; and (2) the difference in observation time is less than 1 hr. In particular, the time variability of water vapor has a large impact on the accuracy of telluric correction, so minimizing the time difference from the telluric standard star observation is especially important in near-infrared high-resolution spectroscopy.
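The core operation of such a correction, dividing the target spectrum by the transmittance derived from the standard star after its stellar lines are removed, can be sketched as below; the interpolate-and-divide step is a simplified stand-in for the paper's procedure, and the clipping floor is an assumed safeguard, not part of the published method.

```python
import numpy as np

def telluric_correct(wl, f_target, wl_std, f_std, floor=0.05):
    """Divide target flux by the standard-star transmittance spectrum.

    wl, f_target: target wavelength grid and flux; wl_std, f_std: the
    continuum-normalized standard-star spectrum with its intrinsic stellar
    lines already removed (the hard part of the actual method).
    """
    trans = np.interp(wl, wl_std, f_std)   # resample onto the target grid
    trans = np.clip(trans, floor, None)    # assumed floor for saturated lines
    return f_target / trans
```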
NASA Astrophysics Data System (ADS)
Carroll, Lewis
2014-02-01
We are developing a new dose calibrator for nuclear pharmacies that can measure radioactivity in a vial or syringe without handling it directly or removing it from its transport shield “pig”. The calibrator's detector comprises twin opposing scintillating crystals coupled to Si photodiodes and current-amplifying trans-resistance amplifiers. Such a scheme is inherently linear with respect to dose rate over a wide range of radiation intensities, but accuracy at low activity levels may be impaired, beyond the effects of meager photon statistics, by baseline fluctuation and drift inevitably present in high-gain, current-mode photodiode amplifiers. The work described here is motivated by our desire to enhance accuracy at low excitations while maintaining linearity at high excitations. Thus, we are also evaluating a novel “pulse-mode” analog signal processing scheme that employs a linear threshold discriminator to virtually eliminate baseline fluctuation and drift. We will show the results of a side-by-side comparison of current-mode versus pulse-mode signal processing schemes, including perturbing factors affecting linearity and accuracy at very low and very high excitations. Bench testing over a wide range of excitations is done using a Poisson random pulse generator plus an LED light source to simulate excitations up to ~10⁶ detected counts per second without the need to handle and store large amounts of radioactive material.
Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N
2016-06-01
In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy.
Performance of the fiber-optic low-coherent ground settlement sensor: From lab to field
NASA Astrophysics Data System (ADS)
Guo, Jingjing; Tan, Yanbin; Peng, Li; Chen, Jisong; Wei, Chuanjun; Zhang, Pinglei; Zhang, Tianhang; Alrabeei, Salah; Zhang, Zhe; Sun, Changsen
2018-04-01
A fiber-optic low-coherence interferometry sensor was developed to measure ground settlement (GS) with micrometer accuracy. The sensor combines optical techniques with liquid-filled chambers that are hydraulically connected at the bottom by a water-filled tube, so that the liquid surfaces in all chambers initially sit at the same level. Optical interferometry reads out the liquid level change in each chamber, which follows the GS at the location where the chamber is mounted, and the GS is calculated from it. Laboratory work had demonstrated the sensor's potential for practical application. Here, denoising algorithms tailored to the specific measurement environment were applied to ensure the accuracy and stability of the system in the field. We then extended the technique to a high-speed railway. Five days of continuous measurement proved that the designed system, with a reference compensation sensor, can monitor the GS of high-speed railway piers with an accuracy of ±70 μm in the field. The sensor's performance is thus suitable for the GS monitoring problem in high-speed railways, where the difficulty is to meet the monitoring requirements of both a large span in space and quite tiny, slow changes.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales for computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
Lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels
NASA Astrophysics Data System (ADS)
Fang, Haiping; Wang, Zuowei; Lin, Zhifang; Liu, Muren
2002-05-01
A lattice Boltzmann method for simulating the viscous flow in large distensible blood vessels is presented by introducing a boundary condition for elastic and moving boundaries. The mass conservation for the boundary condition is tested in detail. The viscous flow in elastic vessels is simulated with a pressure-radius relationship similar to that of the pulmonary blood vessels. The numerical results for steady flow agree with the analytical prediction to very high accuracy, and the simulation results for pulsatile flow are comparable with those of the aortic flows observed experimentally. The model is expected to find many applications for studying blood flows in large distensible arteries, especially in those suffering from atherosclerosis, stenosis, aneurysm, etc.
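For readers unfamiliar with the method, the sketch below shows the basic D2Q9 BGK building blocks (equilibrium, collision, streaming, bounce-back) for a rigid-walled, body-force-driven channel in NumPy; the paper's elastic moving-boundary condition and pressure-radius relationship are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

nx, ny, tau, g = 200, 40, 0.8, 1e-6            # grid, relaxation time, body force
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]              # opposite-direction indices

def feq(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

f = feq(np.ones((ny, nx)), np.zeros((ny, nx)), np.zeros((ny, nx)))
for step in range(5000):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho + tau * g  # velocity-shift forcing
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - feq(rho, ux, uy)) / tau                           # BGK collision
    for i in range(9):                                          # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    f[:, 0, :], f[:, -1, :] = f[opp][:, 0, :], f[opp][:, -1, :] # rigid-wall bounce-back
print(ux[ny // 2].mean())   # centerline speed of the developed profile
```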
Rastas, Pasi; Calboli, Federico C. F.; Guo, Baocheng; Shikano, Takahito; Merilä, Juha
2016-01-01
High-density linkage maps are important tools for genome biology and evolutionary genetics by quantifying the extent of recombination, linkage disequilibrium, and chromosomal rearrangements across chromosomes, sexes, and populations. They provide one of the best ways to validate and refine de novo genome assemblies, with the power to identify errors in assemblies increasing with marker density. However, assembly of high-density linkage maps is still challenging due to software limitations. We describe Lep-MAP2, a software for ultradense genome-wide linkage map construction. Lep-MAP2 can handle various family structures and can account for achiasmatic meiosis to gain linkage map accuracy. Simulations show that Lep-MAP2 outperforms other available mapping software both in computational efficiency and accuracy. When applied to two large F2-generation recombinant crosses between two nine-spined stickleback (Pungitius pungitius) populations, it produced two high-density (∼6 markers/cM) linkage maps containing 18,691 and 20,054 single nucleotide polymorphisms. The two maps showed a high degree of synteny, but female maps were 1.5–2 times longer than male maps in all linkage groups, suggesting genome-wide recombination suppression in males. Comparison with the genome sequence of the three-spined stickleback (Gasterosteus aculeatus) revealed a high degree of interspecific synteny with a low frequency (<5%) of interchromosomal rearrangements. However, a fairly large (ca. 10 Mb) translocation from autosome to sex chromosome was detected in both maps. These results illustrate the utility and novel features of Lep-MAP2 in assembling high-density linkage maps, and their usefulness in revealing evolutionarily interesting properties of genomes, such as strong genome-wide sex bias in recombination rates.
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site Specific Weed Management (SSWM) is crucial to ensuring crop yields. For SSWM over large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high spatial resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. The performance of the FCN architecture was then compared with the Patch_based CNN algorithm and the Pixel_based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
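A minimal PyTorch sketch of a fully convolutional network with one skip connection, illustrating the skip architecture mentioned above; the layer sizes are assumptions, and the actual study used a transfer-learned backbone rather than this tiny encoder.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal FCN with one skip connection (2 classes: weed / background)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))   # 1/2 resolution
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))   # 1/4 resolution
        self.score2 = nn.Conv2d(32, n_classes, 1)
        self.score1 = nn.Conv2d(16, n_classes, 1)    # skip from enc1
        self.up2 = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)

    def forward(self, x):
        f1 = self.enc1(x)               # (N, 16, H/2, W/2)
        f2 = self.enc2(f1)              # (N, 32, H/4, W/4)
        s = self.up2(self.score2(f2))   # back to H/2, W/2
        s = s + self.score1(f1)         # skip architecture: fuse shallow features
        return self.up1(s)              # per-pixel class scores at full resolution

logits = TinyFCN()(torch.randn(1, 3, 256, 256))   # -> (1, 2, 256, 256)
```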
Improved navigation by combining VOR/DME information with air or inertial data
NASA Technical Reports Server (NTRS)
Bobick, J. C.; Bryson, A. E., Jr.
1972-01-01
The improvement in navigational accuracy obtainable by combining VOR/DME information (from one or two stations) with air data (airspeed and heading) or with data from an inertial navigation system (INS) by means of a maximum-likelihood filter was determined. It was found that the addition of air data to the information from one VOR/DME station reduces the RMS position error by a factor of about 2, whereas the addition of inertial data from a low-quality INS reduces the RMS position error by a factor of about 3. The use of information from two VOR/DME stations with air or inertial data yields large factors of improvement in RMS position accuracy over the use of a single VOR/DME station, roughly 15 to 20 for the air-data case and 25 to 35 for the inertial-data case. As far as position accuracy is concerned, at most one VOR station need be used. When continuously updating an INS with VOR/DME information, the use of a high-quality INS (0.01 deg/hr gyro drift) instead of a low-quality INS (1.0 deg/hr gyro drift) does not substantially improve position accuracy.
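The filter-based blending of a dead-reckoned position with a radio fix can be illustrated with a single linear-Gaussian (Kalman) update, the standard form such a maximum-likelihood filter takes in the linear case; the positions and covariances below are hypothetical.

```python
import numpy as np

def fuse(x_pred, P_pred, z_fix, R_fix):
    """One Kalman update of a dead-reckoned 2D position with a VOR/DME fix."""
    K = P_pred @ np.linalg.inv(P_pred + R_fix)   # gain (measurement matrix H = I)
    x = x_pred + K @ (z_fix - x_pred)            # blended position estimate
    P = (np.eye(2) - K) @ P_pred                 # reduced position uncertainty
    return x, P

# hypothetical numbers: air-data prediction at 2 nmi 1-sigma, the fix at 1 nmi
x, P = fuse(np.array([10.0, 5.0]), np.diag([4.0, 4.0]),
            np.array([10.8, 4.6]), np.diag([1.0, 1.0]))
print(x, np.sqrt(np.diag(P)))
```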
Ensemble Semi-supervised Framework for Brain Magnetic Resonance Imaging Tissue Segmentation.
Azmi, Reza; Pishgoo, Boshra; Norozi, Narges; Yeganeh, Samira
2013-04-01
Brain magnetic resonance image (MRI) tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used for image segmentation, with both supervised and unsupervised approaches. Supervised segmentation methods achieve high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain; moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and achieve a low level of performance. Semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, can yield higher accuracy with less effort. In this paper, we propose an ensemble semi-supervised framework for segmenting brain MRI tissues that uses the results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers plays a significant role in the performance of this framework. Hence, we present two semi-supervised algorithms, expectation filtering maximization and MCo_Training, which are improved versions of the semi-supervised methods expectation maximization and Co_Training and which increase segmentation accuracy. We then use these improved classifiers, together with a graph-based semi-supervised classifier, as components of the ensemble framework. Experimental results show that the segmentation performance of this approach is higher than that of both supervised methods and the individual semi-supervised classifiers.
NASA Astrophysics Data System (ADS)
Ikegawa, Shinichi; Horinouchi, Takeshi
2016-06-01
Accurate wind observation is a key to studying atmospheric dynamics. A new automated cloud tracking method for the dayside of Venus is proposed and evaluated using the ultraviolet images obtained by the Venus Monitoring Camera onboard the Venus Express orbiter. It uses multiple images obtained successively over a few hours. Cross-correlations are computed from the pair combinations of the images and are superposed to identify cloud advection. It is shown that the superposition improves the accuracy of velocity estimation and significantly reduces the false pattern matches that cause large errors. Two methods to evaluate the accuracy of each of the obtained cloud motion vectors are proposed. One relies on the confidence bounds of cross-correlation with consideration of anisotropic cloud morphology. The other relies on the comparison of two independent estimations obtained by separating the successive images into two groups. The two evaluations can be combined to screen the results. It is shown that the accuracy of the screened vectors is very high equatorward of 30 degrees, while it is relatively low at higher latitudes. Analysis of the vectors supports the previously reported existence of day-to-day large-scale variability at the cloud deck of Venus, and it further suggests smaller-scale features. The product of this study is expected to advance the study of the dynamics of the venusian atmosphere.
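The superposition step can be sketched as follows: correlation surfaces from several image pairs are normalized and summed, so that a consistent advection peak reinforces while false matches average out. This sketch assumes all pairs share the same time separation so displacements align; the paper handles general separations and morphology-aware confidence bounds.

```python
import numpy as np
from scipy.signal import correlate2d

def superposed_correlation(images, pairs, tpl_rows, tpl_cols):
    """Sum normalized cross-correlation surfaces over several image pairs.

    images: list of 2D cloud images; pairs: (earlier, later) index tuples,
    assumed to share one time separation; tpl_rows/cols: template window slices.
    """
    acc = None
    for i, j in pairs:
        tpl = images[i][tpl_rows, tpl_cols]
        tpl = tpl - tpl.mean()
        cc = correlate2d(images[j] - images[j].mean(), tpl, mode="valid")
        cc /= cc.std() + 1e-12                 # normalize before superposing
        acc = cc if acc is None else acc + cc
    return acc                                 # peak location -> advection vector

rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(4)]
surf = superposed_correlation(imgs, [(0, 1), (1, 2), (2, 3)],
                              slice(24, 40), slice(24, 40))
print(np.unravel_index(surf.argmax(), surf.shape))
```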
Predict or classify: The deceptive role of time-locking in brain signal classification
NASA Astrophysics Data System (ADS)
Rusconi, Marco; Valleriani, Angelo
2016-06-01
Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal.
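A toy version of this demonstration: trials from a symmetric random walk are time-locked to the boundary crossing, and a classifier applied at fixed times before the event scores above chance even though the step process itself carries no choice-predictive information. The threshold, trial count, and logistic classifier are illustrative choices, not the paper's exact model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_trial(thresh=20):
    """Symmetric random walk until a boundary is hit; label = which boundary."""
    x, path = 0, []
    while abs(x) < thresh:
        x += rng.choice((-1, 1))
        path.append(x)
    return np.array(path), int(x > 0)

trials = [make_trial() for _ in range(2000)]
T = 40                                   # analysis window before the event
keep = [(p, l) for p, l in trials if len(p) >= T]
X = np.array([p[-T:] for p, _ in keep])  # time-locked to the final event
y = np.array([l for _, l in keep])
n_train = len(X) // 2

for t in (0, T // 2, T - 1):             # classify at single time points
    clf = LogisticRegression().fit(X[:n_train, [t]], y[:n_train])
    acc = clf.score(X[n_train:, [t]], y[n_train:])
    print(f"{T - t} samples before the event: accuracy = {acc:.2f}")
```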
A novel potential/viscous flow coupling technique for computing helicopter flow fields
NASA Technical Reports Server (NTRS)
Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul
1993-01-01
The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision-tree, and different supervised approaches have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% in Overall Accuracy), but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfactory accuracy (94.2-96.4%). Thus, the approach composed of integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
Strong Gravitational Lensing with LSST
NASA Astrophysics Data System (ADS)
Marshall, Philip J.; Bradac, M.; Chartas, G.; Dobler, G.; Eliasdottir, A.; Falco, E.; Fassnacht, C. D.; Jee, M. J.; Keeton, C. R.; Oguri, M.; Tyson, J. A.; LSST Strong Lensing Science Collaboration
2010-01-01
LSST will find more strong gravitational lensing events than any other survey preceding it, and will monitor them all at a cadence of a few days to a few weeks. We can expect the biggest advances in strong lensing science made with LSST to be in those areas that benefit most from the large volume, and the high accuracy multi-filter time series: studies of, and using, several thousand lensed quasars and several hundred supernovae. However, the high quality imaging will allow us to detect and measure large numbers of background galaxies multiply-imaged by galaxies, groups and clusters. In this poster we give an overview of the strong lensing science enabled by LSST, and highlight the particular associated technical challenges that will have to be faced when working with the survey.
Resonance and intercombination lines in Mg-like ions of atomic numbers Z = 13 – 92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santana, Juan A.; Trabert, Elmar
2015-02-05
While prominent lines of various Na-like ions have been measured with an accuracy of better than 100 ppm and corroborate equally accurate calculations, there have been remarkably large discrepancies between calculations for Mg-like ions of high atomic number. We present ab initio calculations using the multireference Møller-Plesset approach for Mg-like ions of atomic numbers Z = 13-92 and compare the results with other calculations of this isoelectronic sequence as well as with experimental data. Our results come very close to experiment (typically 100 ppm) over a wide range. Furthermore, data at high values of Z are sparse, which calls for further accurate measurements in this range, where relativistic and QED effects are large.
Kaiju, Taro; Doi, Keiichi; Yokota, Masashi; Watanabe, Kei; Inoue, Masato; Ando, Hiroshi; Takahashi, Kazutaka; Yoshida, Fumiaki; Hirata, Masayuki; Suzuki, Takafumi
2017-01-01
Electrocorticogram (ECoG) has great potential as a source signal, especially for clinical BMI. Until recently, ECoG electrodes were commonly used for identifying epileptogenic foci in clinical situations, and such electrodes were low-density and large. Increasing the number and density of recording channels could enable the collection of richer motor/sensory information, and may enhance the precision of decoding and increase opportunities for controlling external devices. Several reports have aimed to increase the number and density of channels. However, few studies have discussed the actual validity of high-density ECoG arrays. In this study, we developed novel high-density flexible ECoG arrays and conducted decoding analyses with monkey somatosensory evoked potentials (SEPs). Using MEMS technology, we made 96-channel Parylene electrode arrays with an inter-electrode distance of 700 μm and recording site area of 350 μm². The arrays were mainly placed onto the finger representation area in the somatosensory cortex of the macaque, and partially inserted into the central sulcus. With electrical finger stimulation, we successfully recorded and visualized finger SEPs with a high spatiotemporal resolution. We conducted offline analyses in which the stimulated fingers and intensity were predicted from recorded SEPs using a support vector machine. We obtained the following results: (1) Very high accuracy (~98%) was achieved with just a short segment of data (~15 ms from stimulus onset). (2) High accuracy (~96%) was achieved even when only a single channel was used. This result indicated placement optimality for decoding. (3) Higher channel counts generally improved prediction accuracy, but the efficacy was small for predictions with feature vectors that included time-series information. These results suggest that ECoG signals with high spatiotemporal resolution could enable greater decoding precision or external device control.
Accuracy Assessment of Recent Global Ocean Tide Models around Antarctica
NASA Astrophysics Data System (ADS)
Lei, J.; Li, F.; Zhang, S.; Ke, H.; Zhang, Q.; Li, W.
2017-09-01
Due to the coverage limitation of T/P-series altimeters, the lack of bathymetric data under large ice shelves, and the inaccurate definitions of coastlines and grounding lines, the accuracy of ocean tide models around Antarctica is poorer than in deep oceans. Using tidal measurements from tide gauges, gravimetric data and GPS records, the accuracy of seven state-of-the-art global ocean tide models (DTU10, EOT11a, GOT4.8, FES2012, FES2014, HAMTIDE12, TPXO8) is assessed, as well as the most widely used conventional model, FES2004. Four regions (the Antarctic Peninsula region, Amery ice shelf region, Filchner-Ronne ice shelf region and Ross ice shelf region) are reported separately. The standard deviations of the eight main constituents between the selected models are large in polar regions, especially under the big ice shelves, suggesting that the uncertainty in these regions remains large. Comparisons with in situ tidal measurements show that the most accurate model is TPXO8, and all models show their worst performance in the Weddell Sea and Filchner-Ronne ice shelf regions. The accuracy of tidal predictions around Antarctica is gradually improving.
Astronomical large Ge immersion grating by Canon
NASA Astrophysics Data System (ADS)
Sukegawa, Takashi; Suzuki, Takeshi; Kitamura, Tsuyoshi
2016-07-01
Immersion gratings are powerful optical devices for infrared high-resolution spectroscopy. Germanium (Ge) is the best material for a mid-infrared immersion grating because Ge has a very large refractive index (n=4.0). On the other hand, there is no practical Ge immersion grating for use below 5 μm. It has been very difficult to precisely manufacture a diffraction grating in a fragile IR crystal. Our original free-forming machine has an accuracy of a few nanometers in positioning and stability, and we have already fabricated a large CdZnTe immersion grating (Sukegawa et al. (2012), Ikeda et al. (2015)). We are developing a Ge immersion grating that can be a good solution for high-resolution infrared spectroscopy with large ground-based/space telescopes. We succeeded in producing a practical Ge immersion grating with a grooved area of 75 mm (ruled direction) x 119 mm (groove width) and a blaze angle of 75 degrees. Our astronomical large Ge immersion grating has a grooved area of 155 mm (ruled direction) x 41 mm (groove width) and a groove pitch of 91.74 μm. We also report the optical performance of the astronomical large Ge immersion grating with a metal coating on the diffraction surface.
High Resolution, Large Deformation 3D Traction Force Microscopy
López-Fagundo, Cristina; Reichner, Jonathan; Hoffman-Kim, Diane; Franck, Christian
2014-01-01
Traction Force Microscopy (TFM) is a powerful approach for quantifying cell-material interactions that over the last two decades has contributed significantly to our understanding of cellular mechanosensing and mechanotransduction. In addition, recent advances in three-dimensional (3D) imaging and traction force analysis (3D TFM) have highlighted the significance of the third dimension in influencing various cellular processes. Yet irrespective of dimensionality, almost all TFM approaches have relied on a linear elastic theory framework to calculate cell surface tractions. Here we present a new high resolution 3D TFM algorithm which utilizes a large deformation formulation to quantify cellular displacement fields with unprecedented resolution. The results feature some of the first experimental evidence that cells are indeed capable of exerting large material deformations, which require the formulation of a new theoretical TFM framework to accurately calculate the traction forces. Based on our previous 3D TFM technique, we reformulate our approach to accurately account for large material deformation and quantitatively contrast and compare both linear and large deformation frameworks as a function of the applied cell deformation. Particular attention is paid in estimating the accuracy penalty associated with utilizing a traditional linear elastic approach in the presence of large deformation gradients.
Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C
2018-06-01
Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
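A hedged sketch of the contrast drawn above, using ridge regression (a GBLUP-equivalent shrinkage model) on simulated genotypes: prediction from all markers versus prediction restricted to markers "mapped" in the training data. Sample sizes, marker counts, and ridge penalties are illustrative, and the outcome depends on the simulated architecture.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 2000 lines x 5000 SNPs, 50 causal additive variants, heritability ~0.5
G = rng.integers(0, 3, size=(2000, 5000)).astype(float)
beta = np.zeros(5000)
causal = rng.choice(5000, 50, replace=False)
beta[causal] = rng.normal(size=50)
gv = G @ beta
y = gv + rng.normal(scale=gv.std(), size=2000)

Gtr, Gte, ytr, yte = train_test_split(G, y, random_state=0)

# (1) all common variants with heavy shrinkage (ridge ~ GBLUP)
r_all = np.corrcoef(Ridge(alpha=5000.0).fit(Gtr, ytr).predict(Gte), yte)[0, 1]

# (2) architecture-informed: keep only markers "mapped" in the training data
score = np.abs((Gtr - Gtr.mean(axis=0)).T @ (ytr - ytr.mean()))
top = np.argsort(score)[-100:]
r_top = np.corrcoef(Ridge(alpha=10.0).fit(Gtr[:, top], ytr).predict(Gte[:, top]),
                    yte)[0, 1]
print(f"all markers: r = {r_all:.2f}   mapped markers: r = {r_top:.2f}")
```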
Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun
2018-05-01
We propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma (HCC) were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide closer results to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans provided fair image quality of both the 2D images for arterial and portal venous input functions and the 3D whole liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA provides improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI.
Bourgeau-Chavez, Laura L.; Kowalski, Kurt P.; Carlson Mazur, Martha L.; Scarbrough, Kirk A.; Powell, Richard B.; Brooks, Colin N.; Huberty, Brian; Jenkins, Liza K.; Banda, Elizabeth C.; Galbraith, David M.; Laubach, Zachary M.; Riordan, Kevin
2013-01-01
The invasive variety of Phragmites australis (common reed) forms dense stands that can cause negative impacts on coastal Great Lakes wetlands, including habitat degradation and reduced biological diversity. Early treatment is key to controlling Phragmites; therefore, a map of the current distribution is needed. ALOS PALSAR imagery was used to produce the first basin-wide distribution map showing the extent of large, dense invasive Phragmites-dominated habitats in wetlands and other coastal ecosystems along the U.S. shore of the Great Lakes. PALSAR is a satellite imaging radar sensor that is sensitive to differences in plant biomass and inundation patterns, allowing for the detection and delineation of these tall (up to 5 m), high density, high biomass invasive Phragmites stands. Classification was based on multi-season ALOS PALSAR L-band (23 cm wavelength) HH and HV polarization data. Seasonal (spring, summer, and fall) datasets were used to improve discrimination of Phragmites by taking advantage of phenological changes in vegetation and inundation patterns over the seasons. Extensive field collections of training and randomly selected validation data were conducted in 2010–2011 to aid in mapping and for accuracy assessments. Overall basin-wide map accuracy was 87%, with 86% producer's accuracy and 43% user's accuracy for invasive Phragmites. The invasive Phragmites maps are being used to identify major environmental drivers of this invader's distribution, to assess areas vulnerable to new invasion, and to provide information to regional stakeholders through a decision support tool.
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
NASA Astrophysics Data System (ADS)
Gravity Collaboration; Abuter, R.; Accardo, M.; Amorim, A.; Anugu, N.; Ávila, G.; Azouaoui, N.; Benisty, M.; Berger, J. P.; Blind, N.; Bonnet, H.; Bourget, P.; Brandner, W.; Brast, R.; Buron, A.; Burtscher, L.; Cassaing, F.; Chapron, F.; Choquet, É.; Clénet, Y.; Collin, C.; Coudé Du Foresto, V.; de Wit, W.; de Zeeuw, P. T.; Deen, C.; Delplancke-Ströbele, F.; Dembet, R.; Derie, F.; Dexter, J.; Duvert, G.; Ebert, M.; Eckart, A.; Eisenhauer, F.; Esselborn, M.; Fédou, P.; Finger, G.; Garcia, P.; Garcia Dabo, C. E.; Garcia Lopez, R.; Gendron, E.; Genzel, R.; Gillessen, S.; Gonte, F.; Gordo, P.; Grould, M.; Grözinger, U.; Guieu, S.; Haguenauer, P.; Hans, O.; Haubois, X.; Haug, M.; Haussmann, F.; Henning, Th.; Hippler, S.; Horrobin, M.; Huber, A.; Hubert, Z.; Hubin, N.; Hummel, C. A.; Jakob, G.; Janssen, A.; Jochum, L.; Jocou, L.; Kaufer, A.; Kellner, S.; Kendrew, S.; Kern, L.; Kervella, P.; Kiekebusch, M.; Klein, R.; Kok, Y.; Kolb, J.; Kulas, M.; Lacour, S.; Lapeyrère, V.; Lazareff, B.; Le Bouquin, J.-B.; Lèna, P.; Lenzen, R.; Lévêque, S.; Lippa, M.; Magnard, Y.; Mehrgan, L.; Mellein, M.; Mérand, A.; Moreno-Ventas, J.; Moulin, T.; Müller, E.; Müller, F.; Neumann, U.; Oberti, S.; Ott, T.; Pallanca, L.; Panduro, J.; Pasquini, L.; Paumard, T.; Percheron, I.; Perraut, K.; Perrin, G.; Pflüger, A.; Pfuhl, O.; Phan Duc, T.; Plewa, P. M.; Popovic, D.; Rabien, S.; Ramírez, A.; Ramos, J.; Rau, C.; Riquelme, M.; Rohloff, R.-R.; Rousset, G.; Sanchez-Bermudez, J.; Scheithauer, S.; Schöller, M.; Schuhler, N.; Spyromilio, J.; Straubmeier, C.; Sturm, E.; Suarez, M.; Tristram, K. R. W.; Ventura, N.; Vincent, F.; Waisberg, I.; Wank, I.; Weber, J.; Wieprecht, E.; Wiest, M.; Wiezorrek, E.; Wittkowski, M.; Woillez, J.; Wolff, B.; Yazici, S.; Ziegler, D.; Zins, G.
2017-06-01
GRAVITY is a new instrument to coherently combine the light of the European Southern Observatory Very Large Telescope Interferometer to form a telescope with an equivalent 130 m diameter angular resolution and a collecting area of 200 m2. The instrument comprises fiber fed integrated optics beam combination, high resolution spectroscopy, built-in beam analysis and control, near-infrared wavefront sensing, phase-tracking, dual-beam operation, and laser metrology. GRAVITY opens up to optical/infrared interferometry the techniques of phase referenced imaging and narrow angle astrometry, in many aspects following the concepts of radio interferometry. This article gives an overview of GRAVITY and reports on the performance and the first astronomical observations during commissioning in 2015/16. We demonstrate phase-tracking on stars as faint as mK ≈ 10 mag, phase-referenced interferometry of objects fainter than mK ≈ 15 mag with a limiting magnitude of mK ≈ 17 mag, minute long coherent integrations, a visibility accuracy of better than 0.25%, and spectro-differential phase and closure phase accuracy better than 0.5°, corresponding to a differential astrometric precision of better than ten microarcseconds (μas). The dual-beam astrometry, measuring the phase difference of two objects with laser metrology, is still under commissioning. First observations show residuals as low as 50 μas when following objects over several months. We illustrate the instrument performance with the observations of archetypical objects for the different instrument modes. Examples include the Galactic center supermassive black hole and its fast orbiting star S2 for phase referenced dual-beam observations and infrared wavefront sensing, the high mass X-ray binary BP Cru and the active galactic nucleus of PDS 456 for a few μas spectro-differential astrometry, the T Tauri star S CrA for a spectro-differential visibility analysis, ξ Tel and 24 Cap for high accuracy visibility observations, and η Car for interferometric imaging with GRAVITY.
NASA Astrophysics Data System (ADS)
Adams, Marc S.; Bühler, Yves; Fromm, Reinhard
2017-12-01
Reliable and timely information on the spatio-temporal distribution of snow in alpine terrain plays an important role for a wide range of applications. Unmanned aerial system (UAS) photogrammetry is increasingly applied to cost-efficiently map snow depth at very high resolution with flexible applicability. However, crucial questions regarding the quality and repeatability of this technique are still under discussion. Here we present a multitemporal accuracy and precision assessment of UAS photogrammetry for snow depth mapping on the slope scale. We mapped a 0.12 km² snow-covered study site, located in a high-alpine valley in Western Austria. 12 UAS flights were performed to acquire imagery at 0.05 m ground sampling distance in visible (VIS) and near-infrared (NIR) wavelengths with a modified commercial, off-the-shelf sensor mounted on a custom-built fixed-wing UAS. The imagery was processed with structure-from-motion photogrammetry software to generate orthophotos, digital surface models (DSMs) and snow depth maps (SDMs). The accuracy of the DSMs and SDMs was assessed with terrestrial laser scanning and manual snow depth probing, respectively. The results show that under good illumination conditions (study site in full sunlight), the DSMs and SDMs were acquired with an accuracy of ≤ 0.25 and ≤ 0.29 m (both at 1σ), respectively. In the case of poorly illuminated snow surfaces (study site shadowed), the NIR imagery provided higher accuracy (0.19 m; 0.23 m) than VIS imagery (0.49 m; 0.37 m). The precision of the UAS SDMs was 0.04 m for a small, stable area and below 0.33 m for the whole study site (both at 1σ).
Coarse climate change projections for species living in a fine-scaled world.
Nadeau, Christopher P; Urban, Mark C; Bridle, Jon R
2017-01-01
Accurately predicting biological impacts of climate change is necessary to guide policy. However, the resolution of climate data could be affecting the accuracy of climate change impact assessments. Here, we review the spatial and temporal resolution of climate data used in impact assessments and demonstrate that these resolutions are often too coarse relative to biologically relevant scales. We then develop a framework that partitions climate into three important components: trend, variance, and autocorrelation. We apply this framework to map different global climate regimes and identify where coarse climate data is most and least likely to reduce the accuracy of impact assessments. We show that impact assessments for many large mammals and birds use climate data with a spatial resolution similar to the biologically relevant area encompassing population dynamics. Conversely, impact assessments for many small mammals, herpetofauna, and plants use climate data with a spatial resolution that is orders of magnitude larger than the area encompassing population dynamics. Most impact assessments also use climate data with a coarse temporal resolution. We suggest that climate data with a coarse spatial resolution is likely to reduce the accuracy of impact assessments the most in climates with high spatial trend and variance (e.g., much of western North and South America) and the least in climates with low spatial trend and variance (e.g., the Great Plains of the USA). Climate data with a coarse temporal resolution is likely to reduce the accuracy of impact assessments the most in the northern half of the northern hemisphere where temporal climatic variance is high. Our framework provides one way to identify where improving the resolution of climate data will have the largest impact on the accuracy of biological predictions under climate change.
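The trend/variance/autocorrelation partition proposed above can be computed for any climate series in a few lines; the linear detrending and lag-1 autocorrelation below are one simple realization of the framework, applied to made-up annual temperatures.

```python
import numpy as np

def climate_components(series):
    """Split a series into linear trend, residual variance, lag-1 autocorrelation."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)        # linear trend
    resid = series - (slope * t + intercept)           # detrended residuals
    ac1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
    return slope, resid.var(), ac1

rng = np.random.default_rng(0)
temps = 0.02 * np.arange(100) + rng.normal(0.0, 0.3, 100)  # made-up annual means
print(climate_components(temps))
```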
Very large database of lipids: rationale and design.
Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R
2013-11-01
Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale.
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth Engine™ has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth Engine™ to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
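As an illustration of why SSEBop is considered simple, its ET fraction reduces to a linear rescaling of land surface temperature between a cold reference and a predefined hot/cold temperature difference. The sketch below follows the published SSEBop formulation in simplified form; all input values are made up for illustration.

```python
import numpy as np

def ssebop_eta(ts, t_cold, dt, eto, k=1.0):
    """Simplified SSEBop: ETa = ETf * k * ETo with ETf = (Th - Ts) / dT, Th = Tc + dT.

    ts: land surface temperature (K); t_cold: cold/wet reference temperature (K);
    dt: predefined hot-cold temperature difference (K); eto: reference ET (mm/day).
    """
    etf = np.clip(((t_cold + dt) - ts) / dt, 0.0, 1.05)   # ET fraction per pixel
    return etf * k * eto

print(ssebop_eta(ts=305.0, t_cold=298.0, dt=10.0, eto=6.0))  # illustrative pixel
```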
Research on bearing fault diagnosis of large machinery based on mathematical morphology
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
To study the automatic diagnosis of large machinery faults, a support vector machine (SVM) is used to classify and identify four common fault types of large machinery. The extracted feature vectors serve as inputs, and the feature vectors are trained and identified with a multi-class classification method. The optimal parameters of the SVM are found by trial and error and by cross validation. The SVM is then compared with a BP neural network. The results show that the SVM trains quickly and achieves high classification accuracy, making it well suited to research on fault diagnosis in large machinery.
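The parameter search described above (trial and error plus cross validation) is commonly automated as a cross-validated grid search; a scikit-learn sketch with random stand-in features follows, where the C/gamma grid and the five folds are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# hypothetical feature vectors for four fault classes
rng = np.random.default_rng(0)
X, y = rng.random((400, 12)), rng.integers(0, 4, 400)

# cross-validated grid search over the SVM penalty C and RBF width gamma
search = GridSearchCV(
    SVC(kernel="rbf", decision_function_shape="ovo"),  # one-vs-one multi-class
    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```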
NASA Technical Reports Server (NTRS)
Kolesar, C. E.
1987-01-01
Research activity on an airfoil designed for a large airplane capable of very long endurance times at a low Mach number of 0.22 is examined. Airplane mission objectives and design optimization resulted in requirements for a very high design lift coefficient and a large amount of laminar flow at high Reynolds number to increase the lift/drag ratio and reduce the loiter lift coefficient. Natural laminar flow was selected instead of distributed mechanical suction as the technique for maintaining laminar flow. A design lift coefficient of 1.5 was identified as the highest which could be achieved with a large extent of laminar flow. A single-element airfoil was designed using an inverse boundary layer solution and inverse airfoil design computer codes to create an airfoil section that would achieve the performance goals. The design process and results, including airfoil shape, pressure distributions, and aerodynamic characteristics, are presented. A two-dimensional wind tunnel model was constructed and tested in the NASA Low Turbulence Pressure Tunnel, which enabled testing at the full-scale design Reynolds number. A comparison is made between theoretical and measured results to establish the accuracy and quality of the airfoil design technique.
Large Aperture "Photon Bucket" Optical Receiver Performance in High Background Environments
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.; Hoppe, D.
2011-01-01
The potential development of large aperture groundbased "photon bucket" optical receivers for deep space communications, with acceptable performance even when pointing close to the sun, is receiving considerable attention. Sunlight scattered by the atmosphere becomes significant at micron wavelengths when pointing to a few degrees from the sun, even with the narrowest bandwidth optical filters. In addition, high quality optical apertures in the 10-30 meter range are costly and difficult to build with accurate surfaces to ensure narrow fields-of-view (FOV). One approach currently under consideration is to polish the aluminum reflector panels of large 34-meter microwave antennas to high reflectance, and accept the relatively large FOV generated by state-of-the-art polished aluminum panels with rms surface accuracies on the order of a few microns, corresponding to several-hundred micro-radian FOV, hence generating centimeter-diameter focused spots at the Cassegrain focus of 34-meter antennas. Assuming pulse-position modulation (PPM) and Poisson-distributed photon-counting detection, a "polished panel" photon-bucket receiver with large FOV will collect hundreds of background photons per PPM slot, along with comparable signal photons due to its large aperture. It is demonstrated that communications performance in terms of PPM symbol-error probability in high-background high-signal environments depends more strongly on signal than on background photons, implying that large increases in background energy can be compensated by a disproportionally small increase in signal energy. This surprising result suggests that large optical apertures with relatively poor surface quality may nevertheless provide acceptable performance for deep-space optical communications, potentially enabling the construction of cost-effective hybrid RF/optical receivers in the future.
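The claim about signal versus background photons can be checked with a short Monte Carlo of Poisson photon counting over PPM slots. The slot count, photon means, and simplified tie-breaking rule below are illustrative assumptions, not the receiver parameters of the study.

    import numpy as np

    def ppm_symbol_error_rate(ks, kb, m=16, trials=200_000, seed=1):
        """Monte Carlo estimate of PPM symbol-error probability with
        Poisson photon counting: the signal slot receives a mean of
        ks + kb photons, the other m-1 slots a mean of kb; the receiver
        picks the slot with the largest count."""
        rng = np.random.default_rng(seed)
        signal = rng.poisson(ks + kb, size=trials)
        noise = rng.poisson(kb, size=(trials, m - 1))
        nmax = noise.max(axis=1)
        errors = (signal < nmax).sum()
        # on a tie the correct slot is chosen with some probability;
        # as a simple approximation, count half of all ties as errors
        errors += 0.5 * (signal == nmax).sum()
        return errors / trials

    # background can rise sharply while a modest signal increase compensates
    for ks, kb in [(20, 10), (20, 40), (28, 40)]:
        print(ks, kb, ppm_symbol_error_rate(ks, kb))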
Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A
2016-05-08
The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning target volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.
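The composite 3%/3 mm criterion used above can be illustrated with a small sketch. The geometry here is a 1D profile with synthetic doses rather than the phantom data (a simplification; clinical analyses typically use a full gamma-index evaluation over 2D/3D dose grids).

    import numpy as np

    def pass_rate(calc_pos, calc_dose, meas_pos, meas_dose, dd=3.0, dta=3.0):
        """Fraction of measured points within dd% dose difference OR dta mm
        distance-to-agreement. Doses are in percent of the given dose;
        positions in mm (1D profile for simplicity)."""
        passed = 0
        for p, d in zip(meas_pos, meas_dose):
            # dose-difference test at the nearest calculated point
            i = np.argmin(np.abs(calc_pos - p))
            if abs(calc_dose[i] - d) <= dd:
                passed += 1
                continue
            # DTA test: nearest calculated position carrying the measured dose
            close = np.abs(calc_dose - d) <= 0.1   # tolerance on the isodose level
            if close.any() and np.min(np.abs(calc_pos[close] - p)) <= dta:
                passed += 1
        return passed / len(meas_pos)

    calc_pos = np.linspace(0, 100, 501)
    calc_dose = 100 * np.exp(-calc_pos / 60)       # toy depth-dose curve
    meas_pos = np.array([10.0, 30.0, 55.0, 80.0])
    meas_dose = np.array([84.0, 62.0, 40.0, 25.0])
    print(pass_rate(calc_pos, calc_dose, meas_pos, meas_dose))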
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance
Marchal, Sophie; Bregeras, Olivier; Puaux, Didier; Gervais, Rémi; Ferry, Barbara
2016-01-01
Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs’ greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.
NASA Astrophysics Data System (ADS)
Uijlenhoet, R.; de Vos, L. W.; Leijnse, H.; Overeem, A.; Raupach, T. H.; Berne, A.
2017-12-01
For the purpose of urban rainfall monitoring, high-resolution rainfall measurements are desirable. Typically, C-band radar can provide rainfall intensities at kilometer-scale grid cells every 5 minutes. Opportunistic sensing with commercial microwave links yields rainfall intensities over link paths within cities. Additionally, recent developments have made it possible to obtain large amounts of urban in situ measurements from weather amateurs in near real-time. Using a known high-resolution simulated rainfall event, the accuracy of these three techniques is evaluated, taking into account their respective existing layouts and sampling methods. Under ideal measurement conditions, the weather station network proves to be most promising. For accurate estimation with radar, an appropriate choice of Z-R relationship is vital. Though both the microwave links and the weather station networks are quite dense, both techniques will underestimate rainfall if not at least one link path or station captures the high-intensity rainfall peak. The accuracy of each technique improves when considering rainfall at larger scales, especially with increasing time intervals, with the steepest improvements found for the microwave links.
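To make the Z-R sensitivity concrete, the sketch below inverts a generic power-law Z-R relationship. The Marshall-Palmer coefficients are a textbook default, not the values used in the study.

    def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
        """Invert a Z-R power law Z = a * R**b (Z in mm^6/m^3, R in mm/h).
        a=200, b=1.6 is the classic Marshall-Palmer choice; convective urban
        rain often calls for different coefficients, which is the sensitivity
        referred to above."""
        z = 10.0 ** (dbz / 10.0)            # dBZ -> linear reflectivity
        return (z / a) ** (1.0 / b)

    for dbz in (20, 35, 50):
        print(dbz, "dBZ ->", round(rain_rate_from_reflectivity(dbz), 2), "mm/h")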
Characteristics of high-purity Teflon vial for ¹⁴C measurement in old tree rings
NASA Astrophysics Data System (ADS)
Sakurai, H.; Saswaki, Y.; Matsumoto, T.; Aoki, T.; Kato, W.; Gandou, T.; Gunji, S.; Tokanai, F.
2003-06-01
¹⁴C concentration in single-year tree rings of an old cedar from ca. 2500 years ago is measured to investigate the 11-yr periodicity of solar activity. Our highly accurate ¹⁴C measuring system is composed of a benzene synthesizer capable of producing a large quantity (10 g) of benzene and a Quantulus 1220™ liquid scintillation counting system. The accuracy is better than 0.2% for measurements of ¹⁴C concentration. The benzene sample is contained in a high-purity Teflon/copper counting vial (20 ml) manufactured by Wallac Oy Company. We found a vial with an irregular copper cap during the measurements of 11 tree rings. The behavior of the vial with the irregular cap was investigated. The Teflon sheet inside the cap plays an important role in achieving stable measurement. The rate of volatilization of the benzene was less than 0.35 mg/day for vials with ordinary caps. This corresponds to a volatilization rate of 0.003% per day for 10.5 g of benzene and hence guarantees measurement at an accuracy of 0.2% for 70 days.
ExoMol molecular line lists XIX: high-accuracy computed hot line lists for H₂¹⁸O and H₂¹⁷O
NASA Astrophysics Data System (ADS)
Polyansky, Oleg L.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan; Yurchenko, Sergei N.; Ovsyannikov, Roman I.; Zobov, Nikolai F.
2017-04-01
Hot line lists for two isotopologues of water, H₂¹⁸O and H₂¹⁷O, are presented. The calculations employ newly constructed potential energy surfaces (PES), which take advantage of a novel method for using the large set of experimental energy levels for H₂¹⁶O to give high-quality predictions for H₂¹⁸O and H₂¹⁷O. This procedure greatly extends the energy range for which a PES can be accurately determined, allowing an accurate prediction of higher lying energy levels than are currently known from direct laboratory measurements. This PES is combined with a high-accuracy, ab initio dipole moment surface of water in the computation of all energy levels, transition frequencies and associated Einstein A coefficients for states with rotational excitation up to J = 50 and energies up to 30 000 cm⁻¹. The resulting HotWat78 line lists complement the well-used BT2 H₂¹⁶O line list. Full line lists are made available online as Supporting Information and at www.exomol.com.
Comprehensive model for predicting elemental composition of coal pyrolysis products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richards, Andrew P.; Shutt, Tim; Fletcher, Thomas H.
Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
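The open-loop half of this idea can be sketched directly: in normal-mode coordinates the plant's frequency response decomposes into a sum over uncoupled second-order modes, so each frequency point costs time linear in the number of modes rather than a dense matrix solve. The sketch below covers only the open-loop case with assumed modal data; the paper's closed-loop formulation additionally folds in the controller coupling.

    import numpy as np

    def modal_freq_response(omega, zeta, b, c, freqs):
        """Frequency response of a flexible structure in normal-mode
        coordinates: each mode i contributes
        c_i * b_i / (w_i^2 - w^2 + 2j*zeta_i*w_i*w),
        so the cost per frequency point is linear in the number of modes.

        omega : natural frequencies (rad/s), shape (n,)
        zeta  : modal damping ratios, shape (n,)
        b, c  : modal input / output coefficients, shape (n,)
        freqs : evaluation frequencies (rad/s), shape (m,)
        """
        w = freqs[:, None]
        denom = omega**2 - w**2 + 2j * zeta * omega * w
        return (c * b / denom).sum(axis=1)          # shape (m,)

    n = 703                                         # mode count from the paper's example
    rng = np.random.default_rng(0)
    H = modal_freq_response(
        omega=np.sort(rng.uniform(1, 500, n)),
        zeta=np.full(n, 0.005),
        b=rng.normal(size=n),
        c=rng.normal(size=n),
        freqs=np.linspace(0.5, 600, 2000),
    )
    print(np.abs(H).max())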
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in the detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for structure alignment-based docking applications. The results showed that structural areas corresponding to cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes did not increase significantly for higher accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for docking approaches, including high-throughput applications to modeled structures.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models with different element densities set up for one cell of the orbiter wing. Also, a method for optimization of the transient thermal analysis computer central processing unit (CPU) time is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled for the examination of thermal analysis solution accuracy and the extent of the computation CPU time requirements. The results showed that the distributions of the structural temperatures and the thermal stresses obtained from this wing segment model were satisfactory and the computation CPU time was at an acceptable level. The studies offered the hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.
Experimental study on an attitude coupling control method for flexible spacecraft
NASA Astrophysics Data System (ADS)
Wang, Jie; Li, Dongxu
2018-06-01
High pointing accuracy and stability are essential for spacecraft carrying out Earth observation, laser communication, and space exploration missions. However, when a spacecraft undergoes a large-angle maneuver, the elastic oscillations excited in flexible appendages such as solar wings and onboard antennas degrade the performance of the spacecraft platform. This paper proposes a coupling control method, which synthesizes an adaptive sliding mode controller and a positive position feedback (PPF) controller, to control the attitude and suppress the elastic vibration simultaneously. Because of its prominent performance in attitude tracking and stabilization, the proposed method is capable of slewing the flexible spacecraft through a large angle. The method is also robust to parametric uncertainties of the spacecraft model. Numerical simulations are carried out with a hub-plate system which undergoes a single-axis attitude maneuver. An attitude control testbed for the flexible spacecraft is established and experiments are conducted to validate the coupling control method. Both numerical and experimental results demonstrate that the method discussed above can effectively decrease the stabilization time and improve the attitude accuracy of the flexible spacecraft.
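The PPF half of the scheme is easy to sketch: a heavily damped second-order filter is driven by the measured modal position and fed back as a force, adding damping near the target mode. The sketch below applies PPF to a single flexible mode with illustrative numbers; the hub-plate model, gains, and frequencies of the paper are not reproduced here.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Minimal positive position feedback (PPF) sketch on one flexible mode;
    # all values are illustrative assumptions.
    w_s, z_s = 2.0, 0.002          # structural mode: frequency (rad/s), damping
    w_f, z_f, g = 2.2, 0.30, 0.15  # PPF filter tuned slightly above the mode

    def rhs(t, x):
        xi, xid, eta, etad = x     # modal coordinate, PPF filter coordinate, rates
        xidd = -2*z_s*w_s*xid - w_s**2*xi + g*w_s**2*eta    # structure + PPF force
        etadd = -2*z_f*w_f*etad - w_f**2*eta + w_f**2*xi    # filter driven by position
        return [xid, xidd, etad, etadd]

    sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0, 0.0], max_step=0.01)
    print("initial amplitude 1.0 -> late-time amplitude",
          float(np.abs(sol.y[0][-200:]).max()))   # decays far faster than open loop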
Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies
Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.
2017-01-01
Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
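A toy Monte Carlo along these lines shows the trade-off: the single release inherits a bias from background mortality, while the paired release is unbiased but noisier. The survival, detection, and sample-size values below are illustrative assumptions, and the binomial-ratio estimators are a deliberate simplification of the full mark-recapture likelihoods used in the study.

    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(n, s_dam=0.9, s_bg=0.8, p_det=0.6, reps=5000):
        """Toy single- vs paired-release estimates of dam survival.
        Upstream fish experience dam and background mortality; downstream
        (control) fish only background; detection is imperfect."""
        est_single, est_paired = [], []
        for _ in range(reps):
            up_det = rng.binomial(n, s_dam * s_bg * p_det)
            dn_det = rng.binomial(n, s_bg * p_det)
            if dn_det == 0:
                continue
            est_single.append(up_det / (n * p_det))  # p_det assumed known; background not removed
            est_paired.append(up_det / dn_det)       # control release strips background loss
        rmse = lambda e: np.sqrt(np.mean((np.array(e) - s_dam) ** 2))
        return rmse(est_single), rmse(est_paired)

    for n in (100, 1000, 10000):
        print(n, simulate(n))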
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
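A minimal sketch of the kind of implicit, iterative integration the scheme builds on, here plain backward Euler with Newton iteration on a stiff scalar test problem; the paper's scheme is more elaborate, and nothing below reproduces HYPELA or MARC.

    import numpy as np

    def backward_euler(f, jac, y0, t0, t1, n_steps, newton_tol=1e-10):
        """Implicit (backward Euler) integrator with Newton iteration,
        illustrating the ingredients named above: implicit, iterative,
        stable for stiff systems, tolerant of large time increments."""
        h = (t1 - t0) / n_steps
        t, y = t0, np.asarray(y0, dtype=float)
        for _ in range(n_steps):
            t_next, y_next = t + h, y.copy()       # predictor: previous value
            for _ in range(50):                    # corrector: Newton iterations
                r = y_next - y - h * f(t_next, y_next)
                if np.linalg.norm(r) < newton_tol:
                    break
                J = np.eye(len(y)) - h * jac(t_next, y_next)
                y_next -= np.linalg.solve(J, r)
            t, y = t_next, y_next
        return y

    # stiff test problem y' = -1000*(y - cos t); the solution tracks cos t
    f = lambda t, y: np.array([-1000.0 * (y[0] - np.cos(t))])
    jac = lambda t, y: np.array([[-1000.0]])
    print(backward_euler(f, jac, [0.0], 0.0, 2.0, 40), np.cos(2.0))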
NASA Technical Reports Server (NTRS)
Timothy, J. Gethyn; Bybee, Richard L.
1986-01-01
The performance characteristics of multianode microchannel array (MAMA) detector systems which have formats as large as 256 x 1024 pixels and which have application to imaging and spectroscopy at UV wavelengths are evaluated. Sealed and open-structure MAMA detector tubes with opaque CsI photocathodes can determine the arrival time of the detected photon to an accuracy of 100 ns or better. Very large format MAMA detectors with CsI and Cs2Te photocathodes and active areas of 52 x 52 mm (2048 x 2048 pixels) will be used as the UV solar blind detectors for the NASA STIS.
Information retrieval from holographic interferograms: Fundamentals and problems
NASA Technical Reports Server (NTRS)
Vest, Charles M.
1987-01-01
Holographic interferograms can contain large amounts of information about flow and temperature fields. Their information content can be very high because they can be viewed from many different directions. This multidirectionality and fringe localization add to the information contained in the fringe pattern if diffuse illumination is used. Additional information and increased accuracy can be obtained through the use of dual reference wave holography to add reference fringes or to effect discrete phase-shift or heterodyne interferometry. Automated analysis of fringes is possible if interferograms are of simple structure and good quality. However, in practice a large number of practical problems can arise, so that a difficult image processing task results.
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Large MOEMS diffraction grating results providing an EC-QCL wavelength scan of 20%
NASA Astrophysics Data System (ADS)
Grahmann, Jan; Merten, André; Herrmann, Andreas; Ostendorf, Ralf; Bleh, Daniela; Drabe, Christian; Kamenz, Jörg
2015-02-01
Experimental results for a large scanning grating with a diameter of 5 mm and a 1 kHz scan frequency are discussed. An optical diffraction grating is fabricated on a single-crystal silicon mirror plate to scan the first diffraction order in the MIR wavelength range over a quantum cascade laser facet. Special emphasis is on the development of the grating technology module to integrate it with high accuracy and reproducibility into the IPMS AME75 process flow. The principal EC-QCL setup with the scanning grating is described, and first measurement results concerning laser output power and tuning range are presented.
Verification measurements of the IRMM-1027 and the IAEA large-sized dried (LSD) spikes.
Jakopič, R; Aregbe, Y; Richter, S; Zuleger, E; Mialle, S; Balsley, S D; Repinc, U; Hiess, J
2017-01-01
In the framework of accountancy measurements of fissile materials, reliable determination of the plutonium and uranium content in spent nuclear fuel is required to comply with international safeguards agreements. Large-sized dried (LSD) spikes of enriched ²³⁵U and ²³⁹Pu for isotope dilution mass spectrometry (IDMS) analysis are routinely applied in reprocessing plants for this purpose. A correct characterisation of these elements is a prerequisite for achieving high accuracy in IDMS analyses. This paper presents the results of external verification measurements of such LSD spikes performed by the European Commission and the International Atomic Energy Agency.
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows is presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver is demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
Comparison of ANN and RKS approaches to model SCC strength
NASA Astrophysics Data System (ADS)
Prakash, Aravind J.; Sathyan, Dhanya; Anand, K. B.; Aravind, N. R.
2018-02-01
Self-compacting concrete (SCC) is a high-performance concrete that has high flowability and can be used in heavily reinforced concrete members with minimal compaction and little segregation or bleeding. The mix proportioning of SCC is highly complex, and a large number of trials is required to obtain a mix with the desired properties, resulting in wastage of materials and time. Research on SCC has been highly empirical, and no theoretical relationships have been developed between the mixture proportioning and the engineering properties of SCC. In this work, the effectiveness of an artificial neural network (ANN) and the random kitchen sink algorithm (RKS) with the regularized least squares algorithm (RLS) in predicting the split tensile strength of SCC is analysed. The random kitchen sink algorithm is used for mapping data to a higher dimension, and this data is then fitted using the regularized least squares algorithm. The training and testing data for the algorithms were obtained experimentally using standard test procedures and available materials. A total of 40 trials were done, which served as the training and testing data. Trials were performed by varying the amount of fine aggregate, coarse aggregate, water, and the dosage and type of superplasticizer. The prediction accuracy of the ANN and RKS models is checked by comparing the RMSE values of both. The analysis shows that even though the RKS model is well suited to large data sets, its prediction accuracy here is as good as that of a conventional method such as ANN, so the split tensile strength model developed by RKS can be used in industry for the proportioning of SCC with tailor-made properties.
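The RKS-plus-RLS pipeline can be sketched directly: random Fourier features map the mix-proportion inputs to a higher dimension, and a closed-form regularized least-squares solve does the fitting. The data below are synthetic stand-ins for the 40 experimental trials, and the feature count and hyperparameters are assumptions.

    import numpy as np

    def rks_fit(X, y, n_features=500, gamma=1.0, lam=1e-3, seed=0):
        """Random-kitchen-sink sketch (after Rahimi & Recht): random Fourier
        features followed by regularized least squares in closed form."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
        b = rng.uniform(0, 2 * np.pi, n_features)
        phi = lambda A: np.sqrt(2.0 / n_features) * np.cos(A @ W + b)
        Z = phi(X)
        beta = np.linalg.solve(Z.T @ Z + lam * np.eye(n_features), Z.T @ y)
        return lambda Xnew: phi(Xnew) @ beta

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(40, 5))                  # 40 trial mixes, 5 ingredient columns
    y = 3 + X @ np.array([1.0, -0.5, 0.8, 0.2, 0.4]) + 0.05 * rng.normal(size=40)
    predict = rks_fit(X[:30], y[:30])
    rmse = np.sqrt(np.mean((predict(X[30:]) - y[30:]) ** 2))
    print("hold-out RMSE:", rmse)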
Extended behavioural modelling of FET and lattice-mismatched HEMT devices
NASA Astrophysics Data System (ADS)
Khawam, Yahya; Albasha, Lutfi
2017-07-01
This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling of a transistor are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is fitted using multi-bias s-parameter measurements. The complete model is obtained using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
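A hedged sketch of the hybrid global/local extraction strategy, applied to a simple tanh-type drain-current model rather than the Fager model itself; scipy's differential evolution stands in for the genetic stage and a least-squares solve for the Newton-type polish. The model form, parameter bounds, and synthetic "measurements" are all assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution, least_squares

    def ids_model(p, vgs, vds):
        """Hypothetical tanh-type drain-current model (NOT the Fager model)."""
        ipk, a, lam = p
        return ipk * (1 + np.tanh(a * vgs)) * (1 + lam * vds) * np.tanh(vds)

    rng = np.random.default_rng(0)
    vgs, vds = np.meshgrid(np.linspace(-2, 0, 15), np.linspace(0, 10, 15))
    truth = ids_model([0.25, 1.8, 0.02], vgs, vds)
    meas = truth + 0.002 * rng.normal(size=truth.shape)   # synthetic "measurements"

    err = lambda p: (ids_model(p, vgs, vds) - meas).ravel()
    coarse = differential_evolution(lambda p: np.sum(err(p) ** 2),
                                    bounds=[(0.01, 1), (0.1, 5), (0, 0.2)], seed=0)
    fine = least_squares(err, coarse.x)                   # local Newton-type refinement
    print("fitted parameters:", np.round(fine.x, 4))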
Frassy, Federico; Candiani, Gabriele; Rusmini, Marco; Maianti, Pieralberto; Marchesi, Andrea; Nodari, Francesco Rota; Via, Giorgio Dalla; Albonico, Carlo; Gianinetto, Marco
2014-01-01
The World Health Organization estimates that 100 thousand people in the world die every year from asbestos-related cancers and more than 300 thousand European citizens are expected to die from asbestos-related mesothelioma by 2030. Both European and Italian legislation have banned the manufacture, importation, processing and distribution in commerce of asbestos-containing products and have recommended action plans for the safe removal of asbestos from public and private buildings. This paper describes the quantitative mapping of asbestos-cement covers over a large mountainous region of the Italian Western Alps using the Multispectral Infrared and Visible Imaging Spectrometer sensor. A very large data set made up of 61 airborne transect strips covering 3263 km2 was processed to support the identification of buildings with asbestos-cement roofing, promoted by the Valle d'Aosta Autonomous Region with the support of the Regional Environmental Protection Agency. Results showed an overall mapping accuracy of 80%, in terms of asbestos-cement surface detected. The influence of topography on the classification's accuracy suggested that even in high relief landscapes, the spatial resolution of the data is the major source of errors, and the smaller asbestos-cement covers were not detected or misclassified.
NASA Astrophysics Data System (ADS)
Hess, Laura Lorraine
The ability of synthetic aperture radar to detect flooding and vegetation structure was evaluated for three seasonally inundated floodplain sites supporting a broad variety of wetland and upland vegetation types: two reaches of the Solimoes floodplain in the central Amazon, and the Magela Creek floodplain in Northern Territory, Australia. For each site, C- and L-band polarimetric Shuttle Imaging Radar-C (SIR-C) data were obtained at both high- and low-water stages. Inundation status and vegetation structure were documented simultaneously with the SIR-C acquisitions using low-altitude videography and ground measurements. SIR-C images were classified into cover states defined by vegetation physiognomy and presence of standing water, using a decision-tree model with backscattering coefficients at HH, VV, and HV polarizations as input variables. Classification accuracy was assessed using user's accuracy, producer's accuracy, and the kappa coefficient for a test population of pixels. At all sites, both C- and L-band were necessary to accurately classify cover types with two dates. HH polarization was most useful for distinguishing flooded from non-flooded vegetation (C-HH for macrophyte versus pasture, L-HH for flooded versus non-flooded forest), and cross-polarized L-band data provided the best separation between woody and non-woody vegetation. Increases in L-HH backscattering due to flooding were on the order of 3-4 dB for closed-canopy varzea and igapo forest, and 4-7 dB for open Melaleuca woodland. The broad range of physiognomies and stand structures found in both herbaceous and woody wetland communities, combined with the variation in the amount of emergent canopy caused by water level fluctuations and phenologic changes, resulted in a large range in backscattering characteristics of wetland communities both within and between sites. High accuracies cannot be achieved for these communities using single-date, single-band, single-polarization data, particularly in the case of distinguishing flooded macrophyte from non-flooded forest vegetation. However, the large changes in backscattering caused by flooding make it possible to achieve good accuracies (>85%) using multi-temporal data. Where river stage records are available, SAR-based maps of inundation status on a series of dates can be linked to long-term stage data to define wetland habitat types based on flooding regime and low-water vegetation cover.
High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography
2015-08-03
Surface stress models in effect randomize the subfilter-scale (SFS) stress divergence in the Navier-Stokes equation. Such models have since been found to introduce spurious effects that force deviations from the law of the wall (LOTW) at the first couple of grid levels adjacent to the surface, with an overshoot produced where the modeled SFS stress is sufficiently overwhelming.
Tang, Xiaoying; Luo, Yuan; Chen, Zhibin; Huang, Nianwei; Johnson, Hans J; Paulsen, Jane S; Miller, Michael I
2018-01-01
In this paper, we present a fully-automated subcortical and ventricular shape generation pipeline that acts on structural magnetic resonance images (MRIs) of the human brain. Principally, the proposed pipeline consists of three steps: (1) automated structure segmentation using the diffeomorphic multi-atlas likelihood-fusion algorithm; (2) study-specific shape template creation based on the Delaunay triangulation; (3) deformation-based shape filtering using the large deformation diffeomorphic metric mapping for surfaces. The proposed pipeline is shown to provide high accuracy, sufficient smoothness, and accurate anatomical topology. Two datasets focused upon Huntington's disease (HD) were used for evaluating the performance of the proposed pipeline. The first of these contains a total of 16 MRI scans, each with a gold standard available, on which the proposed pipeline's outputs were observed to be highly accurate and smooth when compared with the gold standard. Visual examinations and outlier analyses on the second dataset, which contains a total of 1,445 MRI scans, revealed 100% success rates for the putamen, the thalamus, the globus pallidus, the amygdala, and the lateral ventricle in both hemispheres and rates no smaller than 97% for the bilateral hippocampus and caudate. Another independent dataset, consisting of 15 atlas images and 20 testing images, was also used to quantitatively evaluate the proposed pipeline, with high accuracy having been obtained. In short, the proposed pipeline is herein demonstrated to be effective, both quantitatively and qualitatively, using a large collection of MRI scans.
Influence of cue word perceptual information on metamemory accuracy in judgement of learning.
Hu, Xiao; Liu, Zhaomin; Li, Tongtong; Luo, Liang
2016-01-01
Previous studies have suggested that perceptual information regarding to-be-remembered words in the study phase affects the accuracy of judgement of learning (JOL). However, few have investigated whether the perceptual information in the JOL phase influences JOL accuracy. This study examined the influence of cue word perceptual information in the JOL phase on immediate and delayed JOL accuracy through changes in cue word font size. In Experiment 1, large-cue word pairs had significantly higher mean JOL magnitude than small-cue word pairs in immediate JOLs and higher relative accuracy than small-cue pairs in delayed JOLs, but font size had no influence on recall performance. Experiment 2 increased the JOL time, and mean JOL magnitude did not reliably differ for large-cue compared with small-cue pairs in immediate JOLs. However, the influence on relative accuracy still existed in delayed JOLs. Experiment 3 increased the familiarity of small-cue words in the delayed JOL phase by adding a lexical decision task. The results indicated that cue word font size no longer affected relative accuracy in delayed JOLs. The three experiments in our study indicated that the perceptual information regarding cue words in the JOL phase affects immediate and delayed JOLs in different ways.
Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.
Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter
2016-01-01
We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, C; Chan, S; Lee, F
Purpose: Accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors. First is the accuracy of the CT number, second is the dose calculation accuracy. We compared measured dose with dose calculated on CT images reconstructed with FBP and an artifact reduction algorithm (OMAR, Philips) for a phantom with high density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed with solid water in which 2 titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan has normal incidence on the titanium/steel rods. In the second plan, the beam is at almost glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AXB and images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high density inserts. Both AAA and AXB underestimated the dose directly above the metal surface, while overestimating the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When doing treatment planning for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, great discrepancies in measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.
Efficient and self-adaptive in-situ learning in multilayer memristor neural networks.
Li, Can; Belkin, Daniel; Li, Yunning; Yan, Peng; Hu, Miao; Ge, Ning; Jiang, Hao; Montgomery, Eric; Lin, Peng; Wang, Zhongrui; Song, Wenhao; Strachan, John Paul; Barnell, Mark; Wu, Qing; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei
2018-06-19
Memristors with tunable resistance states are emerging building blocks of artificial neural networks. However, in situ learning on a large-scale multiple-layer memristor network has yet to be demonstrated because of challenges in device property engineering and circuit integration. Here we monolithically integrate hafnium oxide-based memristors with a foundry-made transistor array into a multiple-layer neural network. We experimentally demonstrate in situ learning capability and achieve competitive classification accuracy on a standard machine learning dataset, which further confirms that the training algorithm allows the network to adapt to hardware imperfections. Our simulation using the experimental parameters suggests that a larger network would further increase the classification accuracy. The memristor neural network is a promising hardware platform for artificial intelligence with high speed-energy efficiency.
Stability and accuracy of metamemory in adulthood and aging: a longitudinal analysis.
McDonald-Miszczak, L; Hertzog, C; Hultsch, D F
1995-12-01
The stability and accuracy of memory perceptions in 2 longitudinal samples were examined. Sample 1 consisted of 231 adults (22-78 years) tested twice over 2 years. Sample 2 consisted of 234 adults (55-86 years) tested 3 times over 6 years. Measures of perceived and actual memory change were obtained. A primary focus was whether perceptions of memory change stem from application of an implicit theory about aging and memory or from accurate monitoring of actual changes in performance. Individual differences in metamemory were highly stable over time. Results suggested at least some accurate monitoring of memory in Sample 2, in which actual change was greatest. However, the overall pattern of results is largely consistent with predictions derived from an implicit theory hypothesis.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution compared to the direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid which contains around 30 times less points than the DNS one provided predictions of the friction velocity within 2.0% accuracy interval.
Sanchez, Richard D.
2004-01-01
High-resolution airborne digital cameras with onboard data collection based on Global Positioning System (GPS) and inertial navigation system (INS) technology may offer a real-time means to gather accurate topographic map information by reducing ground control and eliminating aerial triangulation. Past evaluations of this integrated system over relatively flat terrain have proven successful. The author uses the Emerge Digital Sensor System (DSS) combined with Applanix Corporation's Position and Orientation Solutions for Direct Georeferencing to examine the positional mapping accuracy in rough terrain. The positional accuracy documented in this study did not meet large-scale mapping requirements owing to an apparent system mechanical failure. Nonetheless, the findings yield important information on a new approach for mapping in Antarctica and other remote or inaccessible areas of the world.
Random Bits Forest: a Strong Classifier/Regressor for Big Data
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li
2016-07-01
Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed well in testing with an independent data set, a real psoriasis genome-wide association study (GWAS).
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1993-01-01
A unique formulation of describing fluid motion is presented. The method, referred to as 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding numerical diffusion resulting from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to geometry and variable interpolations used by the previous Lagrangian methods. The present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.
Development and experiment of a broadband seismograph for deep exploration
NASA Astrophysics Data System (ADS)
Zhang, H.; Lin, J.; Yang, H.; Zheng, F.; Zhang, L.; Chen, Z.
2012-12-01
Seismic surveying is the most important method in deep exploration and oil-gas exploration. To obtain high-quality information on deeper strata, large explosive charges, large group intervals and low-frequency detectors must be used, and the measuring line is usually tens or even hundreds of kilometers long. Conventional seismic exploration instruments generally have no on-site storage or only limited storage capacity and are shackled to transmission cables, making the systems bulky and difficult to handle; inefficient fieldwork, high labor costs, and limited acquisition capability and accuracy are further drawbacks. This article describes a high-performance broadband seismograph for deep exploration. To ensure the quality of data acquisition, 24-bit ADCs are applied and the low-noise analog front-end circuit is carefully designed, keeping the instrument noise below 1.5 μV and the dynamic range above 120 dB. A dual-frequency GPS OEM board is integrated with the acquisition station. As a result, the acquisition station itself can perform static self-positioning with centimeter-level horizontal accuracy, and it can provide high-accuracy position data for subsequent seismic data processing. The precise GPS timing system is combined with a digital clock based on a high-precision oven-controlled crystal oscillator (OCXO). This brings the accuracy of clock synchronization to 0.01 ms and the stability of the OCXO frequency to 3e-8, which solves the problems of synchronously triggering the data acquisition units of multiple recording units and of real-time calibration of system clock drift. The instrument uses a high-capacity (larger than 16 GB/station), highly reliable seismic data storage solution, which enables it to record continuously for more than 138 hours at a sampling rate of 2000 sps. Using low-power design techniques for power management in both hardware and software, the average power consumption reaches 2 watts; with a high-capacity internal lithium battery, the seismograph can work 80 hours continuously. With an internal 24-bit DAC and FPGA control logic, a series of self-test items is achieved, including noise level, crosstalk between channels, common-mode rejection ratio, harmonic distortion, detector impedance, impulse response, and gain calibration. Because the instrument integrates a WIFI module, the instrument status and the quality of data acquisition can be monitored in real time via hand-held terminals. To verify the reliability and validity of the instrument, a deep seismic exploration experiment using the instruments described in this article was carried out in a test area: 32 broadband seismographs were placed along a 120 km-long measuring line (one at intervals of about 4 km) to record source signals from up to several hundred kilometers away. Experimental results show that the performance of the analog acquisition channels of the introduced instrument reaches the international advanced level. Moreover, the cable-free design frees the instrument from bulky cables and fulfills the goal of lightweight seismic instrumentation, which improves working efficiency, saves surveying cost, and is helpful for work in complex geographical and geological environments.
Comparison of physical and semi-empirical hydraulic models for flood inundation mapping
NASA Astrophysics Data System (ADS)
Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.
2016-12-01
Various hydraulic/GIS-based tools can be used to illustrate the spatial extent of flooding for first responders, policy makers, and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute, and Height Above the Nearest Drainage (HAND). There is a trade-off between accuracy, workability, and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of the surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of the spatial resolution of topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that, in a flat, urbanized area with a controlled/managed river channel, semi-empirical models tend to underestimate the inundation extent by around 40% compared to the physical models, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain relatively higher accuracy than they do in flat, non-urbanized terrain.
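For readers unfamiliar with the overlapping statistics mentioned above, the following minimal sketch (toy arrays, not the study's code or data) shows one common overlap measure, F = |A∩B| / |A∪B|, between a modeled and a reference binary inundation map.

```python
# Minimal sketch of an inundation overlap ("fit") statistic on binary grids.
import numpy as np

def fit_statistic(model: np.ndarray, reference: np.ndarray) -> float:
    """Intersection-over-union between two boolean inundation grids."""
    a, b = model.astype(bool), reference.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else float("nan")

# Toy example: a semi-empirical map that misses part of the reference extent.
reference = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]])
model     = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(f"F = {fit_statistic(model, reference):.2f}")  # F = 0.60
```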
A deep learning and novelty detection framework for rapid phenotyping in high-content screening
Sommer, Christoph; Hoefler, Rudolf; Samwer, Matthias; Gerlich, Daniel W.
2017-01-01
Supervised machine learning is a powerful and widely used method for analyzing high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale screening data sets on nuclear and mitotic cell morphologies demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, which has broad implications for improved assay development in high-content screening. PMID:28954863
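As an illustration of the general novelty-detection idea described above (an assumed sketch, not CellCognition Explorer's actual implementation or API), a one-class model can be fit on features of predominantly normal cells and then used to flag outliers as candidate rare phenotypes:

```python
# Illustrative novelty detection on simulated per-cell feature vectors.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 16))            # typical cells (simulated)
screen = np.vstack([rng.normal(0.0, 1.0, size=(95, 16)),  # mostly typical cells
                    rng.normal(4.0, 1.0, size=(5, 16))])  # 5 rare phenotypes (simulated)

detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
novel = detector.predict(screen) == -1                    # -1 marks outliers
print(f"flagged {novel.sum()} candidate rare-phenotype cells")
```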
NASA Astrophysics Data System (ADS)
Poddaeva, O.; Churin, P.; Fedosova, A.; Truhanov, S.
2018-03-01
Studies of the aerodynamics of bridge structures are a topical problem. The attention paid to wind effects on bridge structures is not accidental: a large number of cases are known in which such structures lost stability under wind action, up to their complete destruction. The development of non-contact measuring systems makes it possible to solve this problem with a high level of accuracy and reliability. This article presents the results of experimental studies of wind impact on a two-span bridge using a specialized measuring system based on high-precision laser displacement sensors.