ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of items on an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
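As a rough illustration of the comparison described above, the sketch below contrasts the posterior variance of an expected a posteriori (EAP) ability estimate with the asymptotic variance of the maximum likelihood estimate for a two-parameter logistic (2PL) model; the item parameters and response pattern are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical 2PL item parameters (discrimination a, difficulty b) and responses.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])
x = np.array([1, 1, 0, 0, 1])  # observed item responses

def p_correct(theta):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# --- EAP: posterior mean/variance under a standard normal prior (quadrature) ---
nodes = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * nodes**2)
like = np.array([np.prod(p_correct(t)**x * (1 - p_correct(t))**(1 - x)) for t in nodes])
post = prior * like
post /= post.sum()
eap = np.sum(nodes * post)
eap_var = np.sum((nodes - eap)**2 * post)

# --- ML: asymptotic variance = 1 / test information at the ML estimate ---
theta_ml = nodes[np.argmax(like)]
p = p_correct(theta_ml)
info = np.sum(a**2 * p * (1 - p))
ml_var = 1.0 / info

print(f"EAP = {eap:.3f} (posterior variance {eap_var:.3f})")
print(f"ML  = {theta_ml:.3f} (asymptotic variance {ml_var:.3f})")
```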
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
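For intuition, a minimal sketch of the kind of accuracy index described (a weighted combination of sensitivity and specificity, with Youden's index as the equal-weight special case) and a check against a minimal level of acceptance; the counts and threshold below are hypothetical.

```python
def weighted_accuracy(tp, fn, tn, fp, w=0.5):
    """Accuracy index as a linear combination of sensitivity and specificity."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return w * sens + (1 - w) * spec, sens, spec

# Hypothetical 2x2 counts from a diagnostic study.
acc, sens, spec = weighted_accuracy(tp=42, fn=8, tn=45, fp=5, w=0.5)
youden_j = sens + spec - 1          # Youden's index
min_acceptance = 0.75               # hypothetical minimal level of acceptance

print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, index={acc:.2f}, J={youden_j:.2f}")
print("meets minimal acceptance" if acc > min_acceptance else "below minimal acceptance")
```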
2016-09-01
Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan (laboratory report excerpt). As expected, accuracy generally tended to decline as the large-scale data aged, but appeared to improve slightly as the age of the large… [The excerpt also references Table 7: minimum and maximum mean RMDs for each WRF time (or GFS data age) category.]
Space Station racks weight and CG measurement using the rack insertion end-effector
NASA Technical Reports Server (NTRS)
Brewer, William V.
1994-01-01
The objective was to design a method to measure weight and center of gravity (C.G.) location for Space Station Modules by adding sensors to the existing Rack Insertion End Effector (RIEE). Accomplishments included alternative sensor placement schemes organized into categories. Vendors were queried for suitable sensor equipment recommendations. Inverse mathematical models for each category determine expected maximum sensor loads. Sensors are selected using these computations, yielding cost and accuracy data. Accuracy data for individual sensors are inserted into forward mathematical models to estimate the accuracy of an overall sensor scheme. Cost of the schemes can be estimated. Ease of implementation and operation are discussed.
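As a toy illustration of the forward-model idea described above (computing weight and CG from individual sensor loads, then propagating sensor accuracy to the result), with hypothetical load-cell readings and geometry; this is not the actual RIEE sensor layout.

```python
import numpy as np

# Hypothetical load-cell readings (N) and their x,y positions (m) on the fixture.
forces = np.array([820.0, 790.0, 805.0, 835.0])
positions = np.array([[0.0, 0.0], [1.2, 0.0], [0.0, 0.9], [1.2, 0.9]])

weight = forces.sum()                                     # total supported weight
cg = (forces[:, None] * positions).sum(axis=0) / weight   # moment balance for CG location

# Crude accuracy propagation: independent 1-sigma error on each sensor reading.
sigma_f = 2.0                                             # hypothetical per-sensor accuracy, N
sigma_w = np.sqrt(len(forces)) * sigma_f
print(f"weight = {weight:.0f} +/- {sigma_w:.0f} N, CG = ({cg[0]:.3f}, {cg[1]:.3f}) m")
```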
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
Diagnostic accuracy of MRI in the measurement of glenoid bone loss.
Gyftopoulos, Soterios; Hasan, Saqib; Bencardino, Jenny; Mayo, Jason; Nayyar, Samir; Babb, James; Jazrawi, Laith
2012-10-01
The purpose of this study is to assess the accuracy of MRI quantification of glenoid bone loss and to compare the diagnostic accuracy of MRI to CT in the measurement of glenoid bone loss. MRI, CT, and 3D CT examinations of 18 cadaveric glenoids were obtained after the creation of defects along the anterior and anteroinferior glenoid. The defects were measured by three readers separately and blindly using the circle method. These measurements were compared with measurements made on digital photographic images of the cadaveric glenoids. Paired sample Student t tests were used to compare the imaging modalities. Concordance correlation coefficients were also calculated to measure interobserver agreement. Our data show that MRI could be used to accurately measure glenoid bone loss with a small margin of error (mean, 3.44%; range, 2.06-5.94%) in estimated percentage loss. MRI accuracy was similar to that of both CT and 3D CT for glenoid loss measurements in our study for the readers familiar with the circle method, with 1.3% as the maximum expected difference in accuracy of the percentage bone loss between the different modalities (95% confidence). Glenoid bone loss can be accurately measured on MRI using the circle method. The MRI quantification of glenoid bone loss compares favorably to measurements obtained using 3D CT and CT. The accuracy of the measurements correlates with the level of training, and a learning curve is expected before mastering this technique.
Estimating discharge in rivers using remotely sensed hydraulic information
Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.
2005-01-01
A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed values, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor; the calibration functions are related to channel type. Surface velocity and width information obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR was also used to estimate discharge for a reach of the Missouri River. Without using a calibration function, the estimate accuracy was +72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the estimate accuracy to within +10% of the observed value. Remotely sensed discharge estimates with the accuracies reported in this paper could be useful for regional or continental scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.
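For illustration only, a sketch of the general shape of a width/slope power-law discharge estimate; the coefficient and exponents below are hypothetical placeholders, not the calibrated values from this study, and the reach-specific correction factor (predicted in the paper from maximum channel width and slope) is omitted.

```python
def estimate_discharge(width_m, slope, coeff=7.0, exp_w=1.6, exp_s=0.3):
    """Discharge from remotely sensed water-surface width and map-derived slope,
    Q ~ coeff * W**exp_w * S**exp_s (hypothetical parameter values)."""
    return coeff * width_m**exp_w * slope**exp_s

q = estimate_discharge(width_m=120.0, slope=2e-4)
print(f"estimated discharge ~ {q:.0f} m^3/s (uncalibrated sketch)")
```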
Seo, Dong Gi; Choi, Jeongwook
2018-05-17
Computerized adaptive testing (CAT) has been adopted in licensing examinations because of its test efficiency and measurement accuracy, and much research on CAT has been published to demonstrate both. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean medical licensing examination (KMLE). The study used a post-hoc (real-data) simulation design. The item bank was built from all items in a 2017 KMLE, and all CAT algorithms were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model. Modal a posteriori (MAP) or expected a posteriori (EAP) scoring provided more accurate estimates than MLE and WLE. Furthermore, maximum posterior-weighted information (MPWI) or minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model was recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT; based on such a study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
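A minimal sketch of two of the ingredients named above, EAP scoring and maximum posterior-weighted information (MPWI) item selection, for a hypothetical 2PL item bank. The study itself used the 'catR' package in R; this Python sketch ignores exposure control, content balancing, and stopping rules.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 50
a = rng.uniform(0.6, 2.0, n_items)       # hypothetical discriminations
b = rng.normal(0.0, 1.0, n_items)        # hypothetical difficulties

grid = np.linspace(-4, 4, 81)
posterior = np.exp(-0.5 * grid**2)       # standard normal prior (unnormalized)
posterior /= posterior.sum()

def p(theta, j):
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

administered, true_theta = [], 0.7        # hypothetical examinee ability

for _ in range(20):                       # 20-item adaptive test
    info = np.array([a[j]**2 * p(grid, j) * (1 - p(grid, j)) for j in range(n_items)])
    pwi = info @ posterior                # posterior-weighted information per item
    pwi[administered] = -np.inf           # do not reuse items
    j = int(np.argmax(pwi))
    x = int(rng.random() < p(true_theta, j))    # simulated response
    administered.append(j)
    like = p(grid, j)**x * (1 - p(grid, j))**(1 - x)
    posterior *= like
    posterior /= posterior.sum()

eap = float(grid @ posterior)
print(f"EAP estimate after 20 items: {eap:.2f} (true ability {true_theta})")
```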
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods (e.g., graph cut, random walker, and interactive image segmentation using geodesic star convexity) are reviewed in this article. A Gaussian mixture model is employed for image modelling, and the expectation-maximization algorithm learns the parameters of the Gaussian mixture model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, demonstrating that segmentation of hand gesture images helps to improve recognition accuracy. PMID:28134818
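A minimal sketch of the Gaussian-mixture step described above: fit a two-component mixture to pixel colors and use the component posteriors as soft foreground/background weights. In the full method these weights feed a Gibbs energy that is minimized with a graph cut, which is omitted here; the image data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic "image": background pixels near gray, foreground (hand) pixels near a skin tone.
bg = rng.normal([0.3, 0.3, 0.3], 0.05, size=(4000, 3))
fg = rng.normal([0.8, 0.6, 0.5], 0.05, size=(1000, 3))
pixels = np.vstack([bg, fg])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(pixels)
post = gmm.predict_proba(pixels)          # soft assignment of each pixel to a component

# Identify the foreground component as the one closer to the assumed skin color.
fg_comp = int(np.argmin(np.linalg.norm(gmm.means_ - np.array([0.8, 0.6, 0.5]), axis=1)))
fg_prob = post[:, fg_comp]
print(f"mean foreground probability on true-foreground pixels: {fg_prob[-1000:].mean():.2f}")
```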
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between the estimator employed and the accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g., sensitivity, PPV, MCC, and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
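A minimal sketch of the idea behind maximum expected accuracy (MEA) estimation on a binary space: given marginal probabilities p_i that each bit is 1, the estimator that maximizes a gain function trading true positives against true negatives reduces to thresholding the marginals; for the gamma-weighted gain assumed here, the threshold is 1/(gamma+1). The marginals below are hypothetical.

```python
import numpy as np

def mea_estimate(marginals, gamma=4.0):
    """Threshold marginal probabilities to maximize a gamma-weighted expected gain.

    Under a gain of gamma*(true positives) + (true negatives), the expected-gain
    maximizer sets bit i to 1 exactly when p_i > 1 / (gamma + 1).
    """
    return (np.asarray(marginals) > 1.0 / (gamma + 1.0)).astype(int)

# Hypothetical per-position probabilities (e.g., base-pairing or membership marginals).
p = np.array([0.05, 0.40, 0.75, 0.15, 0.60, 0.90])
print(mea_estimate(p, gamma=1.0))   # conservative: threshold 0.5
print(mea_estimate(p, gamma=4.0))   # recall-oriented: threshold 0.2
```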
Improving receiver performance of diffusive molecular communication with enzymes.
Noel, Adam; Cheung, Karen C; Schober, Robert
2014-03-01
This paper studies the mitigation of intersymbol interference in a diffusive molecular communication system using enzymes that freely diffuse in the propagation environment. The enzymes form reaction intermediates with information molecules and then degrade them so that they cannot interfere with future transmissions. A lower bound expression on the expected number of molecules measured at the receiver is derived. A simple binary receiver detection scheme is proposed where the number of observed molecules is sampled at the time when the maximum number of molecules is expected. Insight is also provided into the selection of an appropriate bit interval. The expected bit error probability is derived as a function of the current and all previously transmitted bits. Simulation results show the accuracy of the bit error probability expression and the improvement in communication performance by having active enzymes present.
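To make the "sample when the expected count peaks" idea concrete, a small sketch using the standard free-diffusion impulse response for a point source (without the enzyme degradation the paper adds to suppress intersymbol interference); all parameter values are hypothetical.

```python
import numpy as np

def expected_count(t, N=10000, D=1e-10, r=5e-6, v_rx=1e-18):
    """Expected number of molecules inside a small receiver volume at time t,
    using the 3-D free-diffusion Green's function (no degradation)."""
    return N * v_rx * (4 * np.pi * D * t) ** -1.5 * np.exp(-r**2 / (4 * D * t))

t = np.linspace(1e-4, 0.5, 5000)
counts = expected_count(t)
t_peak = t[np.argmax(counts)]
# For free diffusion the peak occurs at r^2 / (6 D); the numerical peak should agree.
print(f"numerical peak {t_peak:.4f} s, analytic r^2/(6D) = {(5e-6)**2 / (6 * 1e-10):.4f} s")
```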
Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.
Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A
2016-01-01
In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P < .0001) different mean (SD) measurements: 1.49 (0.15) Nm and 3.73 (0.79) Nm. Use under drill power at controlled low velocity and at high velocity also resulted in significantly (P < .0001) different mean (SD) measurements: 1.47 (0.14) Nm and 5.37 (0.90) Nm. The maximum single measurement obtained was 9.0 Nm, using drill power at high velocity. Locking screw insertion with improper technique may result in higher than expected torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until one torque-limiting event occurs.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
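The abstract does not reproduce the expression itself; as a loose analogy under a linear-model assumption, the sketch below shows the familiar "sandwich" correction in which an estimated residual autocovariance, rather than a white-noise assumption, enters the parameter covariance. This is an illustrative stand-in, not the flight-data formulation developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 3
X = rng.normal(size=(n, p))
theta_true = np.array([1.0, -0.5, 0.3])

# Colored (AR(1)) measurement noise instead of white noise.
e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.8 * e[k - 1] + rng.normal(scale=0.3)
y = X @ theta_true + e

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ theta_hat

# Naive covariance assumes white residuals.
M_inv = np.linalg.inv(X.T @ X)
cov_white = M_inv * r.var(ddof=p)

# Colored-residual ("sandwich") covariance using a banded empirical autocovariance.
max_lag = 25
R = np.zeros((n, n))
for lag in range(max_lag + 1):
    c = np.mean(r[:n - lag] * r[lag:])
    R += np.diag(np.full(n - lag, c), k=lag) + (np.diag(np.full(n - lag, c), k=-lag) if lag else 0)
cov_colored = M_inv @ X.T @ R @ X @ M_inv

print("naive std errors:   ", np.sqrt(np.diag(cov_white)))
print("colored std errors: ", np.sqrt(np.diag(cov_colored)))
```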
Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect
NASA Astrophysics Data System (ADS)
Chao, Chia-Chun George
2009-03-01
The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy can be as good as, in maximum error, 1 km along in-track, 0.3 km along radial and 0.1 km along cross-track up to 30 days. Similar accuracies can be expected when the object is tumbling as long as the rate of attitude change is different from the orbit rate. Results of this study reveal an important phenomenon that the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification of a Landsat scene of Tucson, Arizona, than maximum-likelihood classification. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei
2014-01-01
Precise along-track displacements can be measured with multiple-aperture interferometry (MAI). The empirical accuracies of MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, these empirical accuracies cannot be generalized to an arbitrary interferometric pair because they depend strongly on the processing parameters and the coherence of the SAR data used. A theoretical formula is available to calculate the expected MAI measurement accuracy from the system and processing parameters and the interferometric coherence. In this paper, we investigate the expected MAI measurement accuracy on the basis of this theoretical formula for existing X-, C- and L-band satellite SAR systems, and we test the agreement between the expected and empirical MAI measurement accuracies. Expected accuracies of about 2–3 cm and 3–4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3–4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated using the theoretical formula. PMID:25251408
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
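A small numerical check of the invariance claim, under the assumption of Gaussian class models: applying the same non-singular affine transform to training and test spectra leaves the quadratic discriminant comparison, and hence the maximum likelihood class assignments, unchanged. The data are synthetic, not AVIRIS spectra.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 200
# Two synthetic "spectral" classes and a set of test samples.
X1 = rng.normal([0, 0, 0, 0], 1.0, size=(n, d))
X2 = rng.normal([2, 1, 0, -1], 1.2, size=(n, d))
test = rng.normal([1, 0.5, 0, -0.5], 1.1, size=(50, d))

def ml_labels(train1, train2, samples):
    """Gaussian maximum likelihood (quadratic discriminant) classification."""
    def score(X, S):
        mu, C = X.mean(0), np.cov(X.T)
        Ci = np.linalg.inv(C)
        diff = S - mu
        return -0.5 * np.einsum("ij,jk,ik->i", diff, Ci, diff) - 0.5 * np.log(np.linalg.det(C))
    return (score(train2, samples) > score(train1, samples)).astype(int)

# Non-singular affine transformation (a radiance-to-reflectance style gain and offset).
A = rng.normal(size=(d, d)) + 3 * np.eye(d)
b = rng.normal(size=d)
transform = lambda X: X @ A.T + b

labels_raw = ml_labels(X1, X2, test)
labels_aff = ml_labels(transform(X1), transform(X2), transform(test))
print("identical assignments:", np.array_equal(labels_raw, labels_aff))
```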
Ghodasra, Jason H; Wang, Dean; Jayakar, Rohit G; Jensen, Andrew R; Yamaguchi, Kent T; Hegde, Vishal V; Jones, Kristofer J
2018-01-01
To critically evaluate the quality, accuracy, and readability of readily available Internet patient resources for platelet-rich plasma (PRP) as a treatment modality for musculoskeletal injuries. Using the 3 most commonly used Internet search engines (Google, Bing, Yahoo), the search term "platelet rich plasma" was entered, and the first 50 websites from each search were reviewed. Each website's affiliation was identified. Quality was evaluated using 25-point criteria based on guidelines published by the American Academy of Orthopaedic Surgeons, and accuracy was assessed with a previously described 12-point grading system by 3 reviewers independently. Readability was evaluated using the Flesch-Kincaid (FK) grade score. A total of 46 unique websites were identified and evaluated. The average quality and accuracy were 9.4 ± 3.4 (maximum 25) and 7.9 ± 2.3 (maximum 12), respectively. The average FK grade level was 12.6 ± 2.4, which is several grades higher than the recommended eighth-grade level for patient education material. Ninety-one percent (42/46) of websites were authored by physicians, and 9% (4/46) contained commercial bias. Mean quality was significantly greater in websites authored by health care providers (9.8 ± 3.1 vs 5.9 ± 4.7, P = .029) and in websites without commercial bias (9.9 ± 3.1 vs 4.5 ± 3.2, P = .002). Mean accuracy was significantly lower in websites authored by health care providers (7.6 ± 2.2 vs 11.0 ± 1.2, P = .004). Only 24% (11/46) reported that PRP remains an investigational treatment. The accuracy and quality of online patient resources for PRP are poor, and the information overestimates the reading ability of the general population. Websites authored by health care providers had higher quality but lower accuracy. Additionally, the majority of websites do not identify PRP as an experimental treatment, which may leave patients without appropriate understanding and expectations. Physicians should educate patients that many online patient resources have poor quality and accuracy and can be difficult to read. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
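For reference, a sketch of the Flesch-Kincaid grade-level formula used here to assess readability; the word, sentence, and syllable counts below are hypothetical (in practice they would be computed from the website text).

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    return (0.39 * total_words / total_sentences
            + 11.8 * total_syllables / total_words
            - 15.59)

# Hypothetical counts for a patient-education page.
grade = flesch_kincaid_grade(total_words=900, total_sentences=45, total_syllables=1650)
print(f"FK grade level ~ {grade:.1f}")   # values above ~8 exceed the recommended level
```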
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Wittevrongel, Benjamin; Van Hulle, Marc M
2017-01-01
Brain-Computer Interfaces (BCIs) decode brain activity with the aim of establishing a direct communication channel with an external device. Although they have been hailed as a way to (re-)establish communication in persons suffering from severe motor and/or communication disabilities, only recently have BCI applications begun challenging other assistive technologies. Owing to their considerably increased performance and the advent of affordable technological solutions, BCI technology is expected to trigger a paradigm shift not only in assistive technology but also in the way we interface with technology. However, the flip side of the quest for accuracy and speed is most evident in EEG-based visual BCI, where it has led to a gamut of increasingly complex classifiers tailored to the needs of specific stimulation paradigms and use contexts. In this contribution, we argue that spatiotemporal beamforming can serve several synchronous visual BCI paradigms. We demonstrate this for three popular visual paradigms, even without attempting to optimize their electrode sets. For each selectable target, a spatiotemporal beamformer is applied to assess whether the corresponding signal of interest is present in the preprocessed multichannel EEG signals. The target with the highest beamformer output is then selected by the decoder (maximum selection). In addition to this simple selection rule, we also investigated whether interactions between beamformer outputs could be exploited to increase accuracy by combining the outputs for all targets into a feature vector and applying three common classification algorithms. The results show that the accuracy of spatiotemporal beamforming with maximum selection is on par with that of the classification algorithms, and that interactions between beamformer outputs do not further improve accuracy.
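A minimal sketch of the decision rule described (spatiotemporal beamforming with maximum selection): for each candidate target, an LCMV-style beamformer is built from that target's template and a noise covariance estimate, and the target whose beamformer output is largest wins. The templates, covariance, and trial below are synthetic; no real EEG preprocessing is included.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ch, n_t, n_targets = 8, 50, 3
dim = n_ch * n_t

# Hypothetical spatiotemporal templates (channels x time), one per selectable target.
templates = [rng.normal(size=(n_ch, n_t)) for _ in range(n_targets)]

# Noise covariance estimated from hypothetical training segments (regularized).
noise = rng.normal(size=(300, dim))
cov = noise.T @ noise / noise.shape[0] + 1e-2 * np.eye(dim)
cov_inv = np.linalg.inv(cov)

def beamformer_output(template, data):
    """LCMV-style spatiotemporal beamformer: w = C^-1 a / (a^T C^-1 a), output = w^T x."""
    a, x = template.ravel(), data.ravel()
    w = cov_inv @ a / (a @ cov_inv @ a)
    return w @ x

# Simulate a trial containing target 1 plus noise, then apply maximum selection.
true_target = 1
trial = 0.6 * templates[true_target] + rng.normal(size=(n_ch, n_t))
outputs = [beamformer_output(t, trial) for t in templates]
print("outputs:", np.round(outputs, 2), "-> selected target", int(np.argmax(outputs)))
```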
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
Simple predictions of maximum transport rate in unsaturated soil and rock
Nimmo, John R.
2007-01-01
In contrast with the extreme variability expected for water and contaminant fluxes in the unsaturated zone, evidence from 64 field tests of preferential flow indicates that the maximum transport speed Vmax, adjusted for episodicity of infiltration, deviates little from a geometric mean of 13 m/d. A model based on constant‐speed travel during infiltration pulses of actual or estimated duration can predict Vmax with approximate order‐of‐magnitude accuracy, irrespective of medium or travel distance, thereby facilitating such problems as the prediction of worst‐case contaminant traveltimes. The lesser variability suggests that preferential flow is subject to rate‐limiting mechanisms analogous to those that impose a terminal velocity on objects in free fall and to rate‐compensating mechanisms analogous to Le Chatelier's principle. A critical feature allowing such mechanisms to dominate may be the presence of interfacial boundaries confined by neither solid material nor capillary forces.
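A tiny sketch of the worst-case travel-time idea: transport proceeds at Vmax only while infiltration pulses are active, and is stalled otherwise. The duty fraction below is a hypothetical stand-in for the paper's adjustment for episodicity of infiltration.

```python
def worst_case_arrival_days(distance_m, v_max_m_per_day=13.0, pulse_duty_fraction=0.1):
    """Order-of-magnitude arrival time: travel at v_max only during infiltration pulses
    (a hypothetical duty fraction), stalled the rest of the time."""
    transport_days = distance_m / v_max_m_per_day     # time spent actually moving
    return transport_days / pulse_duty_fraction       # elapsed calendar time

print(f"{worst_case_arrival_days(50.0):.0f} days to travel 50 m")
```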
Illusory expectations can affect retrieval-monitoring accuracy.
McDonough, Ian M; Gallo, David A
2012-03-01
The present study investigated how expectations, even when illusory, can affect the accuracy of memory decisions. Participants studied words presented in large or small font for subsequent memory tests. Replicating prior work, judgments of learning indicated that participants expected to remember large words better than small words, even though memory for these words was equivalent on a standard test of recognition memory and subjective judgments. Critically, we also included tests that instructed participants to selectively search memory for either large or small words, thereby allowing different memorial expectations to contribute to performance. On these tests we found reduced false recognition when searching memory for large words relative to small words, such that the size illusion paradoxically affected accuracy measures (d' scores) in the absence of actual memory differences. Additional evidence for the role of illusory expectations was that (a) the accuracy effect was obtained only when participants searched memory for the aspect of the stimuli corresponding to illusory expectations (size instead of color) and (b) the accuracy effect was eliminated on a forced-choice test that prevented the influence of memorial expectations. These findings demonstrate the critical role of memorial expectations in the retrieval-monitoring process. 2012 APA, all rights reserved
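Since accuracy here is reported as d' scores, a small reminder of how d' is computed from hit and false-alarm rates in signal detection theory; the rates below are hypothetical.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical recognition-test rates when searching memory for "large" vs "small" words.
print(f"large-word search: d' = {d_prime(0.80, 0.15):.2f}")
print(f"small-word search: d' = {d_prime(0.80, 0.25):.2f}")
```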
Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.
1992-01-01
Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.
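A compact sketch of the univariate step described (ordinary kriging of precipitation from scattered stations); the cokriging extension with elevation as a secondary variable follows the same pattern with cross-covariances but is omitted here. Station data and the covariance model parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical station coordinates (km) and average annual precipitation (mm).
xy = rng.uniform(0, 50, size=(12, 2))
aap = 120 + 2.5 * xy[:, 0] + rng.normal(0, 10, 12)

def cov_model(h, sill=400.0, range_km=30.0):
    """Exponential covariance model (hypothetical variogram parameters)."""
    return sill * np.exp(-h / range_km)

def ordinary_krige(xy, z, x0):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov_model(d)
    K[n, :n] = K[:n, n] = 1.0             # unbiasedness (Lagrange) constraint
    k = np.append(cov_model(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(K, k)
    estimate = w[:n] @ z
    variance = cov_model(0.0) - w @ k      # kriging (estimation) variance
    return estimate, variance

est, var = ordinary_krige(xy, aap, np.array([25.0, 25.0]))
print(f"kriged AAP ~ {est:.0f} mm, kriging variance ~ {var:.0f} mm^2")
```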
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Q; Driewer, J; Wang, S
Purpose: The accuracy of the Varian PerfectPitch six degree-of-freedom (DOF) robotic couch was examined using the Varian Isocal phantom and cone-beam CT (CBCT) system. Methods: CBCT images of the Isocal phantom were taken at different pitch and roll angles. The pitch and roll angles were varied from 357 to 3 degrees in one-degree increments by input from the service console, generating a total of 49 combinations with couch angle (yaw) zero. The center positions of the 16 tungsten carbide BBs contained in the Isocal were determined with in-house image processing software. Expected BB positions at different rotation angles were determined mathematically by applying a combined translation/rotation operator to the BB positions at zero pitch and roll. A least squares method was used to minimize the difference between the expected BB positions and their measured positions; in this way rotation angles were obtained and compared with the input values from the console. Results: A total of 49 CBCT images with voxel size 0.51 mm × 0.51 mm × 1 mm were used in the analysis. Among the 49 calculations, the maximum rotation angle differences were 0.1 degree, 0.15 degree, and 0.09 degree for pitch, roll, and couch rotation, respectively. The mean ± standard-deviation angle differences were 0.028±0.001 degree, −0.043±0.003 degree, and −0.009±0.001 degree for pitch, roll, and couch rotation, respectively. The maximum isocenter shifts were 0.3 mm, 0.5 mm, and 0.4 mm in the x, y, and z directions, respectively, following the IEC6127 convention. The mean isocenter shifts were 0.07±0.02 mm, −0.05±0.06 mm, and −0.12±0.02 mm in the x, y, and z directions. Conclusion: The accuracy of the Varian PerfectPitch six-DOF couch was studied with CBCTs of the Isocal phantom. The rotational errors were less than 0.15 degree and the isocenter shifts were less than 0.5 mm in any direction. This accuracy is sufficient for stereotactic radiotherapy clinical applications.
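A minimal sketch of the geometric step described: given expected marker positions at zero pitch/roll and their measured positions after a couch rotation, a least-squares (Kabsch/SVD) fit recovers the rotation, whose error can then be reported. The geometry, noise level, and angles below are hypothetical stand-ins, not the clinical coordinate convention or phantom layout.

```python
import numpy as np

def fit_rotation(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch/SVD)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(6)
P = rng.uniform(-100, 100, size=(16, 3))   # hypothetical BB positions at zero pitch/roll (mm)

# Apply a known small rotation (pitch about x, roll about y) plus measurement noise.
pitch, roll = np.deg2rad(2.0), np.deg2rad(-1.0)
Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
Ry = np.array([[np.cos(roll), 0, np.sin(roll)], [0, 1, 0], [-np.sin(roll), 0, np.cos(roll)]])
R_true = Rx @ Ry
Q = P @ R_true.T + rng.normal(scale=0.2, size=P.shape)

R, t = fit_rotation(P, Q)
angle_err = np.degrees(np.arccos(np.clip((np.trace(R.T @ R_true) - 1) / 2, -1, 1)))
max_resid = np.abs(Q - (P @ R.T + t)).max()
print(f"rotation recovery error ~ {angle_err:.3f} deg, max BB residual ~ {max_resid:.2f} mm")
```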
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called the version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
Design of testbed and emulation tools
NASA Technical Reports Server (NTRS)
Lundstrom, S. F.; Flynn, M. J.
1986-01-01
The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software based set of hierarchical tools was chosen to provide maximum flexibility, to ease in moving to new computers as technology improves and to take advantage of the inherent reliability and availability of commercially available computing systems.
Apollo 16, LM-11 descent propulsion system final flight evaluation
NASA Technical Reports Server (NTRS)
Avvenire, A. T.
1974-01-01
The performance of the LM-11 descent propulsion system during the Apollo 16 missions was evaluated and found satisfactory. The average engine effective specific impulse was 0.1 second higher than predicted, but well within the predicted one sigma uncertainty of 0.2 seconds. Several flight measurement discrepancies existed during the flight as follows: (1) the chamber pressure transducer had a noticeable drift, exhibiting a maximum error of about 1.5 psi at approximately 130 seconds after engine ignition, (2) the fuel and oxidizer interface pressure measurements appeared to be low during the entire flight, and (3) the fuel propellant quantity gaging system did not perform within expected accuracies.
Application of Bayesian Networks to hindcast barrier island morphodynamics
Wilson, Kathleen E.; Adams, Peter N.; Hapke, Cheryl J.; Lentz, Erika E.; Brenner, Owen T.
2015-01-01
We refine a preliminary Bayesian Network by 1) increasing model experience through additional observations, 2) including anthropogenic modification history, and 3) replacing parameterized wave impact values with maximum run-up elevation. Further, we develop and train a pair of generalized models with an additional dataset encompassing a different storm event, which expands the observations beyond our hindcast objective. We compare the skill of the generalized models against the Nor'Ida-specific model formulation, balancing the reduced skill against an expectation of increased transferability. Results of the Nor'Ida hindcasts ranged in skill from 0.37 to 0.51 and in accuracy from 65.0 to 81.9%.
Retrieval-travel-time model for free-fall-flow-rack automated storage and retrieval system
NASA Astrophysics Data System (ADS)
Metahri, Dhiyaeddine; Hachemi, Khalid
2018-03-01
Automated storage and retrieval systems (AS/RSs) are material handling systems that are frequently used in manufacturing and distribution centers. Modelling the retrieval travel time of an AS/RS (the expected product delivery time) is practically important because it allows us to evaluate and improve system throughput. The free-fall flow-rack AS/RS has emerged as a new technology for drug distribution. This system is a new variation of the flow-rack AS/RS that uses an operator or a single machine for storage operations and a combination of free-fall movement and a transport conveyor for retrieval operations. The main contribution of this paper is to develop an analytical model of the expected retrieval travel time for the free-fall flow-rack under a dedicated storage assignment policy. The proposed model, which is based on a continuous approach, is compared for accuracy, via simulation, with a discrete model. The obtained results show that the maximum deviation between the continuous model and the simulation is less than 5%, which demonstrates the accuracy of our model for estimating the retrieval time. The analytical model is useful for optimising the dimensions of the rack, assessing the system throughput, and evaluating different storage policies.
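For intuition only, a sketch of a discrete expected retrieval time for a flow-rack in which an item free-falls from its tier and is then carried by a conveyor to the pick point. The geometry, kinematics, and the assumption of equally likely storage cells are hypothetical and do not reproduce the paper's continuous model.

```python
import numpy as np

# Hypothetical rack geometry and kinematics (not the paper's parameters).
n_tiers, n_cols = 8, 20
tier_height, col_width = 0.35, 0.45        # m
conveyor_speed = 1.0                        # m/s
g = 9.81

def retrieval_time(tier, col):
    """Free-fall from the item's tier, then conveyor transport to the pick point."""
    fall = np.sqrt(2 * tier * tier_height / g)
    transport = col * col_width / conveyor_speed
    return fall + transport

# Discrete expectation: average over all storage cells, assumed equally likely.
tiers, cols = np.meshgrid(np.arange(1, n_tiers + 1), np.arange(1, n_cols + 1), indexing="ij")
expected_discrete = retrieval_time(tiers, cols).mean()
print(f"expected retrieval time ~ {expected_discrete:.2f} s over {n_tiers * n_cols} cells")
```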
Subjective Life Expectancy Among College Students.
Rodemann, Alyssa E; Arigo, Danielle
2017-09-14
Establishing healthy habits in college is important for long-term health. Despite existing health promotion efforts, many college students fail to meet recommendations for behaviors such as healthy eating and exercise, which may be due to low perceived risk for health problems. The goals of this study were to examine: (1) the accuracy of life expectancy predictions, (2) potential individual differences in accuracy (i.e., gender and conscientiousness), and (3) potential change in accuracy after inducing awareness of current health behaviors. College students from a small northeastern university completed an electronic survey, including demographics, initial predictions of their life expectancy, and their recent health behaviors. At the end of the survey, participants were asked to predict their life expectancy a second time. Their health data were then submitted to a validated online algorithm to generate a calculated life expectancy. Participants significantly overestimated their initial life expectancy, and neither gender nor conscientiousness was related to the accuracy of these predictions. Further, subjective life expectancy decreased from initial to final predictions. These findings suggest that life expectancy perceptions present a unique, and potentially modifiable, psychological process that could influence college students' self-care.
Assessment of macroseismic intensity in the Nile basin, Egypt
NASA Astrophysics Data System (ADS)
Fergany, Elsayed
2018-01-01
This work assesses deterministic seismic hazard and risk in terms of a maximum expected intensity map for the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated, compatible earthquake catalog (as of 2015), focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then produced based on the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk are discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes are likely to pose high risk to the Cairo and Aswan regions. The results of this study could serve as a recommendation for planners charged with mitigating seismic risk in these strategic zones of Egypt.
Test expectancy affects metacomprehension accuracy.
Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D
2011-06-01
Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, students completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.
Coral Bleaching Products - Office of Satellite and Product Operations
weeks. One DHW is equivalent to one week of sea surface temperatures one degree Celsius greater than the expected summertime maximum. Two DHWs are equivalent to two weeks at one degree above the expected summertime maximum OR one week of two degrees above the expected summertime maximum. Also called Coral Reef
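As a rough illustration of the accumulation rule described, a sketch that computes degree heating weeks (DHW) from a weekly SST series and a climatological expected summertime maximum; only weeks at least 1 °C above that maximum contribute, and the values here are hypothetical. The trailing 12-week accumulation window is an assumption of this sketch.

```python
import numpy as np

def degree_heating_weeks(weekly_sst, summer_max, window=12):
    """Accumulate (SST - expected summertime maximum) over the trailing `window` weeks,
    counting only weeks that are at least 1 deg C above the maximum (units: deg C-weeks)."""
    hotspot = np.asarray(weekly_sst) - summer_max
    contrib = np.where(hotspot >= 1.0, hotspot, 0.0)
    return contrib[-window:].sum()

# Hypothetical 12 weeks of SST around a 29.0 deg C expected summertime maximum.
sst = [28.5, 29.2, 30.1, 30.3, 29.9, 30.5, 30.2, 29.4, 30.0, 30.4, 30.6, 29.8]
print(f"DHW = {degree_heating_weeks(sst, summer_max=29.0):.1f} deg C-weeks")
```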
NASA Astrophysics Data System (ADS)
Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie
2017-10-01
Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation, and to resolve the conflict between measurement range and accuracy, this paper presents a capacitance variation measurement method that measures the output capacitance variation (relative value) of a micro-capacitance sensor over a continuously variable measuring range. We present the principles of the method and analyze the non-ideal factors affecting it. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The results show that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05%, and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method of measuring capacitance and is expected to find applications in micro-capacitance sensors for measuring capacitance variation over a continuously variable measuring range.
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Barberopoulou, A.; Miller, K. M.; Goltz, J. D.; Synolakis, C. E.
2008-12-01
A consortium of tsunami hydrodynamic modelers, geologic hazard mapping specialists, and emergency planning managers is producing maximum tsunami inundation maps for California, covering most residential and transient populated areas along the state's coastline. The new tsunami inundation maps will be an upgrade from the existing maps for the state, improving on the resolution, accuracy, and coverage of the maximum anticipated tsunami inundation line. Thirty-five separate map areas covering nearly one-half of California's coastline were selected for tsunami modeling using the MOST (Method of Splitting Tsunami) model. From preliminary evaluations of nearly fifty local and distant tsunami source scenarios, those with the maximum expected hazard for a particular area were input to MOST. The MOST model was run with a near-shore bathymetric grid resolution varying from three arc-seconds (90m) to one arc-second (30m), depending on availability. Maximum tsunami "flow depth" and inundation layers were created by combining all modeled scenarios for each area. A method was developed to better define the location of the maximum inland penetration line using higher resolution digital onshore topographic data from interferometric radar sources. The final inundation line for each map area was validated using a combination of digital stereo photography and fieldwork. Further verification of the final inundation line will include ongoing evaluation of tsunami sources (seismic and submarine landslide) as well as comparison to the location of recorded paleotsunami deposits. Local governmental agencies can use these new maximum tsunami inundation lines to assist in the development of their evacuation routes and emergency response plans.
Image construction from the IRAS survey and data fusion
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. R.
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters (temperature, density, and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
Lee-Carter state space modeling: Application to the Malaysia mortality data
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-06-01
This article presents an approach that formalizes the Lee-Carter (LC) model as a state space model. Maximum likelihood estimation via the expectation-maximization (EM) algorithm was used to fit the model. The methodology is applied to Malaysia's total population mortality data. Malaysia's mortality was modeled based on age-specific death rate (ASDR) data from 1971-2009. The fitted ASDR are compared to the actual observed values; the comparison shows that the fitted values from the LC-SS model and the original LC model are quite close. In addition, there is little difference in root mean squared error (RMSE) and Akaike information criterion (AIC) between the two models. The LC-SS model estimated in this study can be extended for forecasting ASDR in Malaysia; the accuracy of the LC-SS model relative to the original LC model can then be further examined by verifying forecasting power with out-of-sample comparisons.
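For context, a sketch of the classical Lee-Carter decomposition, log m(x,t) = a_x + b_x k_t + error, fitted here by the original SVD approach on synthetic rates. The paper instead casts the same model in state-space form and estimates it by EM, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(7)
ages, years = 20, 40

# Synthetic age-specific death rates with a downward mortality trend.
a_true = np.linspace(-7.5, -2.0, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(12, -12, years)
log_m = a_true[:, None] + np.outer(b_true, k_true) + rng.normal(0, 0.02, (ages, years))

# Classical Lee-Carter fit: a_x = row means, then a rank-1 SVD of the residuals.
a_x = log_m.mean(axis=1)
resid = log_m - a_x[:, None]
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                  # normalize so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()             # rescale so the rank-1 product is unchanged
k_t = k_t - k_t.mean()                         # usual identifiability constraint on k_t

fitted = a_x[:, None] + np.outer(b_x, k_t)
print(f"max abs error in fitted log rates: {np.abs(fitted - log_m).max():.3f}")
```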
Design and Implementation of an Intrinsically Safe Liquid-Level Sensor Using Coaxial Cable
Jin, Baoquan; Liu, Xin; Bai, Qing; Wang, Dong; Wang, Yu
2015-01-01
Real-time detection of liquid level in complex environments has always been a knotty issue. In this paper, an intrinsically safe liquid-level sensor system for flammable and explosive environments is designed and implemented. The poly vinyl chloride (PVC) coaxial cable is chosen as the sensing element and the measuring mechanism is analyzed. Then, the capacitance-to-voltage conversion circuit is designed and the expected output signal is achieved by adopting parameter optimization. Furthermore, the experimental platform of the liquid-level sensor system is constructed, which involves the entire process of measuring, converting, filtering, processing, visualizing and communicating. Additionally, the system is designed with characteristics of intrinsic safety by limiting the energy of the circuit to avoid or restrain the thermal effects and sparks. Finally, the approach of the piecewise linearization is adopted in order to improve the measuring accuracy by matching the appropriate calibration points. The test results demonstrate that over the measurement range of 1.0 m, the maximum nonlinearity error is 0.8% full-scale span (FSS), the maximum repeatability error is 0.5% FSS, and the maximum hysteresis error is reduced from 0.7% FSS to 0.5% FSS by applying software compensation algorithms. PMID:26029949
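A small sketch of the sensing principle (not the paper's circuit): a vertical coaxial probe behaves as two capacitors in parallel, the submerged length filled with the liquid dielectric and the rest with air, so the measured capacitance maps linearly to liquid level. The dimensions and permittivity below are hypothetical.

```python
import numpy as np

eps0 = 8.854e-12           # F/m
a, b = 2e-3, 6e-3          # inner/outer conductor radii (m), hypothetical
L = 1.0                    # probe length (m)
eps_r = 3.0                # relative permittivity of the liquid (hypothetical)

c_per_m_air = 2 * np.pi * eps0 / np.log(b / a)
c_per_m_liq = eps_r * c_per_m_air

def capacitance(level_m):
    """Submerged and dry sections act as parallel capacitors."""
    return c_per_m_liq * level_m + c_per_m_air * (L - level_m)

def level_from_capacitance(c_meas):
    """Invert the linear relation to recover liquid level."""
    return (c_meas - c_per_m_air * L) / (c_per_m_liq - c_per_m_air)

c = capacitance(0.42)
print(f"C at 0.42 m: {c * 1e12:.1f} pF -> recovered level {level_from_capacitance(c):.3f} m")
```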
NASA Technical Reports Server (NTRS)
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
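A minimal sketch of the registration step described: estimate the translation between two co-located image chips by locating the peak of their FFT-based cross-correlation. The data here are synthetic and the peak is found only to integer-pixel precision, whereas the actual evaluation upsamples the ABI sub-images and uses Landsat chips as truth.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic high-contrast "scene" and a copy shifted by a known offset plus noise.
scene = rng.normal(size=(128, 128))
true_shift = (3, -5)                          # rows, cols
shifted = np.roll(scene, true_shift, axis=(0, 1)) + 0.05 * rng.normal(size=scene.shape)

def register_translation(ref, img):
    """Integer-pixel translation that maximizes circular cross-correlation (via FFT)."""
    xcorr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Map peak indices to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))

print("estimated shift:", register_translation(scene, shifted), "true:", true_shift)
```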
Simeon, Katherine M.; Bicknell, Klinton; Grieco-Calub, Tina M.
2018-01-01
Individuals use semantic expectancy – applying conceptual and linguistic knowledge to speech input – to improve the accuracy and speed of language comprehension. This study tested how adults use semantic expectancy in quiet and in the presence of speech-shaped broadband noise at -7 and -12 dB signal-to-noise ratio. Twenty-four adults (22.1 ± 3.6 years, mean ±SD) were tested on a four-alternative-forced-choice task whereby they listened to sentences and were instructed to select an image matching the sentence-final word. The semantic expectancy of the sentences was unrelated to (neutral), congruent with, or conflicting with the acoustic target. Congruent expectancy improved accuracy and conflicting expectancy decreased accuracy relative to neutral, consistent with a theory where expectancy shifts beliefs toward likely words and away from unlikely words. Additionally, there were no significant interactions of expectancy and noise level when analyzed in log-odds, supporting the predictions of ideal observer models of speech perception. PMID:29472883
NASA Technical Reports Server (NTRS)
Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.
2006-01-01
Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.
Computing the Partition Function for Kinetically Trapped RNA Secondary Structures
Lorenz, William A.; Clote, Peter
2011-01-01
An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far smaller than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures; (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA; (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
Henderson, Richard; Chen, Shaoxia; Chen, James Z.; Grigorieff, Nikolaus; Passmore, Lori A.; Ciccarelli, Luciano; Rubinstein, John L.; Crowther, R. Anthony; Stewart, Phoebe L.; Rosenthal, Peter B.
2011-01-01
The comparison of a pair of electron microscope images recorded at different specimen tilt angles provides a powerful approach for evaluating the quality of images, image-processing procedures, or three-dimensional structures. Here, we analyze tilt-pair images recorded from a range of specimens with different symmetries and molecular masses and show how the analysis can produce valuable information not easily obtained otherwise. We show that the accuracy of orientation determination of individual single particles depends on molecular mass, as expected theoretically since the information in each particle image increases with molecular mass. The angular uncertainty is less than 1° for particles of high molecular mass (∼ 50 MDa), several degrees for particles in the range 1–5 MDa, and tens of degrees for particles below 1 MDa. Orientational uncertainty may be the major contributor to the effective temperature factor (B-factor) describing contrast loss and therefore the maximum resolution of a structure determination. We also made two unexpected observations. Single particles that are known to be flexible showed a wider spread in orientation accuracy, and the orientations of the largest particles examined changed by several degrees during typical low-dose exposures. Smaller particles presumably also reorient during the exposure; hence, specimen movement is a second major factor that limits resolution. Tilt pairs thus enable assessment of orientation accuracy, map quality, specimen motion, and conformational heterogeneity. A convincing tilt-pair parameter plot, where 60% of the particles show a single cluster around the expected tilt axis and tilt angle, provides confidence in a structure determined using electron cryomicroscopy. PMID:21939668
PubChem3D: conformer ensemble accuracy
2013-01-01
Background PubChem is a free and publicly available resource containing substance descriptions and their associated biological activity information. PubChem3D is an extension to PubChem containing computationally-derived three-dimensional (3-D) structures of small molecules. All the tools and services that are a part of PubChem3D rely upon the quality of the 3-D conformer models. Construction of the conformer models currently available in PubChem3D involves a clustering stage to sample the conformational space spanned by the molecule. While this stage allows one to downsize the conformer models to more manageable size, it may result in a loss of the ability to reproduce experimentally determined “bioactive” conformations, for example, found for PDB ligands. This study examines the extent of this accuracy loss and considers its effect on the 3-D similarity analysis of molecules. Results The conformer models consisting of up to 100,000 conformers per compound were generated for 47,123 small molecules whose structures were experimentally determined, and the conformers in each conformer model were clustered to reduce the size of the conformer model to a maximum of 500 conformers per molecule. The accuracy of the conformer models before and after clustering was evaluated using five different measures: root-mean-square distance (RMSD), shape-optimized shape-Tanimoto (STST-opt) and combo-Tanimoto (ComboTST-opt), and color-optimized color-Tanimoto (CTCT-opt) and combo-Tanimoto (ComboTCT-opt). On average, the effect of clustering decreased the conformer model accuracy, increasing the conformer ensemble’s RMSD to the bioactive conformer (by 0.18 ± 0.12 Å), and decreasing the STST-opt, ComboTST-opt, CTCT-opt, and ComboTCT-opt scores (by 0.04 ± 0.03, 0.16 ± 0.09, 0.09 ± 0.05, and 0.15 ± 0.09, respectively). Conclusion This study shows the RMSD accuracy performance of the PubChem3D conformer models is operating as designed. In addition, the effect of PubChem3D sampling on 3-D similarity measures shows that there is a linear degradation of average accuracy with respect to molecular size and flexibility. Generally speaking, one can likely expect the worst-case minimum accuracy of 90% or more of the PubChem3D ensembles to be 0.75, 1.09, 0.43, and 1.13, in terms of STST-opt, ComboTST-opt, CTCT-opt, and ComboTCT-opt, respectively. This expected accuracy improves linearly as the molecule becomes smaller or less flexible. PMID:23289532
NASA Astrophysics Data System (ADS)
Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.
2013-09-01
In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.
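One of the figures of merit cited above, the universal quality index (UQI), can be computed directly from image statistics. The sketch below implements the standard global form of the index (the Wang–Bovik definition combining correlation loss, luminance distortion, and contrast distortion) on two arrays; it is a generic illustrative helper, not the evaluation code used in the paper.

import numpy as np

def uqi(x, y):
    """Global universal quality index between two images of equal shape.
    Equals 1 only when the images are identical (assuming non-constant images)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

img = np.random.default_rng(1).uniform(10, 20, size=(64, 64))
noisy = img + np.random.default_rng(2).normal(0, 1, img.shape)
print(round(uqi(img, img), 3))                     # 1.0 for identical images
print(round(uqi(img, noisy), 3))                   # lower value for the degraded image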
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples were utilized as trabecular bone-mimicking phantoms for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, as expected, they simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.
Effect of black point on accuracy of LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; He, Nannan; Ye, Yushan
2018-03-01
The black point is the point at which the digital drive value of each RGB channel is 0. Because of light leakage in liquid-crystal displays (LCDs), the luminance at the black point is not 0; this phenomenon introduces errors into the colorimetric characterization of LCDs, with larger effects at low-luminance drive values. This paper describes the characterization accuracy of the polynomial model method and the effect of the black point on that accuracy, reported as color difference. When the black point is taken into account in the characterization equation, the maximum color difference is 3.246, which is 2.36 lower than the maximum color difference obtained without considering the black point. The experimental results show that the accuracy of LCD colorimetric characterization can be improved if the effect of the black point is properly eliminated.
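A common way to build a polynomial characterization model of the kind discussed above is to fit a least-squares mapping from black-point-corrected RGB drive values to measured XYZ tristimulus values. The sketch below illustrates that idea on synthetic data; the display model, the choice of second-order polynomial terms, and all numeric values are hypothetical assumptions for demonstration only.

import numpy as np

def poly_terms(rgb):
    """Second-order polynomial expansion of an (N, 3) array of RGB drive values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b, r*g, r*b, g*b, r*r, g*g, b*b])

rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, size=(60, 3))                   # training drive values
black_xyz = np.array([0.4, 0.4, 0.5])                   # leaked luminance at RGB = (0, 0, 0)
true_M = rng.uniform(0, 80, size=(3, 3))
xyz_measured = rgb @ true_M.T + black_xyz               # hypothetical display with light leakage

# Characterization with the black point subtracted from every measurement.
A = poly_terms(rgb)
coeffs, *_ = np.linalg.lstsq(A, xyz_measured - black_xyz, rcond=None)

test_rgb = rng.uniform(0, 1, size=(20, 3))
pred = poly_terms(test_rgb) @ coeffs + black_xyz        # add the black point back
truth = test_rgb @ true_M.T + black_xyz
print("max abs XYZ error:", np.abs(pred - truth).max())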
NASA Astrophysics Data System (ADS)
Manns, Fabrice; Rol, Pascal O.; Parel, Jean-Marie A.; Schmid, Armin; Shen, Jin-Hui; Matsui, Takaaki; Soederberg, Per G.
1996-05-01
The smoothness and accuracy of PMMA ablations with a prototype scanning photorefractive keratectomy (SPRK) system were evaluated by optical profilometry. A prototype frequency- quintupled Nd:YAG laser (Laser Harmonic, LaserSight, Orlando, FL) was used (wavelength: 213 nm, pulse duration: 15 ns, repetition rate: 10 Hz). The laser energy was delivered through two computer-controlled galvanometer scanners that were controlled with our own hardware and software. The system was programmed to create on a block of PMMA the ablations corresponding to the correction of 6 diopters of myopia with 60%, 70%, and 80% spot overlap. The energy was 1.25 mJ. After ablation, the topography of the samples was measured with an optical profilometer (UBM Messtechnik, Ettlingen, Germany). The ablation depth was 10 to 15 micrometer larger than expected. The surfaces created with 50% to 70% overlap exhibited large saw-tooth like variations, with a maximum peak to peak variation of approximately 20 micrometer. With 80% overlap, the rms roughness was 1.3 micrometer and the central flattening was 7 diopters. This study shows that scanning PRK can produce smooth and accurate ablations.
Metabolic optimisation of the basketball free throw.
Padulo, Johnny; Attene, Giuseppe; Migliaccio, Gian Mario; Cuzzolin, Francesco; Vando, Stefano; Ardigò, Luca Paolo
2015-01-01
The free throw (FT) is a fundamental basketball skill used frequently during a match. Most actual play occurs at about 85% of maximum heart rate (HR). Metabolic intensity, through fatigue, may influence a technically skilled movement such as the FT. Twenty-eight under-17 basketball players were studied while shooting FTs on a regular indoor basketball court. We investigated FT accuracy in young male basketball players shooting at three different HRs: at rest, at 50%, and at 80% of the maximum experimentally obtained HR value. We found no significant FT percentage difference between rest and 50% of the maximum HR (FT percentage about 80%; P > 0.05). In contrast, at 80% of the maximum HR the FT percentage decreased significantly by more than 20% (P < 0.001), down to about 60%. No preliminary warm-up is needed before entering the game as far as FT accuracy is concerned. Furthermore, we speculate that the time-consuming, cooling-off routines usually performed by shooters before each FT may help improve its accuracy.
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
NASA Astrophysics Data System (ADS)
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique in which the observed probabilities differ, depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulation and verification of natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
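The Terzaghi estimator referred to above reweights each fracture observed along a scanline to compensate for the orientation sampling bias: fractures nearly parallel to the scanline are intersected less often, so each observation is upweighted by the reciprocal of the sine of its intersection angle, and observations inside a blind-zone cutoff are typically discarded. A minimal sketch of that usual weighting is shown below; the 20° cutoff and the sample angles are assumptions for illustration, not values from the paper.

import math

def terzaghi_weights(angles_deg, blind_zone_deg=20.0):
    """Terzaghi-style weights for fractures intersected by a scanline.

    angles_deg: acute angle between the scanline and each fracture plane.
    Fractures inside the blind zone are so severely under-sampled that they
    are dropped (weight 0) rather than given an enormous weight."""
    weights = []
    for a in angles_deg:
        if a < blind_zone_deg:
            weights.append(0.0)                    # inside the blind zone
        else:
            weights.append(1.0 / math.sin(math.radians(a)))
    return weights

observed = [85.0, 60.0, 35.0, 12.0]                # hypothetical intersection angles
for a, w in zip(observed, terzaghi_weights(observed)):
    print(f"angle {a:5.1f} deg -> weight {w:.2f}")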
DEM interpolation weight calculation modulus based on maximum entropy
NASA Astrophysics Data System (ADS)
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional interpolation for gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is utilized to analyze the model system that depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is studied by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the magnitude and scaling of the maximum entropy weights fit the spatial relations well, and the accuracy is superior to that of the latter two methods.
Kiel, Elizabeth J; Buss, Kristin A
2010-07-03
OBJECTIVE: Although maternal internalizing symptoms and parenting dimensions have been linked to reports and perceptions of children's behavior, it remains relatively unknown whether these characteristics relate to expectations or the accuracy of expectations for toddlers' responses to novel situations. DESIGN: A community sample of 117 mother-toddler dyads participated in a laboratory visit and questionnaire completion. At the laboratory, mothers were interviewed about their expectations for their toddlers' behaviors in a variety of novel tasks; toddlers then participated in these activities, and trained coders scored their behaviors. Mothers completed questionnaires assessing demographics, depressive and worry symptoms, and parenting dimensions. RESULTS: Mothers who reported more worry expected their toddlers to display more fearful behavior during the laboratory tasks, but worry did not moderate how accurately maternal expectations predicted toddlers' observed behavior. When also reporting a low level of authoritative-responsive parenting, maternal depressive symptoms moderated the association between maternal expectations and observed toddler behavior, such that, as depressive symptoms increased, maternal expectations related less strongly to toddler behavior. CONCLUSIONS: When mothers were asked about their expectations for their toddlers' behavior in the same novel situations from which experimenters observe this behavior, symptoms and parenting had minimal effect on the accuracy of mothers' expectations. When in the context of low authoritative-responsive parenting, however, depressive symptoms related to less accurate predictions of their toddlers' fearful behavior.
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
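A point-by-point Gaussian maximum-likelihood classifier of the kind evaluated above assigns each pixel to the class whose multivariate normal density, estimated from training statistics, is highest. The sketch below shows that decision rule with SciPy on synthetic two-band data; the class names, means, covariances, and pixel values are invented for illustration and are not the Image-100 training statistics.

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical training statistics (mean vector and covariance matrix per class).
classes = {
    "corn":     (np.array([42.0, 61.0]), np.array([[9.0, 2.0], [2.0, 12.0]])),
    "soybeans": (np.array([55.0, 48.0]), np.array([[8.0, 1.5], [1.5, 10.0]])),
}

def classify(pixel):
    """Assign the pixel to the class with the largest Gaussian log-likelihood."""
    scores = {name: multivariate_normal(mean, cov).logpdf(pixel)
              for name, (mean, cov) in classes.items()}
    return max(scores, key=scores.get)

for px in [np.array([44.0, 60.0]), np.array([56.0, 47.0])]:
    print(px, "->", classify(px))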
The Effect of Pixel Size on the Accuracy of Orthophoto Production
NASA Astrophysics Data System (ADS)
Kulur, S.; Yildiz, F.; Selcuk, O.; Yildiz, M. A.
2016-06-01
In our country, orthophoto products are used by the public and private sectors for engineering services and infrastructure projects. Orthophotos are particularly preferred because their production is faster and more economical than vector-based digital photogrammetric production. Today, digital orthophotos provide the accuracy expected for engineering and infrastructure projects. In this study, the accuracy of orthophotos produced with pixel sizes of different sampling intervals is tested against the expectations of engineering and infrastructure projects.
PREMIX: PRivacy-preserving EstiMation of Individual admiXture.
Chen, Feng; Dow, Michelle; Ding, Sijie; Lu, Yao; Jiang, Xiaoqian; Tang, Hua; Wang, Shuang
2016-01-01
In this paper we propose a framework: PRivacy-preserving EstiMation of Individual admiXture (PREMIX), using Intel Software Guard Extensions (SGX). SGX is a suite of software and hardware architectures that enables efficient and secure computation over confidential data. PREMIX enables multiple sites to securely collaborate on estimating individual admixture within a secure enclave inside Intel SGX. We implemented a feature selection module to identify the most discriminative Single Nucleotide Polymorphisms (SNPs) based on informativeness, and an Expectation Maximization (EM)-based maximum likelihood estimator to identify the individual admixture. Experimental results based on both simulation and 1000 Genomes data demonstrated the efficiency and accuracy of the proposed framework. PREMIX ensures a high level of security, as all operations on sensitive genomic data are conducted within a secure enclave using SGX.
The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy
ERIC Educational Resources Information Center
Wyse, Adam E.
2011-01-01
In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…
Corn and soybean Landsat MSS classification performance as a function of scene characteristics
NASA Technical Reports Server (NTRS)
Batista, G. T.; Hixson, M. M.; Bauer, M. E.
1982-01-01
In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.
Da, Yang; Wang, Chunkao; Wang, Shengwen; Hu, Guo
2014-01-01
We established a genomic model of quantitative trait with genomic additive and dominance relationships that parallels the traditional quantitative genetics model, which partitions a genotypic value as breeding value plus dominance deviation and calculates additive and dominance relationships using pedigree information. Based on this genomic model, two sets of computationally complementary but mathematically identical mixed model methods were developed for genomic best linear unbiased prediction (GBLUP) and genomic restricted maximum likelihood estimation (GREML) of additive and dominance effects using SNP markers. These two sets are referred to as the CE and QM sets, where the CE set was designed for large numbers of markers and the QM set was designed for large numbers of individuals. GBLUP and associated accuracy formulations for individuals in training and validation data sets were derived for breeding values, dominance deviations and genotypic values. Simulation study showed that GREML and GBLUP generally were able to capture small additive and dominance effects that each accounted for 0.00005–0.0003 of the phenotypic variance and GREML was able to differentiate true additive and dominance heritability levels. GBLUP of the total genetic value as the summation of additive and dominance effects had higher prediction accuracy than either additive or dominance GBLUP, causal variants had the highest accuracy of GREML and GBLUP, and predicted accuracies were in agreement with observed accuracies. Genomic additive and dominance relationship matrices using SNP markers were consistent with theoretical expectations. The GREML and GBLUP methods can be an effective tool for assessing the type and magnitude of genetic effects affecting a phenotype and for predicting the total genetic value at the whole genome level. PMID:24498162
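The genomic additive relationship matrix underlying GBLUP and GREML can be built directly from SNP genotypes. A minimal sketch of one common construction (centering genotypes by allele frequency and normalizing by the total expected marker variance, in the style of VanRaden) is given below; it is a generic illustration under Hardy-Weinberg sampling assumptions, not the CE/QM implementation described in the paper.

import numpy as np

def additive_grm(genotypes):
    """Genomic additive relationship matrix from an (individuals x markers)
    matrix of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0               # estimated allele frequencies
    Z = genotypes - 2.0 * p                        # center each marker
    denom = 2.0 * np.sum(p * (1.0 - p))            # total expected marker variance
    return (Z @ Z.T) / denom

rng = np.random.default_rng(3)
geno = rng.binomial(2, 0.5, size=(6, 500)).astype(float)   # 6 individuals, 500 SNPs
G = additive_grm(geno)
print(G.shape, round(np.mean(np.diag(G)), 2))      # diagonal averages near 1 for unrelated individuals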
Walker, Luke; Chinnaiyan, Prakash; Forster, Kenneth
2008-01-01
We conducted a comprehensive evaluation of the clinical accuracy of an image‐guided frameless intracranial radiosurgery system. All links in the process chain were tested. Using healthy volunteers, we evaluated a novel method to prospectively quantify the range of target motion for optimal determination of the planning target volume (PTV) margin. The overall system isocentric accuracy was tested using a rigid anthropomorphic phantom containing a hidden target. Intrafraction motion was simulated in 5 healthy volunteers. Reinforced head‐and‐shoulders thermoplastic masks were used for immobilization. The subjects were placed in a treatment position for 15 minutes (the maximum expected time between repeated isocenter localizations) and the six‐degrees‐of‐freedom target displacements were recorded with high frequency by tracking infrared markers. The markers were placed on a customized piece of thermoplastic secured to the head independently of the immobilization mask. Additional data were collected with the subjects holding their breath, talking, and deliberately moving. As compared with fiducial matching, the automatic registration algorithm did not introduce clinically significant errors (<0.3 mm difference). The hidden target test confirmed overall system isocentric accuracy of ≤1 mm (total three‐dimensional displacement). The subjects exhibited various patterns and ranges of head motion during the mock treatment. The total displacement vector encompassing 95% of the positional points varied from 0.4 mm to 2.9 mm. Pre‐planning motion simulation with optical tracking was tested on volunteers and appears promising for determination of patient‐specific PTV margins. Further patient study is necessary and is planned. In the meantime, system accuracy is sufficient for confident clinical use with 3 mm PTV margins. PACS number: 87.53.Ly
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
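The benchmarking described above scores predictions by sensitivity, positive predictive value (PPV), and their harmonic mean, the F-measure, all computed from the predicted and reference base pairs. The small helper below shows those definitions; the example pair sets are made up for illustration.

def f_measure(predicted_pairs, reference_pairs):
    """Sensitivity, PPV and F-measure of predicted base pairs against a reference set."""
    predicted, reference = set(predicted_pairs), set(reference_pairs)
    tp = len(predicted & reference)                # correctly predicted pairs
    sensitivity = tp / len(reference) if reference else 0.0
    ppv = tp / len(predicted) if predicted else 0.0
    if sensitivity + ppv == 0:
        return sensitivity, ppv, 0.0
    f = 2 * sensitivity * ppv / (sensitivity + ppv)
    return sensitivity, ppv, f

ref = {(1, 20), (2, 19), (3, 18), (5, 15)}
pred = {(1, 20), (2, 19), (4, 16)}
print(f_measure(pred, ref))                        # (0.5, 0.667, 0.571)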
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
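A parametric minimum distance classifier of the kind compared above assigns each sample to the class whose mean vector is closest in feature space. The snippet below sketches that rule on hypothetical two-band class means; none of the numbers come from the study.

import numpy as np

# Hypothetical class means in a two-band feature space.
class_means = {"wheat": np.array([40.0, 62.0]), "pasture": np.array([58.0, 45.0])}

def min_distance_classify(pixel):
    """Assign the pixel to the class with the nearest mean (Euclidean distance)."""
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(min_distance_classify(np.array([42.0, 60.0])))   # -> wheat
print(min_distance_classify(np.array([57.0, 44.0])))   # -> pasture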
Mathisen, R W; Mazess, R B
1981-02-01
The authors present a revised method for calculating life expectancy tables for populations where individual ages at death are known or can be estimated. The conventional and revised methods are compared using data for U.S. and Hungarian males in an attempt to determine the accuracy of each method in calculating life expectancy at advanced ages. Means of correcting errors caused by age rounding, age exaggeration, and infant mortality are presented
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters–“the cloud”. We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-thoughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems–such as new frameworks like Spark–for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
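The batch EM iteration at the heart of fragment assignment alternates between computing each ambiguous fragment's assignment probabilities under the current abundance estimates (E-step) and re-estimating abundances from those soft assignments (M-step). A toy, single-machine sketch is given below; the compatibility lists, the equal-length assumption, and the fixed iteration count are simplifications for illustration, not the eXpress-D or Spark implementation.

import numpy as np

def em_fragment_assignment(compat, n_targets, n_iter=100):
    """compat: list of lists; compat[f] holds the target indices that fragment f
    maps to. Returns estimated relative abundances (assumes equal target lengths)."""
    theta = np.full(n_targets, 1.0 / n_targets)
    for _ in range(n_iter):
        counts = np.zeros(n_targets)
        for targets in compat:                     # E-step: soft-assign each fragment
            probs = theta[targets]
            probs = probs / probs.sum()
            counts[targets] += probs
        theta = counts / counts.sum()              # M-step: re-estimate abundances
    return theta

# Three targets; fragments 0-4 map uniquely to target 0, two fragments map ambiguously.
compat = [[0]] * 5 + [[0, 1], [1, 2]] + [[2]] * 2
print(np.round(em_fragment_assignment(compat, 3), 3))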
Weather forecasting based on hybrid neural model
NASA Astrophysics Data System (ADS)
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making deductions and predictions about the weather has been a challenge throughout human history. Accurate meteorological guidance helps to foresee and handle problems well in time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems. The present research treats weather forecasting as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model aims at precise forecasting given the specific nature of weather-forecasting systems. The study concentrates on data representing weather forecasting for Saudi Arabia. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP forecasting results are better than those of the RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
Chen, Zheng; Huang, Hongying; Yan, Jue
2015-12-21
We develop 3rd order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies a strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. A sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.
Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne
2015-04-28
Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
To develop subject-specific classifier to recognize mental states fast and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. The subject-specific wavelet parameters based on a grid-search method were first developed to determine evidence accumulative curve for the sequential classifier. Then we proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balanced the decision time of each class, and we term it balanced threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the average decision time of 2.77 s, when compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves the classification accuracy and decision speed comparing with the other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
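The sequential probability ratio test underlying the method above accumulates log-likelihood-ratio evidence sample by sample and stops as soon as the running sum crosses an upper or lower threshold. The sketch below shows that stopping rule for a simple two-class Gaussian evidence model; the thresholds, class distributions, and the way the evidence is generated are illustrative assumptions, not the BTSPRT threshold-setting procedure.

import numpy as np

def sprt(samples, mu0=-0.5, mu1=0.5, sigma=1.0, lower=-3.0, upper=3.0):
    """Accumulate log-likelihood ratios log p1(x)/p0(x) for two Gaussian classes
    and stop when the running sum crosses a threshold. Returns (decision, n_used)."""
    llr_sum = 0.0
    for n, x in enumerate(samples, start=1):
        llr_sum += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr_sum >= upper:
            return "class 1", n
        if llr_sum <= lower:
            return "class 0", n
    return "undecided", len(samples)

rng = np.random.default_rng(7)
evidence = rng.normal(0.5, 1.0, size=50)           # samples actually drawn from class 1
print(sprt(evidence))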
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context: This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which is able to regularize the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows comparable accuracy and a notably reduced computational burden; when compared to CLEAN, it shows better fidelity to the measurements with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
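The positivity-preserving expectation maximization iteration for this kind of Poisson count data is multiplicative: each update rescales the current image by the back-projected ratio of observed to predicted counts, so a nonnegative start stays nonnegative. A generic sketch for a linear model d ≈ H x is shown below (essentially the Richardson-Lucy/ML-EM form); the forward matrix and data are synthetic, and no RHESSI-specific modulation pattern or stopping rule is included.

import numpy as np

def ml_em(H, d, n_iter=200):
    """Maximum-likelihood EM for Poisson data d under a linear forward model H.
    The multiplicative update keeps every iterate nonnegative."""
    x = np.ones(H.shape[1])                        # positive starting image
    ones = np.ones_like(d)
    for _ in range(n_iter):
        pred = H @ x
        ratio = d / np.maximum(pred, 1e-12)
        x *= (H.T @ ratio) / np.maximum(H.T @ ones, 1e-12)
    return x

rng = np.random.default_rng(11)
H = rng.uniform(0.0, 1.0, size=(40, 8))            # synthetic forward model
x_true = np.array([0.0, 3.0, 0.0, 5.0, 1.0, 0.0, 2.0, 0.0])
d = rng.poisson(H @ x_true).astype(float)          # noisy observed counts
print(np.round(ml_em(H, d), 2))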
Accuracy Of LTPP Traffic Loading Estimates
DOT National Transportation Integrated Search
1998-07-01
The accuracy and reliability of traffic load estimates are key to determining a pavement's life expectancy. To better understand the variability of traffic loading rates and its effect on the accuracy of the Long Term Pavement Performance (LTPP) prog...
Time-resolved speckle effects on the estimation of laser-pulse arrival times
NASA Technical Reports Server (NTRS)
Tsai, B.-M.; Gardner, C. S.
1985-01-01
A maximum-likelihood (ML) estimator of the pulse arrival time in laser ranging and altimetry is derived for the case of a pulse distorted by shot noise and time-resolved speckle. The performance of the estimator is evaluated for pulse reflections from flat diffuse targets and compared with the performance of a suboptimal centroid estimator and a suboptimal Bar-David ML estimator derived under the assumption of no speckle. In the large-signal limit the accuracy of the estimator was found to improve as the width of the receiver observational interval increases. The timing performance of the estimator is expected to be highly sensitive to background noise when the received pulse energy is high and the receiver observational interval is large. Finally, in the speckle-limited regime the ML estimator performs considerably better than the suboptimal estimators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paling, S.; Hillas, A.M.; Berley, D.
1997-07-01
An array of six wide angle Cerenkov detectors was constructed amongst the scintillator and muon detectors of the CYGNUS II array at Los Alamos National Laboratory to investigate cosmic ray composition in the PeV region through measurements of the shape of Cerenkov lateral distributions. Data were collected during clear, moonless nights over three observing periods in 1995. Estimates of depths of shower maxima determined from the recorded Cerenkov lateral distributions align well with existing results at higher energies and suggest a mixed to heavy composition in the PeV region with no significant variation observed around the knee. The accuracy of composition determination is limited by uncertainties in the expected levels of depth of maximum predicted using different Monte-Carlo shower simulation models.
Maximum angular accuracy of pulsed laser radar in photocounting limit.
Elbaum, M; Diament, P; King, M; Edelson, W
1977-07-01
To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
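In the slope method, the log of the range-corrected signal is modeled as a straight line in range whose slope gives the extinction coefficient, and the fitting discussed above simply weights each point by the inverse of its noise variance. A sketch of that weighted linear fit is below; the factor-of-two relation between slope and extinction assumes a homogeneous two-way path, and the synthetic signal, noise level, and constant system factor are illustrative assumptions, not the paper's retrieval code.

import numpy as np

def slope_method_extinction(r, power, noise_std):
    """Weighted straight-line fit to S(r) = ln(r^2 * P(r)).
    Returns (extinction, intercept). np.polyfit expects weights of 1/sigma,
    which corresponds to inverse-variance weighting of the squared residuals."""
    S = np.log(r**2 * power)
    sigma_S = noise_std / power                    # error propagation of ln(P)
    slope, intercept = np.polyfit(r, S, 1, w=1.0 / sigma_S)
    return -slope / 2.0, intercept                 # two-way, homogeneous path

r = np.linspace(200.0, 2000.0, 60)                 # ranges in m
sigma_true = 1e-4                                  # extinction coefficient in 1/m
clean = 1e8 * np.exp(-2 * sigma_true * r) / r**2   # idealized range-squared-decaying signal
rng = np.random.default_rng(5)
noise_std = np.ones_like(r)                        # constant additive noise level
noisy = clean + rng.normal(0.0, noise_std)
print(slope_method_extinction(r, noisy, noise_std))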
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Huang, Hongying; Yan, Jue
We develop 3rd order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies a strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes and obtuse triangles are allowed in the partition. As a result, a sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
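A minimal sketch of the standard-departure idea described above, with hypothetical station climatologies and equal neighbour weights (the paper may weight or select neighbours differently):

```python
import numpy as np

def estimate_missing_tmax(target_mean, target_std, neighbor_values,
                          neighbor_means, neighbor_stds):
    """Estimate a missing daily maximum temperature at the target station from
    same-day values at neighbouring stations via standardized anomalies
    (standard departures). Climatological means/stds are assumed to refer to
    the relevant calendar period."""
    values = np.asarray(neighbor_values, float)
    z = (values - np.asarray(neighbor_means, float)) / np.asarray(neighbor_stds, float)
    z_target = z.mean()                       # equal weights; distance weighting is possible
    return target_mean + z_target * target_std

# Hypothetical numbers for Mannar, Anuradhapura, Puttalam, Trincomalee
print(estimate_missing_tmax(32.0, 1.2,
                            [33.5, 34.0, 33.0, 32.5],
                            [33.0, 33.8, 32.6, 32.4],
                            [1.1, 1.3, 1.2, 1.0]))
```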
Molina, Sergio L; Stodden, David F
2018-04-01
This study examined variability in throwing speed and spatial error to test the prediction of an inverted-U function (i.e., impulse-variability [IV] theory) and the speed-accuracy trade-off. Forty-five 9- to 11-year-old children were instructed to throw at a specified percentage of maximum speed (45%, 65%, 85%, and 100%) and hit the wall target. Results indicated no statistically significant differences in variable error across the target conditions (p = .72), failing to support the inverted-U hypothesis. Spatial accuracy results indicated no statistically significant differences with mean radial error (p = .18), centroid radial error (p = .13), and bivariate variable error (p = .08) also failing to support the speed-accuracy trade-off in overarm throwing. As neither throwing performance variability nor accuracy changed across percentages of maximum speed in this sample of children as well as in a previous adult sample, current policy and practices of practitioners may need to be reevaluated.
Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Auger, Isabelle; Leone, Mario
2015-01-01
Individual heart rate (HR) to workload relationships were determined using 93 submaximal step-tests administered to 26 healthy participants attending physical activities in a university training centre (laboratory study) and 41 experienced forest workers (field study). Predicted maximum aerobic capacity (MAC) was compared to measured MAC from a maximal treadmill test (laboratory study) to test the effect of two age-predicted maximum HR equations (220-age and 207-0.7 × age) and two clothing insulation levels (0.4 and 0.91 clo) during the step-test. Work metabolism (WM) estimated from forest work HR was compared against concurrent work V̇O2 measurements while taking into account the HR thermal component. Results show that MAC and WM can be accurately predicted from work HR measurements and the simple regression models developed in this study (1% group mean prediction bias and up to 25% expected prediction bias for a single individual). Clothing insulation had no impact on predicted MAC or on the age-predicted maximum HR equations. Practitioner summary: This study sheds light on four practical methodological issues faced by practitioners regarding the use of HR methodology to assess WM in actual work environments. More specifically, the effect of wearing work clothes and the use of two different maximum HR prediction equations on the ability of a submaximal step-test to assess MAC are examined, as well as the accuracy of using an individual's step-test HR to workload relationship to predict WM from HR data collected during actual work in the presence of thermal stress.
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi
2017-01-01
Introduction: We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity. PMID:28132996
Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods
NASA Astrophysics Data System (ADS)
Koreň, Milan; Mokroš, Martin; Bucha, Tomáš
2017-12-01
This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
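For readers unfamiliar with circle fitting of terrestrial laser scanning cross-sections, the following sketch uses a simple algebraic (Kåsa) least-squares fit as a stand-in; it is not one of the five methods compared in the study, and the synthetic half-circle data simulate a single-scan view.

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kåsa) least-squares circle fit to stem cross-section points.
    Returns (center_x, center_y, radius); DBH = 2 * radius."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

# Synthetic half circle (single-scan-like coverage), radius 0.15 m, with noise
rng = np.random.default_rng(0)
theta = np.linspace(0, np.pi, 60)
x = 0.15 * np.cos(theta) + 0.005 * rng.standard_normal(60)
y = 0.15 * np.sin(theta) + 0.005 * rng.standard_normal(60)
cx, cy, r = fit_circle_kasa(x, y)
print("DBH estimate:", round(2 * r, 3), "m")
```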
NASA Technical Reports Server (NTRS)
Curtis, Steve
1999-01-01
Building upon the numerous successes of the pre-solar maximum International Solar Terrestrial Physics (ISTP) mission, the ISTP Solar Maximum Mission is expected to produce new insights into the global flow of energy, momentum, and mass from the Sun, through the heliosphere, into the magnetosphere, and to their final deposition in the terrestrial upper atmosphere/ionosphere system. Of particular interest is the determination of the geo-effectiveness of solar events, principally Coronal Mass Ejections (CMEs). Given the expected increased frequency and strength of CMEs during the Solar Maximum period, a major advance in our understanding of the nature of the coupling of CMEs to the magnetosphere-ionosphere-atmosphere system is expected. The roles of the various ISTP assets during this time will be discussed. These assets include the SOHO, Wind, Polar, and Geotail spacecraft, the ground-based observing networks and the theory tools.
Goozée, Justine V; Murdoch, Bruce E; Theodoros, Deborah G
2002-01-01
A miniature pressure transducer was used to assess the interlabial contact pressures produced by a group of 19 adults (mean age 30.6 years) with dysarthria following severe traumatic brain injury (TBI) during a set of speech and nonspeech tasks. Ten parameters relating to lip strength, endurance, rate of movement and lip pressure accuracy and stability were measured from the nonspeech tasks. The results attained by the TBI group were compared against a group of 19 age- and sex-matched control subjects. Significant differences between the groups were found for maximum interlabial contact pressure, maximum rate of repetition of maximum pressure, and lip pressure accuracy at 50 and 10% levels of maximum pressure. With regard to speech, the interlabial contact pressures generated by the TBI group and control group did not differ significantly. When expressed as percentages of maximum pressure, however, the TBI group's interlabial pressures appeared to have been generated with greater physiological effort. Copyright 2002 S. Karger AG, Basel
Examining Impulse-Variability in Kicking.
Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F
2016-07-01
This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2= .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.
Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.
Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu
2017-09-01
Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of correlations across folds. The other approach, Hold accuracy, predicts all phenotypes in all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
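The two accuracy definitions contrasted in this abstract can be written down in a few lines. The sketch below assumes the cross-validated predictions and fold assignments are already available; variable names are illustrative.

```python
import numpy as np

def instant_and_hold_accuracy(y_obs, y_pred, fold_ids):
    """Compute the two cross-validation accuracies discussed in the abstract:
    'Instant' = mean Pearson correlation computed per fold,
    'Hold'    = single Pearson correlation over all folds pooled.
    y_pred holds the cross-validated predictions; fold_ids assigns each
    observation to a fold."""
    y_obs, y_pred, fold_ids = map(np.asarray, (y_obs, y_pred, fold_ids))
    per_fold = [np.corrcoef(y_obs[fold_ids == f], y_pred[fold_ids == f])[0, 1]
                for f in np.unique(fold_ids)]
    instant = float(np.mean(per_fold))
    hold = float(np.corrcoef(y_obs, y_pred)[0, 1])
    return instant, hold
```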
Stallard, Eric; Kinosian, Bruce; Stern, Yaakov
2017-09-20
Alzheimer's disease (AD) progression varies substantially among patients, hindering calculation of residual total life expectancy (TLE) and its decomposition into disability-free life expectancy (DFLE) and disabled life expectancy (DLE) for individual patients with AD. The objective of the present study was to assess the accuracy of a new synthesis of Sullivan's life table (SLT) and longitudinal Grade of Membership (L-GoM) models that estimates individualized TLEs, DFLEs, and DLEs for patients with AD. If sufficiently accurate, such information could enhance the quality of important decisions in AD treatment and patient care. We estimated a new SLT/L-GoM model of the natural history of AD over 10 years in the Predictors 2 Study cohort: N = 229 with 6 fixed and 73 time-varying covariates over 21 examinations covering 11 measurement domains including cognitive, functional, behavioral, psychiatric, and other symptoms/signs. Total remaining life expectancy was censored at 10 years. Disability was defined as need for full-time care (FTC), the outcome most strongly associated with AD progression. All parameters were estimated via weighted maximum likelihood using data-dependent weights designed to ensure that the estimates of the prognostic subtypes were of high quality. Goodness of fit was tested/confirmed for survival and FTC disability for five relatively homogeneous subgroups defined to cover the range of patient outcomes over the 21 examinations. The substantial heterogeneity in initial patient presentation and AD progression was captured using three clinically meaningful prognostic subtypes and one terminal subtype exhibiting highly differentiated symptom severity on 7 of the 11 measurement domains. Comparisons of the observed and estimated survival and FTC disability probabilities demonstrated that the estimates were accurate for all five subgroups, supporting their use in AD life expectancy calculations. Mean 10-year TLE differed widely across subgroups: range 3.6-8.0 years, average 6.1 years. Mean 10-year DFLE differed relatively even more widely across subgroups: range 1.2-6.5 years, average 4.0 years. Mean 10-year DLE was relatively much closer: range 1.5-2.3 years, average 2.1 years. The SLT/L-GoM model yields accurate maximum likelihood estimates of TLE, DFLE, and DLE for patients with AD; it provides a realistic, comprehensive modeling framework for endpoint and resource use/cost calculations.
Ionic Current Measurements in the Squid Giant Axon Membrane
Cole, Kenneth S.; Moore, John W.
1960-01-01
The concepts, experiments, and interpretations of ionic current measurements after a step change of the squid axon membrane potential require the potential to be constant for the duration and the membrane area measured. An experimental approach to this ideal has been developed. Electrometer, operational, and control amplifiers produce the step potential between internal micropipette and external potential electrodes within 40 microseconds and a few millivolts. With an internal current electrode effective resistance of 2 ohm cm.2, the membrane potential and current may be constant within a few millivolts and 10 per cent out to near the electrode ends. The maximum membrane current patterns of the best axons are several times larger but of the type described by Cole and analyzed by Hodgkin and Huxley when the change of potential is adequately controlled. The occasional obvious distortions are attributed to the marginal adequacy of potential control to be expected from the characteristics of the current electrodes and the axon. Improvements are expected only to increase stability and accuracy. No reason has been found either to question the qualitative characteristics of the early measurements or to so discredit the analyses made of them. PMID:13694548
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to restore the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of gamma-quanta families independently of the degree of overlap of the gamma quanta, and yields the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds by an order of magnitude the values obtained by integral methods.
Wason, James M. S.; Mander, Adrian P.
2012-01-01
Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor based two pole low pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that allows the effect of the supercapacitor leakage current on the DC response of the system to be considerably reduced while maintaining a very low level of output noise. With a proper design an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performances of the proposed voltage source have been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
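As a back-of-the-envelope check of the filtering idea (with assumed component values, not those of the published design), a second-order low pass built from two buffered supercapacitor RC sections easily exceeds the quoted 40 dB of attenuation at 100 mHz:

```python
import numpy as np

# Assumed values: R = 100 ohm, C = 1 F supercapacitor per section,
# two identical sections separated by an ideal buffer.
R, C = 100.0, 1.0
f = 0.1                                            # 100 mHz
fc = 1.0 / (2 * np.pi * R * C)                     # corner frequency of one RC pole
h_single = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)      # magnitude response of one pole
atten_db = -20 * np.log10(h_single ** 2)           # two buffered poles in cascade
print(f"corner {fc*1e3:.2f} mHz, attenuation at 100 mHz ~ {atten_db:.0f} dB")
```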
Prototypic Development and Evaluation of a Medium Format Metric Camera
NASA Astrophysics Data System (ADS)
Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.
2018-05-01
Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular for large volume applications, the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, will be presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German Guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved in the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2 mm-0.4 mm is reached for a length of 28 m (given by a distance from a lasertracker network measurement). All analyses have proven high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.
Test Expectancy Affects Metacomprehension Accuracy
ERIC Educational Resources Information Center
Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2011-01-01
Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…
The Smithsonian Earth Physics Satellite (SEPS) definition study, volumes 1 through 4
NASA Technical Reports Server (NTRS)
1971-01-01
A limited Phase B study was undertaken to determine the merit and feasibility of launching a proposed earth physics satellite with Apollo-type hardware. The study revealed that it would be feasible to launch this satellite using a S-IB stage, a S-IVB with restart capability, an instrument unit, a SLA for the satellite shroud, and a nose cone (AS-204 configuration). A definition of the proposed satellite is provided, which is specifically designed to satisfy the fundamental requirement of providing an orbiting benchmark of maximum accuracy. The satellite is a completely passive, solid 3628-kg sphere of 38.1-cm radius and very high mass-to-area ratio (7980 kg/sq m). In the suggested orbit of 55 degrees inclination, 3720 km altitude, and low eccentricity, the orbital lifetime is extremely long, so many decades of operation can be expected.
Prediction of monthly rainfall in Victoria, Australia: Clusterwise linear regression approach
NASA Astrophysics Data System (ADS)
Bagirov, Adil M.; Mahmood, Arshad; Barton, Andrew
2017-05-01
This paper develops the Clusterwise Linear Regression (CLR) technique for prediction of monthly rainfall. The CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia using rainfall data with five input meteorological variables over the period of 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. On the basis of computational results, the proposed method is also compared with CLR estimated in the maximum likelihood framework via the expectation-maximization algorithm, with multiple linear regression, with artificial neural networks and with support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations.
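To make the clusterwise regression idea concrete, the sketch below implements a naive alternating assign-and-refit scheme; it is only conceptual and is not the incremental optimization algorithm proposed in the paper, nor the EM-based maximum likelihood variant used for comparison.

```python
import numpy as np

def clusterwise_linear_regression(X, y, k=3, iters=50, seed=0):
    """Naive clusterwise linear regression: assign each sample to the cluster
    whose regression line fits it best, refit each cluster's least-squares
    line, and repeat until the assignments stop changing."""
    rng = np.random.default_rng(seed)
    n = len(y)
    labels = rng.integers(0, k, size=n)
    Xb = np.column_stack([X, np.ones(n)])           # add intercept column
    coefs = np.zeros((k, Xb.shape[1]))
    for _ in range(iters):
        for j in range(k):
            idx = labels == j
            if idx.sum() > Xb.shape[1]:             # keep old fit for tiny clusters
                coefs[j], *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        resid = (Xb @ coefs.T) - y[:, None]         # residual of each sample vs each cluster
        new_labels = np.argmin(resid**2, axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return coefs, labels
```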
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography data many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results compared with analytic reconstructions show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
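For context, the conventional (indirect) Patlak graphical analysis with a reference-tissue input can be sketched as below; the direct parametric MLEM studied in the paper instead embeds this linear model inside the reconstruction. The start time t_star and the variable names are assumptions for illustration.

```python
import numpy as np

def patlak_fit(t, c_tissue, c_ref, t_star=20.0):
    """Conventional (indirect) Patlak graphical analysis with a reference-tissue
    input: x = cumulative integral of C_ref divided by C_ref, y = C_tissue / C_ref.
    A line fitted for t >= t_star gives the influx constant Ki (slope) and the
    distribution-volume term (intercept). t_star (minutes) is an assumed value."""
    t, c_tissue, c_ref = map(np.asarray, (t, c_tissue, c_ref))
    cum = np.array([np.trapz(c_ref[:i + 1], t[:i + 1]) for i in range(len(t))])
    x, y = cum / c_ref, c_tissue / c_ref
    mask = t >= t_star
    ki, v = np.polyfit(x[mask], y[mask], 1)
    return ki, v
```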
Bubble migration inside a liquid drop in a space laboratory
NASA Technical Reports Server (NTRS)
Annamalai, P.; Shankar, N.; Cole, R.; Subramanian, R. S.
1982-01-01
The design of experiments in materials processing for trials on board the Shuttle is described. Thermocapillary flows will be examined as an aid to mixing in the formation of glasses. Acoustically levitated molten glass spheres will be spot heated to induce surface flow away from the hot spot and thereby promote mixing. The surface flows are also expected to cause internal convective motion which will drive entrained gas bubbles toward the hot spot, a process also enhanced by the presence of thermal gradients. The method is called fining, and will be augmented by rotation of the sphere to cause bubble migration toward the axis of rotation to form one large bubble which is more easily removed. Centering techniques to establish the maximum centering accuracy will also be tried. Ground-based studies of bubble migration in a rotating liquid and in a temperature gradient in a liquid drop are reviewed.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Numerical modeling of space-time wave extremes using WAVEWATCH III
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Y. Chao, Yung; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik
2017-04-01
A novel implementation of parameters estimating the space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012) extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015) to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated to the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy), by a stereo camera system installed on-board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
Examining impulse-variability in overarm throwing.
Urbin, M A; Stodden, David; Boros, Rhonda; Shannon, David
2012-01-01
The purpose of this study was to examine variability in overarm throwing velocity and spatial output error at various percentages of maximum to test the prediction of an inverted-U function as predicted by impulse-variability theory and a speed-accuracy trade-off as predicted by Fitts' law. Thirty subjects (16 skilled, 14 unskilled) were instructed to throw a tennis ball at seven percentages of their maximum velocity (40-100%) in random order (9 trials per condition) at a target 30 feet away. Throwing velocity was measured with a radar gun and interpreted as an index of overall systemic power output. Within-subject throwing velocity variability was examined using within-subjects repeated-measures ANOVAs (7 repeated conditions) with built-in polynomial contrasts. Spatial error was analyzed using mixed model regression. Results indicated a quadratic fit with variability in throwing velocity increasing from 40% up to 60%, where it peaked, and then decreasing at each subsequent interval to maximum (p < .001, η2 = .555). There was no linear relationship between speed and accuracy. Overall, these data support the notion of an inverted-U function in overarm throwing velocity variability as both skilled and unskilled subjects approach maximum effort. However, these data do not support the notion of a speed-accuracy trade-off. The consistent demonstration of an inverted-U function associated with systemic power output variability indicates an enhanced capability to regulate aspects of force production and relative timing between segments as individuals approach maximum effort, even in a complex ballistic skill.
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single frequency Sun prediction equations at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
26 CFR 1.6662-2 - Accuracy-related penalty.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 1.6662-2, Internal Revenue Service, Department of the Treasury, Income Taxes, Additions to the Tax, Additional Amounts, and Assessable Penalties: ... attributable both to negligence and a substantial understatement of income tax, the maximum accuracy-related...
[Control of intelligent car based on electroencephalogram and neurofeedback].
Li, Song; Xiong, Xin; Fu, Yunfa
2018-02-01
To improve the performance of a brain-controlled intelligent car based on motor imagery (MI), a method based on neurofeedback (NF) with electroencephalogram (EEG) for controlling the intelligent car is proposed. A mental strategy of MI, in which the energy column diagram of EEG features related to the mental activity is presented to subjects as real-time visual feedback, was used to train them to quickly master the skills of MI and regulate their EEG activity, and a combination of multi-feature fusion of MI and multi-classifier decisions was used to control the intelligent car online. The average, maximum and minimum accuracy of identifying instructions achieved by the trained group (trained by the designed feedback system before the experiment) were 85.71%, 90.47% and 76.19%, respectively, and the corresponding accuracies achieved by the control group (untrained) were 73.32%, 80.95% and 66.67%, respectively. For the trained group, the average, longest and shortest task-completion times were 92 s, 101 s, and 85 s, respectively, while for the control group the corresponding times were 115.7 s, 120 s, and 110 s, respectively. According to the results described above, it is expected that this study may provide a new idea for the follow-up development of brain-controlled intelligent robots using neurofeedback with EEG related to MI.
Status of air-shower measurements with sparse radio arrays
NASA Astrophysics Data System (ADS)
Schröder, Frank G.
2017-03-01
This proceeding gives a summary of the current status and open questions of the radio technique for cosmic-ray air showers, assuming that the reader is already familiar with the principles. It includes recent results of selected experiments not present at this conference, e.g., LOPES and TREND. Current radio arrays like AERA or Tunka-Rex have demonstrated that areas of several km2 can be instrumented for reasonable costs with antenna spacings of the order of 200m. For the energy of the primary particle such sparse antenna arrays can already compete in absolute accuracy with other precise techniques, like the detection of air-fluorescence or air-Cherenkov light. With further improvements in the antenna calibration, the radio detection might become even more accurate. For the atmospheric depth of the shower maximum, Xmax, currently only the dense array LOFAR features a precision similar to the fluorescence technique, but analysis methods for the radio measurement of Xmax are still under development. Moreover, the combination of radio and muon measurements is expected to increase the accuracy of the mass composition, and this around-the-clock recording is not limited to clear nights as are the light-detection methods. Consequently, radio antennas will be a valuable add-on for any air shower array targeting the energy range above 100 PeV.
Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Seki, Shinichiro; Tsubakimoto, Maho; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro
2015-02-01
To prospectively compare the capabilities of dynamic perfusion area-detector computed tomography (CT), dynamic magnetic resonance (MR) imaging, and positron emission tomography (PET) combined with CT (PET/CT) with use of fluorine 18 fluorodeoxyglucose (FDG) for the diagnosis of solitary pulmonary nodules. The institutional review board approved this study, and written informed consent was obtained from each subject. A total of 198 consecutive patients with 218 nodules prospectively underwent dynamic perfusion area-detector CT, dynamic MR imaging, FDG PET/CT, and microbacterial and/or pathologic examinations. Nodules were classified into three groups: malignant nodules (n = 133) and benign nodules with low (n = 53) or high (n = 32) biologic activity. Total perfusion was determined with dual-input maximum slope models at area-detector CT, maximum and slope of enhancement ratio at MR imaging, and maximum standardized uptake value (SUVmax) at PET/CT. Next, all indexes for malignant and benign nodules were compared with the Tukey honest significant difference test. Then, receiver operating characteristic analysis was performed for each index. Finally, sensitivity, specificity, and accuracy were compared with the McNemar test. All indexes showed significant differences between malignant nodules and benign nodules with low biologic activity (P < .0001). The area under the receiver operating characteristic curve for total perfusion was significantly larger than that for other indexes (.0006 ≤ P ≤ .04). The specificity and accuracy of total perfusion were significantly higher than those of maximum relative enhancement ratio (specificity, P < .0001; accuracy, P < .0001), slope of enhancement ratio (specificity, P < .0001; accuracy, P < .0001), and SUVmax (specificity, P < .0001; accuracy, P < .0001). Dynamic perfusion area-detector CT is more specific and accurate than dynamic MR imaging and FDG PET/CT in the diagnosis of solitary pulmonary nodules in routine clinical practice. © RSNA, 2014.
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
NASA Technical Reports Server (NTRS)
Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.;
2016-01-01
We used one-way laser ranging data from International Laser Ranging Service (ILRS) ground stations to NASA's Lunar Reconnaissance Orbiter (LRO) for a demonstration of orbit determination. In the one-way setup, the state of LRO and the parameters of the spacecraft and all involved ground station clocks must be estimated simultaneously. This setup introduces many correlated parameters that are resolved by using a priori constraints. Moreover, the observation data coverage and the errors accumulating from the dynamical and clock modeling limit the maximum arc length. The objective of this paper is to investigate the effect of the arc length, the dynamical and modeling accuracy and the observation data coverage on the accuracy of the results. We analyzed multiple arcs using lengths of 2 and 7 days during a one-week period in Science Mission phase 02 (SM02, November 2010) and compared the trajectories, the post-fit measurement residuals and the estimated clock parameters. We further incorporated simultaneous passes from multiple stations within the observation data to investigate the expected improvement in positioning. The estimated trajectories were compared to the nominal LRO trajectory and the clock parameters (offset, rate and aging) to the results found in the literature. Arcs estimated with one-way ranging data had differences of 5-30 m compared to the nominal LRO trajectory. While the estimated LRO clock rates agreed closely with the a priori constraints, the aging parameters absorbed clock modeling errors with increasing clock arc length. Because of high correlations between the different ground station clocks and due to limited clock modeling accuracy, their differences agreed with the literature only in order of magnitude. We found that the incorporation of simultaneous passes requires improved modeling, in particular to enable the expected improvement in positioning. We found that gaps in the observation data coverage longer than 12 h (approximately 6 successive LRO orbits) prevented the successful estimation of arcs with lengths shorter or longer than 2 or 7 days with our given modeling.
Predicting one repetition maximum equations accuracy in paralympic rowers with motor disabilities.
Schwingel, Paulo A; Porto, Yuri C; Dias, Marcelo C M; Moreira, Mônica M; Zoppi, Cláudio C
2009-05-01
Resistance training intensity is prescribed using percentages of the maximum strength, defined as the maximum tension generated by a muscle or muscular group. This value is found through the application of the one repetition maximum (1RM) test. The 1RM test demands time and is not appropriate for some populations because of the risk it poses. In recent years, prediction equations have been used to estimate maximal strength and avoid the inconveniences of the 1RM test. The purpose of this study was to verify the accuracy of 12 1RM prediction equations for disabled rowers. Nine male paralympic rowers (7 one-leg amputated rowers and 2 cerebral paralyzed rowers; age, 30 +/- 7.9 years; height, 175.1 +/- 5.9 cm; weight, 69 +/- 13.6 kg) performed the 1RM test for lying T-bar row and flat barbell bench press exercises to determine upper-body strength and for the leg press exercise to determine lower-body strength. The 1RM test was performed, and based on submaximal repetition loads, several linear and exponential equation models were tested with regard to their accuracy. We did not find statistical differences between measured and predicted 1RM values for the lying T-bar row and bench press exercises (p = 0.84 and 0.23 for lying T-bar row and flat barbell bench press, respectively); however, the leg press exercise showed a highly significant difference between measured and predicted values (p < 0.01). In conclusion, rowers with motor disabilities tolerate 1RM testing procedures, and 1RM prediction equations are accurate for bench press and lying T-bar row, but not for leg press, in this kind of athlete.
Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes
2017-01-01
Genome-wide selection (GWS) is a promising approach for improving selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that marker density had on predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between the evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, the maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.
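The marker-based prediction step that GWS methods share can be illustrated with a short ridge-regression (RR-BLUP style) sketch; this is not the authors' pipeline, and the genotype coding, shrinkage parameter, and simulated data below are illustrative assumptions only.

```python
import numpy as np

def ridge_gblup_predict(X_train, y_train, X_test, lam=1.0):
    """Marker-based prediction via ridge regression (RR-BLUP style sketch).

    X_*: genotype matrices (individuals x markers), e.g. coded 0/1/2.
    y_train: phenotypes. lam: shrinkage parameter (illustrative choice).
    """
    mu_x = X_train.mean(axis=0)
    mu_y = y_train.mean()
    Xc = X_train - mu_x
    yc = y_train - mu_y
    p = Xc.shape[1]
    # Ridge solution: beta = (Xc'Xc + lam*I)^-1 Xc' yc
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)
    return mu_y + (X_test - mu_x) @ beta

# Illustrative run on simulated markers and an additive trait
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 500)).astype(float)
effects = rng.normal(0, 0.1, size=500)
y = X @ effects + rng.normal(0, 1.0, size=200)
pred = ridge_gblup_predict(X[:150], y[:150], X[150:])
print("predictive ability (correlation):", np.corrcoef(pred, y[150:])[0, 1])
```

In practice the shrinkage parameter would be tuned by cross-validation, and predictive ability would be reported as the correlation between predicted and observed phenotypes, as in the study.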
Accuracy of Nonverbal Communication as Determinant of Interpersonal Expectancy Effects
ERIC Educational Resources Information Center
Zuckerman, Miron; And Others
1978-01-01
The person perception paradigm was used to address the effects of experimenters' ability to encode nonverbal cues and subjects' ability to decode nonverbal cues on magnitude of expectancy effects. Greater expectancy effects were obtained when experimenters were better encoders and subjects were better decoders of nonverbal cues. (Author)
ERIC Educational Resources Information Center
Timmermans, Anneke C.; Kuyper, Hans; Werf, Greetje
2015-01-01
Background: In several tracked educational systems, realizing optimal placements in classes in the first year of secondary education depends on the accuracy of teacher expectations. Aims: The aim of this study was to investigate between-teacher differences in their expectations regarding the academic aptitude of their students. Sample: The sample…
DOT National Transportation Integrated Search
2015-07-01
Implementing the recommendations of this study is expected to significantly improve the accuracy of camber measurements and predictions and to ultimately help reduce construction delays, improve bridge serviceability, and decrease costs.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
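The E-step/M-step alternation described above is the generic expectation-maximization recipe. As a minimal illustration of that alternation (a two-component 1-D Gaussian mixture rather than the paper's phase-measurement model, with the initialization and names being illustrative assumptions), it can be sketched as follows.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture (illustrative only)."""
    mu = np.array([x.min(), x.max()], dtype=float)      # crude initialization
    sigma = np.array([x.std(), x.std()])
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and standard deviations
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return weights, mu, sigma

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(em_two_gaussians(data))
```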
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
NASA Astrophysics Data System (ADS)
Pradipto; Purqon, Acep
2017-07-01
The Lattice Boltzmann Method (LBM) is a novel method for simulating fluid dynamics. Nowadays, the applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is the BGK model with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes have been implemented for the incompressible 2-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profile with the benchmark results from Ghia et al. and by calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum converged Reynolds number is 3200 for BGK and 7500 for MRT.
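For reference, a single BGK collision-and-streaming update on a standard D2Q9 lattice can be sketched as below; this is a generic, periodic-boundary illustration (no lid-driven cavity walls or bounce-back), and the relaxation time and grid size are arbitrary assumptions, not the settings used in the study.

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order BGK equilibrium distribution for all 9 directions."""
    cu = np.einsum('qd,xyd->qxy', c, u)                  # c_q . u
    usq = np.einsum('xyd,xyd->xy', u, u)                 # |u|^2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_step(f, tau=0.6):
    """One BGK collision + streaming step with periodic boundaries (illustrative)."""
    rho = f.sum(axis=0)
    u = np.einsum('qxy,qd->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau              # BGK collision
    for q in range(9):                                   # streaming along c_q
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

# Start from a uniform fluid at rest and advance a few steps
nx, ny = 64, 64
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(10):
    f = bgk_step(f)
```

An MRT variant would replace the single-τ relaxation line with a relaxation in moment space, which is what stabilizes the higher Reynolds numbers reported above.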
Psychology Students' Expectations Regarding Educational Requirements and Salary for Desired Careers
ERIC Educational Resources Information Center
Strapp, Chehalis M.; Drapela, Danica J.; Henderson, Cierra I.; Nasciemento, Emily; Roscoe, Lauren J.
2018-01-01
This study investigated the accuracy of psychology majors' expectations regarding careers. Psychology majors, including 101 women and 35 men (M[subscript age] = 23 years; standard deviation[subscript age] = 6.25), indicated a desired career and estimated the level of education needed and the expected annual salary for the career. Students'…
NASA Technical Reports Server (NTRS)
Roback, V. Eric; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Bulyshev, Alexander E.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.
2015-01-01
For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, help to guide the Morpheus autonomous, rocket-propelled, free-flying lander to that safe site on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging Flash Lidar is a second generation, compact, real-time, air-cooled instrument developed from a number of components from industry and NASA and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The Flash Lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision (1-sigma). The Flash Lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Navigation Doppler Lidar (NDL) system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The NDL's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter (LA), also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the Flash Lidar, can provide range along a separate vector. The LA measurements are also fed into the ALHAT navigation filter to provide lander guidance to the safe site. The flight tests served as the culmination of the TRL 6 journey for the ALHAT system and included launch from a pad situated at the NASA-Kennedy Space Center Shuttle Landing Facility (SLF) runway, a lunar-like descent trajectory from an altitude of 250 m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400 m down-range just off the North end of the runway. The tests both confirmed the expected performance and revealed several challenges present in the flight-like environment which will feed into future TRL advancement of the sensors. Guidance provided by the ALHAT system was impeded in portions of the trajectory and intermittent near the end of the trajectory due to optical effects arising from air heated by the rocket engine. The Flash Lidar identified hazards as small as 30 cm from the maximum slant range of 450 m which Morpheus could provide; however, it was occasionally susceptible to an increase in range noise due to scintillation arising from air heated by the Morpheus rocket engine which entered its Field-of-View (FOV). The Flash Lidar was also susceptible to pre-triggering, during the HRN phase, on a dust cloud created during launch and transported down-range by the wind. The NDL provided velocity and range measurements to the expected accuracy levels, yet it was also susceptible to signal degradation due to air heated by the rocket engine.
The LA, operating with a degraded transmitter laser, also showed signal attenuation over a few seconds at a specific phase of the flight due to the heat plume generated by the rocket engine.
NASA Technical Reports Server (NTRS)
Roback, Vincent E.; Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.; Hines, Glenn D.; Petway, Larry B.; Brewster, Paul F.; Kempton, Kevin S.; Bulyshev, Alexander E.
2015-01-01
For the first time, a suite of three lidar sensors has been used in flight to scan a lunar-like hazard field, identify a safe landing site, and, in concert with an experimental Guidance, Navigation, and Control (GN&C) system, guide the Morpheus autonomous, rocket-propelled, free-flying test bed to a safe landing on the hazard field. The lidar sensors and GN&C system are part of the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project, which has been seeking to develop a system capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The 3-D imaging flash lidar is a second generation, compact, real-time, air-cooled instrument developed from a number of cutting-edge components from industry and NASA and is used as part of the ALHAT Hazard Detection System (HDS) to scan the hazard field and build a 3-D Digital Elevation Map (DEM) in near-real time for identifying safe sites. The flash lidar is capable of identifying a 30 cm hazard from a slant range of 1 km with its 8 cm range precision at 1 sigma. The flash lidar is also used in Hazard Relative Navigation (HRN) to provide position updates down to a 250 m slant range to the ALHAT navigation filter as it guides Morpheus to the safe site. The Doppler Lidar system has been developed within NASA to provide velocity measurements with an accuracy of 0.2 cm/sec and range measurements with an accuracy of 17 cm, both from a maximum range of 2,200 m to a minimum range of several meters above the ground. The Doppler Lidar's measurements are fed into the ALHAT navigation filter to provide lander guidance to the safe site. The Laser Altimeter, also developed within NASA, provides range measurements with an accuracy of 5 cm from a maximum operational range of 30 km down to 1 m and, being a separate sensor from the flash lidar, can provide range along a separate vector. The Laser Altimeter measurements are also fed into the ALHAT navigation filter to provide lander guidance to the safe site. The flight tests served as the culmination of the TRL 6 journey for the lidar suite and included launch from a pad situated at the NASA-Kennedy Space Center Shuttle Landing Facility (SLF) runway, a lunar-like descent trajectory from an altitude of 250 m, and landing on a lunar-like hazard field of rocks, craters, hazardous slopes, and safe sites 400 m down-range just off the North end of the runway. The tests both confirmed the expected performance and revealed several challenges present in the flight-like environment which will feed into future TRL advancement of the sensors. The flash lidar identified hazards as small as 30 cm from the maximum slant range of 450 m which Morpheus could provide; however, it was occasionally susceptible to an increase in range noise due to heated air from the Morpheus rocket plume which entered its Field-of-View (FOV). The flash lidar was also susceptible to pre-triggering, during the HRN phase, on dust created during launch and transported by the wind. The Doppler Lidar provided velocity and range measurements to the expected accuracy levels, yet it was also susceptible to signal degradation due to air heated by the rocket engine. The Laser Altimeter, operating with a degraded transmitter laser, also showed signal attenuation over a few seconds at a specific phase of the flight due to the heat plume generated by the rocket engine.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast estimation of diffusion tensors under Rician noise by the EM algorithm.
Liu, Jia; Gasbarra, Dario; Railavo, Juha
2016-01-15
Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomic, structural, and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform the non-linear regression problem into the generalized linear modeling framework, dramatically reducing the computational cost. The Fisher-scoring method is used to achieve fast convergence of the tensor parameters. The new method is implemented and applied using both synthetic and real data in a wide range of b-amplitudes up to 14,000 s/mm(2). Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
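The Rician likelihood that such estimators maximize is compact enough to sketch. The example below performs maximum-likelihood estimation of a single signal amplitude from Rician magnitude samples with a known noise level; it is a deliberately simplified stand-in for the authors' EM/Fisher-scoring tensor estimation, and the sample sizes and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

def rician_negloglik(A, m, sigma):
    """Negative log-likelihood of amplitude A for Rician magnitude samples m."""
    x = m * A / sigma**2
    # log I0(x) = log(i0e(x)) + x  (numerically stable for large x)
    log_i0 = np.log(i0e(x)) + x
    ll = np.log(m) - 2*np.log(sigma) - (m**2 + A**2) / (2*sigma**2) + log_i0
    return -ll.sum()

# Simulate Rician magnitudes: |A + complex Gaussian noise|
rng = np.random.default_rng(2)
A_true, sigma = 5.0, 2.0
n = rng.normal(0, sigma, size=(1000, 2))
m = np.hypot(A_true + n[:, 0], n[:, 1])

res = minimize_scalar(rician_negloglik, bounds=(0, 20), args=(m, sigma), method='bounded')
print("ML estimate of A:", res.x)   # close to 5, unlike the upward-biased naive mean of m
```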
Fast automated analysis of strong gravitational lenses with convolutional neural networks
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-30
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S
2017-06-08
Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we assess the performance of extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sounds using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were fed into the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
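A minimal extreme learning machine classifier (random hidden layer plus a closed-form least-squares readout) can be sketched as follows; the random feature vectors stand in for the wavelet-packet energy/entropy features, and all sizes and names are illustrative rather than the configuration used in the study.

```python
import numpy as np

class TinyELM:
    """Minimal extreme learning machine: random hidden layer + least-squares readout."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                   # random hidden activations
        T = np.eye(n_classes)[y]                           # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights in closed form
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Illustrative use with random "feature" vectors in place of wavelet-packet features
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = TinyELM(n_hidden=50).fit(X[:200], y[:200])
print("held-out accuracy:", (clf.predict(X[200:]) == y[200:]).mean())
```

The appeal of the ELM, reflected in the abstract's results, is that only the readout weights are trained, so fitting reduces to a single least-squares solve.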
Effects of accuracy constraints on reach-to-grasp movements in cerebellar patients.
Rand, M K; Shimansky, Y; Stelmach, G E; Bracha, V; Bloedel, J R
2000-11-01
Reach-to-grasp movements of patients with pathology restricted to the cerebellum were compared with those of normal controls. Two types of paradigms with different accuracy constraints were used to examine whether cerebellar impairment disrupts the stereotypic relationship between arm transport and grip aperture and whether the variability of this relationship is altered when greater accuracy is required. The movements were made to either a vertical dowel or to a cross bar of a small cross. All subjects were asked to reach for either target at a fast but comfortable speed, grasp the object between the index finger and thumb, and lift it a short distance off the table. In terms of the relationship between arm transport and grip aperture, the control subjects showed a high consistency in grip aperture and wrist velocity profiles from trial to trial for movements to both the dowel and the cross. The relationship between the maximum velocity of the wrist and the time at which grip aperture was maximal during the reach was highly consistent throughout the experiment. In contrast, the time of maximum grip aperture and maximum wrist velocity of the cerebellar patients was quite variable from trial to trial, and the relationship of these measurements also varied considerably. These abnormalities were present regardless of the accuracy requirement. In addition, the cerebellar patients required a significantly longer time to grasp and lift the objects than the control subjects. Furthermore, the patients exhibited a greater grip aperture during reach than the controls. These data indicate that the cerebellum contributes substantially to the coordination of movements required to perform reach-to-grasp movements. Specifically, the cerebellum is critical for executing this behavior with a consistent, well-timed relationship between the transport and grasp components. This contribution is apparent even when accuracy demands are minimal.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using the Medtronic O-Arm® Surgical Imaging with StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape, and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained, and seven 3D acquisitions for each of the drape configurations were measured. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape and the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the no-drape condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
Driver expectancy in highway design and traffic operations
DOT National Transportation Integrated Search
1986-05-01
Expectancy relates to a driver's readiness to respond to situations, events, and information in predictable and successful ways. It influences the speed and accuracy of information handling, and affects all aspects of highway design and operations, a...
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
TH-A-9A-10: Prostate SBRT Delivery with Flattening-Filter-Free Mode: Benefit and Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T; Yuan, L; Sheng, Y
Purpose: Flattening-filter-free (FFF) beam mode offered on the TrueBeam™ linac enables delivering IMRT at a 2400 MU/min dose rate. This study investigates the benefit and delivery accuracy of using this high dose rate in the context of prostate SBRT. Methods: 8 prostate SBRT patients were retrospectively studied. In 5 cases treated with a 600-MU/min dose rate, continuous prostate motion data acquired during radiation-beam-on was used to analyze motion range. In addition, the initial 1/3 of the prostate motion trajectories during each radiation-beam-on was separated to simulate the motion range if 2400 MU/min were used. To analyze delivery accuracy in FFF mode, MLC trajectory log files from an additional 3 cases treated at 2400 MU/min were acquired. These log files record MLC expected and actual positions every 20 ms and therefore can be used to reveal delivery accuracy. Results: (1) Benefit. On average, treatment at 600 MU/min takes 30 s per beam, whereas 2400 MU/min requires only 11 s. When shortening delivery time to ~1/3, the prostate motion range was significantly smaller (p<0.001). The largest motion reduction occurred in the Sup-Inf direction, from [−3.3 mm, 2.1 mm] to [−1.7 mm, 1.7 mm], followed by a reduction from [−2.1 mm, 2.4 mm] to [−1.0 mm, 2.4 mm] in the Ant-Pos direction. No change was observed in the LR direction [−0.8 mm, 0.6 mm]. The combined motion amplitude (vector norm) confirms that average motion and ranges are significantly smaller when beam-on was limited to the first 1/3 of the actual delivery time. (2) Accuracy. Trajectory log file analysis showed excellent delivery accuracy at 2400 MU/min. Most leaf deviations during beam-on were within 0.07 mm (99th percentile). Maximum leaf-opening deviations during each beam-on were all under 0.1 mm for all leaves. The dose rate was maintained at 2400 MU/min during beam-on without dipping. Conclusion: Delivering prostate SBRT at 2400 MU/min is both beneficial and accurate. The high dose rate significantly reduced both treatment time and intra-beam prostate motion range. Excellent delivery accuracy was confirmed with very small leaf motion deviation.
Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.
2009-01-01
In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
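A maximum entropy classifier over mixed categorical and quantized features is equivalent to multinomial logistic regression over indicator features, which a few lines of scikit-learn can sketch; the feature names and toy examples below are placeholders, not the ToBI features or corpora used by the authors.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy tokens: categorical "syntactic" features plus quantized "acoustic" features
train_feats = [
    {"pos": "NN", "supertag": "A1", "f0_bin": 3, "energy_bin": 2},
    {"pos": "DT", "supertag": "B2", "f0_bin": 1, "energy_bin": 1},
    {"pos": "VB", "supertag": "A3", "f0_bin": 4, "energy_bin": 3},
    {"pos": "IN", "supertag": "B1", "f0_bin": 1, "energy_bin": 1},
]
train_labels = ["accented", "unaccented", "accented", "unaccented"]

# A maximum entropy model is a (multinomial) logistic regression over indicator features
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_feats, train_labels)
print(model.predict([{"pos": "NN", "supertag": "A1", "f0_bin": 4, "energy_bin": 3}]))
```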
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Modeling misregistration and related effects on multispectral classification
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1981-01-01
The effects of misregistration on the multispectral classification accuracy when the scene registration accuracy is relaxed from 0.3 to 0.5 pixel are investigated. Noise, class separability, spatial transient response, and field size are considered simultaneously with misregistration in their effects on accuracy. Any noise due to the scene, sensor, or to the analog/digital conversion, causes a finite fraction of the measurements to fall outside of the classification limits, even within nominally uniform fields. Misregistration causes field borders in a given band or set of bands to be closer than expected to a given pixel, causing additional pixels to be misclassified due to the mixture of materials in the pixel. Simplified first order models of the various effects are presented, and are used to estimate the performance to be expected.
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
Single snapshot DOA estimation
NASA Astrophysics Data System (ADS)
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
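The core predict/update cycle of such a Kalman filter can be sketched for a toy 1-D constant-velocity model with position-only measurements; this only illustrates the filtering recursion, and the matrices and noise levels are arbitrary assumptions, not the terminal-area sensor models discussed above.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict state and covariance forward one step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# 1-D constant-velocity model, noisy position-only measurements
dt = 1.0
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[4.0]])
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(4)
true_pos = np.cumsum(np.full(20, 2.0))             # constant velocity of 2 units/step
for z in true_pos + rng.normal(0, 2.0, size=20):
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print("estimated position, velocity:", x)
```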
Liang, Jinyang; Zhou, Yong; Maslov, Konstantin I; Wang, Lihong V
2013-09-01
A cross-correlation-based method is proposed to quantitatively measure transverse flow velocity using optical resolution photoacoustic (PA) microscopy enhanced with a digital micromirror device (DMD). The DMD is used to alternately deliver two spatially separated laser beams to the target. Through cross-correlation between the slow-time PA profiles measured from the two beams, the speed and direction of transverse flow are simultaneously derived from the magnitude and sign of the time shift, respectively. Transverse flows in the range of 0.50 to 6.84 mm/s are accurately measured using an aqueous suspension of 10-μm-diameter microspheres, and the root-mean-squared measurement accuracy is quantified to be 0.22 mm/s. The flow measurements are independent of the particle size for flows in the velocity range of 0.55 to 6.49 mm/s, which was demonstrated experimentally using three different sizes of microspheres (diameters: 3, 6, and 10 μm). The measured flow velocity follows an expected parabolic distribution along the depth direction perpendicular to the flow. Both maximum and minimum measurable velocities are investigated for varied distances between the two beams and varied total time for one measurement. This technique shows an accuracy of 0.35 mm/s at 0.3-mm depth in scattering chicken breast, making it promising for measuring flow in biological tissue.
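The time-shift step reduces to a discrete cross-correlation between the two slow-time profiles followed by a lag-to-velocity conversion; the beam separation, sampling rate, and Gaussian-pulse profiles in the sketch below are made up purely for illustration.

```python
import numpy as np

def flow_velocity(profile_a, profile_b, beam_separation_mm, sample_rate_hz):
    """Estimate transverse flow speed and direction from the lag that maximizes the
    cross-correlation of two slow-time profiles (illustrative sketch)."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)          # lag in samples; sign gives direction
    dt = lag / sample_rate_hz
    return np.inf if dt == 0 else beam_separation_mm / dt   # signed velocity in mm/s

# A particle crossing beam A, then beam B 25 samples later (positive flow direction)
rng = np.random.default_rng(5)
t = np.arange(500)
pa = np.exp(-0.5 * ((t - 200) / 10) ** 2) + 0.05 * rng.normal(size=500)
pb = np.exp(-0.5 * ((t - 225) / 10) ** 2) + 0.05 * rng.normal(size=500)
print("estimated velocity (mm/s):",
      flow_velocity(pa, pb, beam_separation_mm=0.1, sample_rate_hz=1000))
```

With a 0.1 mm beam separation and a 25 ms delay, the sketch recovers 4 mm/s, within the velocity range reported in the abstract.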
Modeling of the Mode S tracking system in support of aircraft safety research
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1982-01-01
This report collects, documents, and models data relating to the expected accuracies of tracking variables to be obtained from the FAA's Mode S Secondary Surveillance Radar system. The data include measured range and azimuth to the tracked aircraft plus the encoded altitude transmitted via the Mode S data link. A brief summary is made of the Mode S system status and its potential applications for aircraft safety improvement, including accident analysis. FAA flight test results are presented demonstrating Mode S range and azimuth accuracy and error characteristics and comparing Mode S to the current ATCRBS radar tracking system. Data are also presented that describe the expected accuracy and error characteristics of encoded altitude. These data are used to formulate mathematical error models of the Mode S variables and encoded altitude. A brief analytical assessment is made of the real-time tracking accuracy available from using Mode S and how it could be improved with down-linked velocity.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
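A maximum-likelihood fit of Luce/Bradley-Terry-style strength parameters from top-1 choices can be sketched by gradient ascent on the log-likelihood; the simulated comparison sets, learning rate, and iteration count below are illustrative assumptions, and this is not the estimator implementation analyzed in the report.

```python
import numpy as np

def fit_luce(observations, n_items, n_iter=500, lr=0.1):
    """ML fit of item strengths theta in a Luce choice model from top-1 lists.

    observations: list of (chosen_item, comparison_set) pairs.
    """
    theta = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for winner, items in observations:
            items = np.asarray(items)
            p = np.exp(theta[items] - theta[items].max())
            p /= p.sum()
            grad[items] -= p          # expected choice indicator under current theta
            grad[winner] += 1.0       # observed choice indicator
        theta += lr * grad / len(observations)
        theta -= theta.mean()         # fix the scale: strengths are only relative
    return theta

# Simulate choices from known strengths over random 3-item comparison sets, then recover them
rng = np.random.default_rng(6)
true_theta = np.array([1.5, 0.5, 0.0, -0.5, -1.5])
obs = []
for _ in range(2000):
    items = rng.choice(5, size=3, replace=False)
    p = np.exp(true_theta[items]); p /= p.sum()
    obs.append((items[rng.choice(3, p=p)], items))
print(np.round(fit_luce(obs, 5), 2))
```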
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorensen, J; Duran, C; Stingo, F
Purpose: To characterize the effect of virtual monochromatic reconstructions on several commonly used texture analysis features in DECT of the chest. Further, to assess the effect of monochromatic energy levels on the ability of these textural features to identify tissue types. Methods: 20 consecutive patients underwent chest CTs for evaluation of lung nodules using a Siemens Somatom Definition Flash DECT. Virtual monochromatic images were constructed at 10 keV intervals from 40-190 keV. For each patient, an ROI delineated the lesion under investigation, and cylindrical ROIs were placed within 5 different healthy tissues (blood, fat, muscle, lung, and liver). Several histogram- and Grey Level Cooccurrence Matrix (GLCM)-based texture features were then evaluated in each ROI at each energy level. As a means of validation, these feature values were then used in a random forest classifier to attempt to identify the tissue types present within each ROI. Their predictive accuracy at each energy level was recorded. Results: All textural features changed considerably with virtual monochromatic energy, particularly below 70 keV. Most features exhibited a global minimum or maximum around 80 keV, and while feature values changed with energy above this, patient ranking was generally unaffected. As expected, blood demonstrated the lowest inter-patient variability for all features, while lung lesions (encompassing many different pathologies) exhibited the highest. The accuracy of these features in identifying tissues (76% accuracy) was highest at 80 keV, but no clear relationship between energy and classification accuracy was found. Two common misclassifications (blood vs liver and muscle vs fat) accounted for the majority (24 of the 28) of errors observed. Conclusion: All textural features were highly dependent on virtual monochromatic energy level, especially below 80 keV, and were more stable above this energy. However, in a random forest model, these commonly used features were able to reliably differentiate between most tissue types regardless of energy level. Dr Godoy has received a dual-energy CT research grant from Siemens Healthcare. That grant did not directly fund this research.
Water surface elevation from the upcoming SWOT mission under different flows conditions
NASA Astrophysics Data System (ADS)
Domeneghetti, Alessio; Schumann, Guy J. P.; Wei, Rui; Frasson, Renato P. M.; Durand, Michael; Pavelsky, Tamlin; Castellarin, Attilio; Brath, Armando
2017-04-01
The upcoming SWOT (Surface Water and Ocean Topography) satellite mission will provide unprecedented two-dimensional observations of terrestrial water surface heights along rivers wider than 100 m. Although the literature reports several activities showing possible uses of SWOT products, the potential and limitations of the satellite observations remain poorly understood and investigated. We present one of the first analyses of the spatial observation of water surface elevation expected from SWOT for a 140 km reach of the middle-lower portion of the Po River, in Northern Italy. The river stretch is characterized by a main channel varying from 100-500 m in width and a floodplain delimited by a system of major embankments that can be as wide as 5 km. The reconstruction of the hydraulic behavior of the Po River is performed by means of a quasi-2D model built with detailed topographic and bathymetric information (LiDAR, 2 m resolution), while the simulation of remotely sensed hydrometric data is performed with a SWOT simulator that mimics the satellite sensor characteristics. Referring to water surface elevations associated with different flow conditions (maximum, minimum and average flow), this work characterizes the spatial observations provided by SWOT and highlights the strengths and limitations of the expected products. The analysis provides a robust reference for spatial water observations that will be available from SWOT and assesses possible effects of river embankments, river width, and river topography under different hydraulic conditions. Results of the study characterize the expected accuracy of the upcoming SWOT mission and provide additional insights towards the appropriate exploitation of future hydrological observations.
Computational analysis of Ebolavirus data: prospects, promises and challenges.
Michaelis, Martin; Rossman, Jeremy S; Wass, Mark N
2016-08-15
The ongoing Ebola virus (also known as Zaire ebolavirus, a member of the Ebolavirus family) outbreak in West Africa has so far resulted in >28,000 confirmed cases, compared with previous Ebolavirus outbreaks that affected a maximum of a few hundred individuals. Hence, Ebolaviruses impose a much greater threat than we may have expected (or hoped). An improved understanding of the virus biology is essential to develop therapeutic and preventive measures and to be better prepared for future outbreaks by members of the Ebolavirus family. Computational investigations can complement wet laboratory research for biosafety level 4 pathogens such as Ebolaviruses, for which the wet experimental capacities are limited due to a small number of appropriate containment laboratories. During the current West Africa outbreak, sequence data from many Ebola virus genomes became available, providing a rich resource for computational analysis. Here, we consider the studies that have already reported on the computational analysis of these data. A range of properties have been investigated, including Ebolavirus evolution and pathogenicity, prediction of microRNAs, and identification of Ebolavirus-specific signatures. However, the accuracy of the results remains to be confirmed by wet laboratory experiments. Therefore, communication and exchange between computational and wet laboratory researchers is necessary to make maximum use of computational analyses and to iteratively improve these approaches. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Quality assurance of dynamic parameters in volumetric modulated arc therapy.
Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N
2012-07-01
The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Three tests (for gantry position-dose delivery synchronisation, gantry speed-dose delivery synchronisation and MLC leaf speed and positions) were performed. The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the "beginning" and "end" errors. For MLC position verification, the maximum error was -2.46 mm and the mean error was 0.0153 ±0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. This experiment demonstrates that the variables and parameters of the Synergy S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC.
Application of new radio tracking data types to critical spacecraft navigation problems
NASA Technical Reports Server (NTRS)
Ondrasik, V. J.; Rourke, K. H.
1972-01-01
Earth-based radio tracking data types are considered which involve simultaneous or nearly simultaneous spacecraft tracking from widely separated tracking stations. These data types are conventional tracking instrumentation analogs of the very long baseline interferometry (VLBI) of radio astronomy, hence the name quasi-VLBI. A preliminary analysis of quasi-VLBI is presented using simplified tracking data models. The results of accuracy analyses are presented for a representative mission, Viking 1975. The results indicate that, contingent on projected tracking system accuracy, quasi-VLBI can be expected to significantly improve navigation performance over that expected from conventional tracking data types.
Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy
ERIC Educational Resources Information Center
Fukaya, Tatsushi
2013-01-01
The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…
Controlling an avatar by thought using real-time fMRI
NASA Astrophysics Data System (ADS)
Cohen, Ori; Koppel, Moshe; Malach, Rafael; Friedman, Doron
2014-06-01
Objective. We have developed a brain-computer interface (BCI) system based on real-time functional magnetic resonance imaging (fMRI) with virtual reality feedback. The advantage of fMRI is the relatively high spatial resolution and the coverage of the whole brain; thus we expect that it may be used to explore novel BCI strategies, based on new types of mental activities. However, fMRI suffers from a low temporal resolution and an inherent delay, since it is based on a hemodynamic response rather than electrical signals. Thus, our objective in this paper was to explore whether subjects could perform a BCI task in a virtual environment using our system, and how their performance was affected by the delay. Approach. The subjects controlled an avatar by left-hand, right-hand and leg motion or imagery. The BCI classification is based on locating the regions of interest (ROIs) related with each of the motor classes, and selecting the ROI with maximum average values online. The subjects performed a cue-based task and a free-choice task, and the analysis includes evaluation of the performance as well as subjective reports. Main results. Six subjects performed the task with high accuracy when allowed to move their fingers and toes, and three subjects achieved high accuracy using imagery alone. In the cue-based task the accuracy was highest 8-12 s after the trigger, whereas in the free-choice task the subjects performed best when the feedback was provided 6 s after the trigger. Significance. We show that subjects are able to perform a navigation task in a virtual environment using an fMRI-based BCI, despite the hemodynamic delay. The same approach can be extended to other mental tasks and other brain areas.
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but the efficiency of the methods was low and the validity of imputed values in these methods had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes the missing values sequentially, starting from the gene with the fewest missing values, and reuses the imputed values for later imputations. Despite reusing imputed values, the new method greatly improves on the conventional KNN-based method and on other methods based on maximum likelihood estimation in both accuracy and computational complexity. The performance of SKNN was particularly high relative to other imputation methods for data with high missing rates and a large number of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased the computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similar to that of the SKNN method, with a slightly higher dependency on the type of data set. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for salvaging data from microarray experiments that have high proportions of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
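The sequential-reuse idea can be made concrete with a short sketch. The following is a minimal, unweighted illustration of an SKNN-style workflow (the published method uses a weighted neighbour average and further refinements); the function name, the choice k=3, and the toy data are hypothetical, not the authors' implementation.

```python
import numpy as np

def sknn_impute(X, k=10):
    """Sequential KNN imputation: genes (rows) with the fewest missing values
    are imputed first and then reused as neighbours for later genes."""
    X = X.astype(float).copy()
    n_genes, _ = X.shape
    complete = [i for i in range(n_genes) if not np.isnan(X[i]).any()]
    incomplete = sorted(
        (i for i in range(n_genes) if np.isnan(X[i]).any()),
        key=lambda i: np.isnan(X[i]).sum(),
    )
    for i in incomplete:
        miss = np.isnan(X[i])
        # Distance to reference genes over the columns observed in gene i.
        dists = sorted(
            (np.sqrt(np.mean((X[i, ~miss] - X[j, ~miss]) ** 2)), j) for j in complete
        )
        neighbours = [j for _, j in dists[:k]]
        # Fill each missing entry with the neighbour average for that column.
        X[i, miss] = X[np.ix_(neighbours, np.where(miss)[0])].mean(axis=0)
        complete.append(i)  # reuse the freshly imputed gene afterwards
    return X

# Example: 5 genes x 4 arrays with two missing entries.
rng = np.random.default_rng(0)
data = rng.normal(size=(5, 4))
data[3, 1] = np.nan
data[4, 0] = np.nan
print(sknn_impute(data, k=3))
```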
Model for forecasting Olea europaea L. airborne pollen in South-West Andalusia, Spain
NASA Astrophysics Data System (ADS)
Galán, C.; Cariñanos, Paloma; García-Mozo, Herminia; Alcázar, Purificación; Domínguez-Vilches, Eugenio
Data on predicted average and maximum airborne pollen concentrations and the dates on which these maximum values are expected are of undoubted value to allergists and allergy sufferers, as well as to agronomists. This paper reports on the development of predictive models for calculating total annual pollen output, on the basis of pollen and weather data compiled over the last 19 years (1982-2000) for Córdoba (Spain). Models were tested in order to predict the 2000 pollen season; in addition, and in view of the heavy rainfall recorded in spring 2000, the 1982-1998 data set was used to test the model for 1999. The results of the multiple regression analysis show that the variables exerting the greatest influence on the pollen index were rainfall in March and temperatures over the months prior to the flowering period. For prediction of maximum values and dates on which these values might be expected, the start of the pollen season was used as an additional independent variable. Temperature proved the best variable for this prediction. Results improved when the 5-day moving average was taken into account. Testing of the predictive model for 1999 and 2000 yielded fairly similar results. In both cases, the difference between expected and observed pollen data was no greater than 10%. However, significant differences were recorded between forecast and expected maximum and minimum values, owing to the influence of rainfall during the flowering period.
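As a rough illustration of the kind of predictive model described (ordinary multiple regression of the annual pollen index on pre-season weather variables, validated on a held-out year), one might fit and apply it as sketched below. The variable names and the toy 19-year series are invented and are not the Córdoba data.

```python
import numpy as np

# Hypothetical 19-year training series: March rainfall (mm), mean temperature
# over the pre-flowering months (deg C), and the annual pollen index.
march_rain = np.array([42., 65., 30., 80., 55., 20., 70., 48., 35., 60.,
                       25., 90., 50., 40., 66., 33., 75., 58., 45.])
preflower_temp = np.array([14.2, 15.1, 13.8, 15.6, 14.9, 13.5, 15.3, 14.6, 14.0, 15.0,
                           13.7, 15.8, 14.7, 14.1, 15.2, 13.9, 15.5, 14.8, 14.4])
pollen_index = np.array([3100., 4200., 2500., 5100., 3800., 1900., 4600., 3300., 2700., 4000.,
                         2200., 5600., 3500., 2900., 4300., 2600., 4900., 3700., 3200.])

# Design matrix with an intercept; fit on all but the last year, then predict it.
X = np.column_stack([np.ones_like(march_rain), march_rain, preflower_temp])
coef, *_ = np.linalg.lstsq(X[:-1], pollen_index[:-1], rcond=None)
pred = X[-1] @ coef
err = abs(pred - pollen_index[-1]) / pollen_index[-1] * 100
print(f"predicted index: {pred:.0f}, observed: {pollen_index[-1]:.0f}, error: {err:.1f}%")
```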
Barteselli, Giulio; Bartsch, Dirk-Uwe; Viola, Francesco; Mojana, Francesca; Pellegrini, Marco; Hartmann, Kathrin I; Benatti, Eleonora; Leicht, Simon; Ratiglia, Roberto; Staurenghi, Giovanni; Weinreb, Robert N; Freeman, William R
2013-09-01
To evaluate temporal changes and predictors of accuracy in the alignment between simultaneous near-infrared image and optical coherence tomography (OCT) scan on the Heidelberg Spectralis using a model eye. Laboratory investigation. After calibrating the device, 6 sites performed weekly testing of the alignment for 12 weeks using a model eye. The maximum error was compared with multiple variables to evaluate predictors of inaccurate alignment. Variables included the number of weekly scanned patients, total number of OCT scans and B-scans performed, room temperature and its variation, and working time of the scanning laser. A 4-week extension study was subsequently performed to analyze short-term changes in the alignment. The average maximum error in the alignment was 15 ± 6 μm; the greatest error was 35 μm. The error increased significantly at week 1 (P = .01), specifically after the second imaging study (P < .05); reached a maximum after the eighth patient (P < .001); and then varied randomly over time. Predictors for inaccurate alignment were temperature variation and scans per patient (P < .001). For each 1 unit of increase in temperature variation, the estimated increase in maximum error was 1.26 μm. For the average number of scans per patient, each increase of 1 unit increased the error by 0.34 μm. Overall, the accuracy of the Heidelberg Spectralis was excellent. The greatest error happened in the first week after calibration, and specifically after the second imaging study. To improve the accuracy, room temperature should be kept stable and unnecessary scans should be avoided. The alignment of the device does not need to be checked on a regular basis in the clinical setting, but it should be checked after every other patient for more precise research purposes. Published by Elsevier Inc.
Manifestations of Proprioception During Vertical Jumps to Specific Heights
Struzik, Artur; Pietraszewski, Bogdan; Winiarski, Sławomir; Juras, Grzegorz; Rokita, Andrzej
2017-01-01
Abstract Artur, S, Bogdan, P, Kawczyński, A, Winiarski, S, Grzegorz, J, and Andrzej, R. Manifestations of proprioception during vertical jumps to specific heights. J Strength Cond Res 31(6): 1694–1701, 2017—Jumping and proprioception are important abilities in many sports. The efficiency of the proprioceptive system is indirectly related to jumps performed at specified heights. Therefore, this study recorded the ability of young athletes who play team sports to jump to a specific height compared with their maximum ability. A total of 154 male (age: 14.8 ± 0.9 years, body height: 181.8 ± 8.9 cm, body weight: 69.8 ± 11.8 kg, training experience: 3.8 ± 1.7 years) and 151 female (age: 14.1 ± 0.8 years, body height: 170.5 ± 6.5 cm, body weight: 60.3 ± 9.4 kg, training experience: 3.7 ± 1.4 years) team games players were recruited for this study. Each participant performed 2 countermovement jumps with arm swing to 25, 50, 75, and 100% of the maximum height. Measurements were performed using a force plate. Jump height and its accuracy with respect to a specified height were calculated. The results revealed no significant differences in jump height and its accuracy to the specified heights between the groups (stratified by age, sex, and sport). Individuals with a higher jumping accuracy also exhibited greater maximum jump heights. Jumps to 25% of the maximum height were approximately 2 times higher than the target height. The decreased jump accuracy to a specific height when attempting to jump to lower heights should be reduced with training, particularly among athletes who play team sports. These findings provide useful information regarding the proprioceptive system for team sport coaches and may shape guidelines for training routines by working with submaximal loads. PMID:28538322
An Earthquake Source Sensitivity Analysis for Tsunami Propagation in the Eastern Mediterranean
NASA Astrophysics Data System (ADS)
Necmioglu, Ocal; Meral Ozel, Nurcan
2013-04-01
An earthquake source parameter sensitivity analysis for tsunami propagation in the Eastern Mediterranean has been performed based on the 8 August 1303 Crete and Dodecanese Islands earthquake, which resulted in destructive inundation in the Eastern Mediterranean. The analysis involves 23 cases describing different sets of strike, dip, rake and focal depth, while keeping the fault area and displacement, and thus the magnitude, the same. The main conclusions of the evaluation are drawn from the investigation of the wave height distributions at Tsunami Forecast Points (TFP). The earthquake vs. initial tsunami source parameters comparison indicated that the maximum initial wave height values correspond in general to the changes in rake angle. No clear depth dependency is observed within the depth range considered and no strike angle dependency is observed in terms of amplitude change. Directivity sensitivity analysis indicated that for the same strike and dip, a 180° shift in rake may lead to a 20% change in the calculated tsunami wave height. Moreover, an approximately 10 min difference in the arrival time of the initial wave has been observed. These differences are, however, greatly reduced in the far field. The dip sensitivity analysis, performed separately for thrust and normal faulting, indicated in both cases that an increase in the dip angle results in a decrease of the tsunami wave amplitude in the near field by approximately 40%. While a positive phase shift is observed, the period and the shape of the initial wave stay nearly the same for all dip angles at the respective TFPs. These effects are, however, not observed in the far field. The resolution of the bathymetry, on the other hand, is a limiting factor for further evaluation. Four different cases were considered for the depth sensitivity, indicating that within the depth ranges considered (15-60 km), the increase of the depth has only a smoothing effect on the synthetic tsunami wave height measurements at the selected TFPs. The strike sensitivity analysis showed a clear phase shift with respect to the variation of the strike angles, without leading to severe variation of the initial and maximum waves at the locations considered. Travel time maps for two cases corresponding to a difference in the strike value (60° vs 150°) presented a more complex wave propagation for the case with a 60° strike angle due to the fact that the normal of the fault plane is orthogonal to the main bathymetric structure in the region, namely the Eastern section of the Hellenic Arc between Crete and Rhodes Islands. For a given set of strike, dip and focal depth parameters, the effect of the variation in the rake angle has been evaluated in the rake sensitivity analysis. A waveform envelope composed of symmetric synthetic recordings at one TFP could be clearly observed as a result of rake angle variations in the 0-180° range. This could also lead to the conclusion that for a given magnitude (fault size and displacement), the expected maximum and minimum tsunami wave amplitudes could be evaluated as a waveform envelope rather than limited to a single point in time or amplitude. The evaluation of the initial wave arrival times follows an expected pattern controlled by the distance, whereas the maximum wave arrival time distribution presents no clear pattern. Nevertheless, the distribution is rather concentrated in the time domain for some TFPs.
Maximum positive and minimum negative wave amplitude distributions indicate a broader range for a subgroup of TFPs, whereas for the remaining TFPs the distributions are narrow. Any deviation from the expected trend of calculating narrower ranges of amplitude distributions could be interpreted as the result of bathymetry and focusing effects. As similar studies conducted in different parts of the globe have indicated, the main characteristics of tsunami propagation are unique for each basin. It should be noted, however, that the synthetic measurements obtained at the TFPs in the absence of high-resolution bathymetric data should be considered only as overall guidance. The results indicate the importance of the accuracy of earthquake source parameters for reliable tsunami predictions and the need for high-resolution bathymetric data to be able to perform calculations with higher accuracy. On the other hand, this study did not address other parameters, such as heterogeneous slip distribution and rupture duration, which affect the tsunami initiation and propagation process.
On the accuracy of the Padé-resummed master equation approach to dissipative quantum dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-Ta; Reichman, David R.; Berkelbach, Timothy C.
2016-04-21
Well-defined criteria are proposed for assessing the accuracy of quantum master equations whose memory functions are approximated by Padé resummation of the first two moments in the electronic coupling. These criteria partition the parameter space into distinct levels of expected accuracy, ranging from quantitatively accurate regimes to regions of parameter space where the approach is not expected to be applicable. Extensive comparison of Padé-resummed master equations with numerically exact results in the context of the spin–boson model demonstrates that the proposed criteria correctly demarcate the regions of parameter space where the Padé approximation is reliable. The applicability analysis we present is not confined to the specifics of the Hamiltonian under consideration and should provide guidelines for other classes of resummation techniques.
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopich, Irina V.
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
Zhou, Wu
2014-01-01
The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image‐guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them can meet the requirements of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT. An interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach for interactive contour delineation is proposed. It allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and the method of Hermite cubic curves is used to fit the control points. The fitted curve can then be revised by moving its control points interactively. Several curve-fitting methods are also presented for comparison. Finally, in order to improve the accuracy of contour delineation, a curve refinement process based on the maximum gradient magnitude is proposed: all the points on the curve are automatically moved towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the curve refinement based on the maximum gradient magnitude have superior performance on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
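The two core steps named in the abstract, Hermite cubic interpolation through selected control points and refinement of points toward the position of maximum gradient magnitude, can be sketched roughly as follows. The search window, the synthetic test image, the Catmull-Rom-style tangents, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def hermite_segment(p0, p1, m0, m1, n=20):
    """Cubic Hermite segment between points p0 and p1 with tangents m0, m1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def fit_closed_hermite(points, n=20):
    """Closed Hermite cubic curve through control points (finite-difference tangents)."""
    pts = np.asarray(points, dtype=float)
    tangents = 0.5 * (np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0))
    segs = [hermite_segment(pts[i], pts[(i + 1) % len(pts)],
                            tangents[i], tangents[(i + 1) % len(pts)], n)
            for i in range(len(pts))]
    return np.vstack(segs)

def refine_to_max_gradient(points, image, window=3):
    """Move each control point to the location of maximum gradient magnitude
    inside a small square search window around it."""
    grad = gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    refined = []
    for x, y in np.round(points).astype(int):
        x0, x1 = max(x - window, 0), min(x + window + 1, image.shape[1])
        y0, y1 = max(y - window, 0), min(y + window + 1, image.shape[0])
        patch = grad[y0:y1, x0:x1]
        dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
        refined.append([x0 + dx, y0 + dy])
    return np.array(refined, dtype=float)

# Toy example: a bright disk whose boundary the refined curve should follow.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
ctrl = np.array([[32, 15], [49, 32], [32, 49], [15, 32]], dtype=float)  # (x, y) points
ctrl = refine_to_max_gradient(ctrl, img, window=4)
curve = fit_closed_hermite(ctrl)
print(curve.shape)
```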
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-01
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
ERIC Educational Resources Information Center
Wu, Yi-Fang
2015-01-01
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Sensing multiple ligands with single receptor
NASA Astrophysics Data System (ADS)
Singh, Vijay; Nemenman, Ilya
2015-03-01
Cells use surface receptors to measure concentrations of external ligand molecules. Limits on the accuracy of such sensing are well-known for the scenario where concentration of one molecular species is being determined by one receptor [Endres]. However, in more realistic scenarios, a cognate (high-affinity) ligand competes with many non-cognate (low-affinity) ligands for binding to the receptor. We analyze effects of this competition on the accuracy of sensing. We show that maximum-likelihood statistical inference allows determination of concentrations of multiple ligands, cognate and non-cognate, by the same receptor concurrently. While it is unclear if traditional biochemical circuitry downstream of the receptor can implement such inference exactly, we show that an approximate inference can be performed by coupling the receptor to a kinetic proofreading cascade. We characterize the accuracy of such kinetic proofreading sensing in comparison to the exact maximum-likelihood approach. We acknowledge the support from the James S. McDonnell Foundation and the Human Frontier Science Program.
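To make the idea of concurrent maximum-likelihood inference concrete, here is a highly simplified sketch under assumed kinetics: both ligands bind with the same on-rate but unbind with different rates, so unbound intervals are exponential in the total concentration while bound intervals form a two-component exponential mixture whose weights carry the concentration ratio. The rate constants and simulated data are invented for illustration; this is not the model analyzed in the work above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Assumed kinetics (illustrative): shared on-rate, distinct off-rates.
k_on, k_off_cog, k_off_non = 1.0, 0.5, 5.0
c_cog_true, c_non_true = 2.0, 8.0

# Simulate alternating unbound/bound intervals for one receptor.
n_events = 2000
p_cog = c_cog_true / (c_cog_true + c_non_true)
unbound = rng.exponential(1.0 / (k_on * (c_cog_true + c_non_true)), n_events)
is_cog = rng.random(n_events) < p_cog
bound = np.where(is_cog,
                 rng.exponential(1.0 / k_off_cog, n_events),
                 rng.exponential(1.0 / k_off_non, n_events))

def neg_log_lik(log_c):
    c1, c2 = np.exp(log_c)
    ctot = c1 + c2
    # Unbound intervals: exponential with rate k_on * (c1 + c2).
    ll = np.sum(np.log(k_on * ctot) - k_on * ctot * unbound)
    # Bound intervals: exponential mixture weighted by binding probabilities.
    w1, w2 = c1 / ctot, c2 / ctot
    mix = w1 * k_off_cog * np.exp(-k_off_cog * bound) \
        + w2 * k_off_non * np.exp(-k_off_non * bound)
    ll += np.sum(np.log(mix))
    return -ll

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("ML estimates (cognate, non-cognate):", np.exp(res.x))
```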
Ma, Xin; Guo, Jing; Sun, Xiao
2015-01-01
The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method is a useful approach for identifying RNA-binding proteins from sequence information.
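A rough, simplified sketch of the selection pipeline described (a relevance-minus-redundancy ranking followed by incremental feature selection with a random forest) is given below. It uses mutual information for relevance and absolute Pearson correlation as a redundancy proxy, which only approximates mRMR, and the synthetic data stand in for the sequence-derived features; none of this reproduces the authors' exact settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for sequence-derived features (BP, NBP, EIPP, conjoint triads).
X, y = make_classification(n_samples=400, n_features=40, n_informative=10, random_state=0)

# mRMR-style greedy ranking: maximize relevance (MI with the label) minus
# redundancy (mean absolute correlation with already-selected features).
relevance = mutual_info_classif(X, y, random_state=0)
corr = np.abs(np.corrcoef(X, rowvar=False))
selected, remaining = [int(np.argmax(relevance))], set(range(X.shape[1]))
remaining.discard(selected[0])
while remaining:
    scores = {j: relevance[j] - corr[j, selected].mean() for j in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.discard(best)

# Incremental feature selection: grow the feature set in ranked order and
# keep the subset with the best cross-validated accuracy.
best_acc, best_k = 0.0, 0
for k in range(1, len(selected) + 1):
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X[:, selected[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_k = acc, k
print(f"best subset size: {best_k}, cross-validated accuracy: {best_acc:.3f}")
```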
High accuracy autonomous navigation using the global positioning system (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation, is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
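The accuracy metric quoted (absolute mean error plus 3 sigma during quiet times, plus 2 sigma during storms) is straightforward to evaluate from a residual series. In the sketch below the residuals are synthetic and only the 1.7 nT requirement is taken from the abstract.

```python
import numpy as np

def goesr_accuracy(residuals_nT, k_sigma):
    """Accuracy as defined in the abstract: |mean error| + k * standard deviation."""
    return abs(np.mean(residuals_nT)) + k_sigma * np.std(residuals_nT)

rng = np.random.default_rng(0)
quiet_residuals = rng.normal(0.1, 0.4, 5000)   # synthetic quiet-time residuals (nT)
storm_residuals = rng.normal(0.2, 0.6, 5000)   # synthetic storm-time residuals (nT)

quiet_acc = goesr_accuracy(quiet_residuals, k_sigma=3)   # quiet: mean + 3 sigma
storm_acc = goesr_accuracy(storm_residuals, k_sigma=2)   # storm: mean + 2 sigma
print(f"quiet: {quiet_acc:.2f} nT, storm: {storm_acc:.2f} nT, requirement: 1.7 nT")
```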
Kappa and Rater Accuracy: Paradigms and Parameters.
Conger, Anthony J
2017-12-01
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa (κ). Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another (concordance), using both nonstochastic and stochastic category membership. Using a probability model to express category assignments in terms of rater accuracy and random error, it is shown that observed agreement (Po) depends only on rater accuracy and number of categories; however, expected agreement (Pe) and κ depend additionally on category frequencies. Moreover, category frequencies affect Pe and κ solely through the variance of the category proportions, regardless of the specific frequencies underlying the variance. Paradoxically, some judgment paradigms involving stochastic categories are shown to yield higher κ values than their nonstochastic counterparts. Using the stated probability model, assignments to categories were generated for 552 combinations of paradigms, rater and category parameters, category frequencies, and number of stimuli. Observed means and standard errors for Po, Pe, and κ were fully consistent with theory expectations. Guidelines for interpretation of rater accuracy and reliability are offered, along with a discussion of alternatives to the basic model.
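The probability model described (a rater assigns the true category with probability equal to their accuracy and otherwise picks among the remaining categories at random) is easy to simulate, which makes visible how Po stays fixed while Pe and κ shift with category frequencies. The accuracies and frequency vectors below are arbitrary choices for a concordance-paradigm example, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate(true_cats, accuracy, n_categories):
    """Assign the true category with probability `accuracy`, otherwise an
    error category chosen uniformly among the remaining ones."""
    ratings = true_cats.copy()
    errors = rng.random(true_cats.size) >= accuracy
    offsets = rng.integers(1, n_categories, size=errors.sum())
    ratings[errors] = (true_cats[errors] + offsets) % n_categories
    return ratings

def kappa(r1, r2, n_categories):
    """Observed agreement, chance-expected agreement, and Cohen's kappa."""
    po = np.mean(r1 == r2)
    p1 = np.bincount(r1, minlength=n_categories) / r1.size
    p2 = np.bincount(r2, minlength=n_categories) / r2.size
    pe = np.sum(p1 * p2)
    return po, pe, (po - pe) / (1.0 - pe)

n_cat, n_stim = 4, 10000
for freqs in ([0.25, 0.25, 0.25, 0.25], [0.70, 0.10, 0.10, 0.10]):
    true = rng.choice(n_cat, size=n_stim, p=freqs)
    r1, r2 = rate(true, 0.8, n_cat), rate(true, 0.8, n_cat)   # two raters, concordance paradigm
    po, pe, k = kappa(r1, r2, n_cat)
    print(f"freqs={freqs}: Po={po:.3f}  Pe={pe:.3f}  kappa={k:.3f}")
```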
Buzanskas, Marcos Eli; Ventura, Ricardo Vieira; Seleguim Chud, Tatiane Cristina; Bernardes, Priscila Arrigucci; Santos, Daniel Jordan de Abreu; Regitano, Luciana Correia de Almeida; de Alencar, Maurício Mello; Mudadu, Maurício de Alvarenga; Zanella, Ricardo; da Silva, Marcos Vinícius Gualberto Barbosa; Li, Changxi; Schenkel, Flavio Schramm; Munari, Danísio Prado
2017-01-01
The aim of this study was to evaluate the level of introgression of breeds in the Canchim (CA: 62.5% Charolais—37.5% Zebu) and MA genetic group (MA: 65.6% Charolais—34.4% Zebu) cattle using genomic information on the Charolais (CH), Nelore (NE), and Indubrasil (IB) breeds. The number of animals used was 395 (CA and MA), 763 (NE), 338 (CH), and 37 (IB). The Bovine50SNP BeadChip panel from Illumina was used to estimate the levels of introgression of breeds considering the Maximum likelihood, Bayesian, and Single Regression methods. After genotype quality control, 32,308 SNPs were considered in the analysis. Furthermore, three thresholds to prune out SNPs in linkage disequilibrium higher than 0.10, 0.05, and 0.01 were considered, resulting in 15,286, 7,652, and 1,582 SNPs, respectively. For k = 2, the proportion of taurine and indicine varied from the expected proportion based on pedigree for all methods studied. For k = 3, the Regression method was able to differentiate the animals into three main clusters assigned to each purebred breed, which appears more reasonable from a biological viewpoint. Analyzing the data considering k = 2 seems to be more appropriate for Canchim-MA animals due to its biological interpretation. The usage of 32,308 SNPs in the analyses resulted in similar findings between the estimated and expected breed proportions. Using the Regression approach, a contribution of Indubrasil was observed in Canchim-MA when k = 3 was considered. Genetic parameter estimation could account for this breed composition information as a source of variation in order to improve the accuracy of genetic models. Our findings may help assemble appropriate reference populations for genomic prediction for Canchim-MA in order to improve prediction accuracy. Using the information on the level of introgression in each individual could also be useful in breeding or crossing design to improve individual heterosis in crossbred cattle. PMID:28182737
Buzanskas, Marcos Eli; Ventura, Ricardo Vieira; Seleguim Chud, Tatiane Cristina; Bernardes, Priscila Arrigucci; Santos, Daniel Jordan de Abreu; Regitano, Luciana Correia de Almeida; Alencar, Maurício Mello de; Mudadu, Maurício de Alvarenga; Zanella, Ricardo; da Silva, Marcos Vinícius Gualberto Barbosa; Li, Changxi; Schenkel, Flavio Schramm; Munari, Danísio Prado
2017-01-01
The aim of this study was to evaluate the level of introgression of breeds in the Canchim (CA: 62.5% Charolais-37.5% Zebu) and MA genetic group (MA: 65.6% Charolais-34.4% Zebu) cattle using genomic information on the Charolais (CH), Nelore (NE), and Indubrasil (IB) breeds. The number of animals used was 395 (CA and MA), 763 (NE), 338 (CH), and 37 (IB). The Bovine50SNP BeadChip panel from Illumina was used to estimate the levels of introgression of breeds considering the Maximum likelihood, Bayesian, and Single Regression methods. After genotype quality control, 32,308 SNPs were considered in the analysis. Furthermore, three thresholds to prune out SNPs in linkage disequilibrium higher than 0.10, 0.05, and 0.01 were considered, resulting in 15,286, 7,652, and 1,582 SNPs, respectively. For k = 2, the proportion of taurine and indicine varied from the expected proportion based on pedigree for all methods studied. For k = 3, the Regression method was able to differentiate the animals into three main clusters assigned to each purebred breed, which appears more reasonable from a biological viewpoint. Analyzing the data considering k = 2 seems to be more appropriate for Canchim-MA animals due to its biological interpretation. The usage of 32,308 SNPs in the analyses resulted in similar findings between the estimated and expected breed proportions. Using the Regression approach, a contribution of Indubrasil was observed in Canchim-MA when k = 3 was considered. Genetic parameter estimation could account for this breed composition information as a source of variation in order to improve the accuracy of genetic models. Our findings may help assemble appropriate reference populations for genomic prediction for Canchim-MA in order to improve prediction accuracy. Using the information on the level of introgression in each individual could also be useful in breeding or crossing design to improve individual heterosis in crossbred cattle.
Fast-PPP assessment in European and equatorial region near the solar cycle maximum
NASA Astrophysics Data System (ADS)
Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume
2014-05-01
Fast Precise Point Positioning (Fast-PPP) is a technique that provides quick high-accuracy navigation with ambiguity-fixing capability, thanks to accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focussed on user-domain performance. Recently, a mature evolution of the technique has adopted a dual-service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely accepted Global Ionospheric Maps (GIM), whose declared accuracies are 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. The Fast-PPP user-domain positioning benefits from such precise ionospheric modelling. The convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes, not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement in ionospheric modelling translates directly into the accuracy of single-frequency mass-market users, who achieve 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence level (sigma), such high-accuracy navigation is protected with safety integrity bounds.
Rettig, Oliver; Krautwurst, Britta; Maier, Michael W; Wolf, Sebastian I
2015-12-09
Surgical interventions at the shoulder may alter function of the shoulder complex. Clinically, the outcome can be assessed by universal goniometry. Marker-based motion capture may not resemble these results due to differing angle definitions. The clinical inspection of bilateral arm abduction for assessing shoulder dysfunction is performed with a marker based 3D optical measurement method. An anatomical zero position of shoulder pose is proposed to determine absolute angles according to the Neutral-0-Method as used in orthopedic context. Static shoulder positions are documented simultaneously by 3D marker tracking and universal goniometry in 8 young and healthy volunteers. Repetitive bilateral arm abduction movements of at least 150° range of motion are monitored. Similarly a subject with gleno-humeral osteoarthritis is monitored for demonstrating the feasibility of the method and to illustrate possible shoulder dysfunction effects. With mean differences of less than 2°, the proposed anatomical zero position results in good agreement between shoulder elevation/depression angles determined by 3D marker tracking and by universal goniometry in static positions. Lesser agreement is found for shoulder pro-/retraction with systematic deviations of up to 6°. In the bilateral arm abduction movements the volunteers perform a common and specific pattern in clavicula-thoracic and gleno-humeral motion with maximum shoulder angles of 32° elevation, 5° depression and 45° protraction, respectively, whereas retraction is hardly reached. Further, they all show relevant out of (frontal) plane motion with anteversion angles of 30° in overhead position (maximum abduction). With increasing arm anteversion the shoulder is increasingly retroverted, with a maximum of 20° retroversion. The subject with gleno-humeral osteoarthritis shows overall less shoulder abduction range of motion but with increased out-of-plane movement during abduction. The proposed anatomical zero definition for shoulder pose fills the missing link for determining absolute joint angles for shoulder elevation/depression and pro-/retraction. For elevation-/depression the accuracy suits clinical expectations very well with mean differences less than 2° and limits of agreement of 8.6° whereas for pro-/retraction the accuracy in individual cases may be inferior with limits of agreement of up to 24.6°. This has critically to be kept in mind when applying this concept to shoulder intervention studies.
Improved design of support for large aperture space lightweight mirror
NASA Astrophysics Data System (ADS)
Wang, Chao; Ruan, Ping; Liu, Qimin
2013-08-01
In order to design a rational large-aperture space mirror that can adapt to the space gravity and thermal environment, the choice of material, the lightweighting of the mirror and the design of the support were taken into account in detail: a double-deck structure with traditional flexible hinges was designed, and an analytical mathematical model of the mirror system was established. The design adopts six supports on the back. In order to avoid over-constraint, the mirror is connected to three middle transition pieces through six flexible hinges, and the three transition pieces are then connected to the support plate through another three flexible hinges. However, the initial structure was unable to reach the expected design target and needed further adjustment. By improving and optimizing the original structure, a new type of flexible hinge in the shape of the letter A was finally designed. Compared with the traditional flexible hinge structure, the new structure is simpler and has less influence on the surface figure accuracy of the mirror. Using the finite element analysis method, the static, dynamic and thermal characteristics of the mirror system were analyzed. The results show that the maximum PV value is 37 nm and the maximum RMS value is 10.4 nm when the gravity load is applied. Furthermore, the maximum PV value is 46 nm and the maximum RMS value is 10.5 nm under the load case of gravity coupled with a 4℃ uniform temperature rise. These results satisfy the optical design requirements. The first-order natural frequency of the mirror component is 130 Hz according to the modal analysis, so the mirror structure has a sufficiently high fundamental frequency, and the structural strength meets the demands of the overload and random vibration environments. This indicates that the mirror component structure has sufficient dynamic and static stiffness and thermal stability, meeting the design requirements.
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
ERIC Educational Resources Information Center
Frick, Bernd; Maihaus, Michael
2016-01-01
Using two representative samples of some 74,000 students and 11,000 graduates, respectively, we analyse the accuracy of students' wage expectations given their individual characteristics. We find that students are aware of the effects of most of their own characteristics, as a large number of determinants of expected and realised salaries do not…
Borba, Alexandre Meireles; José da Silva, Everton; Fernandes da Silva, André Luis; Han, Michael D; da Graça Naclério-Homem, Maria; Miloro, Michael
2018-01-12
To verify predicted versus obtained surgical movements in 2-dimensional (2D) and 3-dimensional (3D) measurements and compare the equivalence between these methods. A retrospective observational study of bimaxillary orthognathic surgeries was performed. Postoperative cone-beam computed tomographic (CBCT) scans were superimposed on preoperative scans and a lateral cephalometric radiograph was generated from each CBCT scan. After identification of the sella, nasion, and upper central incisor tip landmarks on 2D and 3D images, actual and planned movements were compared by cephalometric measurements. One-sample t test was used to statistically evaluate results, with expected mean discrepancy values ranging from 0 to 2 mm. Equivalence of 2D and 3D values was compared using paired t test. The final sample of 46 cases showed by 2D cephalometry that differences between actual and planned movements in the horizontal axis were statistically relevant for expected means of 0, 0.5, and 2 mm without relevance for expected means of 1 and 1.5 mm; vertical movements were statistically relevant for expected means of 0 and 0.5 mm without relevance for expected means of 1, 1.5, and 2 mm. For 3D cephalometry in the horizontal axis, there were statistically relevant differences for expected means of 0, 1.5, and 2 mm without relevance for expected means of 0.5 and 1 mm; vertical movements showed statistically relevant differences for expected means of 0, 0.5, 1.5 and 2 mm without relevance for the expected mean of 1 mm. Comparison of 2D and 3D values displayed statistical differences for the horizontal and vertical axes. Comparison of 2D and 3D surgical outcome assessments should be performed with caution because there seems to be a difference in acceptable levels of accuracy between these 2 methods of evaluation. Moreover, 3D accuracy studies should no longer rely on a 2-mm level of discrepancy but on a 1-mm level. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
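The statistical comparison described (one-sample t tests of the planned-versus-actual discrepancies against expected means from 0 to 2 mm, plus a paired t test between the 2D and 3D measurements) can be sketched as follows; the discrepancy arrays are random placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical planned-minus-actual horizontal discrepancies (mm) for 46 cases,
# measured on 2D cephalograms and on the 3D CBCT models.
diff_2d = rng.normal(loc=0.8, scale=0.9, size=46)
diff_3d = rng.normal(loc=0.6, scale=0.7, size=46)

# One-sample t tests against each expected mean discrepancy (0 to 2 mm).
for expected in (0.0, 0.5, 1.0, 1.5, 2.0):
    t, p = stats.ttest_1samp(diff_2d, popmean=expected)
    print(f"2D vs expected mean {expected} mm: t={t:.2f}, p={p:.3f}")

# Paired t test for equivalence of the 2D and 3D measurements on the same cases.
t, p = stats.ttest_rel(diff_2d, diff_3d)
print(f"2D vs 3D paired: t={t:.2f}, p={p:.3f}")
```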
Why Hasn't Earth Warmed as Much as Expected?
NASA Technical Reports Server (NTRS)
Schwartz, Stephen E.; Charlson, Robert J.; Kahn, Ralph A.; Ogren, John A.; Rodhe, Henning
2010-01-01
The observed increase in global mean surface temperature (GMST) over the industrial era is less than 40% of that expected from observed increases in long-lived greenhouse gases together with the best-estimate equilibrium climate sensitivity given by the 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Possible reasons for this warming discrepancy are systematically examined here. The warming discrepancy is found to be due mainly to some combination of two factors: the IPCC best estimate of climate sensitivity being too high and/or the greenhouse gas forcing being partially offset by forcing by increased concentrations of atmospheric aerosols; the increase in global heat content due to thermal disequilibrium accounts for less than 25% of the discrepancy, and cooling by natural temperature variation can account for only about 15 %. Current uncertainty in climate sensitivity is shown to preclude determining the amount of future fossil fuel CO2 emissions that would be compatible with any chosen maximum allowable increase in GMST; even the sign of such allowable future emissions is unconstrained. Resolving this situation, by empirical determination of the earth's climate sensitivity from the historical record over the industrial period or through use of climate models whose accuracy is evaluated by their performance over this period, is shown to require substantial reduction in the uncertainty of aerosol forcing over this period.
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h and 0.5-0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.
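As a rough sketch of one of the tested methods, the concentration-weighted-trajectory (CWT) field can be built by distributing each trajectory's receptor concentration over the grid cells it crosses and then comparing the reconstruction with the known virtual source field via Spearman's rank correlation. The grid, the random-walk back-trajectories, and the single-hot-spot source below are invented for illustration and do not reproduce the paper's setup.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
nx, ny, n_traj, n_steps = 20, 20, 500, 50

# Known virtual source field (a single hot spot) used to generate receptor data.
source = np.zeros((ny, nx))
source[12:15, 4:7] = 1.0

# Random-walk back-trajectories from the receptor at the grid centre.
trajs = np.cumsum(rng.integers(-1, 2, size=(n_traj, n_steps, 2)), axis=1) + [ny // 2, nx // 2]
trajs = np.clip(trajs, 0, [ny - 1, nx - 1])

# Receptor concentration for each trajectory: residence time over the source.
conc = np.array([source[t[:, 0], t[:, 1]].sum() for t in trajs])

# CWT reconstruction: concentration-weighted residence time per grid cell.
num, den = np.zeros((ny, nx)), np.zeros((ny, nx))
for t, c in zip(trajs, conc):
    np.add.at(num, (t[:, 0], t[:, 1]), c)
    np.add.at(den, (t[:, 0], t[:, 1]), 1.0)
cwt = np.where(den > 0, num / np.maximum(den, 1), 0.0)

# Accuracy measure used in the abstract: Spearman rank correlation with the truth.
rho, _ = spearmanr(source.ravel(), cwt.ravel())
print(f"Spearman correlation between true and reconstructed sources: {rho:.2f}")
```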
Optimizing baryon acoustic oscillation surveys - II. Curvature, redshifts and external data sets
NASA Astrophysics Data System (ADS)
Parkinson, David; Kunz, Martin; Liddle, Andrew R.; Bassett, Bruce A.; Nichol, Robert C.; Vardanyan, Mihran
2010-02-01
We extend our study of the optimization of large baryon acoustic oscillation (BAO) surveys to return the best constraints on the dark energy, building on Paper I of this series by Parkinson et al. The survey galaxies are assumed to be pre-selected active, star-forming galaxies observed by their line emission with a constant number density across the redshift bin. Star-forming galaxies have a redshift desert in the region 1.6 < z < 2, and so this redshift range was excluded from the analysis. We use the Seo & Eisenstein fitting formula for the accuracies of the BAO measurements, using only the information for the oscillatory part of the power spectrum as distance and expansion rate rulers. We go beyond our earlier analysis by examining the effect of including curvature on the optimal survey configuration and updating the expected `prior' constraints from Planck and the Sloan Digital Sky Survey. We once again find that the optimal survey strategy involves minimizing the exposure time and maximizing the survey area (within the instrumental constraints), and that all time should be spent observing in the low-redshift range (z < 1.6) rather than beyond the redshift desert, z > 2. We find that, when assuming a flat universe, the optimal survey makes measurements in the redshift range 0.1 < z < 0.7, but that including curvature as a nuisance parameter requires us to push the maximum redshift to 1.35, to remove the degeneracy between curvature and evolving dark energy. The inclusion of expected other data sets (such as WiggleZ, the Baryon Oscillation Spectroscopic Survey and a stage III Type Ia supernova survey) removes the necessity of measurements below redshift 0.9, and pushes the maximum redshift up to 1.5. We discuss considerations in determining the best survey strategy in light of uncertainty in the true underlying cosmological model.
Measuring orthometric water heights from lightweight Unmanned Aerial Vehicles (UAVs)
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Olesen, Daniel; Jakobsen, Jakob; Reyna-Gutierrez, Jose Antonio; Bauer-Gottwein, Peter
2016-04-01
A better quantitative understanding of hydrologic processes requires better observations of hydrological variables, such as surface water area, water surface level, its slope and its temporal change. However, ground-based measurements of water heights are restricted to the in-situ measuring stations. Hence, the objective of remote sensing hydrology is to retrieve these hydraulic variables from spaceborne and airborne platforms. The forthcoming Surface Water and Ocean Topography (SWOT) satellite mission will be able to acquire water heights with an expected accuracy of 10 centimeters for rivers that are at least 100 m wide. Nevertheless, spaceborne missions will always face the limitations of: i) a low spatial resolution, which makes it difficult to separate water from interfering surrounding areas and prevents the tracking of terrestrial water bodies from detecting water heights in small rivers or lakes; and ii) a limited temporal resolution, which limits the ability to capture rapid temporal changes, especially during extremes. Unmanned Aerial Vehicles (UAVs) are one technology able to fill the gap between spaceborne and ground-based observations, ensuring 1) high spatial resolution; 2) tracking of the water bodies better than any satellite technology; 3) timing of the sampling that depends only on the operator; and 4) flexibility of the payload. Hence, this study focused on categorizing and testing sensors capable of measuring the range between the UAV and the water surface. The orthometric height of the water surface is then retrieved by subtracting the height above water measured by the sensors from the altitude above sea level retrieved by the onboard GPS. The following sensors were tested: a) a radar, b) a sonar, and c) a laser digital-camera based prototype developed at the Technical University of Denmark. The tested sensors comply with the weight constraint of small UAVs (around 1.5 kg). The sensors were evaluated in terms of accuracy, maximum ranging distance and beam divergence. The sonar demonstrated a maximum ranging distance of 10 m and the laser prototype of 15 m, whilst the radar is potentially able to measure the range to the water surface from a height of up to 50 m. After numerous test flights above a lake with an approximately horizontal water surface, estimation of the orthometric water height error, including the overall accuracy of the combined GPS-sensor system, was possible. The RTK GPS system proved able to deliver a relative vertical accuracy better than 5-7 cm. The radar proved to have the best reliability, with an accuracy of generally a few cm (0.7-1.3% of the ranging distance). The accuracy of the sonar and laser varies from a few cm (0.7-1.6% of the ranging distance) to some tens of cm, because sonar measurements are generally influenced by noise and turbulence generated by the propellers of the UAV, and the laser prototype is affected by drone vibrations and water waviness. However, the laser prototype demonstrated the lowest beam divergence, which is required to measure unconventional remote sensing targets, such as sinkholes and Mexican cenotes, and to clearly distinguish between rivers and interfering surroundings, such as riparian vegetation.
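The height retrieval itself is a simple difference: the orthometric water level is the UAV's orthometric altitude (ellipsoidal GNSS height minus geoid undulation) minus the sensor-measured range to the water surface. A minimal sketch follows, with made-up numbers and a deliberately simplified error combination that only illustrates the idea of a range-proportional sensor error on top of the GNSS vertical error.

```python
import math

def orthometric_water_height(h_ellipsoidal, geoid_undulation, range_to_water):
    """Water surface orthometric height from UAV GNSS altitude and ranging."""
    h_orthometric_uav = h_ellipsoidal - geoid_undulation
    return h_orthometric_uav - range_to_water

def combined_sigma(sigma_gnss, range_to_water, sensor_rel_error):
    """Rough 1-sigma error: GNSS vertical error plus range-proportional sensor error."""
    sigma_range = sensor_rel_error * range_to_water
    return math.sqrt(sigma_gnss**2 + sigma_range**2)

# Hypothetical flight over a lake: 96.40 m ellipsoidal UAV altitude, 38.20 m geoid
# undulation, radar range of 21.3 m, RTK GPS sigma 0.06 m, radar error ~1% of range.
level = orthometric_water_height(96.40, 38.20, 21.3)
sigma = combined_sigma(0.06, 21.3, 0.01)
print(f"water level: {level:.2f} m +/- {sigma:.2f} m")
```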
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
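A stripped-down version of the image-domain workflow described, calibrating a material basis from phantom measurements at known concentrations and then decomposing a voxel's multi-energy attenuation, can be sketched with an ordinary least-squares decomposition standing in for the maximum a posteriori estimator. The energy-bin attenuations, noise levels, and concentrations below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 5  # photon-counting energy bins

# Calibration phantom: known gadolinium concentrations (mM); each row of
# `signals` is the measured attenuation in the five bins (Gd + water background).
gd_conc = np.array([0.0, 15.0, 30.0, 60.0, 90.0])
true_basis = rng.uniform(0.5, 2.0, size=(n_bins, 3))        # columns: Gd, bone, water
signals = np.outer(gd_conc / 90.0, true_basis[:, 0]) + true_basis[:, 2]
signals += rng.normal(scale=0.01, size=signals.shape)

# Calibrate the gadolinium basis vector by regressing signal against concentration;
# the offset recovered per bin plays the role of the water basis vector.
A = np.column_stack([gd_conc / 90.0, np.ones_like(gd_conc)])
coef, *_ = np.linalg.lstsq(A, signals, rcond=None)             # rows: Gd slope, offset
basis = np.column_stack([coef[0], true_basis[:, 1], coef[1]])  # calibrated Gd, bone, water

# Decompose an unknown voxel (here ~45 mM Gd in water) by least squares.
voxel = 0.5 * true_basis[:, 0] + true_basis[:, 2] + rng.normal(scale=0.01, size=n_bins)
x, *_ = np.linalg.lstsq(basis, voxel, rcond=None)
print(f"estimated Gd fraction of calibration maximum: {x[0]:.2f} (expected ~0.50)")
```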
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, G P; Logan, C M
We have estimated interference from external background radiation for a computed tomography (CT) scanner. Our intention is to estimate the interference that would be expected for the high-resolution SkyScan 1072 desk-top x-ray microtomography system. The SkyScan system uses a Microfocus x-ray source capable of a 10-μm focal spot at a maximum current of 0.1 mA and a maximum energy of 130 kVp. All predictions made in this report assume the x-ray source is operated at the smallest spot size, maximum energy, and maximum current. Some of the system's basic geometry used for these estimates is: (1) source-to-detector distance: 250 mm; (2) minimum object-to-detector distance: 40 mm; and (3) maximum object-to-detector distance: 230 mm. This is a first-order, rough estimate of the quantity of interference expected at the system detector caused by background radiation. The amount of interference is expressed using the ratio of exposures expected at the detector of the CT system. The exposure values for the SkyScan system are determined by scaling the measured values of an x-ray source and the background radiation, adjusting for the difference in source-to-detector distance and current. The x-ray source that was used for these measurements was not the SkyScan Microfocus x-ray tube. Measurements were made using an x-ray source that was operated at the same applied voltage but higher current for better statistics.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
NASA Technical Reports Server (NTRS)
Yanow, Gilbert
1996-01-01
Pyranometer generates output potential of about 300 mV in maximum sunlight. Designed to monitor insolation with accuracy within 5 percent of that of instruments ordinarily used for this purpose. Suitable for use in school laboratories and perhaps in commercial facilities where expense of more precise instrument not justified. Slightly more complex pyranometer intended primarily for use in agricultural setting described in "Inexpensive Meter For Total Solar Radiation" (NPO-16741).
NASA Astrophysics Data System (ADS)
Estes, M. G., Jr.; Insaf, T.; Crosson, W. L.; Al-Hamdan, M. Z.
2017-12-01
Heat exposure metrics (maximum and minimum daily temperatures) have a close relationship with human health. While meteorological station data provide a good source of point measurements, temporally and spatially consistent temperature data are needed for health studies. Reanalysis data such as the North American Land Data Assimilation System's (NLDAS) 12-km gridded product are an effort to resolve spatio-temporal environmental data issues; however, the resolution may be too coarse to accurately capture the effects of elevation, mixed land/water areas, and urbanization. As part of this NASA Applied Sciences Program-funded project, the NLDAS 12-km air temperature product has been downscaled to 1 km using MODIS Land Surface Temperature patterns. Limited validation of the native 12-km NLDAS reanalysis data has been undertaken. Our objective is to evaluate the accuracy of both the 12-km and 1-km downscaled products using the US Historical Climatology Network station data geographically dispersed across New York State. Statistical methods including correlation, scatterplots, time series and summary statistics were used to determine the accuracy of the remotely-sensed maximum and minimum temperature products. The specific effects of elevation and slope on remotely-sensed temperature product accuracy were determined with 10-m digital elevation data that were used to calculate percent slope and link with the temperature products at multiple scales. Preliminary results indicate the downscaled temperature product improves accuracy over the native 12-km temperature product, with average correlation improvements from 0.81 to 0.85 for minimum and 0.71 to 0.79 for maximum temperatures in 2009. However, the benefits vary temporally and geographically. Our results will inform health studies using remotely-sensed temperature products to determine health risk from excessive heat by providing a more robust assessment of the accuracy of the 12-km NLDAS product and the additional accuracy gained from the 1-km downscaled product. Also, the results will be shared with the National Weather Service to determine potential benefits to heat warning systems and evaluated for inclusion in the Centers for Disease Control and Prevention (CDC) Environmental Public Health Tracking Network as a resource for the health community.
Utilization of waste heat in trucks for increased fuel economy
NASA Technical Reports Server (NTRS)
Leising, C. J.; Purohit, G. P.; Degrey, S. P.; Finegold, J. G.
1978-01-01
The waste heat utilization concepts include preheating, regeneration, turbocharging, turbocompounding, and Rankine engine compounding. Predictions are based on fuel-air cycle analyses, computer simulation, and engine test data. All options are evaluated in terms of maximum theoretical improvements, but the Diesel and adiabatic Diesel are also compared on the basis of maximum expected improvement and expected improvement over a driving cycle. The study indicates that Diesels should be turbocharged and aftercooled to the maximum possible level. The results reveal that Diesel driving cycle performance can be increased by 20% through increased turbocharging, turbocompounding, and Rankine engine compounding. The Rankine engine compounding provides about three times as much improvement as turbocompounding but also costs about three times as much. Performance for either can be approximately doubled if applied to an adiabatic Diesel.
Accuracy of nursing diagnosis "readiness for enhanced hope" in patients with chronic kidney disease.
Silva, Renan Alves; Melo, Geórgia Alcântara Alencar; Caetano, Joselany Áfio; Lopes, Marcos Venícios Oliveira; Butcher, Howard Karl; Silva, Viviane Martins da
2017-07-06
To analyse the accuracy of the nursing diagnosis readiness for enhanced hope in patients with chronic kidney disease. This is a cross-sectional study with 62 patients in the haemodialysis clinic conducted from August to November 2015. The Herth Hope Scale was used to create definitions of the defining characteristics of the North American Nursing Diagnosis Association International. We analysed the measures of sensitivity, specificity, predictive value, likelihood ratio, and odds ratio of the defining characteristics of the diagnosis. Of the characteristics, 82.22% presented the diagnosis. The defining characteristics "Expresses desire to enhance congruency of expectations with desires" and "Expresses desire to enhance problem solving to meet goals" increased the chance of having the diagnosis by factors of eleven and five, respectively, and both showed good accuracy measures.
ERIC Educational Resources Information Center
Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika
2013-01-01
Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…
NASA Astrophysics Data System (ADS)
Fang, W.; Quan, S. H.; Xie, C. J.; Tang, X. F.; Wang, L. L.; Huang, L.
2016-03-01
In this study, a direct-current/direct-current (DC/DC) converter with maximum power point tracking (MPPT) is developed to down-convert the high voltage DC output from a thermoelectric generator to the lower voltage required to charge batteries. To improve the tracking accuracy and speed of the converter, a novel MPPT control scheme characterized by an aggregated dichotomy and gradient (ADG) method is proposed. In the first stage, the dichotomy algorithm is used as a fast search method to find the approximate region of the maximum power point. The gradient method is then applied for rapid and accurate tracking of the maximum power point. To validate the proposed MPPT method, a test bench composed of an automobile exhaust thermoelectric generator was constructed for harvesting the automotive exhaust heat energy. Steady-state and transient tracking experiments under five different load conditions were carried out using a DC/DC converter with the proposed ADG and with three traditional methods. The experimental results show that the ADG method can track the maximum power within 140 ms with a 1.1% error rate when the engine operates at 3300 rpm and 71 N·m, which is superior to the performance of the single dichotomy method, the single gradient method and the perturbation and observation method from the viewpoint of improved tracking accuracy and speed.
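The two-stage search described above lends itself to a compact illustration. The Python sketch below is not the authors' controller code: the power-voltage curve, tolerances, and step size are hypothetical stand-ins, but the structure follows the ADG idea of a dichotomy search on the sign of dP/dV followed by gradient ascent near the peak.

```python
import numpy as np

def adg_mppt(power, v_lo, v_hi, coarse_tol=0.5, step=0.05, iters=50):
    """Two-stage MPP search: dichotomy to bracket the peak, then gradient ascent.

    `power(v)` is any callable returning output power at operating voltage v.
    This is an illustrative sketch, not the controller from the paper.
    """
    # Stage 1: dichotomy on the sign of dP/dV to shrink the search interval.
    while v_hi - v_lo > coarse_tol:
        v_mid = 0.5 * (v_lo + v_hi)
        dP = power(v_mid + 1e-3) - power(v_mid - 1e-3)
        if dP > 0:          # still climbing: MPP lies to the right
            v_lo = v_mid
        else:               # past the peak: MPP lies to the left
            v_hi = v_mid
    # Stage 2: gradient ascent from the midpoint of the bracket.
    v = 0.5 * (v_lo + v_hi)
    for _ in range(iters):
        grad = (power(v + 1e-3) - power(v - 1e-3)) / 2e-3
        v += step * grad
    return v, power(v)

# Toy power curve with a single interior maximum (hypothetical): P = V * I(V).
power = lambda v: v * (12.0 - 0.8 * v)          # peaks at V = 7.5
v_mpp, p_mpp = adg_mppt(power, v_lo=0.0, v_hi=12.0)
print(f"MPP near V = {v_mpp:.2f} V, P = {p_mpp:.2f} W")
```

In a real converter the `power` callable would be replaced by perturbing the duty cycle and reading back the sensed voltage and current.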
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g., spacecraft fields and misalignments, and from inside, e.g., zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
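As a rough illustration of the accuracy bookkeeping described above (absolute mean plus 3 sigma during quiet times), the following Monte Carlo sketch sums a few independent error terms and evaluates the metric. The error sources, their magnitudes, and their distributions are hypothetical placeholders, not GOES-R budget values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical error budget terms (nT); the actual GOES-R contributors differ.
spacecraft_field = rng.normal(0.0, 0.3, n_trials)            # residual spacecraft field
misalignment     = rng.normal(0.0, 0.2, n_trials)            # mounting/attitude error
zero_offset      = rng.normal(0.1, 0.3, n_trials)            # post-calibration offset drift
scale_factor     = rng.normal(0.0, 0.002, n_trials) * 100.0  # 0.2% of a 100 nT quiet field

total_error = spacecraft_field + misalignment + zero_offset + scale_factor

# Quiet-time accuracy metric: |mean| + 3*sigma (a storm-time check would use 2*sigma).
accuracy_quiet = abs(total_error.mean()) + 3.0 * total_error.std()
print(f"quiet-time accuracy estimate: {accuracy_quiet:.2f} nT (requirement: 1.7 nT)")
```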
β-hCG resolution times during expectant management of tubal ectopic pregnancies.
Mavrelos, D; Memtsa, M; Helmy, S; Derdelis, G; Jauniaux, E; Jurkovic, D
2015-05-21
A subset of women with a tubal ectopic pregnancy can be safely managed expectantly. Expectant management involves a degree of disruption with hospital visits to determine serum β-hCG (β-human chorionic gonadotrophin) concentration until the pregnancy test becomes negative and expectant management is considered complete. The length of time required for the pregnancy test to become negative and the parameters that influence this interval have not been described. Information on the likely length of follow up would be useful for women considering expectant management of their tubal ectopic pregnancy. This was a retrospective study at a tertiary referral center in an inner city London Hospital. We included women who were diagnosed with a tubal ectopic pregnancy by transvaginal ultrasound between March 2009 and March 2014. During the study period 474 women were diagnosed with a tubal ectopic pregnancy and 256 (54 %) of them fulfilled our management criteria for expectant management. A total of 158 (33 %) women had successful expectant management and in those cases we recorded the diameter of the ectopic pregnancy (mm), the maximum serum β-hCG (IU/L) and levels during follow up until resolution as well as the interval to resolution (days). The median interval from maximum serum β-hCG concentration to resolution was 18.0 days (IQR 11.0-28.0). The maximum serum β-hCG concentration and the rate of decline of β-hCG were independently associated with the length of follow up. Women's age and size of ectopic pregnancy did not have significant effects on the length of follow up. Women undergoing expectant management of ectopic pregnancy can be informed that the likely length of follow up is under 3 weeks and that it positively correlates with initial β-hCG level at the time of diagnosis.
Value for money: protecting endangered species on Danish heathland.
Strange, Niels; Jacobsen, Jette B; Thorsen, Bo J; Tarp, Peter
2007-11-01
Biodiversity policies in the European Union (EU) are mainly implemented through the Birds and Habitats Directives as well as the establishment of Natura 2000, a network of protected areas throughout the EU. Considerable resources must be allocated for fulfilling the Directives and the question of optimal allocation is as important as it is difficult. In general, economic evaluations of conservation targets at most consider the costs and seldom the welfare economic benefits. In the present study, we use welfare economic benefit estimates concerning the willingness-to-pay for preserving endangered species and for the aggregate area of heathland preserved in Denmark. Similarly, we obtain estimates of the welfare economic cost of habitat restoration and maintenance. Combining these welfare economic measures with expected species coverage, we are able to estimate the potential welfare economic contribution of a conservation network. We compare three simple nonprobabilistic strategies likely to be used in day-to-day policy implementation: i) a maximum selected area strategy, ii) a hotspot selection strategy, and iii) a minimizing cost strategy, and two more advanced and informed probabilistic strategies: i) a maximum expected coverage strategy and ii) a strategy for maximum expected welfare economic gain. We show that the welfare economic performance of the strategies differ considerably. The comparison between the expected coverage and expected welfare shows that for the case considered, one may identify an optimal protection level above which additional coverage only comes at increasing welfare economic loss.
Leonid predictions for the period 2001-2100
NASA Astrophysics Data System (ADS)
Maslov, Mikhail
2007-02-01
This article provides a set of summaries of what to expect from the Leonid meteor shower for each year of the period 2001-2100. Each summary contains the moments of maximum/maxima, their expected intensity and some comments about average meteor brightness during them. Special attention was paid to background (traditional) maxima, which are characterized by their expected times and intensities.
40 CFR 60.116b - Monitoring of operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... range. (e) Available data on the storage temperature may be used to determine the maximum true vapor...: (i) Available data on the Reid vapor pressure and the maximum expected storage temperature based on... Liquid Storage Vessels (Including Petroleum Liquid Storage Vessels) for Which Construction...
Detecting the supernova breakout burst in terrestrial neutrino detectors
Wallace, Joshua; Burrows, Adam; Dolence, Joshua C.
2016-02-01
Here, we calculate the distance-dependent performance of a few representative terrestrial neutrino detectors in detecting and measuring the properties of the νe breakout burst light curve in a Galactic core-collapse supernova. The breakout burst is a signature phenomenon of core collapse and offers a probe into the stellar core through collapse and bounce. We also examine cases of no neutrino oscillations and oscillations due to normal and inverted neutrino-mass hierarchies. For the normal hierarchy, other neutrino flavors emitted by the supernova overwhelm the νe signal, making a detection of the breakout burst difficult. Furthermore, for the inverted hierarchy (IH), some detectors at some distances should be able to see the νe breakout burst peak and measure its properties. For the IH, the maximum luminosity of the breakout burst can be measured at 10 kpc to accuracies of ~30% for Hyper-Kamiokande (Hyper-K) and ~60% for the Deep Underground Neutrino Experiment (DUNE). Super-Kamiokande (Super-K) and the Jiangmen Underground Neutrino Observatory (JUNO) lack the mass needed to make an accurate measurement. For the IH, the time of the maximum luminosity of the breakout burst can be measured in Hyper-K to an accuracy of ~3 ms at 7 kpc, in DUNE to ~2 ms at 4 kpc, and JUNO and Super-K can measure the time of maximum luminosity to an accuracy of ~2 ms at 1 kpc. Detector backgrounds in IceCube render a measurement of the νe breakout burst unlikely. For the IH, a measurement of the maximum luminosity of the breakout burst could be used to differentiate between nuclear equations of state.
NASA Astrophysics Data System (ADS)
Siahpolo, Navid; Gerami, Mohsen; Vahdani, Reza
2016-09-01
Evaluating the capability of elastic load patterns (LPs), including seismic-code patterns and modified LPs such as the Method of Modal Combination (MMC) and Upper Bound Pushover Analysis (UBPA), in estimating inelastic demands of non-deteriorating steel moment frames is the main objective of this study. The Nonlinear Static Procedure (NSP) is implemented and its results are compared with Nonlinear Time History Analysis (NTHA). The focus is on the effects of near-fault pulse-like ground motions. The primary demands of interest are the maximum floor displacement, the maximum story drift angle over the height, the maximum global ductility, the maximum inter-story ductility, and the capacity curves. Five types of LPs are selected and the inelastic demands are calculated under four levels of inter-story target ductility (μt) using OpenSees software. The results show that increasing μt coincides with migration of the peak demands over the height from the top to the bottom stories; consequently, all LPs estimate the story lateral displacement accurately at the lower stories, and the results are almost independent of the number of stories. The inter-story drift angle (IDR) obtained from the MMC method is the most accurate among the LPs considered, although its accuracy decreases with increasing μt, so that as the number of stories increases the IDR is smaller or larger than the NTHA values depending on where the results are captured. In addition, increasing μt decreases the accuracy of all LPs in determining the critical story position; here too, the MMC method coincides best with the distribution of inter-story ductility over the height.
ScienceCast 94: Solar Max Double Peaked
2013-03-01
Something unexpected is happening on the sun. 2013 is supposed to be the year of Solar Max, but solar activity is much lower than expected. At least one leading forecaster expects the sun to rebound with a double-peaked maximum later this year.
NASA Technical Reports Server (NTRS)
Thompson, R. H.; Gambardella, P. J.
1980-01-01
The Solar Maximum Mission (SMM) spacecraft provides an excellent opportunity for evaluating attitude determination accuracies achievable with tracking instruments such as fixed head star trackers (FHSTs). As part of its payload, SMM carries a highly accurate fine pointing Sun sensor (FPSS). The FPSS provides an independent check of the pitch and yaw parameters computed from observations of stars in the FHST field of view. A method to determine the alignment of the FHSTs relative to the FPSS using spacecraft data is applied. Two methods that were used to determine distortions in the 8 degree by 8 degree field of view of the FHSTs using spacecraft data are also presented. The attitude determination accuracy performance of the in-flight calibrated FHSTs is evaluated.
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
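A minimal sketch of the kind of full factorial analysis of variance described above is shown below, using statsmodels. The factor names, levels, and the synthetic response are hypothetical placeholders; in the study the response values came from a finite element model rather than the toy formula used here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Hypothetical factor levels standing in for the FEM-model outputs in the study.
spacing  = ["constant", "linearly_increasing"]   # inter-ring distances (categorical)
diameter = [1.0, 2.0, 3.0]                       # electrode diameter, cm
rings    = [2, 3, 4]                             # number of concentric rings

rows = []
for s in spacing:
    for d in diameter:
        for r in rings:
            # Placeholder response: smaller error for variable spacing and more rings.
            err = 5.0 - 1.0 * (s == "linearly_increasing") - 0.6 * r + 0.3 * d
            rows.append({"spacing": s, "diameter": d, "rings": r,
                         "relative_error": err + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

# Full factorial model: main effects plus all interactions.
model = ols("relative_error ~ C(spacing) * diameter * rings", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```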
Effect of ethanol on psychomotor performance and on risk taking behaviour.
Farquhar, K; Lambert, K; Drummond, G B; Tiplady, B; Wright, P
2002-12-01
Ethanol may increase the willingness to take risks, but this issue remains controversial. We used a risk-taking paradigm in which volunteers answered a series of general knowledge questions with numerical answers and were asked to judge the length of a line that would just fit into a given gap. A maximum score was given for an exactly correct answer. For answers that were less than the correct value, the score was reduced gradually to zero, while answers even slightly over the correct value were penalized considerably. Total points were rewarded by cash payments, so volunteers were taking real risks when making their responses. Performance was assessed in a two-period, double-blind crossover study, comparing ethanol (0.7 g/kg) with placebo in 20 female volunteers aged 19-20 years. Tests were carried out before and at 45 min after dosing. Mean (SD) ethanol blood alcohol concentrations were 65 (10.5) mg/100 ml. Ethanol impaired the skill/ability measure of the length estimation test (SD of difference between length of line and gap), which increased from 5.9 to 6.6 (p < 0.05), indicating a reduced accuracy of estimation. The risk measures in both tasks were not significantly affected. The skill/ability measure in the general knowledge task was not significantly affected. Other performance tests showed that ethanol produced the expected impairment of both speed and accuracy. These results suggest that risk-taking is not increased by ethanol at doses approaching the UK legal limit for driving.
Ravichandran, Ramamoorthy; Binukumar, Jp
2011-01-01
International Basic Safety Standards (International Atomic Energy Agency, IAEA) provide guidance levels for diagnostic procedures in nuclear medicine, indicating the maximum usual activity for various diagnostic tests in terms of activities of injected radioactive formulations. An accuracy of ±10% in the activities of administered radiopharmaceuticals is recommended for the expected outcome in diagnostic and therapeutic nuclear medicine procedures. It is recommended that the long-term stability of isotope calibrators used in nuclear medicine be checked periodically using a long-lived check source, such as Cs-137, of suitable activity. In view of the unavailability of such a radioactive source, we tried to develop methods to maintain traceability of these instruments for certifying measured activities for human use. Two re-entrant chambers [HDR 1000 and the Selectron Source Dosimetry System (SSDS)] with I-125 and Ir-192 calibration factors in the Department of Radiotherapy were used to measure Iodine-131 (I-131) therapy capsules to establish traceability to the Mark V isotope calibrator of the Department of Nuclear Medicine. Special nylon jigs were fabricated to keep the I-131 capsule holder in position. Measured activities in all the chambers showed good agreement. The accuracy of the SSDS chamber in measuring Ir-192 activities over the last 5 years was within 0.5%, validating its role as the departmental standard for measuring activity. This method was adopted because the mean energies of I-131 and Ir-192 are comparable.
Liang, Jinyang; Zhou, Yong; Maslov, Konstantin I.
2013-01-01
A cross-correlation-based method is proposed to quantitatively measure transverse flow velocity using optical resolution photoacoustic (PA) microscopy enhanced with a digital micromirror device (DMD). The DMD is used to alternately deliver two spatially separated laser beams to the target. Through cross-correlation between the slow-time PA profiles measured from the two beams, the speed and direction of transverse flow are simultaneously derived from the magnitude and sign of the time shift, respectively. Transverse flows in the range of 0.50 to 6.84 mm/s are accurately measured using an aqueous suspension of 10-μm-diameter microspheres, and the root-mean-squared measurement accuracy is quantified to be 0.22 mm/s. The flow measurements are independent of the particle size for flows in the velocity range of 0.55 to 6.49 mm/s, which was demonstrated experimentally using three different sizes of microspheres (diameters: 3, 6, and 10 μm). The measured flow velocity follows an expected parabolic distribution along the depth direction perpendicular to the flow. Both maximum and minimum measurable velocities are investigated for varied distances between the two beams and varied total time for one measurement. This technique shows an accuracy of 0.35 mm/s at 0.3-mm depth in scattering chicken breast, making it promising for measuring flow in biological tissue. PMID:24002191
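The core of the method, estimating the transit time from the lag of the peak of the cross-correlation between the two slow-time profiles and converting it to a speed, can be sketched in a few lines. The sampling rate, beam separation, and synthetic signals below are assumptions for illustration only.

```python
import numpy as np

fs = 1000.0                      # slow-time sampling rate of the PA profiles, Hz (assumed)
beam_separation = 50e-6          # spacing between the two interrogation beams, m (assumed)
t = np.arange(0, 1.0, 1.0 / fs)

true_speed = 2.0e-3              # 2 mm/s, within the range reported above
delay = beam_separation / true_speed

# Synthetic slow-time PA signals: the downstream beam sees the same particle
# pattern shifted by `delay`, plus noise.
pattern = lambda tt: np.sin(2 * np.pi * 7 * tt) + 0.5 * np.sin(2 * np.pi * 13 * tt)
rng = np.random.default_rng(2)
sig_a = pattern(t) + 0.1 * rng.standard_normal(t.size)
sig_b = pattern(t - delay) + 0.1 * rng.standard_normal(t.size)

# Cross-correlate; the lag of the peak gives the transit time, its sign the direction.
xcorr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
t_shift = lags[np.argmax(xcorr)] / fs

speed = beam_separation / t_shift
print(f"estimated transverse speed: {speed * 1e3:.2f} mm/s (true 2.00 mm/s)")
```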
NASA Astrophysics Data System (ADS)
Saran, Sameer; Sterk, Geert; Kumar, Suresh
2009-10-01
Land use/land cover is an important watershed surface characteristic that affects surface runoff and erosion. Many of the available hydrological models divide the watershed into Hydrological Response Units (HRUs), which are spatial units with expected similar hydrological behaviour. The division into HRUs requires good-quality spatial data on land use/land cover. This paper presents different approaches to attain an optimal land use/land cover map based on remote sensing imagery for a Himalayan watershed in northern India. First, digital classifications using a maximum likelihood classifier (MLC) and a decision tree classifier were applied. The results obtained from the decision tree were better and improved further after post-classification sorting. However, the obtained land use/land cover map was not sufficient for the delineation of HRUs, since the agricultural land use/land cover class did not discriminate between the two major crops in the area, i.e. paddy and maize. Subsequently, digital classification on fused data (ASAR and ASTER) was attempted to map land use/land cover classes with emphasis on delineating the paddy and maize crops, but the supervised classification over the fused datasets did not provide the desired accuracy and proper delineation of paddy and maize crops. Eventually, we adopted a visual classification approach on the fused data. This second step, with a detailed classification system, resulted in better classification accuracy within the 'agricultural land' class, which will be further combined with topography and soil type to derive HRUs for physically based hydrological modeling.
ERIC Educational Resources Information Center
Atkinson, Michael L.; Allen, Vernon L.
This experiment was designed to investigate the generality-specificity of the accuracy of both encoders and decoders across different types of nonverbal behavior. It was expected that encoders and decoders would exhibit generality in their behavior--i.e., the same level of accuracy--on the dimension of behavior content…
Use of Fuzzycones for Sun-Only Attitude Determination: THEMIS Becomes ARTEMIS
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.; Felikson, Denis; Sedlak, Joseph E.
2009-01-01
In order for two THEMIS probes to successfully transition to ARTEMIS it will be necessary to determine attitudes with moderate accuracy using Sun sensor data only. To accomplish this requirement, an implementation of the Fuzzycones maximum likelihood algorithm was developed. The effect of different measurement uncertainty models on Fuzzycones attitude accuracy was investigated and a bin-transition technique was introduced to improve attitude accuracy using data with uniform error distributions. The algorithm was tested with THEMIS data and in simulations. The analysis results show that the attitude requirements can be met using Fuzzycones and data containing two bin-transitions.
Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.
Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru
2013-01-01
Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
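A simplified version of this simulation loop, re-noising a clean frame at several incident quantum levels and recording the tracking error of a toy marker tracker, might look as follows. The Poisson re-noising, the marker contrast, and the local-minimum tracker are illustrative simplifications of the NPS-based conversion-function procedure described above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)

# Flat field with a dark circular "implanted marker" at a known position.
size, marker_xy, marker_r = 128, (64, 48), 4
yy, xx = np.mgrid[0:size, 0:size]
clean = np.ones((size, size))
clean[(xx - marker_xy[0]) ** 2 + (yy - marker_xy[1]) ** 2 < marker_r ** 2] = 0.5

def renoise(frame, quanta):
    """Re-noise a normalized frame for a given mean incident quantum number
    (a simplified stand-in for the NPS-based conversion described above)."""
    return rng.poisson(frame * quanta) / quanta

def track(frame):
    """Toy tracker: location of the minimum of the locally averaged frame."""
    smoothed = uniform_filter(frame, size=5)
    y, x = np.unravel_index(np.argmin(smoothed), smoothed.shape)
    return x, y

for quanta in (50, 200, 1000, 5000):
    errs = []
    for _ in range(200):
        x, y = track(renoise(clean, quanta))
        errs.append(np.hypot(x - marker_xy[0], y - marker_xy[1]))
    print(f"{quanta:5d} quanta/pixel -> max tracking error {max(errs):.1f} px")
```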
NASA Technical Reports Server (NTRS)
Gramenopoulos, N. (Principal Investigator)
1973-01-01
The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurements of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with a high accuracy. Then, the spatial features were combined with spectral features and using the maximum likelihood criterion the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month but vary substantially between seasons.
Verification of National Weather Service spot forecasts using surface observations
NASA Astrophysics Data System (ADS)
Lammers, Matthew Robert
Software has been developed to evaluate National Weather Service spot forecasts issued to support prescribed burns and early-stage wildfires. Fire management officials request spot forecasts from National Weather Service Weather Forecast Offices to provide detailed guidance as to atmospheric conditions in the vicinity of planned prescribed burns as well as wildfires that do not have incident meteorologists on site. This open source software with online display capabilities is used to examine an extensive set of spot forecasts of maximum temperature, minimum relative humidity, and maximum wind speed from April 2009 through November 2013 nationwide. The forecast values are compared to the closest available surface observations at stations installed primarily for fire weather and aviation applications. The accuracy of the spot forecasts is compared to those available from the National Digital Forecast Database (NDFD). Spot forecasts for selected prescribed burns and wildfires are used to illustrate issues associated with the verification procedures. Cumulative statistics for National Weather Service County Warning Areas and for the nation are presented. Basic error and accuracy metrics for all available spot forecasts and the entire nation indicate that the skill of the spot forecasts is higher than that available from the NDFD, with the greatest improvement for maximum temperature and the least improvement for maximum wind speed.
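Verification of this kind reduces to computing error statistics over forecast-observation pairs. The sketch below shows a basic bias/MAE/RMSE comparison between spot and NDFD forecasts of maximum temperature; the column names and values are made up for illustration, and the real software additionally handles station matching, valid times, and quality control.

```python
import numpy as np
import pandas as pd

# Hypothetical paired records: one row per spot forecast matched to the closest
# surface observation (the actual software matches on station and valid time).
data = pd.DataFrame({
    "spot_fcst_tmax": [31.0, 24.5, 28.0, 35.5, 22.0],
    "ndfd_fcst_tmax": [32.5, 23.0, 26.5, 37.0, 20.5],
    "observed_tmax":  [30.5, 25.0, 27.5, 36.0, 21.5],
})

def verify(forecast, observed):
    """Basic accuracy metrics for a forecast series against observations."""
    err = forecast - observed
    return {"bias": err.mean(),
            "MAE": err.abs().mean(),
            "RMSE": np.sqrt((err ** 2).mean())}

for source in ("spot_fcst_tmax", "ndfd_fcst_tmax"):
    stats = verify(data[source], data["observed_tmax"])
    print(source, {k: round(v, 2) for k, v in stats.items()})
```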
Trevino, Kelly M; Zhang, Baohui; Shen, Megan J; Prigerson, Holly G
2016-06-15
The objective of this study was to examine the source of advanced cancer patients' information about their prognosis and determine whether this source of information could explain racial disparities in the accuracy of patients' life expectancy estimates (LEEs). Coping With Cancer was a prospective, longitudinal, multisite study of terminally ill cancer patients followed until death. In structured interviews, patients reported their LEEs and the sources of these estimates (ie, medical providers, personal beliefs, religious beliefs, and other). The accuracy of LEEs was calculated through a comparison of patients' self-reported LEEs with their actual survival. The sample for this analysis included 229 patients: 31 black patients and 198 white patients. Only 39.30% of the patients estimated their life expectancy within 12 months of their actual survival. Black patients were more likely to have an inaccurate LEE than white patients. A minority of the sample (18.3%) reported that a medical provider was the source of their LEEs; none of the black patients (0%) based their LEEs on a medical provider. Black race remained a significant predictor of an inaccurate LEE, even after the analysis had been controlled for sociodemographic characteristics and the source of LEEs. The majority of advanced cancer patients have an inaccurate understanding of their life expectancy. Black patients with advanced cancer are more likely to have an inaccurate LEE than white patients. Medical providers are not the source of information for LEEs for most advanced cancer patients and especially for black patients. The source of LEEs does not explain racial differences in LEE accuracy. Additional research into the mechanisms underlying racial differences in prognostic understanding is needed. Cancer 2016;122:1905-12.
Quality assurance of dynamic parameters in volumetric modulated arc therapy
Manikandan, A; Sarkar, B; Holla, R; Vivek, T R; Sujatha, N
2012-01-01
Objectives The purpose of this study was to demonstrate quality assurance checks for accuracy of gantry speed and position, dose rate and multileaf collimator (MLC) speed and position for a volumetric modulated arc treatment (VMAT) modality (Synergy® S; Elekta, Stockholm, Sweden), and to check that all the necessary variables and parameters were synchronous. Methods Three tests (for gantry position–dose delivery synchronisation, gantry speed–dose delivery synchronisation and MLC leaf speed and positions) were performed. Results The average error in gantry position was 0.5° and the average difference was 3 MU for a linear and a parabolic relationship between gantry position and delivered dose. In the third part of this test (sawtooth variation), the maximum difference was 9.3 MU, with a gantry position difference of 1.2°. In the sweeping field method test, a linear relationship was observed between recorded doses and distance from the central axis, as expected. In the open field method, errors were encountered at the beginning and at the end of the delivery arc, termed the “beginning” and “end” errors. For MLC position verification, the maximum error was −2.46 mm and the mean error was 0.0153 ±0.4668 mm, and 3.4% of leaves analysed showed errors of >±1 mm. Conclusion This experiment demonstrates that the variables and parameters of the Synergy® S are synchronous and that the system is suitable for delivering VMAT using a dynamic MLC. PMID:22745206
Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.
2008-03-01
Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia on a regional scale. The objective of this paper is to assess the changes produced from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced a promising result, indicating that land cover mapping can be carried out using remote sensing classification of satellite digital imagery.
NASA Technical Reports Server (NTRS)
Scholz, D.; Fuhs, N.; Hixson, M.; Akiyama, T. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Data sets for corn, soybeans, winter wheat, and spring wheat were used to evaluate the following schemes for crop identification: (1) per point Gaussian maximum classifier; (2) per point sum of normal densities classifiers; (3) per point linear classifier; (4) per point Gaussian maximum likelihood decision tree classifiers; and (5) texture sensitive per field Gaussian maximum likelihood classifier. Test site location and classifier both had significant effects on classification accuracy of small grains; classifiers did not differ significantly in overall accuracy, with the majority of the difference among classifiers being attributed to training method rather than to the classification algorithm applied. The complexity of use and computer costs for the classifiers varied significantly. A linear classification rule which assigns each pixel to the class whose mean is closest in Euclidean distance was the easiest for the analyst and cost the least per classification.
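The linear rule singled out above, assigning each pixel to the class whose mean is closest in Euclidean distance, is straightforward to sketch; the band values and class structure below are synthetic and purely illustrative.

```python
import numpy as np

def fit_class_means(X, y):
    """Class means from training pixels X (n_samples x n_bands) with labels y."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def classify_min_distance(X, classes, means):
    """Assign each pixel to the class whose mean is closest in Euclidean distance."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]

# Tiny synthetic example: 2 spectral bands, 3 crop classes (values are made up).
rng = np.random.default_rng(4)
means_true = np.array([[0.2, 0.6], [0.5, 0.3], [0.8, 0.7]])
train_X = np.vstack([m + 0.05 * rng.standard_normal((50, 2)) for m in means_true])
train_y = np.repeat([0, 1, 2], 50)

classes, means = fit_class_means(train_X, train_y)
test_X = np.vstack([m + 0.05 * rng.standard_normal((20, 2)) for m in means_true])
test_y = np.repeat([0, 1, 2], 20)
pred = classify_min_distance(test_X, classes, means)
print("overall accuracy:", (pred == test_y).mean())
```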
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
46 CFR 151.50-13 - Propylene oxide.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Pressure vessel cargo tanks shall meet the requirements of Class II pressure vessels. (2) Cargo tanks shall be designed for the maximum pressure expected to be encountered during loading, storing and... cargo piping shall be subjected to a hydrostatic test of 11/2 times the maximum pressure to which they...
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Hardware accuracy counters for application precision and quality feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani
Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
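A software analogue of the idea, inspecting the bits of observed operand values to find the narrowest datatype that would have sufficed and keeping a running maximum in a counter, might look like the following sketch. The integer-only widths and the class name are illustrative assumptions; the mechanism described above operates in hardware on the processor datapath.

```python
def min_int_width(value: int) -> int:
    """Smallest power-of-two signed-integer width (8..64 bits) that holds `value`."""
    for bits in (8, 16, 32, 64):
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        if lo <= value <= hi:
            return bits
    raise OverflowError("value does not fit in 64 bits")

class AccuracyCounter:
    """Software model of a per-instruction counter that records, over many
    executions, the widest datatype any observed operand actually needed."""
    def __init__(self):
        self.max_bits_needed = 0

    def observe(self, value: int):
        self.max_bits_needed = max(self.max_bits_needed, min_int_width(value))

# Example: feed operand values seen at one instruction; the counter reports that
# 16-bit arithmetic would have been precise enough for this stream.
counter = AccuracyCounter()
for v in (12, -300, 4095, 127, -20000):
    counter.observe(v)
print("minimum sufficient width:", counter.max_bits_needed, "bits")
```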
Evaluation of the PV energy production after 12-years of operating
NASA Astrophysics Data System (ADS)
Bouchakour, Salim; Arab, Amar Hadj; Abdeladim, Kamel; Boulahchiche, Saliha; Amrouche, Said Ould; Razagui, Abdelhak
2018-05-01
This paper presents a simple way to approximately evaluate the performance degradation of photovoltaic (PV) arrays; the studied PV arrays have been connected to the local electric grid at the Centre de Developpement des Energies Renouvelables (CDER) in Algiers, Algeria, since June 2004. The PV module model used takes into consideration the module temperature and the effective solar irradiance, the electrical characteristics provided by the manufacturer's data sheet, and the evaluation of the performance coefficient. For the dynamic behavior we use the Linear Reoriented Coordinates Method (LRCM) to estimate the maximum power point (MPP). The performance coefficient is evaluated, on the one hand, under STC conditions to estimate the dc energy according to the manufacturer data and, on the other hand, under real conditions using both the monitored data and the LM optimization algorithm, allowing a good degree of accuracy in the estimated dc energy. The application of the developed modeling procedure to the analysis of the monitored data is expected to improve understanding and assessment of the performance degradation of the PV arrays after 12 years of operation.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
NASA Astrophysics Data System (ADS)
Duffy, M.; Richardson, T. J.; Craythorne, E.; Mallipeddi, R.; Coleman, A. J.
2014-02-01
A system has been developed to assess the feasibility of using motion tracking to enable pre-surgical margin mapping of basal cell carcinoma (BCC) in the clinic using optical coherence tomography (OCT). This system consists of a commercial OCT imaging system (the VivoSight 1500, MDL Ltd., Orpington, UK), which has been adapted to incorporate a webcam and a single-sensor electromagnetic positional tracking module (the Flock of Birds, Ascension Technology Corp, Vermont, USA). A supporting software interface has also been developed which allows positional data to be captured and projected onto a 2D dermoscopic image in real-time. Initial results using a stationary test phantom are encouraging, with maximum errors in the projected map in the order of 1-2 mm. Initial clinical results were poor due to motion artefact, despite attempts to stabilise the patient. However, the authors present several suggested modifications that are expected to reduce the effects of motion artefact and improve the overall accuracy and clinical usability of the system.
Determining approximate age of digital images using sensor defects
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2011-02-01
The goal of temporal forensics is to establish temporal relationship among two or more pieces of evidence. In this paper, we focus on digital images and describe a method using which an analyst can estimate the acquisition time of an image given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated on experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line or when the header cannot be trusted. Reliable methods for establishing temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
X-ray bursts: Observation versus theory
NASA Technical Reports Server (NTRS)
Lewin, W. H. G.
1981-01-01
Results of various observations of common type I X-ray bursts are discussed with respect to the theory of thermonuclear flashes in the surface layers of accreting neutron stars. Topics covered include burst profiles; irregular burst intervals; rise and decay times and the role of hydrogen; the accuracy of source distances; accuracy in radii determination; radius increase early in the burst; the super Eddington limit; temperatures at burst maximum; and the role of the magnetic field.
Li, Shuying; Wang, Yunyan; Hu, Likuan; Liang, Yingchun; Cai, Jing
2014-11-01
The large errors of routine localization for eyeball tumors have restricted the application of X-ray radiosurgery, largely because the eyeball can rotate. To localize the target site accurately, the micro-vacuo-certo-contacting ophthalmophanto (MVCCOP) method was used, and the outcome of patients with tumors in the eyeball was evaluated. In this study, computed tomography (CT) localization accuracy was measured by repeated CT scans using the MVCCOP to fix the eyeball during radiosurgery. The outcome of the tumors and the survival of the patients were evaluated by follow-up. The results indicated that the accuracy of CT localization with the Brown-Roberts-Wells (BRW) head ring was 0.65 mm and the maximum error was 1.09 mm. The accuracy of target localization of tumors in the eyeball using the MVCCOP was 0.87 mm on average, and the maximum error was 1.19 mm. The errors of fixation of the eyeball were 0.84 mm on average and 1.17 mm at maximum. The total accuracy was 1.34 mm, and the 95% confidence accuracy was 2.09 mm. The clinical application of this method in 14 tumor patients showed satisfactory results, and all of the tumors showed clear rims. The size of ten retinoblastomas decreased significantly. The local control intervals of the tumors were 6-24 months, with a median of 10.5 months. The survival of ten patients was 7-30 months, with a median of 16.5 months. The tumors remained stable or shrank in the other four patients, with angioma and melanoma. In conclusion, the MVCCOP is suitable and dependable for X-ray radiosurgery of eyeball tumors. The tumor control and survival of patients are satisfactory, and this method can effectively postpone or avoid extirpation of the eyeball.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. Here, from simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion which works for single-accuracy experiments.
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Buck, Gregory M.; Decaro, Anthony D.
2009-01-01
The analysis of effects of the reaction control system jet plumes on aftbody heating of Orion entry capsule is presented. The analysis covered hypersonic continuum part of the entry trajectory. Aerothermal environments at flight conditions were evaluated using Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code and Data Parallel Line Relaxation (DPLR) algorithm code. Results show a marked augmentation of aftbody heating due to roll, yaw and aft pitch thrusters. No significant augmentation is expected due to forward pitch thrusters. Of the conditions surveyed the maximum heat rate on the aftshell is expected when firing a pair of roll thrusters at a maximum deceleration condition.
The Role of Expectancy in Eliciting Hypnotic Responses as a Function of Type of Induction.
ERIC Educational Resources Information Center
Kirsch, Irving; And Others
1984-01-01
Examined the relationship between expectancy and suggestibility in hypnosis as a function of type of induction (N=100). Results showed subjects were able to predict their responses to a cognitive skill induction with great accuracy but were not very accurate in predicting responses to a hypnotic trance induction. (JAC)
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
Overall evaluation of ERTS-1 imagery for cartographic application
NASA Technical Reports Server (NTRS)
Colvocoresses, A. P. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Significant scientific conclusions are: (1) Bulk RBV's have internal positional accuracy in the order of 70 meters at ground scale while MSS internal accuracy is in the order of 200 to 300 meters. Both have precision processed images with accuracy within 70 meters. (2) Image quality exhibited by detectability and acutance is better than expected and perhaps twice as good as would be achieved by photographic film of the same resolution. (3) Photometric anomalies (shading) have limited RBV multispectral application, but it is believed that these anomalies can be further reduced. (4) The MSS has exceptionally high photometric fidelity but the matching of scenes taken under different conditions of illumination has not been resolved. (5) MSS bands 6 and 7 have enormous potential for surface water mapping including the correlation of shorelines at various water stages. (6) MSS band 7 demonstrates an actual cloud penetration capability beyond what was expected. It also has delineated cultural features better than the other MSS bands under certain conditions.
200 kj copper foil fuses. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClenahan, C.R.; Goforth, J.H.; Degnan, J.H.
1980-04-01
A 200-kJ, 50-kV capacitor bank has been discharged into 1-mil-thick copper foils immersed in fine glass beads. These foils ranged in length from 27 to 71 cm and in width from 15 to 40 cm. Voltage spikes of over 250 kV were produced by the resulting fuse behavior of the foil. Moreover, the current turned off at a rate that was over 6 times the initial bank dI/dt. Full widths at half maximum for the voltage and dI/dt spikes were about 0.5 microsec, with some as short as 300 nanosec. Electrical breakdown was prevented in all but one fuse size with maximum applied fields of 7 kV/cm. Fuses that were split into two parallel sections have been tested, and the effects relative to one-piece fuses are much larger than would be expected on the basis of inductance differences alone. A resistivity model for copper foil fuses, which differs from previous work in that it includes a current density dependence, has been devised. Fuse behavior is predicted with reasonable accuracy over a wide range of foil sizes by a quasi-two-dimensional fuse code that incorporates this resistivity model. A variation of Maisonnier's method for predicting optimum fuse size has been derived. This method is valid if the risetime of the bank exceeds 3 microsec, in which case it can be expected to be applicable over a wide range of peak current densities.
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
78 FR 76593 - Maximum Loan Amount for Business and Industry Guaranteed Loans in Fiscal Year 2014
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... DEPARTMENT OF AGRICULTURE Rural Business-Cooperative Service Maximum Loan Amount for Business and Industry Guaranteed Loans in Fiscal Year 2014 AGENCY: Rural Business-Cooperative Service, USDA. ACTION... limited program funds that are expected for Fiscal Year (FY) 2014 for the B&I Guaranteed Loan Program, the...
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
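As an illustration of how two of these estimators differ in practice, the following minimal Python sketch computes expected a posteriori (EAP) and maximum likelihood ability estimates for a short dichotomous test under a 2PL model. The item parameters, grids, and function names are illustrative assumptions, not values from the study, which simulated an adaptive classification procedure.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate with a standard normal prior."""
    prior = np.exp(-0.5 * grid**2)
    p = p_correct(grid[:, None], a, b)                       # grid x items
    like = np.prod(np.where(responses, p, 1 - p), axis=1)
    post = prior * like
    post /= post.sum()
    return np.sum(grid * post)

def ml_estimate(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum likelihood ability estimate by a simple grid search."""
    p = p_correct(grid[:, None], a, b)
    loglike = np.sum(np.where(responses, np.log(p), np.log(1 - p)), axis=1)
    return grid[np.argmax(loglike)]

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.5, 1.0])   # difficulties (hypothetical)
resp = np.array([1, 1, 0, 0], dtype=bool)
print(eap_estimate(resp, a, b), ml_estimate(resp, a, b))
```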
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Maximum saliency bias in binocular fusion
NASA Astrophysics Data System (ADS)
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent perception and action in humans, and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
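The numerical-stability issue described above, exponentially growing terms inside a log-likelihood, is generic, and the standard remedy is to stay in log space. The sketch below is a hedged illustration of that remedy rather than the Arason-Levi marginal Fisher likelihood itself: it contrasts a naive evaluation of log of a weighted sum of exponentials with a log-sum-exp evaluation that remains finite for large precision parameters.

```python
import numpy as np
from scipy.special import logsumexp

# Naive evaluation of log(sum_i w_i * exp(kappa * x_i)) overflows for large kappa;
# working in log space keeps the computation stable for any precision parameter.
def log_weighted_expsum_naive(x, w, kappa):
    return np.log(np.sum(w * np.exp(kappa * x)))

def log_weighted_expsum_stable(x, w, kappa):
    return logsumexp(kappa * x, b=w)   # log(sum(w * exp(kappa * x)))

x = np.cos(np.deg2rad(np.linspace(0.0, 80.0, 50)))   # illustrative cosine terms
w = np.full_like(x, 1.0 / x.size)
print(log_weighted_expsum_naive(x, w, 800.0))   # overflows to inf
print(log_weighted_expsum_stable(x, w, 800.0))  # finite, usable inside a log-likelihood
```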
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
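For a small population where all 2^n spike patterns can be enumerated, a pairwise maximum entropy model can be fitted by simple moment matching, since the gradient of the log-likelihood is the difference between data and model moments. The following sketch uses surrogate binary data, an illustrative learning rate, and full enumeration; it is not the authors' inference code and does not scale to large populations.

```python
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(0)
n = 5
data = (rng.random((2000, n)) < 0.3).astype(float)       # surrogate spike words
states = np.array(list(product([0.0, 1.0], repeat=n)))   # all 2^n binary patterns
pairs = list(combinations(range(n), 2))

def model_probs(h, J):
    """Boltzmann distribution p(s) proportional to exp(h.s + sum_ij J_ij s_i s_j)."""
    E = states @ h + sum(J[k] * states[:, i] * states[:, j]
                         for k, (i, j) in enumerate(pairs))
    p = np.exp(E - E.max())
    return p / p.sum()

# Moment matching: gradient of the log-likelihood is (data moments - model moments).
h = np.zeros(n)
J = np.zeros(len(pairs))
m_data = data.mean(0)
c_data = np.array([(data[:, i] * data[:, j]).mean() for i, j in pairs])
for _ in range(2000):
    p = model_probs(h, J)
    m_model = p @ states
    c_model = np.array([np.sum(p * states[:, i] * states[:, j]) for i, j in pairs])
    h += 0.5 * (m_data - m_model)
    J += 0.5 * (c_data - c_model)

p = model_probs(h, J)
print(np.max(np.abs(m_data - p @ states)))   # mean constraints matched closely
```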
Dynamical resonance shift and unification of resonances in short-pulse laser-cluster interaction
NASA Astrophysics Data System (ADS)
Mahalik, S. S.; Kundu, M.
2018-06-01
Pronounced maximum absorption of laser light irradiating a rare-gas or metal cluster is widely expected during the linear resonance (LR), when the Mie-plasma wavelength λM of electrons equals the laser wavelength λ. On the contrary, molecular dynamics (MD) simulations of an argon cluster irradiated by short 5-fs (FWHM) laser pulses reveal that, for a given laser pulse energy and cluster, at each peak intensity there exists a λ, shifted from the expected λM, that corresponds to a unified dynamical LR at which the evolution of the cluster proceeds through very efficient unification of possible resonances in various stages, including (i) the LR in the initial time of plasma creation, (ii) the LR in the Coulomb expanding phase at later times, and (iii) anharmonic resonance in the marginally overdense regime for a relatively longer pulse duration, leading to maximum laser absorption accompanied by maximum removal of electrons from the cluster and also the maximum allowed average charge states for the argon cluster. Increasing the laser intensity, the absorption maximum is found to shift to a higher wavelength in the band of λ ≈ (1-1.5)λM rather than remaining at the expected λM. A naive rigid-sphere model also corroborates the wavelength shift of the absorption peak found in MD and unequivocally shows that maximum laser absorption in a cluster happens at a shifted λ in the marginally overdense regime of λ ≈ (1-1.5)λM instead of at the λM of LR. The present study is important for guiding optimal-condition laser-cluster interaction experiments in the short-pulse regime.
Leaf Movements of Indoor Plants Monitored by Terrestrial LiDAR
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Gard, Wolfgang
2018-01-01
Plant leaf movement is induced by some combination of different external and internal stimuli. Detailed geometric characterization of such movement is expected to improve understanding of these mechanisms. Terrestrial LiDAR (TLiDAR) is a high-quality, non-invasive and innovative sensor system for analyzing plant movement. The technique uses an active sensor and is, therefore, independent of light conditions and able to obtain accurate point clouds with high spatial and temporal resolution. In this study, a movement parameterization approach for leafy plants based on TLiDAR is introduced. For this purpose, two Calathea roseopicta plants were scanned in an indoor environment during two full days, one day in natural light conditions and the other in darkness. The methodology to estimate leaf movement is based on segmenting individual leaves using an octree-based 3D grid and monitoring the changes in their orientation by Principal Component Analysis. Additionally, canopy variations of the plant as a whole were characterized by a convex-hull approach. As a result, 9 leaves in plant 1 and 11 leaves in plant 2 were automatically detected, with a global accuracy of 93.57 and 87.34%, respectively, compared to a manual detection. Regarding plant 1, in natural light conditions, the average leaf displacement between 7.00 a.m. and 12.30 p.m. was 3.67 cm as estimated using so-called deviation maps. The maximum displacement was 7.92 cm. In addition, the orientation changes of each leaf within a day were analyzed. The maximum variation in the vertical angle was 69.6° from 12.30 to 6.00 p.m. In darkness, the displacements were smaller and showed a different orientation pattern. The canopy volume of plant 1 changed more in the morning (4.42 dm3) than in the afternoon (2.57 dm3). The results of plant 2 largely confirmed the results of the first plant and were added to check the robustness of the methodology. The results show how to quantify leaf orientation variation and leaf movement over a day at mm accuracy in different light conditions. This confirms the feasibility of the proposed methodology to robustly analyze leaf movements. PMID:29527217
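A minimal sketch of the two geometric ingredients mentioned above, leaf orientation from Principal Component Analysis of a segmented leaf's points and canopy volume from a convex hull, is given below. The synthetic point cloud, units, and function names are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import ConvexHull

def leaf_orientation(points):
    """Unit normal of a segmented leaf: the principal direction with the
    smallest variance of the mean-centred 3D points."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return normal if normal[2] >= 0 else -normal   # keep normals in one hemisphere

def canopy_volume(points):
    """Convex-hull volume of the whole-plant point cloud (dm^3 if input is in dm)."""
    return ConvexHull(points).volume

rng = np.random.default_rng(1)
leaf = rng.normal(size=(500, 3)) * np.array([5.0, 3.0, 0.1])   # flat synthetic leaf
print(leaf_orientation(leaf), canopy_volume(leaf))
```

The vertical-angle change of a leaf between two scans could then be taken as the angle between its two estimated normals.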
Scenario-Based Validation of Moderate Resolution DEMs Freely Available for Complex Himalayan Terrain
NASA Astrophysics Data System (ADS)
Singh, Mritunjay Kumar; Gupta, R. D.; Snehmani; Bhardwaj, Anshuman; Ganju, Ashwagosha
2016-02-01
Accuracy of the Digital Elevation Model (DEM) affects the accuracy of various geoscience and environmental modelling results. This study evaluates accuracies of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM Version-2 (GDEM V2), the Shuttle Radar Topography Mission (SRTM) X-band DEM and the NRSC Cartosat-1 DEM V1 (CartoDEM). A high resolution (1 m) photogrammetric DEM (ADS80 DEM), having a high absolute accuracy [1.60 m linear error at 90 % confidence (LE90)], resampled at 30 m cell size was used as reference. The overall root mean square error (RMSE) in vertical accuracy was 23, 73, and 166 m and the LE90 was 36, 75, and 256 m for ASTER GDEM V2, SRTM X-band DEM and CartoDEM, respectively. A detailed error analysis was performed for individual as well as combinations of different classes of aspect, slope, land-cover and elevation zones for the study area. For the ASTER GDEM V2, forest areas with North facing slopes (0°-5°) in the 4th elevation zone (3773-4369 m) showed minimum LE90 of 0.99 m, and barren with East facing slopes (>60°) falling under the 2nd elevation zone (2581-3177 m) showed maximum LE90 of 166 m. For the SRTM DEM, pixels with South-East facing slopes of 0°-5° in the 4th elevation zone covered with forest showed least LE90 of 0.33 m and maximum LE90 of 521 m was observed in the barren area with North-East facing slope (>60°) in the 4th elevation zone. In case of the CartoDEM, the snow pixels in the 2nd elevation zone with South-East facing slopes of 5°-15° showed least LE90 of 0.71 m and maximum LE90 of 1266 m was observed for the snow pixels in the 3rd elevation zone (3177-3773 m) within the South facing slope of 45°-60°. These results can be highly useful for the researchers using DEM products in various modelling exercises.
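For readers reproducing this kind of validation, the two headline statistics can be computed directly from the elevation differences between a candidate DEM and the reference DEM. The sketch below is an assumption-laden illustration: it uses an empirical 90th percentile for LE90 (some workflows instead use 1.6449 times the RMSE under a normality assumption) and synthetic elevations.

```python
import numpy as np

def vertical_accuracy(dem, reference):
    """RMSE and 90% linear error (LE90) of DEM elevations against a reference DEM."""
    diff = (dem - reference).ravel()
    diff = diff[np.isfinite(diff)]
    rmse = np.sqrt(np.mean(diff**2))
    le90 = np.percentile(np.abs(diff), 90)   # empirical LE90
    return rmse, le90

rng = np.random.default_rng(2)
ref = rng.uniform(2500, 4500, size=(100, 100))    # synthetic reference elevations (m)
test = ref + rng.normal(0, 20, size=ref.shape)    # DEM with roughly 20 m vertical error
print(vertical_accuracy(test, ref))
```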
Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.
Du, Xinxin; Tan, Kok Kiong
2016-05-01
Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, it is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.
Purcell, Braden A.; Kiani, Roozbeh
2016-01-01
Decision-making in a natural environment depends on a hierarchy of interacting decision processes. A high-level strategy guides ongoing choices, and the outcomes of those choices determine whether or not the strategy should change. When the right decision strategy is uncertain, as in most natural settings, feedback becomes ambiguous because negative outcomes may be due to limited information or bad strategy. Disambiguating the cause of feedback requires active inference and is key to updating the strategy. We hypothesize that the expected accuracy of a choice plays a crucial role in this inference, and setting the strategy depends on integration of outcome and expectations across choices. We test this hypothesis with a task in which subjects report the net direction of random dot kinematograms with varying difficulty while the correct stimulus−response association undergoes invisible and unpredictable switches every few trials. We show that subjects treat negative feedback as evidence for a switch but weigh it with their expected accuracy. Subjects accumulate switch evidence (in units of log-likelihood ratio) across trials and update their response strategy when accumulated evidence reaches a bound. A computational framework based on these principles quantitatively explains all aspects of the behavior, providing a plausible neural mechanism for the implementation of hierarchical multiscale decision processes. We suggest that a similar neural computation—bounded accumulation of evidence—underlies both the choice and switches in the strategy that govern the choice, and that expected accuracy of a choice represents a key link between the levels of the decision-making hierarchy. PMID:27432960
Purcell, Braden A; Kiani, Roozbeh
2016-08-02
Decision-making in a natural environment depends on a hierarchy of interacting decision processes. A high-level strategy guides ongoing choices, and the outcomes of those choices determine whether or not the strategy should change. When the right decision strategy is uncertain, as in most natural settings, feedback becomes ambiguous because negative outcomes may be due to limited information or bad strategy. Disambiguating the cause of feedback requires active inference and is key to updating the strategy. We hypothesize that the expected accuracy of a choice plays a crucial role in this inference, and setting the strategy depends on integration of outcome and expectations across choices. We test this hypothesis with a task in which subjects report the net direction of random dot kinematograms with varying difficulty while the correct stimulus-response association undergoes invisible and unpredictable switches every few trials. We show that subjects treat negative feedback as evidence for a switch but weigh it with their expected accuracy. Subjects accumulate switch evidence (in units of log-likelihood ratio) across trials and update their response strategy when accumulated evidence reaches a bound. A computational framework based on these principles quantitatively explains all aspects of the behavior, providing a plausible neural mechanism for the implementation of hierarchical multiscale decision processes. We suggest that a similar neural computation, bounded accumulation of evidence, underlies both the choice and switches in the strategy that govern the choice, and that expected accuracy of a choice represents a key link between the levels of the decision-making hierarchy.
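A minimal sketch of the bounded-accumulation idea described in the two records above is given below: negative feedback contributes switch evidence in proportion to the expected accuracy of the choice, and a strategy switch is triggered when accumulated evidence reaches a bound. The log-likelihood-ratio form, the clipping of evidence at zero, and the bound value are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def switch_after(feedback, expected_acc, bound=2.0):
    """Accumulate switch evidence (log-likelihood ratio) across trials and return
    the trial on which accumulated evidence first reaches the bound, else None.

    Negative feedback on an easy trial (high expected accuracy) is strong evidence
    that the stimulus-response rule has switched; on a hard trial it is weak evidence.
    """
    evidence = 0.0
    for t, (correct, acc) in enumerate(zip(feedback, expected_acc), start=1):
        if correct:
            llr = np.log((1 - acc) / acc)     # a correct outcome argues against a switch
        else:
            llr = np.log(acc / (1 - acc))     # an error outcome argues for a switch
        evidence = max(0.0, evidence + llr)   # accumulate evidence for "switch" only
        if evidence >= bound:
            return t
    return None

feedback     = [True, False, False, True, False]   # hypothetical outcomes
expected_acc = [0.95, 0.60,  0.95,  0.70, 0.90]    # hypothetical expected accuracy per trial
print(switch_after(feedback, expected_acc))
```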
S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions
NASA Technical Reports Server (NTRS)
Krishen, K.; Pounds, D. J.
1975-01-01
Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.
Hydrometeorological model for streamflow prediction
Tangborn, Wendell V.
1979-01-01
The hydrometeorological model described in this manual was developed to predict seasonal streamflow from water in storage in a basin using streamflow and precipitation data. The model, as described, applies specifically to the Skokomish, Nisqually, and Cowlitz Rivers, in Washington State, and more generally to streams in other regions that derive seasonal runoff from melting snow. Thus the techniques demonstrated for these three drainage basins can be used as a guide for applying this method to other streams. Input to the computer program consists of daily averages of gaged runoff of these streams, and daily values of precipitation collected at Longmire, Kid Valley, and Cushman Dam. Predictions are based on estimates of the absolute storage of water, predominantly as snow: storage is approximately equal to basin precipitation less observed runoff. A pre-forecast test season is used to revise the storage estimate and improve the prediction accuracy. To obtain maximum prediction accuracy for operational applications with this model, a systematic evaluation of several hydrologic and meteorologic variables is first necessary. Six input options to the computer program that control prediction accuracy are developed and demonstrated. Predictions of streamflow can be made at any time and for any length of season, although accuracy is usually poor for early-season predictions (before December 1) or for short seasons (less than 15 days). The coefficient of prediction (CP), the chief measure of accuracy used in this manual, approaches zero during the late autumn and early winter seasons and reaches a maximum of about 0.85 during the spring snowmelt season. (Kosco-USGS)
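A hedged sketch of the two core quantities described above follows: the basin storage estimate (cumulative precipitation less cumulative observed runoff) and a coefficient-of-prediction style skill score. The exact CP definition used in the manual may differ; the 1 - SSE/SST form below is an assumption, as are the synthetic daily values.

```python
import numpy as np

def storage_series(precip, runoff):
    """Water in storage: cumulative basin precipitation minus cumulative observed runoff."""
    return np.cumsum(precip) - np.cumsum(runoff)

def coefficient_of_prediction(observed, predicted):
    """CP = 1 - SSE/SST (assumed form); 1 is a perfect prediction, 0 is no skill."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    sse = np.sum((observed - predicted) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

# Hypothetical daily values (mm of water over the basin)
precip = np.array([5.0, 0.0, 12.0, 3.0, 0.0, 8.0])
runoff = np.array([2.0, 2.5, 3.0, 4.0, 3.5, 3.0])
print(storage_series(precip, runoff))
print(coefficient_of_prediction([100, 120, 90], [95, 130, 92]))
```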
NASA Astrophysics Data System (ADS)
Karakacan Kuzucu, A.; Bektas Balcik, F.
2017-11-01
Accurate and reliable land use/land cover (LULC) information obtained by remote sensing technology is necessary in many applications such as environmental monitoring, agricultural management, urban planning, hydrological applications, soil management, vegetation condition study and suitability analysis. Obtaining this information remains a challenge, especially in heterogeneous landscapes covering urban and rural areas, due to spectrally similar LULC features. In parallel with technological developments, supplementary data such as satellite-derived spectral indices have begun to be used as additional bands in classification to produce data with high accuracy. The aim of this research is to test the potential of spectral vegetation indices in combination with supervised classification methods to extract reliable LULC information from SPOT 7 multispectral imagery. The Normalized Difference Vegetation Index (NDVI), the Ratio Vegetation Index (RATIO) and the Soil Adjusted Vegetation Index (SAVI) were the three vegetation indices used in this study. The classical maximum likelihood classifier (MLC) and the support vector machine (SVM) algorithm were applied to classify the SPOT 7 image. The selected region, Catalca, is located northwest of Istanbul, Turkey, and has a complex landscape covering artificial surfaces, forest and natural areas, agricultural fields, quarry/mining areas, pasture/scrubland and water bodies. Accuracy assessment of all classified images was performed through overall accuracy and the kappa coefficient. The results indicated that the incorporation of these three vegetation indices decreased the classification accuracy for both the MLC and SVM classifications. In addition, the maximum likelihood classification slightly outperformed the support vector machine approach in both overall accuracy and kappa statistics.
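The three indices are simple band combinations and can be generated as additional classification bands directly from the red and near-infrared reflectances, as in the sketch below; the reflectance values and the soil adjustment factor L = 0.5 are illustrative assumptions.

```python
import numpy as np

def vegetation_indices(red, nir, L=0.5):
    """NDVI, simple ratio (RATIO), and SAVI from red and near-infrared reflectances."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    ndvi  = (nir - red) / (nir + red)
    ratio = nir / red
    savi  = (1 + L) * (nir - red) / (nir + red + L)   # L = 0.5 is the usual soil factor
    return ndvi, ratio, savi

red = np.array([0.08, 0.20, 0.05])   # hypothetical reflectances: vegetation, bare soil, water
nir = np.array([0.45, 0.28, 0.03])
ndvi, ratio, savi = vegetation_indices(red, nir)
print(ndvi, ratio, savi)
# The index layers can then be stacked with the original bands as extra classifier features.
```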
Microarcsecond Astrometry As A Probe Of Circumstellar Structure
NASA Astrophysics Data System (ADS)
Velusamy, T.; Turyshev, S. G.
1999-12-01
The Space Interferometry Mission (SIM) is a space-based long-baseline optical interferometer for precision astrometry. This mission will open up many areas of astrophysics, via astrometry with unprecedented accuracy. Wide-angle measurements, which include annual parallax, will reach a design accuracy of 4 μas. Over a narrow field of view the relative accuracy is better, and SIM is expected to achieve an accuracy of 1 μas. In this mode, SIM will search for planetary companions to nearby stars by detecting the astrometric 'wobble' relative to a nearby (≤1°) reference star. The expected proper motion accuracy is 2 μas yr^-1, corresponding to a transverse velocity of 10 m s^-1 at a distance of 1 kpc. Such an accuracy of the future SIM instrument provides a very useful astrometric tool for probing circumstellar structure. The motion of the photocenter as detected by SIM is not necessarily that of the center of mass. It is expected that unmodelled dynamics of the stellar systems may be a potential source of systematic astrometric errors. In this paper we discuss the possibility of using SIM's precision astrometry not only to detect Keplerian signatures due to planetary motion around nearby stars, but also to characterize the structure of planetary and proto-planetary orbits, accretion disks, debris disks, circumstellar material, jets and other types of mass transfer mechanisms. We evaluate possible astrometric signatures due to different types of dynamical processes (both gravitational and non-gravitational) and characterize the magnitude of the corresponding astrometric signal. We attempt to address the most natural scenario of non-Keplerian motion, caused by an extended structure and complex dynamics of the stellar systems, that may produce a detectable wobble in the motion of the optical center of a target star. We examine the use of μas astrometry, as complementary to high resolution imaging, to detect some of the structures present around stars. This work was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Computer simulation and discussion of high-accuracy laser direction finding in real time
NASA Astrophysics Data System (ADS)
Chen, Wenyi; Chen, Yongzhi
1997-12-01
On condition that a CCD is used as the sensor, there are at least five methods that can be used to realize high-accuracy laser direction finding: the image matching method, the radiation center method, the geometric center method, the center of rectangle envelope method and the center of maximum run length method. The first three achieve the highest accuracy, but they are too complicated to realize in real time and their cost is very high. The other two can also achieve high accuracy and are not difficult to implement in real time. By using a single-chip microcomputer and an ordinary CCD camera, a very simple system can obtain the position information of a laser beam. The data rate is 50 times per second.
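Two of these estimators, the geometric center and the radiation (intensity-weighted) center, reduce to centroid computations on a thresholded CCD frame. A short sketch with a synthetic Gaussian spot follows; the threshold and spot parameters are chosen purely for illustration.

```python
import numpy as np

def spot_centers(frame, threshold):
    """Geometric center (centroid of above-threshold pixels) and radiation center
    (intensity-weighted centroid) of a laser spot on a CCD frame, in (x, y) pixels."""
    ys, xs = np.nonzero(frame > threshold)
    geometric = (xs.mean(), ys.mean())
    w = frame[ys, xs].astype(float)
    radiation = (np.average(xs, weights=w), np.average(ys, weights=w))
    return geometric, radiation

# Synthetic 64x64 frame with a Gaussian spot centred near (40.3, 21.7)
y, x = np.mgrid[0:64, 0:64]
frame = 200.0 * np.exp(-((x - 40.3) ** 2 + (y - 21.7) ** 2) / (2 * 3.0 ** 2))
print(spot_centers(frame, threshold=20.0))   # both estimates resolve sub-pixel position
```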
Bayesian view of single-qubit clocks, and an energy versus accuracy tradeoff
NASA Astrophysics Data System (ADS)
Gopalkrishnan, Manoj; Kandula, Varshith; Sriram, Praveen; Deshpande, Abhishek; Muralidharan, Bhaskaran
2017-09-01
We bring a Bayesian approach to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze how well one can keep time with a single qubit freely precessing under a magnetic field. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics, which does. We find the design of the single-qubit clock that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff—the energy cost is at least kBT times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution. We propose a physical realization of the single-qubit clock using charge transport across a capacitively coupled quantum dot.
Integration deficiencies associated with continuous limb movement sequences in Parkinson's disease.
Park, Jin-Hoon; Stelmach, George E
2009-11-01
The present study examined the extent to which Parkinson's disease (PD) influences the integration of continuous limb movement sequences. Eight patients with idiopathic PD and 8 age-matched normal subjects were instructed to perform repetitive sequential aiming movements to specified targets under three accuracy constraints: 1) low accuracy (W = 7 cm) - minimal accuracy constraint, 2) high accuracy (W = 0.64 cm) - maximum accuracy constraint, and 3) mixed accuracy constraint - one target of high accuracy and another target of low accuracy. The character of the sequential movements in the low accuracy condition was mostly cyclical, whereas in the high accuracy condition it was discrete, in both groups. When the accuracy constraint was mixed, the sequential movements were executed by assembling discrete and cyclical movements in both groups, suggesting that for PD patients the capability to combine discrete and cyclical movements to meet a task requirement appears to be intact. However, this functional linkage was not as pronounced as it was in normal subjects. Close examination of movement in the mixed accuracy condition revealed marked movement hesitations in the vicinity of the large target in PD patients, resulting in a bias toward discrete movement. These results suggest that PD patients may have deficits in ongoing planning and organizing processes during movement execution when the task requires assembling various accuracy requirements into more complex movement sequences.
Burghelea, Manuela; Verellen, Dirk; Poels, Kenneth; Gevaert, Thierry; Depuydt, Tom; Tournel, Koen; Hung, Cecilia; Simon, Viorica; Hiraoka, Masahiro; de Ridder, Mark
2015-07-15
The purpose of this study was to define an independent verification method based on on-board orthogonal fluoroscopy to determine the geometric accuracy of synchronized gantry-ring (G/R) rotations during dynamic wave arc (DWA) delivery available on the Vero system. A verification method for DWA was developed to calculate O-ring-gantry (G/R) positional information from ball-bearing positions retrieved from fluoroscopic images of a cubic phantom acquired during DWA delivery. Different noncoplanar trajectories were generated in order to investigate the influence of path complexity on delivery accuracy. The G/R positions detected from the fluoroscopy images (DetPositions) were benchmarked against the G/R angulations retrieved from the control points (CP) of the DWA RT plan and the DWA log files recorded by the treatment console during DWA delivery (LogActed). The G/R rotational accuracy was quantified as the mean absolute deviation ± standard deviation. The maximum G/R absolute deviation was calculated as the maximum 3-dimensional distance between the CP and the closest DetPositions. In the CP versus DetPositions comparison, an overall mean G/R deviation of 0.13°/0.16° ± 0.16°/0.16° was obtained, with a maximum G/R deviation of 0.6°/0.2°. For the LogActed versus DetPositions evaluation, the overall mean deviation was 0.08°/0.15° ± 0.10°/0.10° with a maximum G/R of 0.3°/0.4°. The largest decoupled deviations registered for gantry and ring were 0.6° and 0.4° respectively. No directional dependence was observed between clockwise and counterclockwise rotations. Doubling the dose resulted in a double number of detected points around each CP, and an angular deviation reduction in all cases. An independent geometric quality assurance approach was developed for DWA delivery verification and was successfully applied on diverse trajectories. Results showed that the Vero system is capable of following complex G/R trajectories with maximum deviations during DWA below 0.6°. Copyright © 2015 Elsevier Inc. All rights reserved.
Accuracy of a class of concurrent algorithms for transient finite element analysis
NASA Technical Reports Server (NTRS)
Ortiz, Michael; Sotelino, Elisa D.; Nour-Omid, Bahram
1988-01-01
The accuracy of a new class of concurrent procedures for transient finite element analysis is examined. A phase error analysis is carried out which shows that wave retardation leading to unacceptable loss of accuracy may occur if a Courant condition based on the dimensions of the subdomains is violated. Numerical tests suggest that this Courant condition is conservative for typical structural applications and may lead to a marked increase in accuracy as the number of subdomains is increased. Theoretical speed-up ratios are derived which suggest that the algorithms under consideration can be expected to exhibit a performance superior to that of globally implicit methods when implemented on parallel machines.
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
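The general idea, estimating the incident intensity under a cell by filling the cell-occupied pixels from the surrounding background, can be sketched with an off-the-shelf inpainting routine. The use of OpenCV's Telea inpainting, the 8-bit rescaling, and the function names below are assumptions for illustration and are not the authors' algorithm.

```python
import numpy as np
import cv2

def incident_intensity(frame, cell_mask, radius=5):
    """Estimate incident (background) light intensity under red blood cells by
    inpainting the cell-occupied pixels from the surrounding background.
    cell_mask: nonzero where cells occupy pixels."""
    frame8 = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    filled = cv2.inpaint(frame8, cell_mask.astype(np.uint8), radius, cv2.INPAINT_TELEA)
    lo, hi = float(frame.min()), float(frame.max())
    return lo + filled.astype(float) * (hi - lo) / 255.0   # back to the original scale

def optical_density(frame, incident):
    """OD = -log10(I / I0), the quantity spectroscopic saturation estimates build on."""
    return -np.log10(np.clip(frame, 1e-6, None) / np.clip(incident, 1e-6, None))

# Hypothetical usage: I0 = incident_intensity(frame, cell_mask); od = optical_density(frame, I0)
```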
ERIC Educational Resources Information Center
Brown, Gregory K.; Henriques, Gregg R.; Sosdjan, Daniella; Beck, Aaron T.
2004-01-01
The degree of intent to commit suicide and the severity of self-injury were examined in individuals (N = 180) who had recently attempted suicide. Although a minimal association was found between the degree of suicide intent and the degree of lethality of the attempt, the accuracy of expectations about the likelihood of dying was found to moderate…
Decision time and confidence predict choosers' identification performance in photographic showups
Sagana, Anna; Sporer, Siegfried L.; Wixted, John T.
2018-01-01
In vast contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence. PMID:29346394
Decision time and confidence predict choosers' identification performance in photographic showups.
Sauerland, Melanie; Sagana, Anna; Sporer, Siegfried L; Wixted, John T
2018-01-01
In vast contrast to the multitude of lineup studies that report on the link between decision time, confidence, and identification accuracy, only a few studies looked at these associations for showups, with results varying widely across studies. We therefore set out to test the individual and combined value of decision time and post-decision confidence for diagnosing the accuracy of positive showup decisions using confidence-accuracy characteristic curves and Bayesian analyses. Three-hundred-eighty-four participants viewed a stimulus event and were subsequently presented with two showups which could be target-present or target-absent. As expected, we found a negative decision time-accuracy and a positive post-decision confidence-accuracy correlation for showup selections. Confidence-accuracy characteristic curves demonstrated the expected additive effect of combining both postdictors. Likewise, Bayesian analyses, taking into account all possible target-presence base rate values showed that fast and confident identification decisions were more diagnostic than slow or less confident decisions, with the combination of both being most diagnostic for postdicting accurate and inaccurate decisions. The postdictive value of decision time and post-decision confidence was higher when the prior probability that the suspect is the perpetrator was high compared to when the prior probability that the suspect is the perpetrator was low. The frequent use of showups in practice emphasizes the importance of these findings for court proceedings. Overall, these findings support the idea that courts should have most trust in showup identifications that were made fast and confidently, and least in showup identifications that were made slowly and with low confidence.
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is ultimately expected to assist in real-time plasma control. Unlike the conventional network structure, in which a single network with the optimum number of processing elements calculates the outputs, one of the methods uses a multinetwork system connected in parallel to perform the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than with the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters with the latter type of modified network is found to be a further improvement over that of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
Adaptive Quadrature for Item Response Models. Research Report. ETS RR-06-29
ERIC Educational Resources Information Center
Haberman, Shelby J.
2006-01-01
Adaptive quadrature is applied to marginal maximum likelihood estimation for item response models with normal ability distributions. Even in one dimension, significant gains in speed and accuracy of computation may be achieved.
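To make the role of quadrature concrete, the sketch below evaluates the marginal likelihood of a response pattern under a 2PL model by plain (non-adaptive) Gauss-Hermite quadrature over a standard normal ability distribution; adaptive quadrature would additionally recenter and rescale the nodes around each respondent's posterior, which is not shown here. The item parameters are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def marginal_likelihood(responses, a, b, n_nodes=21):
    """Marginal probability of a dichotomous response pattern under a 2PL model with
    a standard normal ability distribution, via Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(n_nodes)      # probabilists' Hermite: weight exp(-x^2/2)
    weights = weights / np.sqrt(2 * np.pi)    # normalise to the N(0,1) density
    p = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))
    like = np.prod(np.where(responses, p, 1 - p), axis=1)
    return np.sum(weights * like)

a = np.array([1.0, 1.4, 0.7])    # hypothetical discriminations
b = np.array([-0.3, 0.2, 1.1])   # hypothetical difficulties
print(marginal_likelihood(np.array([1, 0, 1], dtype=bool), a, b))
```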
Wire-positioning algorithm for coreless Hall array sensors in current measurement
NASA Astrophysics Data System (ADS)
Chen, Wenli; Zhang, Huaiqing; Chen, Lin; Gu, Shanyun
2018-05-01
This paper presents a scheme of circular-arrayed, coreless Hall-effect current transformers. It can satisfy the demands for wide dynamic range and wide-bandwidth current measurement in the distribution system, as well as the demand for simultaneous AC and DC measurement. In order to improve the signal-to-noise ratio (SNR) of the sensor, a wire-positioning algorithm is proposed, which improves the measurement accuracy through post-processing of the measurement data. The simulation results demonstrate that the maximum errors are 70%, 6.1% and 0.95% for Ampère's circuital method, the approximate positioning algorithm and the precise positioning algorithm, respectively. It is obvious that the accuracy of the positioning algorithm is significantly improved compared with that of Ampère's circuital method. The maximum error of the positioning algorithm is smaller in the experiment.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Dithering Digital Ripple Correlation Control for Photovoltaic Maximum Power Point Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barth, C; Pilawa-Podgurski, RCN
This study demonstrates a new method for rapid and precise maximum power point tracking in photovoltaic (PV) applications using dithered PWM control. Constraints imposed by efficiency, cost, and component size limit the available PWM resolution of a power converter, and may in turn limit the MPP tracking efficiency of the PV system. In these scenarios, PWM dithering can be used to improve average PWM resolution. In this study, we present a control technique that uses ripple correlation control (RCC) on the dithering ripple, thereby achieving simultaneous fast tracking speed and high tracking accuracy. Moreover, the proposed method solves some of the practical challenges that have to date limited the effectiveness of RCC in solar PV applications. We present a theoretical derivation of the principles behind dithering digital ripple correlation control, as well as experimental results that show excellent tracking speed and accuracy with basic hardware requirements.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
Does body mass index misclassify physically active young men.
Grier, Tyson; Canham-Chervak, Michelle; Sharp, Marilyn; Jones, Bruce H
2015-01-01
The purpose of this analysis was to determine the accuracy of age- and gender-adjusted BMI as a measure of body fat (BF) in U.S. Army Soldiers. BMI was calculated from measured height and weight (kg/m²) and body composition was determined by dual energy X-ray absorptiometry (DEXA). Linear regression was used to determine a BF prediction equation and examine the correlation between %BF and BMI. The sensitivity and specificity of BMI compared to %BF as measured by DEXA were calculated. Soldiers (n = 110) were on average 23 years old, with a BMI of 26.4, and approximately 18% BF. The correlation between BMI and %BF (R = 0.86) was strong (p < 0.01). A sensitivity of 77% and specificity of 100% were calculated when using Army age-adjusted BMI thresholds. The overall accuracy in determining whether a Soldier met Army BMI standards and was within the maximum allowable BF, or exceeded BMI standards and was over the maximum allowable BF, was 83%. Using adjusted BMI thresholds in populations where physical fitness and training are requirements of the job provides better accuracy in identifying those who are overweight or obese due to high BF.
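The reported sensitivity, specificity, and overall accuracy follow from a simple 2x2 comparison of the BMI screening decision against the DEXA body-fat reference, as in the hedged sketch below; the thresholds and simulated data are illustrative and are not the Army's age- and gender-adjusted tables.

```python
import numpy as np

def screening_performance(bmi, body_fat, bmi_cut, bf_cut):
    """Sensitivity, specificity, and overall accuracy of a BMI threshold for flagging
    subjects who exceed the allowable body-fat percentage (reference standard)."""
    bmi, body_fat = np.asarray(bmi), np.asarray(body_fat)
    flagged = bmi > bmi_cut          # screening decision
    over_bf = body_fat > bf_cut      # reference standard (e.g. DEXA)
    tp = np.sum(flagged & over_bf)
    tn = np.sum(~flagged & ~over_bf)
    fp = np.sum(flagged & ~over_bf)
    fn = np.sum(~flagged & over_bf)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical data; thresholds are illustrative only.
rng = np.random.default_rng(3)
bmi = rng.normal(26.4, 3.0, 200)
bf = 1.5 * (bmi - 26.4) + rng.normal(18, 3, 200)
print(screening_performance(bmi, bf, bmi_cut=27.5, bf_cut=20.0))
```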
Fixed-head star tracker magnitude calibration on the solar maximum mission
NASA Technical Reports Server (NTRS)
Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.
1990-01-01
The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases, even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafata, K; Ren, L; Wu, Q
Purpose: To develop a data-mining methodology based on quantum clustering and machine learning to predict expected dosimetric endpoints for lung SBRT applications based on patient-specific anatomic features. Methods: Ninety-three patients who received lung SBRT at our clinic from 2011–2013 were retrospectively identified. Planning information was acquired for each patient, from which various features were extracted using in-house semi-automatic software. Anatomic features included tumor-to-OAR distances, tumor location, total-lung-volume, GTV and ITV. Dosimetric endpoints were adopted from RTOG-0195 recommendations, and consisted of various OAR-specific partial-volume doses and maximum point-doses. First, principal component analysis (PCA) and unsupervised quantum clustering were used to explore the feature space to identify potentially strong classifiers. Second, a multi-class logistic regression algorithm was developed and trained to predict dose-volume endpoints based on patient-specific anatomic features. Classes were defined by discretizing the dose-volume data, and the feature space was zero-mean normalized. Fitting parameters were determined by minimizing a regularized cost function, and optimization was performed via gradient descent. As a pilot study, the model was tested on two esophageal dosimetric planning endpoints (maximum point-dose, dose-to-5cc), and its generalizability was evaluated with leave-one-out cross-validation. Results: Quantum clustering demonstrated a strong separation of the feature space at 15 Gy across the first and second principal components of the data when the dosimetric endpoints were retrospectively identified. Maximum point dose prediction to the esophagus demonstrated a cross-validation accuracy of 87%, and the maximum dose to 5 cc demonstrated a respective value of 79%. The largest optimized weighting factor was placed on GTV-to-esophagus distance (a factor of 10 greater than the second largest weighting factor), indicating an intuitively strong correlation between this feature and both endpoints. Conclusion: This pilot study shows that it is feasible to predict dose-volume endpoints based on patient-specific anatomic features. The developed methodology can potentially help to identify patients at risk for higher OAR doses, thus improving the efficiency of treatment planning. R01-184173.
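A minimal analogue of the prediction step, multi-class logistic regression on zero-mean normalised anatomic features with leave-one-out cross-validation, can be written with scikit-learn as below. The feature set, dose thresholds, and simulated cohort are assumptions, and the in-house feature extraction and quantum clustering stages are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 93
# Hypothetical anatomic features: GTV-to-esophagus distance (cm), GTV (cc), total lung volume (l)
X = np.column_stack([rng.uniform(0, 10, n),
                     rng.lognormal(2, 0.7, n),
                     rng.normal(4.5, 0.8, n)])
# Discretised maximum esophageal point dose (three classes), driven mostly by distance
dose = 30 * np.exp(-0.4 * X[:, 0]) + rng.normal(0, 2, n)
y = np.digitize(dose, [5.0, 15.0])     # 0: low, 1: intermediate, 2: high

model = make_pipeline(StandardScaler(),                          # zero-mean normalisation
                      LogisticRegression(C=1.0, max_iter=1000))  # L2-regularised cost
acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()      # leave-one-out accuracy
print(f"LOO accuracy: {acc:.2f}")
```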
Spatial statistical network models for stream and river temperature in New England, USA
NASA Astrophysics Data System (ADS)
Detenbeck, Naomi E.; Morrison, Alisa C.; Abele, Ralph W.; Kopp, Darin A.
2016-08-01
Watershed managers are challenged by the need for predictive temperature models with sufficient accuracy and geographic breadth for practical use. We described thermal regimes of New England rivers and streams based on a reduced set of metrics for the May-September growing season (July or August median temperature, diurnal rate of change, and magnitude and timing of growing season maximum) chosen through principal component analysis of 78 candidate metrics. We then developed and assessed spatial statistical models for each of these metrics, incorporating spatial autocorrelation based on both distance along the flow network and Euclidean distance between points. Calculation of spatial autocorrelation based on travel or retention time in place of network distance yielded tighter-fitting Torgegrams with less scatter but did not improve overall model prediction accuracy. We predicted monthly median July or August stream temperatures as a function of median air temperature, estimated urban heat island effect, shaded solar radiation, main channel slope, watershed storage (percent lake and wetland area), percent coarse-grained surficial deposits, and presence or maximum depth of a lake immediately upstream, with an overall root-mean-square prediction error of 1.4 and 1.5°C, respectively. Growing season maximum water temperature varied as a function of air temperature, local channel slope, shaded August solar radiation, imperviousness, and watershed storage. Predictive models for July or August daily range, maximum daily rate of change, and timing of growing season maximum were statistically significant but explained a much lower proportion of variance than the above models (5-14% of total).
Estimation of genomic breeding values for residual feed intake in a multibreed cattle population.
Khansefid, M; Pryce, J E; Bolormaa, S; Miller, S P; Wang, Z; Li, C; Goddard, M E
2014-08-01
Residual feed intake (RFI) is a measure of the efficiency of animals in feed utilization. The accuracies of GEBV for RFI could be improved by increasing the size of the reference population. Combining RFI records of different breeds is a way to do that. The aims of this study were to 1) develop a method for calculating GEBV in a multibreed population and 2) improve the accuracies of GEBV by using SNP associated with RFI. An alternative method for calculating accuracies of GEBV using genomic BLUP (GBLUP) equations is also described and compared to cross-validation tests. The dataset included RFI records and 606,096 SNP genotypes for 5,614 Bos taurus animals including 842 Holstein heifers and 2,009 Australian and 2,763 Canadian beef cattle. A range of models were tested for combining genotype and phenotype information from different breeds and the best model included an overall effect of each SNP, an effect of each SNP specific to a breed, and a small residual polygenic effect defined by the pedigree. In this model, the Holsteins and some Angus cattle were combined into 1 "breed class" because they were the only cattle measured for RFI at an early age (6-9 mo of age) and were fed a similar diet. The average empirical accuracy (0.31), estimated by calculating the correlation between GEBV and actual phenotypes divided by the square root of estimated heritability in 5-fold cross-validation tests, was near to that expected using the GBLUP equations (0.34). The average empirical and expected accuracies were 0.30 and 0.31, respectively, when the GEBV were estimated for each breed separately. Therefore, the across-breed reference population increased the accuracy of GEBV slightly, although the gain was greater for breeds with smaller number of individuals in the reference population (0.08 in Murray Grey and 0.11 in Hereford for empirical accuracy). In a second approach, SNP that were significantly (P < 0.001) associated with RFI in the beef cattle genomewide association studies were used to create an auxiliary genomic relationship matrix for estimating GEBV in Holstein heifers. The empirical (and expected) accuracy of GEBV within Holsteins increased from 0.33 (0.35) to 0.39 (0.36) and improved even more to 0.43 (0.50) when using a multibreed reference population. Therefore, a multibreed reference population is a useful resource to find SNP with a greater than average association with RFI in 1 breed and use them to estimate GEBV in another breed.
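The GEBV themselves come from genomic BLUP, which can be sketched compactly: build a VanRaden genomic relationship matrix from SNP genotypes and solve the mixed-model equations. The simulated genotypes, variance components, and the within-training-set correlation used as a rough accuracy proxy below are all illustrative assumptions, not the multibreed model of the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n_animals, n_snp = 200, 1000
M = rng.binomial(2, 0.3, size=(n_animals, n_snp)).astype(float)   # SNP genotypes (0/1/2)

# VanRaden genomic relationship matrix
p = M.mean(axis=0) / 2.0
W = M - 2.0 * p
G = W @ W.T / (2.0 * np.sum(p * (1.0 - p)))
G += np.eye(n_animals) * 1e-3            # small ridge so G is invertible

# Simulate an RFI-like trait with heritability ~0.3
u_true = rng.multivariate_normal(np.zeros(n_animals), 0.3 * G)
y = 10.0 + u_true + rng.normal(0.0, np.sqrt(0.7), n_animals)

# GBLUP mixed-model equations; with one record per animal the incidence matrix is the identity:
# [X'X  X'; X  I + G^{-1} lambda] [b; u] = [X'y; y]
X = np.ones((n_animals, 1))              # overall mean only
lam = 0.7 / 0.3                          # residual variance / genetic variance
lhs = np.block([[X.T @ X, X.T],
                [X,       np.eye(n_animals) + np.linalg.inv(G) * lam]])
rhs = np.concatenate([X.T @ y, y])
sol = np.linalg.solve(lhs, rhs)
gebv = sol[1:]
print(np.corrcoef(gebv, u_true)[0, 1])   # correlation with simulated true values (training set)
```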
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
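As a simplified, spatially invariant stand-in for the EM-based correction described above, the sketch below blurs a synthetic heterogeneous "tumor" slice with an assumed Gaussian PSF and restores it with Richardson-Lucy deconvolution (itself an EM algorithm). The PSF width, phantom geometry, and iteration count are illustrative, and the spatially varying kernels and data-driven stopping criterion of SV-PVC are not implemented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic "tumor" slice: a uniform disk with a cold spot, blurred by a Gaussian PSF
truth = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
truth[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0
truth[(yy - 28) ** 2 + (xx - 38) ** 2 < 4 ** 2] = 0.3     # heterogeneity (cold spot)
psf_sigma = 2.5                                           # pixels; assumed, spatially invariant
observed = gaussian_filter(truth, psf_sigma)

# Build an explicit PSF kernel and run EM (Richardson-Lucy) restoration
k = np.zeros((17, 17))
k[8, 8] = 1.0
psf = gaussian_filter(k, psf_sigma)
restored = richardson_lucy(observed, psf, 30, clip=False)

# Blurring lowers the observed peak activity; the restored peak is closer to the true value
print(truth.max(), observed.max(), restored.max())
```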
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
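A quick numerical check (illustrative only, not from the paper) of the approximation discussed above, which equates the expected maximum of independent random variables with the maximum of their expectations:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=(100_000, 2))   # two i.i.d. module execution times
expected_max = x.max(axis=1).mean()                 # E[max(X1, X2)]
max_expectation = x.mean(axis=0).max()              # max(E[X1], E[X2])
print(expected_max, max_expectation)                # E[max] exceeds the max of expectations
```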
Performance Improvement of Raman Distributed Temperature System by Using Noise Suppression
NASA Astrophysics Data System (ADS)
Li, Jian; Li, Yunting; Zhang, Mingjiang; Liu, Yi; Zhang, Jianzhong; Yan, Baoqiang; Wang, Dong; Jin, Baoquan
2018-06-01
In a Raman distributed temperature system, the key factor for performance improvement is noise suppression, which seriously affects the sensing distance and temperature accuracy. Therefore, we propose and experimentally demonstrate a dynamic noise difference algorithm and wavelet transform modulus maximum (WTMM) processing to de-noise the Raman anti-Stokes signal. Experimental results show that the sensing distance can increase from 3 km to 11.5 km and that the temperature accuracy improves to 1.58 °C at a sensing distance of 10.4 km.
Zhang, Hang; Wu, Shih-Wei; Maloney, Laurence T.
2010-01-01
S.-W. Wu, M. F. Dal Martello, and L. T. Maloney (2009) evaluated subjects' performance in a visuo-motor task where subjects were asked to hit two targets in sequence within a fixed time limit. Hitting targets earned rewards and Wu et al. varied rewards associated with targets. They found that subjects failed to maximize expected gain; they failed to invest more time in the movement to the more valuable target. What could explain this lack of response to reward? We first considered the possibility that subjects require training in allocating time between two movements. In Experiment 1, we found that, after extensive training, subjects still failed: They did not vary time allocation with changes in payoff. However, their actual gains equaled or exceeded the expected gain of an ideal time allocator, indicating that constraining time itself has a cost for motor accuracy. In a second experiment, we found that movements made under externally imposed time limits were less accurate than movements made with the same timing freely selected by the mover. Constrained time allocation cost about 17% in expected gain. These results suggest that there is no single speed–accuracy tradeoff for movement in our task and that subjects pursued different motor strategies with distinct speed–accuracy tradeoffs in different conditions. PMID:20884550
Hubble Space Telescope: Cycle 1 calibration plan. Version 1.0
NASA Technical Reports Server (NTRS)
Stanley, Peggy (Editor); Blades, Chris (Editor)
1990-01-01
The framework for quantitative scientific analysis with the Hubble Space Telescope (HST) will be established from a detailed calibration program, and a major responsibility of staff in the Telescope & Instruments Branch (TIB) is the development and maintenance of this calibration for the instruments and the telescope. The first in-orbit calibration will be performed by the SI Investigation Definition Teams (IDTs) during the Science Verification (SV) period in Cycle 0 (expected to start 3 months after launch and last for 5 months). Subsequently, instrument scientists in the TIB become responsible for all aspects of the calibration program. Because of the long lead times involved, TIB scientists have already formulated a calibration plan for the next observing period, Cycle 1 (expected to last a year after the end of SV), which has been reviewed and approved by the STScI Director. The purpose here is to describe the contents of this plan. Our primary aim has been to maintain through Cycle 1 the level of calibration that is anticipated by the end of SV. Anticipated accuracies are given here in tabular form - of course, these accuracies can only be best guesses because we do not know how each instrument will actually perform on-orbit. The calibration accuracies are expected to satisfy the normal needs of both the General Observers (GOs) and the Guaranteed Time Observers (GTOs).
Sweany, M.; Marleau, P.
2016-07-08
In this paper, we present the design and expected performance of a proof-of-concept 32 channel material identification system. Our system is based on the energy-dependent attenuation of fast neutrons for four elements: hydrogen, carbon, nitrogen and oxygen. We describe a new approach to obtaining a broad range of neutron energies to probe a sample, as well as our technique for reconstructing the molar densities within a sample. The system's performance as a function of time-of-flight energy resolution is explored using a Geant4-based Monte Carlo. Our results indicate that, with the expected detector response of our system, we will be able to determine the molar density of all four elements to within a 20–30% accuracy in a two hour scan time. In many cases this error is systematically low, thus the ratio between elements is more accurate. This degree of accuracy is enough to distinguish, for example, a sample of water from a sample of pure hydrogen peroxide: the ratio of oxygen to hydrogen is reconstructed to within 8±0.5% of the true value. Lastly, with future algorithm development that accounts for backgrounds caused by scattering within the sample itself, the accuracy of molar densities, not ratios, may improve to the 5–10% level for a two hour scan time.
Optimal combination of illusory and luminance-defined 3-D surfaces: A role for ambiguity.
Hartle, Brittney; Wilcox, Laurie M; Murray, Richard F
2018-04-01
The shape of the illusory surface in stereoscopic Kanizsa figures is determined by the interpolation of depth from the luminance edges of adjacent inducing elements. Despite ambiguity in the position of illusory boundaries, observers reliably perceive a coherent three-dimensional (3-D) surface. However, this ambiguity may contribute additional uncertainty to the depth percept beyond what is expected from measurement noise alone. We evaluated the intrinsic ambiguity of illusory boundaries by using a cue-combination paradigm to measure the reliability of depth percepts elicited by stereoscopic illusory surfaces. We assessed the accuracy and precision of depth percepts using 3-D Kanizsa figures relative to luminance-defined surfaces. The location of the surface peak was defined by illusory boundaries, luminance-defined edges, or both. Accuracy and precision were assessed using a depth-discrimination paradigm. A maximum likelihood linear cue combination model was used to evaluate the relative contribution of illusory and luminance-defined signals to the perceived depth of the combined surface. Our analysis showed that the standard deviation of depth estimates was consistent with an optimal cue combination model, but the points of subjective equality indicated that observers consistently underweighted the contribution of illusory boundaries. This systematic underweighting may reflect a combination rule that attributes additional intrinsic ambiguity to the location of the illusory boundary. Although previous studies show that illusory and luminance-defined contours share many perceptual similarities, our model suggests that ambiguity plays a larger role in the perceptual representation of illusory contours than of luminance-defined contours.
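A minimal sketch of the maximum-likelihood linear cue-combination prediction used in this kind of analysis, assuming single-cue depth estimates and their variances are available; the variable names are illustrative:

```python
def ml_cue_combination(d_illusory, var_illusory, d_luminance, var_luminance):
    """Reliability-weighted (maximum-likelihood) combination of two depth cues."""
    w_i = var_luminance / (var_illusory + var_luminance)   # weight on the illusory cue
    w_l = 1.0 - w_i
    d_combined = w_i * d_illusory + w_l * d_luminance
    var_combined = (var_illusory * var_luminance) / (var_illusory + var_luminance)
    return d_combined, var_combined

print(ml_cue_combination(10.0, 4.0, 12.0, 1.0))   # the noisier illusory cue gets less weight
```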
Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro
2016-08-15
Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape, with all of these measurements obtained on CT images. A brute force procedure was first performed for a training dataset of 20 patients using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subject anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, A; Farrell, T; Diamond, K
2014-08-15
Automated atlas-based segmentation has recently been evaluated for use in planning prostate cancer radiotherapy. In the typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on the final segmentation accuracy. Prostate length (PL), right femoral head diameter (RFHD), and left femoral head diameter (LFHD) were measured in CT images of 20 patients. Each subject was then taken as the target image to which all remaining 19 images were affinely registered. For each pair of registered images, the overlap between prostate and femoral head contours was quantified using the Dice Similarity Coefficient (DSC). Finally, we designed an atlas selection strategy that computed the ratio of PL (prostate segmentation), RFHD (right femur segmentation), and LFHD (left femur segmentation) between the target subject and each subject in the atlas database. Five atlas subjects yielding ratios nearest to one were then selected for further analysis. RFHD and LFHD were excellent parameters for atlas selection, achieving a mean femoral head DSC of 0.82 ± 0.06. PL had a moderate ability to select the most similar prostate, with a mean DSC of 0.63 ± 0.18. The DSCs obtained with the proposed selection method were slightly lower than the maximums established using brute force, but this does not include potential improvements expected with deformable registration. Atlas selection based on PL for the prostate and on femoral diameter for the femoral heads provides reasonable segmentation accuracy.
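For reference, the Dice Similarity Coefficient used in both atlas-selection studies above can be computed from binary masks as in this small sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

print(dice([1, 1, 0, 1], [1, 0, 0, 1]))   # 0.8
```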
Tagaste, Barbara; Riboldi, Marco; Spadea, Maria F; Bellante, Simone; Baroni, Guido; Cambria, Raffaella; Garibaldi, Cristina; Ciocca, Mario; Catalano, Gianpiero; Alterio, Daniela; Orecchia, Roberto
2012-04-01
To compare infrared (IR) optical vs. stereoscopic X-ray technologies for patient setup in image-guided stereotactic radiotherapy. Retrospective data analysis of 233 fractions in 127 patients treated with hypofractionated stereotactic radiotherapy was performed. Patient setup at the linear accelerator was carried out by means of combined IR optical localization and stereoscopic X-ray image fusion in 6 degrees of freedom (6D). Data were analyzed to evaluate the geometric and dosimetric discrepancy between the two patient setup strategies. Differences between IR optical localization and 6D X-ray image fusion parameters were on average within the expected localization accuracy, as limited by CT image resolution (3 mm). A disagreement between the two systems below 1 mm in all directions was measured in patients treated for cranial tumors. In extracranial sites, larger discrepancies and higher variability were observed as a function of the initial patient alignment. The compensation of IR-detected rotational errors resulted in a significantly improved agreement with 6D X-ray image fusion. On the basis of the bony anatomy registrations, the measured differences were found not to be sensitive to patient breathing. The related dosimetric analysis showed that IR-based patient setup caused limited variations in three cases, with 7% maximum dose reduction in the clinical target volume and no dose increase in organs at risk. In conclusion, patient setup driven by IR external surrogates localization in 6D featured comparable accuracy with respect to procedures based on stereoscopic X-ray imaging. Copyright © 2012 Elsevier Inc. All rights reserved.
Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-08
Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media often shows larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons in square fields with sides ranging from 1 to 4 cm. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS-calculated curve for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measurement, is calculated. For air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually reduces as the field size increases, except for AAA. For aluminum and bone, the deviations of all algorithms are smaller for 15 MV, irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from Dmax. All algorithms also show maximum deviation in lower-density materials compared to high-density materials.
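A sketch of the percentage normalized root-mean-squared deviation used above to score each depth-dose curve; normalization by the maximum measured dose is an assumption here, since the abstract does not spell out the normalization:

```python
import numpy as np

def percent_nrmsd(calculated, measured):
    """%NRMSD of a calculated depth-dose curve against measurement, normalized by the measured maximum."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()
```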
Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity
Manly, Bryan F.J.; Schmutz, Joel A.
2001-01-01
The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than with our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
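The classical Mayfield estimator referred to above reduces to a simple ratio; a minimal sketch with illustrative numbers follows (the Iterative Mayfield and maximum likelihood extensions in the paper build on this basic form):

```python
def mayfield_dsr(deaths, exposure_days):
    """Mayfield daily survival rate: 1 - deaths per exposure-day."""
    return 1.0 - deaths / exposure_days

# Illustrative numbers only: 12 failures over 400 exposure-days, and survival over a
# 24-day interval assuming constant daily survival.
dsr = mayfield_dsr(12, 400.0)
print(dsr, dsr ** 24)
```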
Genotype imputation in a coalescent model with infinitely-many-sites mutation
Huang, Lucy; Buzbas, Erkan O.; Rosenberg, Noah A.
2012-01-01
Empirical studies have identified population-genetic factors as important determinants of the properties of genotype-imputation accuracy in imputation-based disease association studies. Here, we develop a simple coalescent model of three sequences that we use to explore the theoretical basis for the influence of these factors on genotype-imputation accuracy, under the assumption of infinitely-many-sites mutation. Employing a demographic model in which two populations diverged at a given time in the past, we derive the approximate expectation and variance of imputation accuracy in a study sequence sampled from one of the two populations, choosing between two reference sequences, one sampled from the same population as the study sequence and the other sampled from the other population. We show that under this model, imputation accuracy—as measured by the proportion of polymorphic sites that are imputed correctly in the study sequence—increases in expectation with the mutation rate, the proportion of the markers in a chromosomal region that are genotyped, and the time to divergence between the study and reference populations. Each of these effects derives largely from an increase in information available for determining the reference sequence that is genetically most similar to the sequence targeted for imputation. We analyze as a function of divergence time the expected gain in imputation accuracy in the target using a reference sequence from the same population as the target rather than from the other population. Together with a growing body of empirical investigations of genotype imputation in diverse human populations, our modeling framework lays a foundation for extending imputation techniques to novel populations that have not yet been extensively examined. PMID:23079542
Calorimetry of low mass Pu239 items
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cremers, Teresa L; Sampson, Thomas E
2010-01-01
Calorimetric assay has the reputation of providing the highest precision and accuracy of all nondestructive assay measurements. Unfortunately, non-destructive assay practitioners and measurement consumers often extend, inappropriately, the high precision and accuracy of calorimetric assay to very low mass items. One purpose of this document is to present more realistic expectations for the random uncertainties associated with calorimetric assay for weapons grade plutonium items with masses of 200 grams or less.
Sharp, L; Black, R J; Harkness, E F; McKinney, P A
1996-01-01
OBJECTIVES: The primary aims were to investigate the incidence of leukaemia and non-Hodgkin's lymphoma in children resident near seven nuclear sites in Scotland and to determine whether there was any evidence of a gradient in risk with distance of residence from a nuclear site. A secondary aim was to assess the power of statistical tests for increased risk of disease near a point source when applied in the context of census data for Scotland. METHODS: The study data set comprised 1287 cases of leukaemia and non-Hodgkin's lymphoma diagnosed in children aged under 15 years in the period 1968-93, validated for accuracy and completeness. A study zone around each nuclear site was constructed from enumeration districts within 25 km. Expected numbers were calculated, adjusting for sex, age, and indices of deprivation and urban-rural residence. Six statistical tests were evaluated. Stone's maximum likelihood ratio (unconditional application) was applied as the main test for general increased incidence across a study zone. The linear risk score based on enumeration districts (conditional application) was used as a secondary test for declining risk with distance from each site. RESULTS: More cases were observed (O) than expected (E) in the study zones around Rosyth naval base (O/E 1.02), Chapelcross electricity generating station (O/E 1.08), and Dounreay reprocessing plant (O/E 1.99). The maximum likelihood ratio test reached significance only for Dounreay (P = 0.030). The linear risk score test did not indicate a trend in risk with distance from any of the seven sites, including Dounreay. CONCLUSIONS: There was no evidence of a generally increased risk of childhood leukaemia and non-Hodgkin's lymphoma around nuclear sites in Scotland, nor any evidence of a trend of decreasing risk with distance from any of the sites. There was a significant excess risk in the zone around Dounreay, which was only partially accounted for by the sociodemographic characteristics of the area. The statistical power of tests for localised increased risk of disease around a point source should be assessed in each new setting in which they are applied. PMID:8994402
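A sketch of the kind of observed-versus-expected comparison reported above, using a simple one-sided Poisson test for an excess of cases in a study zone; this is a generic illustration with made-up counts, not the study's Stone or linear risk score tests:

```python
from scipy.stats import poisson

def oe_ratio_pvalue(observed, expected):
    """O/E ratio and one-sided Poisson probability of observing >= O cases given E expected."""
    p = poisson.sf(observed - 1, expected)   # P(X >= observed) under Poisson(expected)
    return observed / expected, p

print(oe_ratio_pvalue(10, 5.0))
```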
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
The use of Landsat data to inventory cotton and soybean acreage in North Alabama
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Faust, N. L.
1980-01-01
This study was performed to determine if Landsat data could be used to improve the accuracy of the estimation of cotton acreage. A linear classification algorithm and a maximum likelihood algorithm were used for computer classification of the area, and the classification was compared with ground truth. The classification accuracy for some fields was greater than 90 percent; however, the overall accuracy was 71 percent for cotton and 56 percent for soybeans. The results of this research indicate that computer analysis of Landsat data has potential for improving upon the methods presently being used to determine cotton acreage; however, additional experiments and refinements are needed before the method can be used operationally.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve
1992-01-01
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated series no longer converges uniformly; this loss of accuracy near the boundary is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function that an exponentially convergent approximation (in the maximum norm) can be constructed.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
NASA Astrophysics Data System (ADS)
Pryor, Sara C.; Sullivan, Ryan C.; Schoof, Justin T.
2017-12-01
The static energy content of the atmosphere is increasing on a global scale, but exhibits important subglobal and subregional scales of variability and is a useful parameter for integrating the net effect of changes in the partitioning of energy at the surface and for improving understanding of the causes of so-called warming holes (i.e., locations with decreasing daily maximum air temperatures (T) or increasing trends of lower magnitude than the global mean). Further, measures of the static energy content (herein the equivalent potential temperature, θe) are more strongly linked to excess human mortality and morbidity than air temperature alone, and have great relevance in understanding causes of past heat-related excess mortality and making projections of possible future events that are likely to be associated with negative human health and economic consequences. New nonlinear statistical models for summertime daily maximum and minimum θe are developed and used to advance understanding of drivers of historical change and variability over the eastern USA. The predictor variables are an index of the daily global mean temperature, daily indices of the synoptic-scale meteorology derived from T and specific humidity (Q) at 850 and 500 hPa geopotential heights (Z), and spatiotemporally averaged soil moisture (SM). SM is particularly important in determining the magnitude of θe over regions that have previously been identified as exhibiting warming holes, confirming the key importance of SM in dictating the partitioning of net radiation into sensible and latent heat and dictating trends in near-surface T and θe. Consistent with our a priori expectations, models built using artificial neural networks (ANNs) out-perform linear models that do not permit interaction of the predictor variables (global T, synoptic-scale meteorological conditions and SM). This is particularly marked in regions with high variability in minimum and maximum θe, where more complex models built using ANN with multiple hidden layers are better able to capture the day-to-day variability in θe and the occurrence of extreme maximum θe. Over the entire domain, the ANN with three hidden layers exhibits high accuracy in predicting maximum θe > 347 K. The median hit rate for maximum θe > 347 K is > 0.60, while the median false alarm rate is ≈ 0.08.
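A small sketch of the hit-rate and false-alarm-rate skill measures quoted above for the extreme-θe predictions, assuming boolean arrays of predicted and observed exceedances of the 347 K threshold and one common definition of the false alarm rate (false alarms over all observed non-events):

```python
import numpy as np

def hit_and_false_alarm_rates(predicted, observed):
    """Hit rate = hits / (hits + misses); false alarm rate = false alarms / (false alarms + correct negatives)."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    correct_negatives = np.sum(~predicted & ~observed)
    return hits / (hits + misses), false_alarms / (false_alarms + correct_negatives)
```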
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
Optimal full motion video registration with rigorous error propagation
NASA Astrophysics Data System (ADS)
Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn
2014-06-01
Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
The management service company's expectation of the customer.
Kuykendall, R D
1992-02-01
The one constant factor in health care today is change. Choose your management support company carefully, expect high quality results, and communicate both positive and negative feedback immediately. This formula will give you excellent results as well as a long-term productive partnership in which cost, risk, and the end objectives can be balanced for maximum benefit.
Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...
2016-12-12
It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
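A minimal sketch of a bin-by-bin Poisson (maximum likelihood) fit of the sort compared above, shown for a single-exponential decay; the decay model, parameter values, and names are illustrative, not the paper's two-fluorophore analysis:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, counts):
    """Poisson negative log-likelihood for a decay model A*exp(-t/tau) + bg (constant terms dropped)."""
    amplitude, tau, background = params
    model = np.maximum(amplitude * np.exp(-t / tau) + background, 1e-12)
    return np.sum(model - counts * np.log(model))

t = np.linspace(0, 10, 256)                        # illustrative time bins (ns)
rng = np.random.default_rng(2)
counts = rng.poisson(200 * np.exp(-t / 2.5) + 5)   # simulated decay trace
fit = minimize(neg_log_likelihood, x0=[150.0, 2.0, 1.0], args=(t, counts),
               method="Nelder-Mead")
print(fit.x)   # recovered amplitude, lifetime, background
```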
Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente
2015-01-01
A mathematical model describing the shear rate and pressure variation in a complex flow field created in a hydrodynamic cavitation reactor (stator and rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated on the basis of high-accuracy compact difference schemes and the Navier-Stokes method. The evolution of streamlining structures during rotation, the pressure field, and the shear rate of a Newtonian fluid flow have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure area are quite different depending on the magnitude of the rotation velocity of the rotor. It was observed that the area of the high shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity compared to that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm. A maximum of 50 ppm of iodine liberation was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity occurs when the rotation speed is around 2200-2500 RPM. Copyright © 2014 Elsevier B.V. All rights reserved.
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading experts performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
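The key modification described above, regularizing a performance parameter with a beta prior, amounts to a MAP update of the form sketched below. This is a simplified, stand-alone illustration of one such update, not the full STAPLE EM algorithm, and the argument names are illustrative:

```python
def map_sensitivity(true_positive_weight, positive_weight, alpha, beta):
    """MAP estimate of a rater's sensitivity with a Beta(alpha, beta) prior.
    true_positive_weight / positive_weight would come from the EM E-step."""
    return (true_positive_weight + alpha - 1.0) / (positive_weight + alpha + beta - 2.0)

# With a weak prior centered near 0.9, a rater who delineated only part of the
# structures is pulled toward the prior rather than toward an extreme estimate.
print(map_sensitivity(45.0, 50.0, alpha=9.0, beta=2.0))
```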
Utilization of the Deep Space Atomic Clock for Europa Gravitational Tide Recovery
NASA Technical Reports Server (NTRS)
Seubert, Jill; Ely, Todd
2015-01-01
Estimation of Europa's gravitational tide can provide strong evidence of the existence of a subsurface liquid ocean. Due to limited close approach tracking data, a Europa flyby mission suffers strong coupling between the gravity solution quality and tracking data quantity and quality. This work explores utilizing Low Gain Antennas (LGAs) with the Deep Space Atomic Clock (DSAC) to provide abundant high accuracy uplink-only radiometric tracking data. DSAC's performance, expected to exhibit an Allan Deviation of less than 3e-15 at one day, provides long-term stability and accuracy on par with the Deep Space Network ground clocks, enabling one-way radiometric tracking data with accuracy equivalent to that of its two-way counterpart. The feasibility of uplink-only Doppler tracking via the coupling of LGAs and DSAC and the expected Doppler data quality are presented. Violations of the Kalman filter's linearization assumptions when state perturbations are included in the flyby analysis result in poor determination of the Europa gravitational tide parameters. B-plane targeting constraints are statistically determined, and a solution to the linearization issues via pre-flyby approach orbit determination is proposed and demonstrated.
Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.
Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J
2009-11-01
Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
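A minimal sketch of the regression-based normative approach described above: fit expected scores from demographic predictors plus estimated IQ, then express an individual's observed score as a standardized residual. The equation, coefficients, and values here are hypothetical, not the published norms:

```python
import numpy as np

def rbn_z_score(observed, predictors, coefficients, intercept, residual_sd):
    """Regression-based norm: z = (observed - expected) / SD of the regression residuals."""
    expected = intercept + np.dot(coefficients, predictors)
    return (observed - expected) / residual_sd

# Hypothetical equation with predictors = [age, years_of_education, estimated_IQ]
z = rbn_z_score(observed=42.0,
                predictors=np.array([70.0, 12.0, 105.0]),
                coefficients=np.array([-0.20, 0.50, 0.15]),
                intercept=30.0, residual_sd=5.0)
print(z)
```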
Determination of sex from various hand dimensions of Koreans.
Jee, Soo-Chan; Bahn, Sangwoo; Yun, Myung Hwan
2015-12-01
In the case of disasters or crime scenes, forensic anthropometric methods have been utilized as a reliable way to quickly confirm the identification of victims using only a few parts of the body. A total of 321 measurement data (from 167 males and 154 females) were analyzed to investigate the suitability of detailed hand dimensions as discriminators of sex. A total of 29 variables including length, breadth, thickness, and circumference of fingers, palm, and wrist were measured. The obtained data were analyzed using descriptive statistics and t-test. The accuracy of sex indication from the hand dimensions data was found using discriminant analysis. The age effect and interaction effect according to age and sex on hand dimensions were analyzed by ANOVA. The prediction accuracy on a wide age range was also compared. According to the results, the maximum hand circumference showed the highest accuracy of 88.6% for predicting sex for males and 89.6% for females. Although the breadth, circumference, and thickness of hand parts generally showed higher accuracy than the lengths of hand parts in predicting the sex of the participant, the breadth and circumference of some finger joints showed a significant difference according to age and gender. Thus, the dimensions of hand parts which are not affected by age or gender, such as hand length, palm length, hand breadth, and maximum hand thickness, are recommended to be used first in sex determination for a wide age range group. The results suggest that the detailed hand dimensions can also be used to identify sex for better accuracy; however, the aging effects need to be considered in estimating aged suspects. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Measuring water level in rivers and lakes from lightweight Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Jakobsen, Jakob; Olesen, Daniel; Reyna-Gutierrez, Jose Antonio; Bauer-Gottwein, Peter
2017-05-01
The assessment of hydrologic dynamics in rivers, lakes, reservoirs and wetlands requires measurements of water level, its temporal and spatial derivatives, and the extent and dynamics of open water surfaces. Motivated by the declining number of ground-based measurement stations, research efforts have been devoted to the retrieval of these hydraulic properties from spaceborne platforms in the past few decades. However, due to coarse spatial and temporal resolutions, spaceborne missions have several limitations when assessing the water level of terrestrial surface water bodies and determining complex water dynamics. Unmanned Aerial Vehicles (UAVs) can fill the gap between spaceborne and ground-based observations, and provide high spatial resolution and dense temporal coverage data, in quick turn-around time, using flexible payload design. This study focused on categorizing and testing sensors, which comply with the weight constraint of small UAVs (around 1.5 kg), capable of measuring the range to water surface. Subtracting the measured range from the vertical position retrieved by the onboard Global Navigation Satellite System (GNSS) receiver, we can determine the water level (orthometric height). Three different ranging payloads, which consisted of a radar, a sonar and an in-house developed camera-based laser distance sensor (CLDS), have been evaluated in terms of accuracy, precision, maximum ranging distance and beam divergence. After numerous flights, the relative accuracy of the overall system was estimated. A ranging accuracy better than 0.5% of the range and a maximum ranging distance of 60 m were achieved with the radar. The CLDS showed the lowest beam divergence, which is required to avoid contamination of the signal from interfering surroundings for narrow fields of view. With the GNSS system delivering a relative vertical accuracy better than 3-5 cm, water level can be retrieved with an overall accuracy better than 5-7 cm.
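The water-level retrieval described above reduces to a simple difference; a sketch under the assumption that the GNSS solution is converted to an orthometric height using a geoid undulation (the numbers are illustrative only):

```python
def water_level(gnss_ellipsoidal_height, geoid_undulation, range_to_water):
    """Orthometric water level = (GNSS height - geoid undulation) - measured range to the water surface."""
    return (gnss_ellipsoidal_height - geoid_undulation) - range_to_water

# Illustrative numbers: UAV at 85.3 m ellipsoidal height, 42.1 m geoid undulation,
# radar range of 31.8 m to the water surface.
print(water_level(85.3, 42.1, 31.8))   # 11.4 m water level
```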
Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo
2017-01-01
Purpose: To investigate dynamic contrast enhanced-MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment for locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods: 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed measuring SIS: a linear combination of the percentage change (Δ) of the maximum signal difference (MSD) and the wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum SUV (SUVmax). Tumor regression grades (TRG) were estimated after surgery. Non-parametric tests and receiver operating characteristic analyses were evaluated. Results: 55 patients (TRG 1-2) were classified as responders while 20 subjects were classified as non-responders. ΔSIS reached a sensitivity of 93%, specificity of 80%, and accuracy of 89% (cut-off 6%) to differentiate responders from non-responders, and a sensitivity of 93%, specificity of 69%, and accuracy of 79% (cut-off 30%) to identify pathological complete response (pCR). Therapy assessment via ΔSUVmax reached a sensitivity of 67%, specificity of 75%, and accuracy of 70% (cut-off 60%) to differentiate responders from non-responders and a sensitivity of 80%, specificity of 31%, and accuracy of 51% (cut-off 44%) to identify pCR. Conclusions: CRT response assessment by DCE-MRI analysis shows a higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR. PMID:28042958
Cavalli, Rosa Maria; Betti, Mattia; Campanelli, Alessandra; Di Cicco, Annalisa; Guglietta, Daniela; Penna, Pierluigi; Piermattei, Viviana
2014-01-01
This methodology assesses the accuracy with which remote data characterizes a surface, as a function of Full Width at Half Maximum (FWHM). The purpose is to identify the best remote data that improves the characterization of a surface, evaluating the number of bands in the spectral range. The first step creates an accurate dataset of remote simulated data, using in situ hyperspectral reflectances. The second step evaluates the capability of remote simulated data to characterize this surface. The spectral similarity measurements, which are obtained using classifiers, provide this capability. The third step examines the precision of this capability. The assumption is that in situ hyperspectral reflectances are considered the “real” reflectances. They are resized with the same spectral range of the remote data. The spectral similarity measurements which are obtained from “real” resized reflectances, are considered “real” measurements. Therefore, the quantity and magnitude of “errors” (i.e., differences between spectral similarity measurements obtained from “real” resized reflectances and from remote data) provide the accuracy as a function of FWHM. This methodology was applied to evaluate the accuracy with which CHRIS-mode1, CHRIS-mode2, Landsat5-TM, MIVIS and PRISMA data characterize three coastal waters. Their mean values of uncertainty are 1.59%, 3.79%, 7.75%, 3.15% and 1.18%, respectively. PMID:24434875
Predictions of Sunspot Cycle 24: A Comparison with Observations
NASA Astrophysics Data System (ADS)
Bhatt, N. J.; Jain, R.
2017-12-01
The space weather is largely affected by explosions on the Sun, viz. solar flares and CMEs, which in turn depend upon the magnitude of solar activity, i.e., the number of sunspots and their magnetic configuration. Owing to these space weather effects, predictions of the sunspot cycle are important. Precursor techniques, particularly those employing geomagnetic indices, are often used in the prediction of the maximum amplitude of a sunspot cycle. Based on the average geomagnetic activity index aa (from 1868 onwards) for the year of the sunspot minimum and the preceding four years, Bhatt et al. (2009) made two predictions for sunspot cycle 24, considering 2008 as the year of sunspot minimum: (i) the annual maximum amplitude would be 92.8±19.6 (1-sigma accuracy), indicating a somewhat weaker cycle 24 as compared to cycles 21-23, and (ii) the smoothed monthly mean sunspot number maximum would be in October 2012±4 months (1-sigma accuracy). However, observations reveal that the sunspot minimum extended into 2009, and the maximum amplitude attained was 79, with a monthly mean sunspot number maximum of 102.3 in February 2014. In view of these observations, and particularly owing to the extended solar minimum in 2009, we re-examined our prediction model and revised the prediction results. We find that (i) the annual maximum amplitude of cycle 24 = 71.2 ± 19.6 and (ii) the smoothed monthly mean sunspot number maximum occurred in January 2014±4 months. We discuss the failures and successes of our approach and present improved predictions for the maximum amplitude as well as for the timing, which are now in good agreement with the observations. We also present the limitations of our forecasting with regard to long-term predictions. We show that if the year of sunspot minimum activity and the magnitude of geomagnetic activity during the sunspot minimum are determined correctly, our prediction method appears to be a reliable indicator for forecasting the sunspot amplitude of the following solar cycle. Reference: Bhatt, N.J., Jain, R., & Aggarwal, M.: 2009, Sol. Phys. 260, 225.
Interplanetary Trajectories, Encke Method (ITEM)
NASA Technical Reports Server (NTRS)
Whitlock, F. H.; Wolfe, H.; Lefton, L.; Levine, N.
1972-01-01
Modified program has been developed using improved variation of Encke method which avoids accumulation of round-off errors and avoids numerical ambiguities arising from near-circular orbits of low inclination. Variety of interplanetary trajectory problems can be computed with maximum accuracy and efficiency.
Representing Functions in n Dimensions to Arbitrary Accuracy
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
2007-01-01
A method of approximating a scalar function of n independent variables (where n is a positive integer) to arbitrary accuracy has been developed. This method is expected to be attractive for use in engineering computations in which it is necessary to link global models with local ones or in which it is necessary to interpolate noiseless tabular data that have been computed from analytic functions or numerical models in n-dimensional spaces of design parameters.
Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard
Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu
2011-01-01
Our research is motivated by 2 methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and ordinal-scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status had multiple ordered classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155
Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène
2015-01-01
Simple Summary Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits recorded easily by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models which do not include the milking interval should facilitate their use by breeding and milk recording organizations. Abstract Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI was replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk), and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for models which included a MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings and ranged from 96.9% to 98.3% when the daily yields were estimated from the PM milkings. The simplicity of these proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, L; Ziemer, B; Sadeghi, B
Purpose: To evaluate the accuracy of dynamic CT myocardial perfusion measurement using first pass analysis (FPA) and maximum slope models. Methods: A swine animal model was prepared by percutaneous advancement of an angioplasty balloon into the proximal left anterior descending (LAD) coronary artery to induce varying degrees of stenosis. Maximal hyperaemia was achieved in the LAD with an intracoronary adenosine drip (240 µg/min). Serial microsphere and contrast (370 mg/mL iodine, 30 mL, 5 mL/s) injections were made over a range of induced stenoses, and dynamic imaging was performed using a 320-row CT scanner at 100 kVp and 200 mA. The FPA CT perfusion technique was used to make vessel-specific myocardial perfusion measurements. CT perfusion measurements using the FPA and maximum slope models were validated using colored microspheres as the reference gold standard. Results: Perfusion measurements using the FPA technique (P-FPA) showed good correlation with minimal offset when compared to perfusion measurements using microspheres (P-Micro) as the reference standard (P-FPA = 0.96 P-Micro + 0.05, R² = 0.97, RMSE = 0.19 mL/min/g). In contrast, the maximum slope model technique (P-MS) was shown to underestimate perfusion when compared to microsphere perfusion measurements (P-MS = 0.42 P-Micro − 0.48, R² = 0.94, RMSE = 3.3 mL/min/g). Conclusion: The results indicate the potential for significant improvements in accuracy of dynamic CT myocardial perfusion measurement using the first pass analysis technique as compared with the standard maximum slope model.
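For context, the maximum slope model used as the comparison method estimates perfusion as the peak upslope of the tissue time-attenuation curve divided by the peak arterial enhancement. The sketch below uses made-up enhancement curves and a made-up sampling interval, not the study's data.

import numpy as np

dt = 0.5                                                        # s between dynamic volumes (assumed)
tissue_hu = np.array([0, 2, 6, 14, 26, 38, 46, 50, 51])         # tissue enhancement (HU)
aorta_hu = np.array([0, 20, 90, 220, 340, 380, 360, 310, 260])  # arterial input (HU)

max_slope = np.max(np.diff(tissue_hu) / dt)    # HU/s
peak_arterial = np.max(aorta_hu)               # HU
perfusion = max_slope / peak_arterial          # 1/s, i.e. mL/s per mL of tissue
print(f"maximum slope perfusion ≈ {perfusion * 60:.2f} mL/min/mL")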
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of the hybrid algorithm is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters, such as the HMM, for which EM is normally used. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed, using a constraint-based EA and EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of the EA. The other version transforms the constrained optimization problem into an unconstrained one using Lagrange multipliers. The fusion strategies for the CEL-EM use a staged-fusion approach in which EM is plugged into the EA periodically, after the EA has run for a specified period, to maintain the global sampling capabilities of the EA within the hybrid algorithm. A variable initialization approach (VIA) based on variable segmentation is proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than both the traditional EM algorithm and a well-initialized EM (VIA-EM, constructed by applying the VIA to EM).
Social Eavesdropping: Can You Hear the Emotionality in a "Hello" That Is Not Meant for You?
Karthikeyan, Sethu; Ramachandra, Vijayachandra
2017-01-01
The study examined third-party listeners' ability to detect the Hellos spoken to prevalidated happy, neutral, and sad facial expressions. The average detection accuracies from the happy and sad (HS), happy and neutral (HN), and sad and neutral (SN) listening tests followed the average vocal pitch differences between the two sets of Hellos in each of the tests; HS and HN detection accuracies were above chance reflecting the significant pitch differences between the respective Hellos. The SN detection accuracy was at chance reflecting the lack of pitch difference between sad and neutral Hellos. As expected, the SN detection accuracy positively correlated with theory of mind; participating in these tests has been likened to the act of eavesdropping, which has been discussed from an evolutionary perspective. An unexpected negative correlation between the HS detection accuracy and the empathy quotient has been discussed with respect to autism research on empathy and pitch discrimination.
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
Novel system for automatic measuring diopter based on ARM circuit block
NASA Astrophysics Data System (ADS)
Xue, Feng; Zhong, Lei; Chen, Zhe; Xue, Deng-pan; Li, Xiang-ning
2009-07-01
Traditional commercial instruments used in vision screening programs cannot yet satisfy the need for real-time diopter measurement, and their usefulness is limited by drawbacks such as the need for an attached computer, bulky size, and low accuracy of the measured parameters. In addition, many devices cannot detect astigmatic eyes. This paper proposes a new design for a diopter measurement system based on Samsung's ARM9 circuit block. The design makes several contributions: the newly developed system not only measures diopter automatically but is also low-cost and, in particular, simple and portable. In addition, by placing point sources in three directions, the instrument can detect astigmatic eyes at the same time. Most of the details are presented, including the integrated design of the measuring system and the interface circuit of the embedded system. A preliminary experiment demonstrates the feasibility and validity of the system. The maximum deviation of the measurement results is 0.344 D. The experimental results also demonstrate that the system can provide the service needed for real-time applications. The instrument presented here is expected to be widely applied in fields such as clinical and home healthcare.
NASA Astrophysics Data System (ADS)
Dewi Ratih, Iis; Sutijo Supri Ulama, Brodjol; Prastuti, Mike
2018-03-01
Value at Risk (VaR) is one of the statistical methods used to measure market risk by estimating the worst loss over a given time period at a given confidence level. The accuracy of this measure is very important in determining the amount of capital a company must hold to cope with possible losses, because greater risk implies greater losses to be faced with a given probability. For this reason, VaR calculation is of particular interest to researchers and stock market practitioners seeking more accurate estimates. In this research, the risk of stocks in four banking sub-sectors (Bank Rakyat Indonesia, Bank Mandiri, Bank Central Asia, and Bank Negara Indonesia) is analysed. Stock returns are expected to be influenced by exogenous variables, namely the ICI and the exchange rate. Therefore, stock risk is estimated using the VaR ARMAX-GARCHX method. Calculating the VaR value with the ARMAX-GARCHX approach using a window of 500 observations gives more accurate results. Overall, Bank Central Asia is the only bank whose estimated maximum loss lies in the 5% quantile.
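To make the VaR idea concrete: once a one-step-ahead conditional mean and volatility are available (below they are plain placeholder numbers rather than the output of a fitted ARMAX-GARCHX model), the 5% VaR is the corresponding quantile of the assumed return distribution.

from scipy.stats import norm

mu_next = 0.0004      # assumed conditional mean of tomorrow's return
sigma_next = 0.0180   # assumed conditional volatility from the GARCHX part
alpha = 0.05

var_5pct = mu_next + sigma_next * norm.ppf(alpha)   # 5% quantile of returns
print(f"1-day 5% VaR ≈ {var_5pct:.4f} (a {abs(var_5pct):.2%} loss)")
# Backtesting idea: with a rolling window of 500 observations, the estimate is
# considered accurate if roughly 5% of realised returns fall below the VaR.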
Stress-strain relationship of PDMS micropillar for force measurement application
NASA Astrophysics Data System (ADS)
Johari, Shazlina; Shyan, L. Y.
2017-11-01
There is increasing interest in using polydimethylsiloxane (PDMS)-based materials as bio-transducers for force measurements on the order of micro- to nanonewtons. The accuracy of these devices relies on appropriate material characterization of PDMS and on modelling to convert the micropillar deformations into the corresponding forces. Previously, we reported a fabricated PDMS micropillar that acts as a cylindrical cantilever and was used experimentally to measure the force exerted by the nematode C. elegans. In this research, similar PDMS micropillars are designed and simulated using ANSYS software. The simulation investigates two main factors expected to affect force measurement performance: pillar height and diameter. Results show that the deformation increases with pillar height and is inversely proportional to the pillar diameter. The maximum deformation obtained is 713 µm for a pillar diameter of 20 µm and a pillar height of 100 µm. The stress and strain results show a similar pattern, with values decreasing as pillar diameter and height increase. The simulated results are also compared with the calculated displacement; the trends of the calculated and simulated values are similar, with a 13% average difference.
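The conversion from a measured pillar deflection to a force follows standard cantilever beam theory, F = 3EIδ/L³ with I = πd⁴/64 for a cylindrical pillar; the sketch below uses an assumed PDMS modulus and an example deflection, not values from the study.

import math

E = 2.0e6      # Pa, assumed Young's modulus of PDMS
d = 20e-6      # m, pillar diameter (20 um)
L = 100e-6     # m, pillar height (100 um)
delta = 5e-6   # m, example tip deflection

I = math.pi * d**4 / 64            # second moment of area of the circular cross-section
F = 3 * E * I * delta / L**3       # equivalent point load at the pillar tip
print(f"force ≈ {F * 1e6:.2f} uN")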
Variability of AVHRR-Derived Clear-Sky Surface Temperature over the Greenland Ice Sheet.
NASA Astrophysics Data System (ADS)
Stroeve, Julienne; Steffen, Konrad
1998-01-01
The Advanced Very High Resolution Radiometer is used to derive surface temperatures for one satellite pass under clear skies over the Greenland ice sheet from 1989 through 1993. The results of these temperatures are presented as monthly means, and their spatial and temporal variability are discussed. Accuracy of the dry snow surface temperatures is estimated to be better than 1 K during summer. This error is expected to increase during polar night due to problems in cloud identification. Results indicate the surface temperature of the Greenland ice sheet is strongly dominated by topography, with minimum surface temperatures associated with the high elevation regions. In the summer, maximum surface temperatures occur during July along the western coast and southern tip of the ice sheet. Minimum temperatures are found at the summit during summer and move farther north during polar night. Large interannual variability in surface temperatures occurs during winter associated with katabatic storm events. Summer temperatures show little variation, although 1992 stands out as being colder than the other years. The reason for the lower temperatures during 1992 is believed to be a result of the 1991 eruption of Mount Pinatubo.
NASA Astrophysics Data System (ADS)
Saran, Sameer; Sterk, Geert; Kumar, Suresh
2007-10-01
Land use/cover is an important watershed surface characteristic that affects surface runoff and erosion. Many of the available hydrological models divide the watershed into Hydrological Response Units (HRUs), which are spatial units with expected similar hydrological behaviours. The division into HRUs requires good-quality spatial data on land use/cover. This paper presents different approaches to attaining an optimal land use/cover map from remote sensing imagery for a Himalayan watershed in northern India. First, digital classifications using a maximum likelihood classifier (MLC) and a decision tree classifier were applied. The results obtained from the decision tree were better and improved further after post-classification sorting. However, the resulting land use/cover map was not sufficient for the delineation of HRUs, since the agricultural land use/cover class did not discriminate between the two major crops in the area, i.e., paddy and maize. Therefore, we adopted a visual classification approach using optical data alone as well as optical data fused with ENVISAT ASAR data. This second step, with a more detailed classification system, resulted in better classification accuracy within the 'agricultural land' class, which will be further combined with topography and soil type to derive HRUs for physically based hydrological modelling.
Accuracy improvement of multimodal measurement of speed of sound based on image processing
NASA Astrophysics Data System (ADS)
Nitta, Naotaka; Kaya, Akio; Misawa, Masaki; Hyodo, Koji; Numano, Tomokazu
2017-07-01
Since the speed of sound (SOS) reflects tissue characteristics and is expected to serve as an evaluation index of elasticity and water content, a noninvasive measurement of SOS is highly desirable. However, it is difficult to measure SOS with an ultrasound device alone. Therefore, we previously presented a noninvasive SOS measurement method that uses ultrasound (US) and magnetic resonance (MR) images. In this method, the longitudinal SOS is determined from the thickness measured in the MR image and the time of flight (TOF) measured in the US image. The accuracy of the SOS measurement is affected by the accuracy of image registration and by the accuracy of the thickness measurements in the MR and US images. In this study, we address the latter and present an image-processing-based method for improving the accuracy of the thickness measurement. The method was investigated using in vivo data from a tissue-engineered cartilage, with an unclear boundary, implanted in the back of a rat.
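A minimal sketch of the combined estimate, assuming the US measurement is a pulse-echo (round-trip) time of flight and the MR image supplies the tissue thickness; the numbers are illustrative only.

thickness_mm = 3.2   # tissue thickness measured on the MR image (assumed)
tof_us = 4.2         # round-trip time of flight from the US image, in microseconds

sos = 2.0 * (thickness_mm * 1e-3) / (tof_us * 1e-6)   # factor 2 for the round trip
print(f"longitudinal speed of sound ≈ {sos:.0f} m/s")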
Optical coherence tomography for glucose monitoring in blood
NASA Astrophysics Data System (ADS)
Ullah, Hafeez; Hussain, Fayyaz; Ikram, Masroor
2015-08-01
In this review, we discuss the potential application of an emerging imaging modality, optical coherence tomography (OCT), to glucose monitoring in biological tissues. OCT enables monitoring of glucose diffusion in fibrous tissues such as the sclera by determining the permeability rate with acceptable accuracy in both type 1 and type 2 diabetes. In Intralipid suspensions, for example, the OCT technique yields a glucose measurement accuracy of up to 4.4 mM for 10% Intralipid and 2.2 mM for 3% Intralipid.
NASA Astrophysics Data System (ADS)
Richter, J.; Mayer, J.; Weigand, B.
2018-02-01
Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transform (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
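The beat-frequency extraction step can be sketched as a windowed DFT followed by a peak search; the signal below is a synthetic damped oscillation rather than a measured LITA trace, and the sampling rate is assumed.

import numpy as np

fs = 2.5e9                                # sampling rate (assumed)
t = np.arange(0, 2e-6, 1.0 / fs)
f_beat = 38e6                             # beat frequency of the synthetic test signal
signal = np.exp(-t / 0.5e-6) * np.cos(2 * np.pi * f_beat * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"estimated beat frequency: {f_est / 1e6:.2f} MHz "
      f"(error {abs(f_est - f_beat) / f_beat:.2%})")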
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
Use of the Brainlab Disposable Stylet for endoscope and peel-away navigation.
Halliday, Jane; Kamaly, Ian
2016-12-01
Neuronavigation, the ability to perform real-time intra-operative guidance during cranial and/or spinal surgery, has increased both accuracy and safety in neurosurgery [2]. Cranial navigation of existing surgical instruments using Brainlab requires the use of an instrument adapter and clamp, which in our experience renders an endoscope 'top-heavy' and difficult to manipulate, and makes the process of registering the adapter quite time-consuming. A Brainlab Disposable Stylet was used to navigate fenestration of an entrapped temporal horn in a pediatric case. Accuracy was determined by target visualization relative to neuronavigation targeting. Accuracy was also calculated using basic trigonometry to establish the maximum tool-tip inaccuracy for the disposable stylet inserted into a peel-away sheath (Codman) and into an endoscope. The Brainlab Disposable Stylet was easier to use, more versatile, and as accurate as an instrument adapter and clamp. The maximum tool-tip inaccuracy was 0.967 mm for the endoscope and 0.489 mm for the Codman peel-away. A literature review did not reveal any reports of the Brainlab Disposable Stylet being used in this way, and we are unaware of its use in common neurosurgical practice. We would recommend this technique in endoscopic cases that require Brainlab navigation.
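The abstract does not give the exact geometry behind the trigonometric estimate, so the following is only one plausible reconstruction: if the stylet can tilt within the clearance of the instrument lumen, the tip can swing sideways by roughly L·tan(θ). All dimensions are assumed, not taken from the paper.

import math

L = 120.0        # mm, assumed length of stylet beyond the supporting instrument
clearance = 0.5  # mm, assumed diametral clearance between stylet and lumen
support = 62.0   # mm, assumed length over which the stylet is supported

theta = math.atan(clearance / support)   # maximum tilt allowed by the clearance
tip_error = L * math.tan(theta)
print(f"maximum tool-tip inaccuracy ≈ {tip_error:.3f} mm")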
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sittler, O.D.; Agogino, M.M.
1979-05-01
This project was undertaken to improve the data base for estimating solar energy influx in eastern New Mexico. A precision pyranometer station has been established at Eastern New Mexico University in Portales. A program of careful calibration and data management procedures is conducted to maintain high standards of precision and accuracy. Data from the first year of operation were used to upgrade insolation data of moderate accuracy which had been obtained at this site with an inexpensive pyranograph. Although not as accurate as the data expected from future years of operation of this station, these upgraded pyranograph measurements show that eastern New Mexico receives somewhat less solar energy than would be expected from published data. A detailed summary of these upgraded insolation data is included.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
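One plausible reading of that workflow, sketched with scikit-learn: cluster the red-wine records with K-means and with a Gaussian-mixture EM model, then fit a logistic regression that includes the cluster assignment, and compare accuracies. The dataset file, column names, and cluster count are assumptions.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

wine = pd.read_csv("winequality-red.csv", sep=";")   # assumed dataset
X = wine.drop(columns="quality")
y = (wine["quality"] >= 6).astype(int)               # good vs. poor quality

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

for name, labels in [("K-means", km_labels), ("EM", em_labels)]:
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X.assign(cluster=labels), y, cv=5).mean()
    print(f"logistic regression + {name} clusters: accuracy = {acc:.3f}")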
A neuro-fuzzy approach in the classification of students' academic performance.
Do, Quang Hung; Chen, Jeng-Fung
2013-01-01
Classifying the student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. The purpose of this paper is to present a neuro-fuzzy approach for classifying students into different groups. The neuro-fuzzy classifier used previous exam results and other related factors as input variables and labeled students based on their expected academic performance. The results showed that the proposed approach achieved a high accuracy. The results were also compared with those obtained from other well-known classification approaches, including support vector machine, Naive Bayes, neural network, and decision tree approaches. The comparative analysis indicated that the neuro-fuzzy approach performed better than the others. It is expected that this work may be used to support student admission procedures and to strengthen the services of educational institutions. PMID:24302928
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
A resolution measure for three-dimensional microscopy
Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.
2009-01-01
A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
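The core of such a resolution measure can be reproduced numerically: for two point sources separated by a distance d and imaged with Poisson (photon-counting) noise, the Cramér-Rao lower bound on estimating d follows from the Fisher information I(d) = Σᵢ (∂μᵢ/∂d)²/μᵢ. The 1-D Gaussian PSF and photon budget below are purely illustrative, not the paper's imaging model.

import numpy as np

PIXELS = np.linspace(-6, 6, 121)   # detector coordinate in units of the PSF width

def expected_image(d, n_photons=5000, sigma=1.0):
    """Expected photon count per pixel for two equal sources separated by d."""
    psf = lambda x0: np.exp(-(PIXELS - x0) ** 2 / (2 * sigma ** 2))
    img = psf(-d / 2) + psf(d / 2)
    return n_photons * img / img.sum()

def crlb_distance(d, eps=1e-4):
    mu = expected_image(d)
    dmu = (expected_image(d + eps) - expected_image(d - eps)) / (2 * eps)
    fisher = np.sum(dmu ** 2 / mu)
    return 1.0 / np.sqrt(fisher)   # lower bound on the std. dev. of the estimated d

for d in (0.2, 0.5, 1.0):
    print(f"separation {d:.1f} PSF widths: CRLB ≈ {crlb_distance(d):.4f}")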
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
NASA Astrophysics Data System (ADS)
He, Xiangge; Xie, Shangran; Cao, Shan; Liu, Fei; Zheng, Xiaoping; Zhang, Min; Yan, Han; Chen, Guocai
2016-11-01
The properties of noise induced by stimulated Brillouin scattering (SBS) in long-range interferometers and their influences on the positioning accuracy of dual Mach-Zehnder interferometric (DMZI) vibration sensing systems are studied. The SBS noise is found to be white and incoherent between the two arms of the interferometer in a 1-MHz bandwidth range. Experiments on 25-km long fibers show that the root mean square error (RMSE) of the positioning accuracy is consistent with the additive noise model for the time delay estimation theory. A low-pass filter can be properly designed to suppress the SBS noise and further achieve a maximum RMSE reduction of 6.7 dB.
NASA Astrophysics Data System (ADS)
Zhang, Yujia; Yilmaz, Alper
2016-06-01
Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between the projected and recovered patterns are computed in the decoding process and used to generate a 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, make reconstruction more difficult. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors of the maximum min-SW gray code compared with the conventional gray code, which shows that the maximum min-SW gray code is significantly better at reducing indirect illumination effects. To achieve sub-pixel accuracy, we simultaneously project high-frequency sinusoidal patterns onto the scene. For specular surfaces, however, the high-frequency patterns are susceptible to decoding errors, and incorrect decoding of high-frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low-frequency maximum min-SW gray code with the high-frequency phase-shifting code, which achieves dense 3D reconstruction of specular surfaces. Our contributions include: (i) a complete setup of the structured-light-based 3D scanning system; and (ii) a novel technique combining the maximum min-SW gray code and the phase-shifting code: first, phase-shifting decoding provides sub-pixel accuracy; then, the maximum min-SW gray code is used to resolve the period ambiguity. Experimental results and data analysis show that our structured-light-based 3D scanning system enables high-quality dense reconstruction of scenes from a small number of images. Qualitative and quantitative comparisons demonstrate the advantages of the new combined coding method.
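The combination step can be sketched as follows: a 4-step phase-shifting decode gives a wrapped phase with sub-pixel precision, and the decoded gray-code word supplies the integer period index that removes the ambiguity. The per-pixel intensities and fringe parameters below are synthetic, and the gray-code decoding itself is assumed to have produced period_index already.

import numpy as np

def unwrap_with_gray_code(I0, I1, I2, I3, period_index, period_px):
    # wrapped phase in [0, 2*pi) from four patterns shifted by 0, pi/2, pi, 3*pi/2
    phase = np.mod(np.arctan2(I3 - I1, I0 - I2), 2 * np.pi)
    # absolute projector coordinate: whole periods from the gray code plus the
    # fractional position within the period from the phase
    return (period_index + phase / (2 * np.pi)) * period_px

# toy example: one pixel lying at the start of period 12 of a 16-pixel-wide fringe
print(unwrap_with_gray_code(180.0, 120.0, 60.0, 120.0, period_index=12, period_px=16.0))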
Precise Orbit Determination for ALOS
NASA Technical Reports Server (NTRS)
Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji
2007-01-01
The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and with the evaluation of the orbit determination accuracy using SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm are more precise and accurate but require a considerable higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of a FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, FPC showed to be about 50 times faster than the cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
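A much-simplified sketch of the idea (the published FPC algorithm includes further refinements): estimate the wavelength shift between a reference and a measured FBG spectrum by phase correlation, and convert the sample shift to picometres using the wavelength step. The spectra here are idealised Gaussians, not measured reflection spectra.

import numpy as np

wl_step_pm = 5.0                                   # wavelength resolution (assumed, pm)
wl = np.arange(0, 2000, wl_step_pm)                # relative wavelength axis in pm
fbg = lambda center: np.exp(-((wl - center) / 60.0) ** 2)

ref = fbg(1000.0)
meas = fbg(1130.0) + 0.02 * np.random.randn(wl.size)   # shifted peak plus noise

R = np.fft.fft(meas) * np.conj(np.fft.fft(ref))
R /= np.abs(R) + 1e-12                             # keep only the phase (phase correlation)
corr = np.fft.ifft(R).real
shift = int(np.argmax(corr))
if shift > wl.size // 2:                           # map the circular shift to a signed shift
    shift -= wl.size
print(f"estimated Bragg shift ≈ {shift * wl_step_pm:.1f} pm (true shift 130 pm)")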
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. The sample size at smaller area levels is insufficient, so direct estimation of poverty indicators produces high standard errors, and analyses based on it are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many SAE methods, one is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom incurred by estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean squared error) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
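A rough sketch of the workflow with REML, using a unit-level random-intercept model in statsmodels as a stand-in for the full EBLUP machinery; the file, column, and area names are assumptions.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("susenas_households.csv")   # assumed survey data merged with auxiliary variables

# log per-capita expenditure with auxiliary covariates and a random intercept per
# small area; REML avoids the ML bias from estimating the fixed effects beta
model = smf.mixedlm("log_expenditure ~ hh_size + education + electricity",
                    data=df, groups=df["subdistrict"])
fit = model.fit(reml=True)

# EBLUP-style prediction for one area: fixed part plus predicted random intercept
area = "3573-010"                              # hypothetical area code
fixed_part = fit.predict(df[df["subdistrict"] == area]).mean()
random_part = fit.random_effects[area].iloc[0]
print(f"EBLUP-type estimate for area {area}: {fixed_part + random_part:.3f}")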
Variation in Size and Form between Left and Right Maxillary Central Incisor Teeth.
Vadavadagi, Suneel V; Hombesh, M N; Choudhury, Gopal Krishna; Deshpande, Sumith; Anusha, C V; Murthy, D Kiran
2015-02-01
The aims were to compare the variation in size of the left and right maxillary central incisors in male patients (using digital calipers with 0.01 mm accuracy), to compare the variation in size of the left and right maxillary central incisors in female patients (using the same calipers), to determine the difference between the maxillary central incisors of men and women, and to consider the clinical applicability of any such difference. A total of 70 dental students of PMNM Dental College and Hospital were selected, 40 male and 30 female. Impressions were made for all subjects using irreversible hydrocolloid (Algitex, manufacturer DPI, Batch T-8804) in perforated stock metal trays. The mesiodistal crown width and cervical width were measured for each incisor and recorded separately for the left and right teeth. The crown length of each maxillary central incisor was measured and recorded separately for the left and right sides using a Digitec height caliper. The mean maximum crown length of the maxillary left central incisor in males was greater than that of the maxillary right central incisor. The mean maximum crown lengths of both the right and left sides in male patients were greater than those in female patients. Overall, when tooth dimensions were compared between the two sexes, the male group showed larger values than the female group.
Unrealistic optimism in advice taking: A computational account.
Leong, Yuan Chang; Zaki, Jamil
2018-02-01
Expert advisors often make surprisingly inaccurate predictions about the future, yet people heed their suggestions nonetheless. Here we provide a novel, computational account of this unrealistic optimism in advice taking. Across 3 studies, participants observed as advisors predicted the performance of a stock. Advisors varied in their accuracy, performing reliably above, at, or below chance. Despite repeated feedback, participants exhibited inflated perceptions of advisors' accuracy, and reliably "bet" on advisors' predictions more than their performance warranted. Participants' decisions tightly tracked a computational model that makes 2 assumptions: (a) people hold optimistic initial expectations about advisors, and (b) people preferentially incorporate information that adheres to their expectations when learning about advisors. Consistent with model predictions, explicitly manipulating participants' initial expectations altered their optimism bias and subsequent advice-taking. With well-calibrated initial expectations, participants no longer exhibited an optimism bias. We then explored crowdsourced ratings as a strategy to curb unrealistic optimism in advisors. Star ratings for each advisor were collected from an initial group of participants, which were then shown to a second group of participants. Instead of calibrating expectations, these ratings propagated and exaggerated the unrealistic optimism. Our results provide a computational account of the cognitive processes underlying inflated perceptions of expertise, and explore the boundary conditions under which they occur. We discuss the adaptive value of this optimism bias, and how our account can be extended to explain unrealistic optimism in other domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
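A toy version of the two model assumptions described above, with illustrative parameter values only: an optimistic prior about the advisor, and a larger learning rate for expectation-consistent outcomes than for inconsistent ones, which keeps the belief about a chance-level advisor inflated.

import random

belief = 0.70                                   # optimistic initial P(advisor correct)
lr_consistent, lr_inconsistent = 0.15, 0.05     # asymmetric learning rates
true_accuracy = 0.50                            # the advisor actually performs at chance

random.seed(1)
for trial in range(200):
    outcome = 1.0 if random.random() < true_accuracy else 0.0
    # evidence matching the current (optimistic) expectation gets the larger rate
    consistent = (outcome >= 0.5) == (belief >= 0.5)
    lr = lr_consistent if consistent else lr_inconsistent
    belief += lr * (outcome - belief)

print(f"final belief in a chance-level advisor: {belief:.2f}")   # stays well above 0.5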
NASA Astrophysics Data System (ADS)
Baker, Erik Reese
A repeated-measures, within-subjects design was conducted with 58 participant pilots to assess mean differences in energy management situation awareness response time and response accuracy between a conventional electronic aircraft display, a primary flight display (PFD), and an ecological interface design aircraft display, the OZ concept display. Participants were associated with a small Midwestern aviation university and included student pilots, flight instructors, and faculty with piloting experience. Testing consisted of observing 15 static screenshots of each cockpit display type and then selecting applicable responses from 27 standardized responses for each screen. A paired-samples t-test was computed comparing accuracy and response time for the two displays. There was no significant difference in means between PFD Response Time and OZ Response Time. On average, mean PFD Accuracy was significantly higher than mean OZ Accuracy (MDiff = 13.17, SDDiff = 20.96), t(57) = 4.78, p < .001, d = 0.63. This finding showed operational potential for the OZ display, since even without first training to proficiency on the previously unseen OZ display, participant performance differences were not operationally remarkable. There was no significant correlation between PFD Response Time and PFD Accuracy, but there was a significant correlation between OZ Response Time and OZ Accuracy, r(58) = .353, p < .01. These findings suggest that participants' familiarity with the PFD resulted in accuracy scores unrelated to response time, whereas for participants unaccustomed to the OZ display longer response times translated into greater understanding of the OZ display. PFD Response Time and PFD Accuracy were not correlated with pilot flight hours, which was not expected; it was thought that increased experience would translate into faster and more accurate assessment of the aircraft stimuli. OZ Response Time and OZ Accuracy were also not correlated with pilot flight hours, but this was expected and was consistent with previous research that observed novice operators performing as well as experienced professional pilots on dynamic flight tasks with the OZ display. A demographic questionnaire and a feedback survey were included in the trial. About three-quarters of participants rated the PFD as "easy" and a similar proportion rated the OZ as "confusing", yet performance accuracy and response times between the two displays were not operationally different.
Xu, Tianmin
2012-06-01
Like other subjects in medicine, orthodontics uses some vague concepts to describe what is difficult to measure quantitatively, and anchorage control is one of them. With the development of evidence-based medicine, orthodontists pay increasing attention to the accuracy of clinical evidence, and the empirical description of anchorage control is proving inadequate in modern orthodontics. This essay, based on the author's recent series of studies on anchorage control, points out the inaccuracy of the maximum anchorage concept, the commonly neglected issues in the quantitative measurement of anchorage loss, and their solutions. It also discusses the limitations of maximum anchorage control.
Experimental analysis of robot-assisted needle insertion into porcine liver.
Wang, Wendong; Shi, Yikai; Goldenberg, Andrew A; Yuan, Xiaoqing; Zhang, Peng; He, Lijing; Zou, Yingjie
2015-01-01
How to improve the placement accuracy of needle insertion into liver tissue is of paramount interest to physicians. A robot-assisted system was developed to demonstrate its advantages in needle insertion surgery experimentally. In this study, experiments on needle insertion into porcine liver tissue were performed with a conic-tip needle (diameter 8 mm) and a bevel-tip needle (diameter 1.5 mm). Manual operation was used as a comparison for the performance of the presented robot-assisted system. The real-time force curves show clear advantages of robot-assisted operation over manual operation in improving the controllability and stability of the needle insertion process. The statistics of maximum and average force further demonstrate that robot-assisted operation causes less oscillation. The difference in liver deformation between manual and robot-assisted operation is very small, 1 mm in average deformation and 2 mm in maximum deformation. In conclusion, the presented robot-assisted system can improve needle placement accuracy by controlling the insertion process stably.
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Limei; Lei, Lei; Li, Feng
2017-10-01
A microfluidic multi-angle laser scattering (MALS) system capable of acquiring the scattering pattern of a single particle is designed and demonstrated. The system includes a sheathless-nozzle microfluidic glass chip and an on-chip MALS unit aligned with the nozzle exit in the chip. The size and relative refractive index (RI) of polystyrene (PS) microspheres were deduced with accuracies of 60 nm and 0.002, respectively, by comparing the experimental scattering patterns with theoretical ones. We measured the scattering patterns of waterborne parasites, i.e., Cryptosporidium parvum (C. parvum) and Giardia lamblia (G. lamblia), and other representative species suspended in deionized water at a maximum flow rate of 12 μL/min; up to 3000 waterborne parasites can be identified within one minute with a mean accuracy higher than 96% by classifying the distinctive scattering patterns with a support vector machine (SVM) algorithm. The system provides a promising tool for label-free detection of waterborne parasites and other biological contaminants.
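The classification stage can be sketched with a standard SVM: each particle contributes a fixed-length vector of scattering intensities over the measured angles, and the classifier separates the species. The data files and label values below are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X = np.load("scattering_patterns.npy")   # shape (n_particles, n_angles), assumed
y = np.load("species_labels.npy")        # e.g. C. parvum, G. lamblia, PS beads (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"classification accuracy: {clf.score(X_te, y_te):.1%}")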
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoegele, W.; Loeschel, R.; Dobler, B.
2011-02-15
Purpose: In this work, a novel stochastic framework for patient positioning based on linac-mounted CB projections is introduced. Based on this formulation, the most probable shifts and rotations of the patient are estimated, incorporating interfractional deformations of patient anatomy and other uncertainties associated with patient setup. Methods: The target position is assumed to be defined by and is stochastically determined from positions of various features such as anatomical landmarks or markers in CB projections, i.e., radiographs acquired with a CB-CT system. The patient positioning problem of finding the target location from CB projections is posed as an inverse problem with prior knowledge and is solved using a Bayesian maximum a posteriori (MAP) approach. The prior knowledge is three-fold and includes the accuracy of an initial patient setup (such as in-room laser and skin marks), the plasticity of the body (relative shifts between target and features), and the feature detection error in CB projections (which may vary depending on specific detection algorithm and feature type). For this purpose, MAP estimators are derived and a procedure of using them in clinical practice is outlined. Furthermore, a rule of thumb is theoretically derived, relating basic parameters of the prior knowledge (initial setup accuracy, plasticity of the body, and number of features) and the parameters of CB data acquisition (number of projections and accuracy of feature detection) to the expected estimation accuracy. Results: MAP estimation can be applied to arbitrary features and detection algorithms. However, to experimentally demonstrate its applicability and to perform the validation of the algorithm, a water-equivalent, deformable phantom with features represented by six 1 mm chrome balls were utilized. These features were detected in the cone beam projections (XVI, Elekta Synergy) by a local threshold method for demonstration purposes only. The accuracy of estimation (strongly varying for different plasticity parameters of the body) agreed with the rule of thumb formula. Moreover, based on this rule of thumb formula, about 20 projections for 6 detectable features seem to be sufficient for a target estimation accuracy of 0.2 cm, even for relatively large feature detection errors with standard deviation of 0.5 cm and spatial displacements of the features with standard deviation of 0.5 cm. Conclusions: The authors have introduced a general MAP-based patient setup algorithm accounting for different sources of uncertainties, which are utilized as the prior knowledge in a transparent way. This new framework can be further utilized for different clinical sites, as well as theoretical developments in the field of patient positioning for radiotherapy.
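A one-dimensional toy version of the MAP fusion described above: the target shift has a Gaussian prior from the initial setup, each detected feature provides a noisy observation of that shift (detection error plus body plasticity), and the posterior precision is the sum of the individual precisions. This ignores the projection geometry and correlations between features, so it is only a caricature of the full estimator; the numbers echo the orders of magnitude quoted in the abstract but are otherwise illustrative.

import numpy as np

sigma_setup = 0.5                             # cm, initial setup uncertainty (prior)
sigma_feature = np.sqrt(0.5**2 + 0.5**2)      # cm, detection error plus plasticity
n_obs = 20 * 6                                # ~20 projections with 6 detectable features

rng = np.random.default_rng(0)
true_shift = 0.7                                            # cm
z = true_shift + sigma_feature * rng.normal(size=n_obs)     # feature-derived shift observations

post_precision = 1 / sigma_setup**2 + n_obs / sigma_feature**2
map_shift = (0.0 / sigma_setup**2 + z.sum() / sigma_feature**2) / post_precision
print(f"MAP shift = {map_shift:.2f} cm, posterior sigma = {post_precision ** -0.5:.2f} cm")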
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
1982-06-01
these conditions. Therefore, the no-action plan is not being recommended. With the present channel conditions and anticipated slow growth for... conditions very similar to those of today. No ocean disposal is anticipated. Because no significant growth in socioeconomic activity is expected without... accuracy; however, project construction provides conditions favorable for growth. The service industry would be expected to closely follow any increase
Modeling and forecasting U.S. sex differentials in mortality.
Carter, L R; Lee, R D
1992-11-01
"This paper examines differentials in observed and forecasted sex-specific life expectancies and longevity in the United States from 1900 to 2065. Mortality models are developed and used to generate long-run forecasts, with confidence intervals that extend recent work by Lee and Carter (1992). These results are compared for forecast accuracy with univariate naive forecasts of life expectancies and those prepared by the Actuary of the Social Security Administration." excerpt
Biological Effects of Short, High-Level Exposure to Gases: Carbon Monoxide.
1980-06-01
Selection of maximum allowable concentrations for use in the military should be based more appropriately on considerations of casualty prevention (i.e.... base selection of maximum allowable concentrations for use in the military upon considerations of casualty prevention (i.e., immediate incapacitation... which is reversible, and that no injury will result from the exposure. Persons with myocardial disease are expected to be the most susceptible. These
Prediction of Mortality Based on Facial Characteristics
Delorme, Arnaud; Pierce, Alan; Michel, Leena; Radin, Dean
2016-01-01
Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person’s photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 s. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button if they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p < 0.004, two-tail). Statistically significant accuracy was independently obtained in 5 of the 12 participants. We also collected 32-channel electrophysiological recordings and observed a robust difference between images of deceased individuals correctly vs. incorrectly classified in the early event related potential (ERP) at 100 ms post-stimulus onset. Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance and warrant further investigation. PMID:27242466
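As a quick plausibility check of the headline accuracy, an exact binomial test can compare the pooled hit count with chance. Pooling all 12 participants' 404 trials is an assumption here, not necessarily the analysis used in the study (the reported p < 0.004 may come from a participant-level test).

from scipy.stats import binomtest

n_trials = 12 * 404                           # all trials pooled (assumed analysis)
n_correct = round(0.538 * n_trials)           # 53.8% overall accuracy
result = binomtest(n_correct, n_trials, p=0.5, alternative="two-sided")
print(f"{n_correct}/{n_trials} correct, two-sided p = {result.pvalue:.2e}")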
Application of diffusion kurtosis imaging to odontogenic lesions: Analysis of the cystic component.
Sakamoto, Junichiro; Kuribayashi, Ami; Kotaki, Shinya; Fujikura, Mamiko; Nakamura, Shin; Kurabayashi, Tohru
2016-12-01
To assess the feasibility of applying diffusion kurtosis imaging (DKI) to common odontogenic lesions and to compare its diagnostic ability versus that of the apparent diffusion coefficient (ADC) for differentiating keratocystic odontogenic tumors (KCOTs) from odontogenic cysts. Altogether, 35 odontogenic lesions were studied: 24 odontogenic cysts, six KCOTs, and five ameloblastomas. The diffusion coefficient (D) and excess kurtosis (K) were obtained from diffusion-weighted images at b-values of 0, 500, 1000, and 1500 s/mm² on 3T magnetic resonance imaging (MRI). The combination of D and K values showing the maximum of the probability density function was estimated. The ADC was obtained (0 and 1000 s/mm²). Values for odontogenic cysts, KCOTs, and ameloblastomas were compared. Multivariate logistic regression modeling was performed to assess the combination of D and K model versus ADC for differentiating KCOTs from odontogenic cysts. The mean D and ADC were significantly higher for ameloblastomas than for odontogenic cysts or KCOTs (P < 0.05). The mean K was significantly lower for ameloblastomas than for odontogenic cysts or KCOTs (P < 0.05). The mean values of all parameters for odontogenic cysts and KCOTs showed no significant differences (P = 0.369 for ADC, 0.133 for D, and 0.874 for K). The accuracy of the combination of D and K model (76.7%) was superior to that of ADC (66.7%). Use of DKI may be feasible for common odontogenic lesions. A combination of DKI parameters can be expected to increase the accuracy of its diagnostic ability compared with ADC. J. Magn. Reson. Imaging 2016;44:1565-1571. © 2016 International Society for Magnetic Resonance in Medicine.
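For reference, the kurtosis signal model fitted to such multi-b-value data is S(b) = S0·exp(−bD + (bD)²K/6); the sketch below fits it to synthetic signal values with scipy, not to the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def dki_model(b, S0, D, K):
    return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

b = np.array([0.0, 500.0, 1000.0, 1500.0])                     # s/mm^2
signal = dki_model(b, 1.0, 1.5e-3, 0.8)                        # synthetic lesion signal
signal = signal * (1 + 0.01 * np.random.default_rng(0).normal(size=b.size))

(S0_hat, D_hat, K_hat), _ = curve_fit(dki_model, b, signal,
                                      p0=[1.0, 1.0e-3, 0.5],
                                      bounds=([0, 0, 0], [2, 5e-3, 3]))
print(f"fitted D = {D_hat:.2e} mm^2/s, K = {K_hat:.2f}")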
Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C
2008-03-01
The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
Drain data to predict clinically relevant pancreatic fistula
Moskovic, Daniel J; Hodges, Sally E; Wu, Meng-Fen; Brunicardi, F Charles; Hilsenbeck, Susan G; Fisher, William E
2010-01-01
Background Post-operative pancreatic fistula (POPF) is a common and potentially devastating complication of pancreas resection. Management of this complication is important to the pancreas surgeon. Objective The aim of the present study was to evaluate whether drain data accurately predicts clinically significant POPF. Methods A prospectively maintained database with daily drain amylase concentrations and output volumes from 177 consecutive pancreatic resections was analysed. Drain data, demographic and operative data were correlated with POPF (ISGPF Grade: A – clinically silent, B – clinically evident, C – severe) to determine predictive factors. Results Twenty-six (46.4%) out of 56 patients who underwent distal pancreatectomy and 52 (43.0%) out of 121 patients who underwent a Whipple procedure developed a POPF (Grade A-C). POPFs were classified as A (24, 42.9%) and C (2, 3.6%) after distal pancreatectomy whereas they were graded as A (35, 28.9%), B (15, 12.4%) and C (2, 1.7%) after Whipple procedures. Drain data analysis was limited to Whipple procedures because only two patients developed a clinically significant leak after distal pancreatectomy. The daily total drain output did not differ between patients with a clinical leak (Grades B/C) and patients without a clinical leak (no leak and Grade A) on post-operative day (POD) 1 to 7. Although the median amylase concentration was significantly higher in patients with a clinical leak on POD 1–6, there was no day that amylase concentration predicted a clinical leak better than simply classifying all patients as ‘no leak’ (maximum accuracy =86.1% on POD 1, expected accuracy by chance =85.6%, kappa =10.2%). Conclusion Drain amylase data in the early post-operative period are not a sensitive or specific predictor of which patients will develop clinically significant POPF after pancreas resection. PMID:20815856
SU-C-207A-04: Accuracy of Acoustic-Based Proton Range Verification in Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, KC; Sehgal, CM; Avery, S
2016-06-15
Purpose: To determine the accuracy and dose required for acoustic-based proton range verification (protoacoustics) in water. Methods: Proton pulses with 17 µs FWHM and instantaneous currents of 480 nA (5.6 × 10⁷ protons/pulse, 8.9 cGy/pulse) were generated by a clinical, hospital-based cyclotron at the University of Pennsylvania. The protoacoustic signal generated in a water phantom by the 190 MeV proton pulses was measured with a hydrophone placed at multiple known positions surrounding the dose deposition. The background random noise was measured. The protoacoustic signal was simulated to compare to the experiments. Results: The maximum protoacoustic signal amplitude at 5 cm distance was 5.2 mPa per 1 × 10⁷ protons (1.6 cGy at the Bragg peak). The background random noise of the measurement was 27 mPa. Comparison between simulation and experiment indicates that the hydrophone introduced a delay of 2.4 µs. For acoustic data collected with a signal-to-noise ratio (SNR) of 21, deconvolution of the protoacoustic signal with the proton pulse provided the most precise time-of-flight range measurement (standard deviation of 2.0 mm), but a systematic error (−4.5 mm) was observed. Conclusion: Based on water phantom measurements at a clinical hospital-based cyclotron, protoacoustics is a potential technique for measuring the proton Bragg peak range with 2.0 mm standard deviation. Simultaneous use of multiple detectors is expected to reduce the standard deviation, but calibration is required to remove systematic error. Based on the measured background noise and protoacoustic amplitude, a SNR of 5.3 is projected for a deposited dose of 2 Gy.
Data and performances of selected aircraft and rotorcraft
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2000-11-01
The purpose of this article is to provide a synthetic and comparative view of selected aircraft and rotorcraft (nearly 300 of them) from past and present. We report geometric characteristics of wings (wing span, areas, aspect-ratios, sweep angles, dihedral/anhedral angles, thickness ratios at root and tips, taper ratios) and rotor blades (type of rotor, diameter, number of blades, solidity, rpm, tip Mach numbers); aerodynamic data (drag coefficients at zero lift, cruise and maximum absolute glide ratio); performance data (wing and disk loadings, maximum absolute Mach number, cruise Mach number, service ceiling, rate of climb, centrifugal acceleration limits, maximum take-off weight, maximum payload, thrust-to-weight ratios). There are additional data on wing types, high-lift devices, and noise levels at take-off and landing. The data are presented in tables for each aircraft class. A graphic analysis offers a comparative look at all types of data. Accuracy levels are provided wherever available.
Estimation of basal shear stresses from now ice-free LIA glacier forefields in the Swiss Alps
NASA Astrophysics Data System (ADS)
Fischer, Mauro; Haeberli, Wilfried; Huss, Matthias; Paul, Frank; Linsbauer, Andreas; Hoelzle, Martin
2013-04-01
In most cases, assessing the impacts of climatic changes on glaciers requires knowledge about the ice thickness distribution. Miscellaneous methodological approaches with different degrees of sophistication have been applied to model glacier thickness so far. However, all of them include significant uncertainty. By applying a parameterization scheme for ice thickness determination relying on assumptions about basal shear stress by Haeberli and Hoelzle (1995) to now ice-free glacier forefields in the Swiss Alps, basal shear stress values can be calculated based on a fast and robust experimental approach. In a GIS, the combination of recent (1973) and Little Ice Age (LIA) maximum (around 1850) glacier outlines, central flowlines, a recent Digital Elevation Model (DEM) and a DEM of glacier surface topography for the LIA maximum allows extracting local ice thickness over the forefield of individual glaciers. Subsequently, basal shear stress is calculated via the rheological assumption of perfect-plasticity relating ice thickness and surface slope to shear stress. The need of only very few input data commonly stored in glacier inventories permits an application to a large number of glaciers. Basal shear stresses are first calculated for subsamples of glaciers belonging to two test sites where the LIA maximum glacier surface is modeled with DEMs derived from accurate topographic maps for the mid 19th century. Neglecting outliers, the average resulting mean basal shear stress is around 80 kPa for the Bernina region (range 25-100 kPa) and 120 kPa (range 50-150 kPa) for the Aletsch region. For the entire Swiss Alps it is 100 kPa (range 40-175 kPa). Because complete LIA glacier surface elevation information is lacking there, a DEM is first created from reconstructed height of LIA lateral moraines and trimlines by using a simple GIS-based tool. A sensitivity analysis of the input parameters reveals that the performance of the developed approach primarily depends on the accuracy of the ice thickness determination and thus on the accuracy of the LIA DEMs used. Good results are expected for LIA valley or mountain glaciers with ice thicknesses larger than 100 m at the position of their terminus in 1973. Calculated shear stresses are representative in terms of average values over 20 to 40% of the total glacier length in 1850. Shear stresses strongly vary with glacier size, topographic conditions and climate. This study confirmed that reasonable values for mean basal shear stress of mountain glaciers can be estimated from an empirical and non-linear relation using the vertical extent as a proxy for mass turnover. The now available database could be used to independently test the plausibility of approaches applying simple flow models.
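The perfect-plasticity assumption mentioned above is usually written as tau_b = f * rho * g * h * sin(alpha), with a valley shape factor f, ice density rho, gravity g, local ice thickness h and surface slope alpha; the abstract does not give the parameter values used, so the numbers in the sketch below are purely illustrative.

```python
import numpy as np

RHO_ICE = 900.0   # ice density in kg m^-3 (typical value, assumed)
G = 9.81          # gravitational acceleration in m s^-2

def basal_shear_stress(thickness_m, slope_deg, shape_factor=0.8):
    """Perfect-plasticity estimate of basal shear stress, returned in kPa."""
    tau_pa = shape_factor * RHO_ICE * G * thickness_m * np.sin(np.radians(slope_deg))
    return tau_pa / 1000.0

# Illustrative values: 120 m of ice under a 7 degree surface slope gives roughly 100 kPa
print(basal_shear_stress(120.0, 7.0))
```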
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit risk may be more conclusive because clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure, the average of sensitivity and specificity weighted for prevalence and relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer with test positive subjects being referred to colonoscopy.
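As a concrete illustration of the diagnostic-yield table and the weighted accuracy measure described above, the sketch below tabulates the expected numbers of true/false positive and negative subjects in a hypothetical screened population. The prevalence, sensitivity, specificity and weight values are invented for the example and are not taken from the paper, whose exact weighting depends on prevalence and the cost-benefit ratio.

```python
def diagnostic_yield(n, prevalence, sensitivity, specificity):
    """Expected distribution of test results in a hypothetical population of size n."""
    diseased = n * prevalence
    healthy = n - diseased
    return {
        "true_positive": sensitivity * diseased,
        "false_negative": (1.0 - sensitivity) * diseased,
        "true_negative": specificity * healthy,
        "false_positive": (1.0 - specificity) * healthy,   # candidates for unnecessary work-up
    }

def weighted_accuracy(sensitivity, specificity, weight):
    """Weighted average of sensitivity and specificity; the weight stands in for the
    prevalence- and cost-benefit-dependent weighting used in the paper."""
    return weight * sensitivity + (1.0 - weight) * specificity

# Hypothetical colorectal-cancer screening test applied to 10,000 subjects
print(diagnostic_yield(n=10_000, prevalence=0.007, sensitivity=0.92, specificity=0.87))
print(weighted_accuracy(sensitivity=0.92, specificity=0.87, weight=0.3))
```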
Tataru, Paula; Hobolth, Asger
2011-12-05
Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
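The EXPM approach mentioned above can be illustrated with a small numerical sketch. The expected time spent in a state on a branch of length T, conditional on the end-points, can be written as an integral of products of transition probabilities; the rate matrix and the trapezoidal quadrature below are illustrative choices and not the authors' R implementation.

```python
import numpy as np
from scipy.linalg import expm

def expected_dwell_time(Q, T, a, b, i, n_grid=200):
    """E[time spent in state i on (0, T) | X_0 = a, X_T = b] for a CTMC with rate matrix Q.

    Uses the matrix-exponential (EXPM-style) representation
    E = (1 / P_ab(T)) * integral_0^T P_ai(t) * P_ib(T - t) dt,
    evaluated here with a simple trapezoidal rule."""
    ts = np.linspace(0.0, T, n_grid)
    vals = np.array([expm(Q * t)[a, i] * expm(Q * (T - t))[i, b] for t in ts])
    p_ab = expm(Q * T)[a, b]
    dt = ts[1] - ts[0]
    integral = np.sum(0.5 * (vals[:-1] + vals[1:]) * dt)
    return integral / p_ab

# Illustrative 2-state chain with arbitrary rates
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
print(expected_dwell_time(Q, T=1.0, a=0, b=1, i=0))
```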
Very accurate upward continuation to low heights in a test of non-Newtonian theory
NASA Technical Reports Server (NTRS)
Romaides, Anestis J.; Jekeli, Christopher
1989-01-01
Recently, gravity measurements were made on a tall, very stable television transmitting tower in order to detect a non-Newtonian gravitational force. This experiment required the upward continuation of gravity from the Earth's surface to points as high as only 600 m above ground. The upward continuation was based on a set of gravity anomalies in the vicinity of the tower whose data distribution exhibits essential circular symmetry and appropriate radial attenuation. Two methods were applied to perform the upward continuation - least-squares solution of a local harmonic expansion and least-squares collocation. Both methods yield comparable results, and have estimated accuracies on the order of 50 microGal or better (1 microGal = 10⁻⁸ m/s²). This order of accuracy is commensurate with the tower gravity measurements (which have an estimated accuracy of 20 microGal), and enabled a definitive detection of non-Newtonian gravity. As expected, such precise upward continuations require very dense data near the tower. Less expected was the requirement of data (though sparse) up to 220 km away from the tower (in the case that only an ellipsoidal reference gravity is applied).
Uranium Measurement Improvements at the Savannah River Technology Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shick, C. Jr.
Uranium isotope ratio and isotope dilution methods by mass spectrometry are used to achieve sensitivity, precision and accuracy for various applications. This report presents recent progress made at SRTC in the analysis of minor isotopes of uranium. Routine measurements of NBL certified uranium (U005a) made with the SRTC Three Stage Mass Spectrometer (3SMS) are compared with those made with the SRTC Single Stage Mass Spectrometer (SSMS). As expected, the three stage mass spectrometer yielded superior sensitivity, precision, and accuracy for this application.
Age Differences in Day-To-Day Speed-Accuracy Tradeoffs: Results from the COGITO Study.
Ghisletta, Paolo; Joly-Burra, Emilie; Aichele, Stephen; Lindenberger, Ulman; Schmiedek, Florian
2018-04-23
We examined adult age differences in day-to-day adjustments in speed-accuracy tradeoffs (SAT) on a figural comparison task. Data came from the COGITO study, with over 100 younger and 100 older adults, assessed for over 100 days. Participants were given explicit feedback about their completion time and accuracy each day after task completion. We applied a multivariate vector auto-regressive model of order 1 to the daily mean reaction time (RT) and daily accuracy scores together, within each age group. We expected that participants adjusted their SAT if the two cross-regressive parameters, from RT (or accuracy) on day t-1 to accuracy (or RT) on day t, were sizable and negative. We found that: (a) the temporal dependencies of both accuracy and RT were quite strong in both age groups; (b) younger adults showed an effect of their accuracy on day t-1 on their RT on day t, a pattern that was in accordance with adjustments of their SAT; (c) older adults did not appear to adjust their SAT; (d) these effects were partly associated with reliable individual differences within each age group. We discuss possible explanations for older adults' reluctance to recalibrate speed and accuracy on a day-to-day basis.
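A minimal sketch of the analysis strategy described above, a first-order vector auto-regression of daily mean RT and daily accuracy, is given below using statsmodels. The data are simulated stand-ins for one participant, and the actual COGITO analysis fitted the model within each age group with additional structure that is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical daily series for one participant (100 days)
rng = np.random.default_rng(0)
days = 100
rt = 600 + np.cumsum(rng.normal(0, 5, days))    # daily mean reaction time (ms)
acc = 0.90 + rng.normal(0, 0.02, days)          # daily accuracy (proportion correct)
data = pd.DataFrame({"rt": rt, "accuracy": acc})

result = VAR(data).fit(1)                        # vector auto-regression of order 1
# The off-diagonal entries of the lag-1 coefficient matrix are the cross-regressive
# effects; a sizable negative effect of accuracy(t-1) on rt(t), or of rt(t-1) on
# accuracy(t), would be read as a day-to-day speed-accuracy adjustment.
print(result.coefs[0])                           # 2x2 lag-1 coefficient matrix
```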
Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-01
Small fields (smaller than 4×4 cm²) are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation accuracy often suffers in both small fields and heterogeneous media, algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are made with an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 to 4 cm². Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and the other farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the curve calculated by each TPS algorithm for the same setup. The percentage normalized root mean squared deviation (%NRMSD) is calculated, which represents the deviation of the whole calculated CADD curve from the measured one. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show the maximum deviation for the 1×1 cm² field, and the deviation gradually decreases as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1×1 cm² field showed the maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is farther from Dmax. Also, all algorithms show the maximum deviation in lower-density materials compared to high-density materials. PACS numbers: 87.53.Bn, 87.53.kn, 87.56.bd, 87.55.Kd, 87.56.jf PMID:26894345
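The %NRMSD metric is not spelled out in the abstract; a common definition, the root-mean-squared deviation between the calculated and measured depth-dose curves normalized to the measured maximum and expressed as a percentage, is sketched below purely as an illustration.

```python
import numpy as np

def percent_nrmsd(calculated, measured):
    """Percentage normalized root-mean-square deviation between a TPS-calculated and a
    measured central-axis depth-dose curve sampled at the same depths.

    Normalizing by the measured maximum is an assumption for this sketch,
    not necessarily the exact definition used in the paper."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()
```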
Evaluating online diagnostic decision support tools for the clinical setting.
Pryor, Marie; White, David; Potter, Bronwyn; Traill, Roger
2012-01-01
Clinical decision support tools available at the point of care are an effective adjunct that supports clinicians in making clinical decisions and improving patient outcomes. We developed a methodology and applied it to evaluate commercially available online clinical diagnostic decision support (DDS) tools for use at the point of care. We identified 11 commercially available DDS tools and assessed these against an evaluation instrument that included 6 categories: general information, content, quality control, search, clinical results and other features. We developed diagnostically challenging clinical case scenarios, based on real patient experience, that were commonly missed by junior medical staff. The evaluation was divided into 2 phases: an initial evaluation of all identified and accessible DDS tools conducted by the Clinical Information Access Portal (CIAP) team, and a second phase that further assessed the top 3 tools identified in the initial evaluation phase. An evaluation panel consisting of senior and junior medical clinicians from NSW Health conducted the second phase. Of the eleven tools that were assessed against the evaluation instrument, only 4 tools completely met the DDS definition that was adopted for this evaluation and were able to produce a differential diagnosis. In the initial phase of the evaluation, 4 DDS tools scored 70% or more (maximum score 96%) for the content category, 8 tools scored 65% or more (maximum 100%) for the quality control category, 5 tools scored 65% or more (maximum 94%) for the search category, and 4 tools scored 70% or more (maximum 81%) for the clinical results category. The second phase of the evaluation focused on assessing diagnostic accuracy for the top 3 tools identified in the initial phase. Best Practice ranked highest overall against the 6 clinical case scenarios used. Overall, the differentiating factors between the top 3 DDS tools were diagnostic accuracy ranking, ease of use, and the confidence and credibility of the clinical information. The evaluation methodology used here to assess the quality and comprehensiveness of clinical DDS tools was effective in identifying the most appropriate tool for the clinical setting. The use of clinical case scenarios is fundamental in determining the diagnostic accuracy and usability of the tools.
NASA Astrophysics Data System (ADS)
Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn
2016-06-01
Forests are a very important ecosystem and natural resource for living things. Based on forest inventories, governments are able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it requires intensive physical labor and the costs are high, especially when surveying remote mountainous regions. A reliable forest inventory can give us more accurate and timely information with which to develop new and efficient approaches to forest management. Remote sensing technology has recently been used for forest investigation at a large scale. To produce an informative forest inventory, forest attributes, including tree species, must be considered. In this study the aim is to classify forest tree species in Erdenebulgan County, Huwsgul province in Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest that spans almost 70% of the total territorial extension of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth to perform the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling, and this study applied the model in two experiments: the first uses Landsat surface reflectance alone for tree species classification, and the second incorporates terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second experiment, which uses Landsat surface reflectance coupled with terrain variables, produced better results, with a higher overall accuracy and kappa coefficient than the first experiment. The results indicate that the Maximum Entropy method is applicable, and that classifying tree species using satellite imagery coupled with terrain information can improve the classification of tree species in the study area.
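As a rough illustration of the classification step: maximum-entropy discrimination over a stack of per-pixel predictors is mathematically equivalent to multinomial logistic regression, so a minimal per-pixel sketch can be written with scikit-learn as below. The predictor layout, the toy data and the workflow are assumptions for illustration, not the study's actual MaxEnt implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_tree_species(train_features, train_labels, scene_features):
    """Per-pixel tree-species classification.

    train_features : (n_pixels, n_predictors) array; columns would be Landsat surface
                     reflectance bands (experiment 1), optionally extended with terrain
                     variables such as elevation, slope and aspect from the DEM (experiment 2).
    train_labels   : species label per training pixel from the inventory map.
    scene_features : the same predictors for every pixel of the full scene."""
    # With the default lbfgs solver, multiclass labels are fitted with a
    # multinomial (maximum-entropy) model.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_features, train_labels)
    return clf.predict(scene_features)

# Toy demonstration with random numbers standing in for the real imagery
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))          # e.g. 6 reflectance bands + 2 terrain variables
y = rng.integers(0, 4, size=300)       # 4 hypothetical species classes
scene = rng.normal(size=(1000, 8))
species_map = classify_tree_species(X, y, scene)
```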
Balancing the popularity bias of object similarities for personalised recommendation
NASA Astrophysics Data System (ADS)
Hou, Lei; Pan, Xue; Liu, Kecheng
2018-03-01
Network-based similarity measures have found wide applications in recommendation algorithms and made significant contributions for uncovering users' potential interests. However, existing measures are generally biased in terms of popularity, that the popular objects tend to have more common neighbours with others and thus are considered more similar to others. Such popularity bias of similarity quantification will result in the biased recommendations, with either poor accuracy or poor diversity. Based on the bipartite network modelling of the user-object interactions, this paper firstly calculates the expected number of common neighbours of two objects with given popularities in random networks. A Balanced Common Neighbour similarity index is accordingly developed by removing the random-driven common neighbours, estimated as the expected number, from the total number. Recommendation experiments in three data sets show that balancing the popularity bias in a certain degree can significantly improve the recommendations' accuracy and diversity simultaneously.
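A minimal sketch of the balancing idea follows, under the simplifying assumption that in a randomized bipartite network the expected number of common neighbours of two objects with popularities k_i and k_j among U users is roughly k_i*k_j/U. The exact expectation derived in the paper may differ, so the function below is illustrative only.

```python
import numpy as np

def balanced_common_neighbours(A):
    """Popularity-balanced object-object similarity from a user-object adjacency matrix A
    (users on rows, objects on columns, entries 0/1).

    The raw similarity is the number of common neighbours (users) shared by two objects;
    the popularity-driven part, approximated here as k_i * k_j / n_users, is subtracted."""
    A = np.asarray(A, dtype=float)
    n_users = A.shape[0]
    common = A.T @ A                              # common-neighbour counts between objects
    popularity = A.sum(axis=0)                    # object degrees
    expected = np.outer(popularity, popularity) / n_users
    balanced = common - expected                  # remove the random-driven component
    np.fill_diagonal(balanced, 0.0)
    return balanced
```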
On The Calculation Of Derivatives From Digital Information
NASA Astrophysics Data System (ADS)
Pettett, Christopher G.; Budney, David R.
1982-02-01
Biomechanics analysis frequently requires cinematographic studies as a first step toward understanding the essential mechanics of a sport or exercise. In order to understand the exertion by the athlete, cinematography is used to establish the kinematics from which the energy exchanges can be considered and the equilibrium equations can be studied. Errors in the raw digital information necessitate smoothing of the data before derivatives can be obtained. Researchers employ a variety of curve-smoothing techniques including filtering and polynomial spline methods. It is essential that the researcher understands the accuracy which can be expected in velocities and accelerations obtained from smoothed digital information. This paper considers particular types of data inherent in athletic motion and the expected accuracy of calculated velocities and accelerations using typical error distributions in the raw digital information. Included in this paper are high acceleration, impact and smooth motion types of data.
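The paper discusses filtering and spline smoothing in general terms; as one generic illustration of obtaining velocities and accelerations from noisy digitized film coordinates, the sketch below uses a Savitzky-Golay filter. The window length, polynomial order and simulated data are arbitrary choices, not the authors' procedure.

```python
import numpy as np
from scipy.signal import savgol_filter

def smoothed_derivatives(position, dt, window=11, polyorder=3):
    """Velocity and acceleration from noisy digitized position data.

    position : 1-D array of raw digitized coordinates (one marker, one axis)
    dt       : sampling interval in seconds (e.g. 1 / film frame rate)"""
    velocity = savgol_filter(position, window, polyorder, deriv=1, delta=dt)
    acceleration = savgol_filter(position, window, polyorder, deriv=2, delta=dt)
    return velocity, acceleration

# Example: 100 Hz digitized free-fall data with simulated digitizing error
t = np.arange(0.0, 1.0, 0.01)
raw = 0.5 * 9.81 * t**2 + np.random.default_rng(0).normal(0, 0.002, t.size)
v, a = smoothed_derivatives(raw, dt=0.01)
```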
Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine
2017-07-01
Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
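A simplified sketch of the subsampling experiment described above: the 15-minute record is bootstrap-subsampled at a lower observation frequency, and minimum flow, maximum flow and a crude runoff estimate are recomputed for each iteration. The column names, the runoff conversion and the synthetic series are placeholders, not the study's exact procedure.

```python
import numpy as np
import pandas as pd

def subsampled_statistics(flow, n_per_day, n_iterations=50, seed=0):
    """Bootstrap streamflow statistics from subsampled 15-minute discharge.

    flow      : pandas Series of 15-min discharge (m^3/s) indexed by timestamp
    n_per_day : target observation frequency (1 = daily, 1/30 roughly monthly)"""
    rng = np.random.default_rng(seed)
    n_obs = max(1, int(len(flow) * n_per_day / 96))     # 96 fifteen-minute steps per day
    records = []
    for _ in range(n_iterations):
        sample = flow.iloc[rng.integers(0, len(flow), n_obs)]
        records.append({
            "min_flow": sample.min(),
            "max_flow": sample.max(),
            # crude runoff estimate: mean sampled flow times full record duration (900 s per step)
            "runoff_m3": sample.mean() * len(flow) * 900.0,
        })
    return pd.DataFrame(records)

# Synthetic 7-year 15-minute record standing in for a USGS gauge
idx = pd.date_range("2008-01-01", "2014-12-31 23:45", freq="15min")
flow = pd.Series(np.abs(np.random.default_rng(1).normal(10, 3, len(idx))), index=idx)
daily = subsampled_statistics(flow, n_per_day=1)
monthly = subsampled_statistics(flow, n_per_day=1 / 30)
```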
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
High-precision radiometric tracking for planetary approach and encounter in the inner solar system
NASA Technical Reports Server (NTRS)
Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.
1989-01-01
The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.
Information quantity and quality affect the realistic accuracy of personality judgment.
Letzring, Tera D; Wells, Shannon M; Funder, David C
2006-07-01
Triads of unacquainted college students interacted in 1 of 5 experimental conditions that manipulated information quantity (amount of information) and information quality (relevance of information to personality), and they then made judgments of each others' personalities. To determine accuracy, the authors compared the ratings of each judge to a broad-based accuracy criterion composed of personality ratings from 3 types of knowledgeable informants (the self, real-life acquaintances, and clinician-interviewers). Results supported the hypothesis that information quantity and quality would be positively related to objective knowledge about the targets and realistic accuracy. Interjudge consensus and self-other agreement followed a similar pattern. These findings are consistent with expectations based on models of the process of accurate judgment (D. C. Funder, 1995, 1999) and consensus (D. A. Kenny, 1994). Copyright 2006 APA, all rights reserved.
Lajoie, Isabelle; Tancredi, Felipe B.; Hoge, Richard D.
2017-01-01
Recent calibrated fMRI techniques using combined hypercapnia and hyperoxia allow the mapping of resting cerebral metabolic rate of oxygen (CMRO2) in absolute units, oxygen extraction fraction (OEF) and calibration parameter M (maximum BOLD). The adoption of such a technique necessitates knowledge about the precision and accuracy of the model-derived parameters. One of the factors that may impact the precision and accuracy is the level of oxygen provided during periods of hyperoxia (HO). A high level of oxygen may bring the BOLD responses closer to the maximum M value, and hence reduce the error associated with the M interpolation. However, an increased concentration of paramagnetic oxygen in the inhaled air may result in a larger susceptibility area around the frontal sinuses and nasal cavity. Additionally, a higher O2 level may generate a larger arterial blood T1 shortening, which requires a larger cerebral blood flow (CBF) T1 correction. To evaluate the impact of inspired oxygen levels on M, OEF and CMRO2 estimates, a cohort of six healthy adults underwent two different protocols: one where 60% of O2 was administered during HO (low HO or LHO) and one where 100% O2 was administered (high HO or HHO). The QUantitative O2 (QUO2) MRI approach was employed, where CBF and R2* are simultaneously acquired during periods of hypercapnia (HC) and hyperoxia, using a clinical 3 T scanner. Scan sessions were repeated to assess repeatability of results at the different O2 levels. Our T1 values during periods of hyperoxia were estimated based on an empirical ex-vivo relationship between T1 and the arterial partial pressure of O2. As expected, our T1 estimates revealed a larger T1 shortening in arterial blood when administering 100% O2 relative to 60% O2 (T1LHO = 1.56±0.01 sec vs. T1HHO = 1.47±0.01 sec, P < 4*10−13). In regard to the susceptibility artifacts, the patterns and number of affected voxels were comparable irrespective of the O2 concentration. Finally, the model-derived estimates were consistent regardless of the HO levels, indicating that the different effects are adequately accounted for within the model. PMID:28362834
Hsu, David F C; Freese, David L; Reynolds, Paul D; Innes, Derek R; Levin, Craig S
2018-04-01
We are developing a 1-mm³ resolution, high-sensitivity positron emission tomography (PET) system for loco-regional cancer imaging. The completed system will comprise two cm detector panels and contain 4 608 position sensitive avalanche photodiodes (PSAPDs) coupled to arrays of mm³ LYSO crystal elements for a total of 294 912 crystal elements. For the first time, this paper summarizes the design and reports the performance of a significant portion of the final clinical PET system, comprising 1 536 PSAPDs, 98 304 crystal elements, and an active field-of-view (FOV) of cm. The sub-system performance parameters, such as energy, time, and spatial resolutions are predictive of the performance of the final system due to the modular design. Analysis of the multiplexed crystal flood histograms shows 84% of the crystal elements have >99% crystal identification accuracy. The 511 keV photopeak energy resolution was 11.34±0.06% full-width half maximum (FWHM), and coincidence timing resolution was 13.92 ± 0.01 ns FWHM at 511 keV. The spatial resolution was measured using maximum likelihood expectation maximization reconstruction of a grid of point sources suspended in warm background. The averaged resolution over the central 6 cm of the FOV is 1.01 ± 0.13 mm in the X-direction, 1.84 ± 0.20 mm in the Y-direction, and 0.84 ± 0.11 mm in the Z-direction. Quantitative analysis of acquired micro-Derenzo phantom images shows better than 1.2 mm resolution at the center of the FOV, with subsequent resolution degradation in the y-direction toward the edge of the FOV caused by limited angle tomography effects.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observing location at the maximum value; posterior marginal probability densities of the unknown parameters for the designed location compared with seven randomly chosen locations, with true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.]
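The design criterion described above can be sketched numerically: for each candidate sampling location, the expected relative entropy between posterior and prior is estimated by Monte Carlo over the prior predictive distribution. The one-dimensional forward model, noise level and grids below are stand-ins for the contaminant transport solver (or its sparse-grid surrogate), not the authors' setup.

```python
import numpy as np

def expected_information_gain(x_candidate, theta_grid, prior, sigma=0.05, n_mc=500, seed=0):
    """Monte Carlo estimate of the expected relative entropy (information gain)
    from one concentration measurement taken at x_candidate.

    The forward model c(x, theta) = exp(-(x - theta)^2) is an illustrative stand-in
    for the transport model; theta is the unknown source location."""
    rng = np.random.default_rng(seed)
    forward = np.exp(-(x_candidate - theta_grid) ** 2)       # prediction for each candidate theta
    gains = []
    for _ in range(n_mc):
        theta_true = rng.choice(theta_grid, p=prior)         # draw a source from the prior
        y = np.exp(-(x_candidate - theta_true) ** 2) + rng.normal(0.0, sigma)
        likelihood = np.exp(-0.5 * ((y - forward) / sigma) ** 2)
        posterior = likelihood * prior
        posterior /= posterior.sum()
        mask = posterior > 0
        gains.append(np.sum(posterior[mask] * np.log(posterior[mask] / prior[mask])))
    return float(np.mean(gains))

theta_grid = np.linspace(0.0, 10.0, 201)
prior = np.full(theta_grid.size, 1.0 / theta_grid.size)
candidates = np.linspace(0.0, 10.0, 11)
best = max(candidates, key=lambda x: expected_information_gain(x, theta_grid, prior))
```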
NASA Astrophysics Data System (ADS)
Walker, Ryan T.
2018-04-01
A comprehensive assessment of grounding-line migration rates around Antarctica, covering a third of the coast, suggests retreat in considerable portions of the continent, beyond the rates expected from adjustment following the Last Glacial Maximum.
Quality control procedures for dynamic treatment delivery techniques involving couch motion.
Yu, Victoria Y; Fahimian, Benjamin P; Xing, Lei; Hristov, Dimitre H
2014-08-01
In this study, the authors introduce and demonstrate quality control procedures for evaluating the geometric and dosimetric fidelity of dynamic treatment delivery techniques involving treatment couch motion synchronous with gantry and multileaf collimator (MLC). Tests were designed to evaluate positional accuracy, velocity constancy and accuracy for dynamic couch motion under a realistic weight load. A test evaluating the geometric accuracy of the system in delivering treatments over complex dynamic trajectories was also devised. Custom XML scripts that control the Varian TrueBeam™ STx (Serial #3) axes in Developer Mode were written to implement the delivery sequences for the tests. Delivered dose patterns were captured with radiographic film or the electronic portal imaging device. The couch translational accuracy in dynamic treatment mode was 0.01 cm. Rotational accuracy was within 0.3°, with 0.04 cm displacement of the rotational axis. Dose intensity profiles capturing the velocity constancy and accuracy for translations and rotation exhibited standard deviation and maximum deviations below 3%. For complex delivery involving MLC and couch motions, the overall translational accuracy for reproducing programmed patterns was within 0.06 cm. The authors conclude that in Developer Mode, TrueBeam™ is capable of delivering dynamic treatment delivery techniques involving couch motion with good geometric and dosimetric fidelity.
NASA Technical Reports Server (NTRS)
Justice, C.; Townshend, J. (Principal Investigator)
1981-01-01
Two unsupervised classification procedures were applied to ratioed and unratioed LANDSAT multispectral scanner data of an area of spatially complex vegetation and terrain. An objective accuracy assessment was undertaken on each classification and comparison was made of the classification accuracies. The two unsupervised procedures use the same clustering algorithm. By one procedure the entire area is clustered, and by the other a representative sample of the area is clustered and the resulting statistics are extrapolated to the remaining area using a maximum likelihood classifier. Explanation is given of the major steps in the classification procedures including image preprocessing; classification; interpretation of cluster classes; and accuracy assessment. Of the four classifications undertaken, the monocluster block approach on the unratioed data gave the highest accuracy of 80% for five coarse cover classes. This accuracy was increased to 84% by applying a 3 x 3 contextual filter to the classified image. A detailed description and partial explanation is provided for the major misclassification. The classification of the unratioed data produced higher percentage accuracies than for the ratioed data, and the monocluster block approach gave higher accuracies than clustering the entire area. The monocluster block approach was additionally the most economical in terms of computing time.
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
Vegetation classification of Coffea on Hawaii Island using WorldView-2 satellite imagery
NASA Astrophysics Data System (ADS)
Gaertner, Julie; Genovese, Vanessa Brooks; Potter, Christopher; Sewake, Kelvin; Manoukis, Nicholas C.
2017-10-01
Coffee is an important crop in tropical regions of the world; about 125 million people depend on coffee agriculture for their livelihoods. Understanding the spatial extent of coffee fields is useful for management and control of coffee pests such as Hypothenemus hampei and other pests that use coffee fruit as a host for immature stages such as the Mediterranean fruit fly, for economic planning, and for following changes in coffee agroecosystems over time. We present two methods for detecting Coffea arabica fields using remote sensing and geospatial technologies on WorldView-2 high-resolution spectral data of the Kona region of Hawaii Island. The first method, a pixel-based method using a maximum likelihood algorithm, attained 72% producer accuracy and 69% user accuracy (68% overall accuracy) based on analysis of 104 ground truth testing polygons. The second method, an object-based image analysis (OBIA) method, considered both spectral and textural information and improved accuracy, resulting in 76% producer accuracy and 94% user accuracy (81% overall accuracy) for the same testing areas. We conclude that the OBIA method is useful for detecting coffee fields grown in the open and use it to estimate the distribution of about 1050 hectares under coffee agriculture in the Kona region in 2012.
An analytical model with flexible accuracy for deep submicron DCVSL cells
NASA Astrophysics Data System (ADS)
Valiollahi, Sepideh; Ardeshir, Gholamreza
2018-07-01
Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates for circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted for in the analysis; and (4) the changes in the operating modes of transistors during transitions, together with an efficient submicron I-V model, which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by analytical solutions is below 10%, while this amount is below 7% for numerical solutions.
A robot control formalism based on an information quality concept
NASA Technical Reports Server (NTRS)
Ekman, A.; Torne, A.; Stromberg, D.
1994-01-01
A relevance measure based on Jaynes maximum entropy principle is introduced. Information quality is the conjunction of accuracy and relevance. The formalism based on information quality is developed for one-agent applications. The robot requires a well defined working environment where properties of each object must be accurately specified.
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
40 CFR 89.415 - Fuel flow measurement specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 89.415 Fuel flow measurement specifications (Emission Test Procedures). The fuel flow rate measurement instrument must have a minimum accuracy of 2 percent of the engine maximum fuel flow rate. The controlling…
Expected distributions of root-mean-square positional deviations in proteins.
Pitera, Jed W
2014-06-19
The atom positional root-mean-square deviation (RMSD) is a standard tool for comparing the similarity of two molecular structures. It is used to characterize the quality of biomolecular simulations, to cluster conformations, and as a reaction coordinate for conformational changes. This work presents an approximate analytic form for the expected distribution of RMSD values for a protein or polymer fluctuating about a stable native structure. The mean and maximum of the expected distribution are independent of chain length for long chains and linearly proportional to the average atom positional root-mean-square fluctuations (RMSF). To approximate the RMSD distribution for random-coil or unfolded ensembles, numerical distributions of RMSD were generated for ensembles of self-avoiding and non-self-avoiding random walks. In both cases, for all reference structures tested for chains more than three monomers long, the distributions have a maximum distant from the origin with a power-law dependence on chain length. The purely entropic nature of this result implies that care must be taken when interpreting stable high-RMSD regions of the free-energy landscape as "intermediates" or well-defined stable states.
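For reference, the sketch below shows the RMSD computation the abstract is about, together with a small numerical experiment in the spirit of the random-walk ensembles described, using non-self-avoiding walks; the chain length, bond model and sample sizes are arbitrary choices.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Atom positional root-mean-square deviation between two conformations.

    coords_a, coords_b : (n_atoms, 3) arrays, assumed already superposed
    (in practice an optimal rigid-body fit, e.g. Kabsch, would be applied first)."""
    diff = np.asarray(coords_a, dtype=float) - np.asarray(coords_b, dtype=float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def random_walk_rmsd_distribution(n_monomers=50, n_samples=2000, seed=0):
    """RMSD of random-coil (non-self-avoiding random walk) chains against one fixed
    reference walk, mimicking the kind of numerical ensemble described in the abstract."""
    rng = np.random.default_rng(seed)
    def walk():
        steps = rng.normal(size=(n_monomers, 3))
        steps /= np.linalg.norm(steps, axis=1, keepdims=True)   # unit-length bonds
        return np.cumsum(steps, axis=0)
    reference = walk()
    return np.array([rmsd(walk(), reference) for _ in range(n_samples)])
```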
Causes and Control of Corrosion in Buried-Conduit Heat Distribution Systems
1991-07-01
rubber, and foamed plastics such as polyurethane and phenolic) nominally contain 10 to 500 ppm soluble chloride. Further, insulation can also become...pressure ratings. A maximum P × T limitation exists for all gasket materials. For example, the maximum temperature and pressure ratings for an EPDM (ethylene propylene diene monomer) rubber material are, respectively, 300 °F and 150 psi. The material, however, cannot be expected to perform
Tsukada, Kimiko; Hirata, Yukari; Roengpitya, Rungpat
2014-06-01
The purpose of this research was to compare the perception of Japanese vowel length contrasts by 4 groups of listeners who differed in their familiarity with length contrasts in their first language (L1; i.e., American English, Italian, Japanese, and Thai). Of the 3 nonnative groups, native Thai listeners were expected to outperform American English and Italian listeners, because vowel length is contrastive in their L1. Native Italian listeners were expected to demonstrate a higher level of accuracy for length contrasts than American English listeners, because the former are familiar with consonant (but not vowel) length contrasts (i.e., singleton vs. geminate) in their L1. A 2-alternative forced-choice AXB discrimination test that included 125 trials was administered to all the participants, and the listeners' discrimination accuracy (d') was reported. As expected, Japanese listeners were more accurate than all 3 nonnative groups in their discrimination of Japanese vowel length contrasts. The 3 nonnative groups did not differ from one another in their discrimination accuracy despite varying experience with length contrasts in their L1. Only Thai listeners were more accurate in their length discrimination when the target vowel was long than when it was short. Being familiar with vowel length contrasts in L1 may affect the listeners' cross-language perception, but it does not guarantee that their L1 experience automatically results in efficient processing of length contrasts in unfamiliar languages. The extent of success may be related to how length contrasts are phonetically implemented in listeners' L1.
The Strong Lensing Time Delay Challenge (2014)
NASA Astrophysics Data System (ADS)
Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.
2014-01-01
Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. A hundred such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver of order 1000 time delays in the 2020 decade. In order to exploit this bounty of lenses we need to make sure the time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of lightcurves, a number sufficient to test precision and accuracy at the subpercent level, necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.
Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan
2018-03-01
Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinical acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
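For orientation, the MLEM baseline mentioned above iterates the classic update sketched below; this generic version omits attenuation, scatter, TOF weighting and the joint attenuation estimation that distinguish MLRR and MLAA, and the dense system matrix is only a toy stand-in for a real projector.

```python
import numpy as np

def mlem(system_matrix, sinogram, n_iterations=20):
    """Generic maximum-likelihood expectation-maximization (MLEM) reconstruction.

    system_matrix : (n_bins, n_voxels) forward-projection matrix A
    sinogram      : (n_bins,) measured counts y
    Iterates lambda <- lambda / (A^T 1) * A^T ( y / (A lambda) )."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(sinogram, dtype=float)
    sensitivity = A.sum(axis=0)                     # A^T 1
    lam = np.ones(A.shape[1])
    for _ in range(n_iterations):
        expected = A @ lam
        ratio = np.divide(y, expected, out=np.zeros_like(y), where=expected > 0)
        lam *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return lam
```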
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration
Cohen, Jérémie F; Korevaar, Daniël A; Altman, Douglas G; Bruns, David E; Gatsonis, Constantine A; Hooft, Lotty; Irwig, Les; Levine, Deborah; Reitsma, Johannes B; de Vet, Henrica C W; Bossuyt, Patrick M M
2016-01-01
Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports. PMID:28137831
Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.
2017-01-01
Summary Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator-present or a perpetrator-absent simultaneous line-up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed. © 2017 The Authors Applied Cognitive Psychology Published by John Wiley & Sons Ltd. PMID:28781426
Auditory Neuroscience: Temporal Anticipation Enhances Cortical Processing
Walker, Kerry M. M.; King, Andrew J.
2015-01-01
Summary A recent study shows that expectation about the timing of behaviorally-relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds. PMID:21481759
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image upsampled by a factor of two requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is assessed by the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum A Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
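As a rough illustration of the projection step described above (not the authors' exact implementation), the following sketch scatters the pixels of four simulated, subpixel-shifted LR images onto an HR grid and interpolates with scipy's griddata; the shifts, image content and interpolation choice are all synthetic assumptions.

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    hr = rng.random((64, 64))                      # "true" high-resolution image (synthetic)

    # Simulate four LR images: shift by 0 or 1 HR pixel (0 or 0.5 LR pixel), then subsample by 2
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]     # known (registered) subpixel shifts
    lr_images = [hr[dy::2, dx::2] for dy, dx in offsets]

    # Scatter every LR sample onto its location in HR coordinates
    pts, vals = [], []
    for (dy, dx), lr in zip(offsets, lr_images):
        yy, xx = np.mgrid[0:lr.shape[0], 0:lr.shape[1]]
        pts.append(np.column_stack([yy.ravel() * 2 + dy, xx.ravel() * 2 + dx]))
        vals.append(lr.ravel())
    pts, vals = np.vstack(pts), np.concatenate(vals)

    # Project onto the HR grid with a chosen interpolator (nearest / linear; RBF would be similar)
    gy, gx = np.mgrid[0:hr.shape[0], 0:hr.shape[1]]
    sr = griddata(pts, vals, (gy, gx), method='nearest')

    # Compare the reconstruction with the original HR image using mean squared error
    print("reconstruction MSE:", np.mean((sr - hr) ** 2))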
Ranking and combining multiple predictors without labeled data
Parisi, Fabio; Strino, Francesco; Nadler, Boaz; Kluger, Yuval
2014-01-01
In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth. PMID:24474744
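A simplified sketch of the spectral idea is shown below (synthetic conditionally independent classifiers, with the covariance diagonal simply zeroed rather than completed as in the paper's estimator): the leading eigenvector of the off-diagonal covariance ranks the classifiers, and its entries serve as weights of the Spectral Meta-Learner vote.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulate an ensemble: m conditionally independent binary classifiers (+/-1 labels)
    m, n = 10, 2000
    y = rng.choice([-1, 1], size=n)                        # unknown ground truth
    acc = rng.uniform(0.45, 0.85, size=m)                  # hidden balanced accuracies
    F = np.where(rng.random((m, n)) < acc[:, None], y, -y)  # each classifier's predictions

    # Off-diagonal covariance of the predictions is (approximately) rank one;
    # its leading eigenvector has entries proportional to 2*acc - 1.
    Q = np.cov(F)
    np.fill_diagonal(Q, 0.0)                               # keep only the off-diagonal structure (simplification)
    w, V = np.linalg.eigh(Q)
    v = V[:, np.argmax(w)]
    v = v * np.sign(v.sum())                               # fix the sign so most classifiers get positive weight

    print("corr(weights, true balanced accuracies):", np.corrcoef(v, acc)[0, 1].round(2))

    # Spectral Meta-Learner: weighted vote with the eigenvector entries as weights
    sml = np.sign(v @ F)
    majority = np.sign(F.sum(axis=0))
    print("SML accuracy:     ", (sml == y).mean().round(3))
    print("majority accuracy:", (majority == y).mean().round(3))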
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at the coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean squared error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
Observational Model for Precision Astrometry with the Space Interferometry Mission
NASA Technical Reports Server (NTRS)
Turyshev, Slava G.; Milman, Mark H.
2000-01-01
The Space Interferometry Mission (SIM) is a space-based 10-m baseline Michelson optical interferometer operating in the visible waveband that is designed to achieve astrometric accuracy in the single digits of the microarcsecond domain. Over a narrow field of view SIM is expected to achieve a mission accuracy of 1 microarcsecond. In this mode SIM will search for planetary companions to nearby stars by detecting the astrometric "wobble" relative to a nearby reference star. In its wide-angle mode, SIM will provide 4 microarcsecond precision absolute position measurements of stars, with parallaxes to comparable accuracy, at the end of its 5-year mission. The expected proper motion accuracy is around 3 microarcsecond/year, corresponding to a transverse velocity of 10 m/s at a distance of 1 kpc. The basic astrometric observable of the SIM instrument is the pathlength delay. This measurement is made by a combination of internal metrology measurements that determine the distance the starlight travels through the two arms of the interferometer, and a measurement of the white light stellar fringe to find the point of equal pathlength. Because this operation requires a non-negligible integration time, the interferometer baseline vector is not stationary over this time period, as its absolute length and orientation are time varying. This paper addresses how the time varying baseline can be "regularized" so that it may act as a single baseline vector for multiple stars, as required for the solution of the astrometric equations.
Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.
Olsen, Thomas
2007-02-01
This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 D with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error in the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation ACD prediction algorithms.
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct point sources that could be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error in the localization of a bubble, considered a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then converting this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ. Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjun; Li, Ruijiang; Na, Yong Hum
2014-12-15
Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation free, frameless, and is capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of the surface registration based on a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input data of surface registration, patient's skin surfaces were created by contouring patient skin from planning CT and treatment CBCT. Surface registration was performed using the iterative closest point algorithm with a point-to-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviation between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration was calculated to have an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient positioning with an approach based solely on surface information.
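The abstract names the point-to-plane variant of ICP without algorithmic detail. The sketch below shows a single linearized point-to-plane step under the small-angle approximation, on synthetic correspondences with hypothetical names; a full surface registration would iterate such steps with nearest-neighbour correspondence search.

    import numpy as np

    def point_to_plane_step(src, dst, normals):
        """One linearized point-to-plane ICP step.

        Solves min over (omega, t) of sum_i ((src_i + omega x src_i + t - dst_i) . n_i)^2
        using the small-angle approximation R ~ I + [omega]_x.
        """
        b = -np.einsum('ij,ij->i', src - dst, normals)       # residuals along the normals
        A = np.hstack([np.cross(src, normals), normals])     # rows: [ (p_i x n_i)^T, n_i^T ]
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        omega, t = x[:3], x[3:]
        # Build a proper rotation from the small-angle vector via Rodrigues' formula
        theta = np.linalg.norm(omega)
        if theta < 1e-12:
            R = np.eye(3)
        else:
            k = omega / theta
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
        return R, t

    # Synthetic check: a known small rigid motion applied to a random "surface" patch
    rng = np.random.default_rng(3)
    dst = rng.random((500, 3))
    normals = rng.normal(size=(500, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    angle = np.deg2rad(2.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.01, -0.02, 0.03])
    src = (dst - t_true) @ R_true            # source points before alignment

    R_est, t_est = point_to_plane_step(src, dst, normals)
    print("rotation error (Frobenius):", np.linalg.norm(R_est - R_true).round(5))
    print("translation error:         ", np.linalg.norm(t_est - t_true).round(5))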
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE Horizontal and 5.5 cm RMSE Vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
Development of a method of alignment between various SOLAR MAXIMUM MISSION experiments
NASA Technical Reports Server (NTRS)
1977-01-01
Results of an engineering study of the methods of alignment between various experiments for the solar maximum mission are described. The configuration studied consists of the instruments, mounts and instrument support platform located within the experiment module. Hardware design, fabrication methods and alignment techniques were studied with regard to optimizing the coalignment between the experiments and the fine sun sensor. The proposed hardware design was reviewed with regard to loads, stress, thermal distortion, alignment error budgets, fabrication techniques, alignment techniques and producibility. Methods of achieving comparable alignment accuracies on previous projects were also reviewed.
A 3D approximate maximum likelihood localization solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-09-23
A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with acoustic transmitters and vocalizing marine mammals to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives and support Marine Renewable Energy. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
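The record describes an approximate maximum likelihood solver driven by time-difference-of-arrival (TDOA) measurements but gives no equations. A generic nonlinear least-squares TDOA sketch (hypothetical hydrophone geometry and noise level, not the released solver) is shown below; with Gaussian timing errors, the least-squares fit approximates the maximum likelihood estimate.

    import numpy as np
    from scipy.optimize import least_squares

    C = 1500.0  # approximate speed of sound in water, m/s

    # Hypothetical hydrophone array and true source position
    hydrophones = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 5]], float)
    src_true = np.array([3.0, 7.0, 2.0])

    # Measured time differences of arrival relative to hydrophone 0, with timing noise
    ranges = np.linalg.norm(hydrophones - src_true, axis=1)
    rng = np.random.default_rng(4)
    tdoa_meas = (ranges[1:] - ranges[0]) / C + rng.normal(0, 1e-5, len(ranges) - 1)

    def residuals(x):
        r = np.linalg.norm(hydrophones - x, axis=1)
        return (r[1:] - r[0]) / C - tdoa_meas

    # Gaussian timing errors make this least-squares fit an (approximate) ML estimate
    sol = least_squares(residuals, x0=np.array([5.0, 5.0, 5.0]))
    print("estimated source:", sol.x.round(2), " true:", src_true)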
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
NASA Astrophysics Data System (ADS)
Chen, B.; Su, J. H.; Guo, L.; Chen, J.
2017-06-01
This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) for model parameter estimation and identification in view of multiple P-V characteristic curves of a PVA model, and then corrects the identification results through the least squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so that an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
Mapping grass communities based on multi-temporal Landsat TM imagery and environmental variables
NASA Astrophysics Data System (ADS)
Zeng, Yuandi; Liu, Yanfang; Liu, Yaolin; de Leeuw, Jan
2007-06-01
Information on the spatial distribution of grass communities in wetlands is increasingly recognized as important for effective wetland management and biological conservation. Remote sensing techniques have proved to be an effective alternative to intensive and costly ground surveys for mapping grass communities. However, the mapping accuracy of grass communities in wetland is still unsatisfactory. The aim of this paper is to develop an effective method to map grass communities in Poyang Lake Natural Reserve. Through statistical analysis, elevation is selected as an environmental variable because of its strong relationship with the distribution of grass communities; NDVI stacked from images of different months was used to generate the Carex community map; the image from October was used to discriminate the Miscanthus and Cynodon communities. Classifications were first performed with a maximum likelihood classifier using a single-date satellite image with and without elevation; then layered classifications were performed using multi-temporal satellite imagery and elevation with a maximum likelihood classifier, a decision tree and an artificial neural network separately. The results show that environmental variables can improve the mapping accuracy, and the classification with multi-temporal imagery and elevation is significantly better than that with a single-date image and elevation (p=0.001). In addition, the maximum likelihood classifier (a=92.71%, k=0.90) and the artificial neural network (a=94.79%, k=0.93) perform significantly better than the decision tree (a=86.46%, k=0.83).
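The per-class Gaussian maximum likelihood classifier used above is equivalent to quadratic discriminant analysis. A minimal sketch with synthetic pixel feature vectors (stacked NDVI values plus elevation, hypothetical class means) follows; it illustrates only the classifier, not the study's layered classification workflow.

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Synthetic training pixels: four NDVI dates plus elevation for three grass communities
    def simulate(mean, n=300):
        return rng.normal(mean, 0.08, size=(n, len(mean)))

    X = np.vstack([simulate([0.2, 0.5, 0.7, 0.4, 0.10]),   # "Carex"
                   simulate([0.3, 0.4, 0.6, 0.6, 0.18]),   # "Miscanthus"
                   simulate([0.4, 0.3, 0.5, 0.5, 0.14])])  # "Cynodon"
    y = np.repeat([0, 1, 2], 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Maximum likelihood classification: per-class multivariate Gaussian, pick the largest likelihood
    clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X_tr, y_tr)
    print("overall accuracy:", clf.score(X_te, y_te).round(3))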
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
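As a rough numerical illustration of the approach (not the paper's closed-form spectral-domain derivation or fixed-point analysis), the sketch below builds a coarse cubic-spline phase-to-amplitude table and measures the maximum absolute and rms errors of the reconstructed sine; the table sizes are arbitrary assumptions.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Coarse phase-to-amplitude table: one sine period sampled at 2**8 knots
    table_bits = 8
    phase_knots = np.linspace(0.0, 1.0, 2**table_bits + 1)
    y_knots = np.sin(2 * np.pi * phase_knots)
    y_knots[-1] = y_knots[0]          # enforce exact periodicity for the spline boundary condition
    spline = CubicSpline(phase_knots, y_knots, bc_type='periodic')

    # Evaluate at a much finer phase resolution, as a DDS phase accumulator would
    phase = np.linspace(0.0, 1.0, 2**16, endpoint=False)
    err = spline(phase) - np.sin(2 * np.pi * phase)

    print("max abs error:", np.abs(err).max())
    print("rms error:    ", np.sqrt(np.mean(err**2)))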
Nanodroplets Impact on Rough Surfaces: A Simulation and Theoretical Study.
Gao, Shan; Liao, Quanwen; Liu, Wei; Liu, Zhichun
2018-05-22
The impact of droplets is widespread in everyday life, and modulating the dynamics of impinging droplets is a significant problem in production. However, on textured surfaces, the micromorphologic changes and mechanisms of impinging nanodroplets are not well understood; furthermore, the accuracy of the theoretical model for nanodroplets needs to be improved. Here, considering the great challenge of conducting experiments on nanodroplets, a molecular dynamics simulation is performed to visualize the impact process of nanodroplets on nanopillar surfaces. Compared with macroscale droplets, apart from the similar relation of restitution coefficient with the Weber number, we found some distinctive results: the maximum spreading time is described as a power law of impact velocity, and the relation of maximum spreading factor with impact velocity or the Reynolds number is exponential. Moreover, the roughness of substrates plays a prominent role in the dynamics of impinging nanodroplets, and on surfaces with lower solid fraction, the lower attraction force induces an easier rebound of impinging nanodroplets. Finally, on the basis of the energy balance, through modifying the estimation of the viscous dissipation and surface energy terms, we proposed an improved model for the maximum spreading factor, which shows greater accuracy for nanodroplets, especially in the low-to-moderate velocity range. The outcome of this study demonstrates that the distinctive dynamical behavior of impinging nanodroplets, the fundamental insight it provides, and the more accurate prediction are very useful for improving the hydrodynamic behavior of nanodroplets.
The effects of spatial sampling choices on MR temperature measurements.
Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L
2011-02-01
The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah
2017-01-01
Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady-state value, and dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine accurate maximum power point. A RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with the capacity of 3 kW peak using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives significant improvement compared with that of other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
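A minimal sketch of the underlying idea (a random forest mapping irradiance and module temperature to the maximum power point) is shown below, trained on a synthetic PV-like surface rather than the authors' 300,000-sample SIMULINK data set; the toy power model and all parameter values are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)

    # Synthetic training data: irradiance G (W/m^2) and temperature T (deg C) -> MPP voltage and power
    n = 5000
    G = rng.uniform(100, 1000, n)
    T = rng.uniform(15, 65, n)
    v_mpp = 30.0 - 0.08 * (T - 25.0) + 1.5 * np.log(G / 1000.0)        # toy MPP voltage model
    p_mpp = 0.003 * G * v_mpp * (1.0 - 0.004 * (T - 25.0))             # toy MPP power model
    X = np.column_stack([G, T])
    Y = np.column_stack([v_mpp, p_mpp]) + rng.normal(0, 0.05, (n, 2))  # sensor noise

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, Y_tr)
    print("R^2 on held-out conditions:", rf.score(X_te, Y_te).round(3))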
Investigation of Portevin-Le Chatelier effect in 5456 Al-based alloy using digital image correlation
NASA Astrophysics Data System (ADS)
Cheng, Teng; Xu, Xiaohai; Cai, Yulong; Fu, Shihua; Gao, Yue; Su, Yong; Zhang, Yong; Zhang, Qingchuan
2015-02-01
A variety of experimental methods have been proposed for the Portevin-Le Chatelier (PLC) effect. They have mainly focused on in-plane deformation. In order to achieve high-accuracy measurements, three-dimensional digital image correlation (3D-DIC) was employed in this work to investigate the PLC effect in 5456 Al-based alloy. The temporal and spatial evolutions of deformation over the full field of the specimen surface were observed. The large deformation of localized necking was determined experimentally. The distributions of out-of-plane displacement over the loading procedure were also obtained. Furthermore, a comparison of measurement accuracy between two-dimensional digital image correlation (2D-DIC) and 3D-DIC was also performed. Due to the theoretical restriction, the measurement accuracy of 2D-DIC decreases as deformation increases. A maximum discrepancy of about 20% with 3D-DIC was observed in this work. Therefore, 3D-DIC is actually more essential for the high-accuracy investigation of the PLC effect.
Improved Topographic Mapping Through Multi-Baseline SAR Interferometry with MAP Estimation
NASA Astrophysics Data System (ADS)
Dong, Yuting; Jiang, Houjun; Zhang, Lu; Liao, Mingsheng; Shi, Xuguo
2015-05-01
There is an inherent contradiction between the sensitivity of height measurement and the accuracy of phase unwrapping for SAR interferometry (InSAR) over rough terrain. This contradiction can be resolved by multi-baseline InSAR analysis, which exploits multiple phase observations with different normal baselines to improve phase unwrapping accuracy, or even avoid phase unwrapping. In this paper we propose a maximum a posteriori (MAP) estimation method assisted by SRTM DEM data for multi-baseline InSAR topographic mapping. Based on our method, a data processing flow is established and applied in processing multi-baseline ALOS/PALSAR dataset. The accuracy of resultant DEMs is evaluated by using a standard Chinese national DEM of scale 1:10,000 as reference. The results show that multi-baseline InSAR can improve DEM accuracy compared with single-baseline case. It is noteworthy that phase unwrapping is avoided and the quality of multi-baseline InSAR DEM can meet the DTED-2 standard.
Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model
NASA Astrophysics Data System (ADS)
Wang, Qijie
2015-08-01
The accuracy of medium- and long-term prediction of length of day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence. In particular, the LSAR model greatly improves the resolution of the signal's low-frequency components and can therefore improve prediction efficiency. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
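The maximum-of-noisy-representations decision rule described above is easy to simulate. The sketch below estimates search accuracy for one target among distractors from a single combined decision variable per item; the d' value and set sizes are arbitrary, not the paper's fitted parameters.

    import numpy as np

    rng = np.random.default_rng(7)

    def search_accuracy(d_prime, set_size, n_trials=100_000):
        """Proportion correct when the observer picks the item with the maximum noisy response.

        One target with mean d_prime, (set_size - 1) distractors with mean 0, unit-variance noise.
        """
        target = rng.normal(d_prime, 1.0, n_trials)
        distractors = rng.normal(0.0, 1.0, (n_trials, set_size - 1))
        return np.mean(target > distractors.max(axis=1))

    # Accuracy falls with set size even though no serial, capacity-limited stage is assumed
    for set_size in (2, 4, 8, 16):
        print(f"set size {set_size:2d}: accuracy = {search_accuracy(2.0, set_size):.3f}")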
Towards SSVEP-based, portable, responsive Brain-Computer Interface.
Kaczmarek, Piotr; Salomon, Pawel
2015-08-01
A Brain-Computer Interface in motion control applications requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized based on a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition purposes. The obtained results suggest that the T-H classifier significantly increases classifier performance (resulting in an accuracy of 76%, while maintaining an average false positive detection rate for stimuli other than the observed one of 2-13%, depending on stimulus frequency). It is shown that the parameters of the T-H classifier that maximize the true positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results obtained on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to the accuracy obtained with a user-trained classifier.
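The canonical correlation step can be sketched as follows: correlate a 1-second, 2-channel EEG window with sine/cosine references at each candidate stimulation frequency and pick the frequency with the largest canonical correlation. The sampling rate, frequencies, harmonics and the threshold-with-hysteresis values below are illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    FS = 256                      # sampling rate (Hz), assumed
    T = np.arange(FS) / FS        # one-second analysis window
    FREQS = [8.0, 10.0, 12.0, 15.0]

    def reference(f):
        """Sine/cosine reference set (fundamental plus 2nd harmonic) for one stimulus frequency."""
        return np.column_stack([np.sin(2*np.pi*f*T), np.cos(2*np.pi*f*T),
                                np.sin(4*np.pi*f*T), np.cos(4*np.pi*f*T)])

    def canonical_corr(eeg, f):
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, reference(f))
        return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

    # Synthetic 2-channel window containing a 12 Hz SSVEP buried in noise
    rng = np.random.default_rng(8)
    signal = np.sin(2*np.pi*12.0*T)
    eeg = np.column_stack([signal + rng.normal(0, 2.0, FS),
                           0.8*signal + rng.normal(0, 2.0, FS)])

    scores = {f: canonical_corr(eeg, f) for f in FREQS}
    best = max(scores, key=scores.get)

    # Threshold-with-hysteresis idea (illustrative): accept only if the best score clears an "on"
    # threshold; once accepted, keep the decision until the score drops below a lower "off" threshold.
    ON, OFF = 0.35, 0.25
    print({f: round(s, 2) for f, s in scores.items()},
          "->", best if scores[best] > ON else "no decision")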
NASA Astrophysics Data System (ADS)
Luyckx, G.; Degrieck, J.; De Waele, W.; Van Paepegem, W.; Van Roosbroeck, J.; Chah, K.; Vlekken, J.; McKenzie, I.; Obst, A.
2017-11-01
A fibre optic sensor design is proposed for simultaneously measuring the 3D stress (or strain) components and temperature inside thermo hardened composite materials. The sensor is based on two fibre Bragg gratings written in polarisation maintaining fibre. Based on calculations of the condition number, it will be shown that reasonable accuracies are to be expected. First tests on the bare sensors and on the sensors embedded in composite material, which confirm the expected behaviour, will be presented.
X-ray observations of the burst source MXB 1728 - 34
NASA Technical Reports Server (NTRS)
Basinska, E. M.; Lewin, W. H. G.; Sztajno, M.; Cominsky, L. R.; Marshall, F. J.
1984-01-01
Where sufficient information has been obtained, attention is given to the maximum burst flux, integrated burst flux, spectral hardness, rise time, etc., of 96 X-ray bursts observed from March 1976 to March 1979. The integrated burst flux and the burst frequency appear to be correlated; the longer the burst interval, the larger the integrated burst flux, as expected on the basis of simple thermonuclear flash models. The maximum burst flux and the integrated burst flux are strongly correlated; for low flux levels their dependence is approximately linear, while for increasing values of the integrated burst flux, the flux at burst maximum saturates and reaches a plateau.
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods (scaling, standardizing using the z-score, and vector normalization) by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of the normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
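The comparison described above can be reproduced in outline with scikit-learn. The sketch below applies the three normalizations (min-max scaling, z-score standardization, unit-length vector normalization) to either the samples or the features of a synthetic expression matrix and scores an RBF-kernel SVM; the data are synthetic, not the TCGA-COAD files.

    import numpy as np
    from sklearn.preprocessing import minmax_scale, scale, normalize
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)

    # Synthetic stand-in for an FPKM-UQ matrix: rows = samples (files), columns = genes
    X = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 500))
    y = rng.integers(0, 2, 200)                      # vital status labels (synthetic)

    def apply_norm(X, method, axis):
        """axis=1 normalizes each sample (file); axis=0 normalizes each feature (gene)."""
        if method == "scaling":
            return minmax_scale(X, axis=axis)
        if method == "zscore":
            return scale(X, axis=axis)
        if method == "vector":
            return normalize(X, norm="l2", axis=axis)
        raise ValueError(method)

    for method in ("scaling", "zscore", "vector"):
        for axis, what in ((1, "samples"), (0, "features")):
            acc = cross_val_score(SVC(kernel="rbf"), apply_norm(X, method, axis), y, cv=5).mean()
            print(f"{method:8s} over {what:8s}: CV accuracy = {acc:.2f}")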
Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner
2013-01-01
Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
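For orientation, the sketch below evaluates the unmodified deterministic equation of Daetwyler et al. (2010) that the study starts from; the study's weighting factor w and its marker-density dependence are not reproduced here.

    import numpy as np

    def daetwyler_accuracy(n_train, h2, m_e):
        """Expected accuracy of genomic prediction (Daetwyler et al. 2010):
        r = sqrt(N * h^2 / (N * h^2 + Me)), with N training records of reliability h^2
        and Me independent chromosome segments."""
        return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

    # Illustrative values only; h2 and Me are assumptions, not the study's estimates
    for n in (500, 1000, 5000, 20000):
        print(n, round(daetwyler_accuracy(n, h2=0.8, m_e=1000), 2))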
NASA Astrophysics Data System (ADS)
Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki
2015-05-01
The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were reduced by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area where pulmonary vessels, bronchi, and ribs showed complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target. This work was carried out at Kanazawa University, Japan, and presented at RSNA 2013.
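The markerless tracking step (template matching within a region of interest) can be sketched with OpenCV as below; the window sizes, synthetic test frames and search strategy are illustrative assumptions, not the commercial software used in the study.

    import cv2
    import numpy as np

    def track_nodule(frames, roi, search_margin=20):
        """Track a target through a fluoroscopic sequence by normalized cross-correlation.

        frames: list of 2D float32 images; roi: (x, y, w, h) of the target in frame 0.
        Returns the (x, y) position of the best match in every frame.
        """
        x, y, w, h = roi
        template = frames[0][y:y+h, x:x+w].astype(np.float32)
        positions = [(x, y)]
        for frame in frames[1:]:
            fx, fy = positions[-1]
            # Restrict the search to a window around the previous position
            x0, y0 = max(fx - search_margin, 0), max(fy - search_margin, 0)
            window = frame[y0:y0+h+2*search_margin, x0:x0+w+2*search_margin].astype(np.float32)
            res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            positions.append((x0 + max_loc[0], y0 + max_loc[1]))
        return positions

    # Synthetic demo: a bright blob drifting downward across 10 noisy frames
    rng = np.random.default_rng(10)
    frames = []
    for t in range(10):
        img = rng.normal(0.0, 0.01, (128, 128)).astype(np.float32)
        cv2.circle(img, (64, 40 + 2 * t), 6, 1.0, -1)
        frames.append(img)
    print(track_nodule(frames, roi=(56, 32, 16, 16)))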
The BepiColombo Laser Altimeter (BELA): Scientific Performance at Mercury
NASA Astrophysics Data System (ADS)
Hussmann, H.; Steinbrügge, G.; Stark, A.; Oberst, J.; Thomas, N.; Lara, L.-M.
2018-05-01
We discuss the expected scientific performance of BELA in Mercury orbit. Based on a performance model, we present the measurement accuracy of global and local topography, surface slopes and roughness, as well as the tidal Love number h2.
Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers
Thompson, Clarissa A.; Opfer, John E.
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
Trends in 1970-2010 southern California surface maximum temperatures: extremes and heat waves
NASA Astrophysics Data System (ADS)
Ghebreegziabher, Amanuel T.
Daily maximum temperatures from 1970-2010 were obtained from the National Climatic Data Center (NCDC) for 28 South Coast Air Basin (SoCAB) Cooperative Network (COOP) sites. Analyses were carried out on the entire data set, as well as on the 1970-1974 and 2006-2010 sub-periods, including construction of spatial distributions and time-series trends of both summer-average and annual-maximum values and of the frequency of two and four consecutive "daytime" heat wave events. Spatial patterns of average and extreme values showed three areas consistent with climatological SoCAB flow patterns: cold coastal, warm inland low-elevation, and cool further-inland mountain top. Difference (2006-2010 minus 1970-1974) distributions of both average and extreme-value trends were consistent with a previous study of the shorter 1970-2005 period, as they showed the expected inland regional warming and a "reverse-reaction" cooling in low elevation coastal and inland areas open to increasing sea breeze flows. Annual-extreme trends generally showed cooling at sites below 600 m and warming at higher elevations. As the warming trends of the extremes were larger than those of the averages, regional warming thus impacts extremes more than averages. Spatial distributions of hot-day frequencies showed the expected maximum at inland low-elevation sites. Regional warming thus again induced increases at elevated coastal areas, but low-elevation areas showed reverse-reaction decreases.
The Spotlight of Attention Illuminates Failed Feature-based Expectancies
Bengson, Jesse J.; Lopez-Calderon, Javier; Mangun, George R.
2012-01-01
A well-replicated finding is that visual stimuli presented at an attended location are afforded a processing benefit in the form of speeded reaction times and increased accuracy (Posner, 1979; Mangun 1995). This effect has been described using a spotlight metaphor, in which all stimuli within the focus of spatial attention receive facilitated processing, irrespective of other stimulus parameters. However, the spotlight metaphor has been brought into question by a series of combined expectancy studies which demonstrated that the behavioral benefits of spatial attention are contingent upon secondary feature-based expectancies (Kingstone, 1992). The present work used an event-related potential (ERP) approach to reveal that the early neural signature of the spotlight of spatial attention is not sensitive to the validity of secondary feature-based expectancies. PMID:22775503
NASA Astrophysics Data System (ADS)
Melis, Nikolaos S.; Barberopoulou, Aggeliki; Frentzos, Elias; Krassanakis, Vassilios
2016-04-01
A scenario-based methodology for tsunami hazard assessment is used, incorporating earthquake sources with the potential to produce extreme tsunamis (measured through their capacity to cause maximum wave height and inundation extent). In the present study we follow a two-phase approach. In the first phase, existing earthquake hazard zoning in the greater Aegean region is used to derive representative maximum expected earthquake magnitude events, with realistic seismotectonic source characteristics, and of greatest tsunamigenic potential within each zone. By stacking the scenario-produced maximum wave heights, a global maximum map is constructed for the entire Hellenic coastline, corresponding to all expected extreme offshore earthquake sources. Further evaluation of the produced coastline categories based on the maximum expected wave heights emphasizes the tsunami hazard in selected coastal zones with important functions (i.e. crowded touristic zones, industrial zones, airports, power plants etc). Owing to their proximity to the Hellenic Arc, their many urban centres, and their popularity as tourist destinations, Crete Island and the South Aegean region are given top priority in defining extreme inundation zoning. In the second phase, a set of four large coastal cities (Kalamata, Chania, Heraklion and Rethymno), important for tsunami hazard owing, for example, to crowded beaches during the summer season or to industrial facilities, are explored with a view to preparedness and resilience for tsunami hazard in Greece. To simulate tsunamis in the Aegean region (generation, propagation and runup), the MOST - ComMIT NOAA code was used. High resolution DEMs for bathymetry and topography were joined via an interface specifically developed for the inundation maps in this study and with similar products in mind. For the examples explored in the present study, we used 5 m resolution for the topography and 30 m resolution for the bathymetry. Although this study can be considered preliminary, it can also form the basis for further developing a scenario-based inundation model database that can be used as an operational tool for rapidly assessing tsunami-prone zones during a real tsunami crisis.
NASA Technical Reports Server (NTRS)
Hertel, Heinrich
1930-01-01
This report is intended to furnish bases for load assumptions in the designing of airplane controls. The maximum control forces and quickness of operation are determined. The maximum forces for a strong pilot with normal arrangement of the controls is taken as 1.25 times the mean value obtained from tests with twelve persons. Tests with a number of persons were expected to show the maximum forces that a man of average strength can exert on the control stick in operating the elevator and ailerons and also on the rudder bar. The effect of fatigue, of duration and of the nature (static or dynamic) of the force, as also the condition of the test subject (with or without belt) were also considered.
Parametric study of the Orbiter rollout using an approximate solution
NASA Technical Reports Server (NTRS)
Garland, B. J.
1979-01-01
An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.
Beukinga, Roelof J; Hulshoff, Jan Binne; Mul, Véronique E M; Noordzij, Walter; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Plukker, John T M
2018-06-01
Purpose To assess the value of baseline and restaging fluorine 18 (18F) fluorodeoxyglucose (FDG) positron emission tomography (PET) radiomics in predicting pathologic complete response to neoadjuvant chemotherapy and radiation therapy (NCRT) in patients with locally advanced esophageal cancer. Materials and Methods In this retrospective study, 73 patients with histologic analysis-confirmed T1/N1-3/M0 or T2-4a/N0-3/M0 esophageal cancer were treated with NCRT followed by surgery (Chemoradiotherapy for Esophageal Cancer followed by Surgery Study regimen) between October 2014 and August 2017. Clinical variables and radiomic features from baseline and restaging 18F-FDG PET were selected by univariable logistic regression and least absolute shrinkage and selection operator. The selected variables were used to fit a multivariable logistic regression model, which was internally validated by using bootstrap resampling with 20,000 replicates. The performance of this model was compared with reference prediction models composed of maximum standardized uptake value metrics, clinical variables, and maximum standardized uptake value at baseline NCRT radiomic features. Outcome was defined as complete versus incomplete pathologic response (tumor regression grade 1 vs 2-5 according to the Mandard classification). Results Pathologic response was complete in 16 patients (21.9%) and incomplete in 57 patients (78.1%). A prediction model combining clinical T-stage and restaging NCRT (post-NCRT) joint maximum (quantifying image orderliness) yielded an optimism-corrected area under the receiver operating characteristics curve of 0.81. Post-NCRT joint maximum was replaceable with five other redundant post-NCRT radiomic features that provided equal model performance. All reference prediction models exhibited substantially lower discriminatory accuracy. Conclusion The combination of clinical T-staging and quantitative assessment of post-NCRT 18F-FDG PET orderliness (joint maximum) provided high discriminatory accuracy in predicting pathologic complete response in patients with esophageal cancer. © RSNA, 2018 Online supplemental material is available for this article.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
Selective attention increases choice certainty in human decision making.
Zizlsperger, Leopold; Sauvigny, Thomas; Haarmeier, Thomas
2012-01-01
Choice certainty is a probabilistic estimate of past performance and expected outcome. In perceptual decisions the degree of confidence correlates closely with choice accuracy and reaction times, suggesting an intimate relationship to objective performance. Here we show that spatial and feature-based attention increase human subjects' certainty more than accuracy in visual motion discrimination tasks. Our findings demonstrate for the first time a dissociation of choice accuracy and certainty with a significantly stronger influence of voluntary top-down attention on subjective performance measures than on objective performance. These results reveal a so far unknown mechanism of the selection process implemented by attention and suggest a unique biological valence of choice certainty beyond a faithful reflection of the decision process.
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
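A minimal Monte Carlo sketch of the idea, assuming a toy linear model and least-squares fitting rather than the GTM simulation and actual stability-and-control derivative estimation: progressively larger measurement noise inflates the scatter of the identified parameter.

```python
# Toy Monte Carlo: effect of measurement noise on a least-squares parameter estimate.
# A stand-in for the study's derivative identification, not the GTM simulation itself.
import numpy as np

rng = np.random.default_rng(0)
true_slope = -1.2                      # "true" nondimensional derivative in y = slope * x
x = np.linspace(-5, 5, 200)            # regressor, e.g. angle of attack (deg)

def mc_slope_scatter(noise_std, n_trials=1000):
    errs = []
    for _ in range(n_trials):
        y = true_slope * x + rng.normal(0.0, noise_std, x.size)   # degraded measurement
        errs.append(np.polyfit(x, y, 1)[0] - true_slope)          # estimation error
    return np.std(errs)

for noise_std in (0.01, 0.1, 0.5):     # progressively deteriorated sensor noise levels
    print(f"noise_std={noise_std}: slope scatter={mc_slope_scatter(noise_std):.4f}")
```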
A minimax technique for time-domain design of preset digital equalizers using linear programming
NASA Technical Reports Server (NTRS)
Vaughn, G. L.; Houts, R. C.
1975-01-01
A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
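A hedged sketch of the minimax idea using linear programming: minimize the largest absolute error between the equalized waveform and a desired target. The channel impulse response, equalizer length, and target pulse below are made up, and a simple delayed unit pulse stands in for the raised-cosine waveform used in the report.

```python
# Minimax (Chebyshev) FIR equalizer design via LP: minimize t subject to
# |A @ taps - d| <= t elementwise, where A is the channel convolution matrix.
# Channel, equalizer length, and desired waveform are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import convolution_matrix

h = np.array([1.0, 0.6, 0.3, 0.1])             # assumed channel impulse response
n_taps = 7                                      # FIR equalizer length
A = convolution_matrix(h, n_taps, mode='full')  # received waveform = A @ taps
d = np.zeros(A.shape[0]); d[3] = 1.0            # toy target: a single delayed pulse

# Variables: [taps (n_taps), t]; objective: minimize t.
c_obj = np.r_[np.zeros(n_taps), 1.0]
A_ub = np.block([[A, -np.ones((A.shape[0], 1))],
                 [-A, -np.ones((A.shape[0], 1))]])
b_ub = np.r_[d, -d]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_taps + [(0, None)])
print("equalizer taps:", np.round(res.x[:n_taps], 4))
print("max |error| =", res.x[-1])
```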
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.
1991-01-01
The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study to compare the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), is addressed. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
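For orientation only, the sketch below shows the plain "optimistic" execution path, subrange subdivision by bisection with a standard Simpson error heuristic; the Bayesian inference and length-scale-adapted quadrature sums of the actual BAAQ method are not represented.

```python
# Minimal recursive bisection quadrature: the "optimistic" path only (subdivision by
# bisection). Not the BAAQ algorithm; tolerance handling here is the textbook heuristic.
import math

def simpson(f, a, b):
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive(f, a, b, tol=1e-10):
    whole = simpson(f, a, b)
    m = (a + b) / 2.0
    left, right = simpson(f, a, m), simpson(f, m, b)
    if abs(left + right - whole) < 15.0 * tol:          # standard Simpson error estimate
        return left + right
    return adaptive(f, a, m, tol / 2.0) + adaptive(f, m, b, tol / 2.0)

print(adaptive(math.sin, 0.0, math.pi))                 # expect ~2.0
```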
ERIC Educational Resources Information Center
Cockburn, Stewart
1969-01-01
The basic requirements of all good prose are clarity, accuracy, brevity, and simplicity. Especially in public prose--in which the meaning is the crux of the article or speech--concise, vigorous English demands a minimum of adjectives, a maximum use of the active voice, nouns carefully chosen, a logical argument with no labored or obscure points,…
Influence of Emotion on the Control of Low-Level Force Production
ERIC Educational Resources Information Center
Naugle, Kelly M.; Coombes, Stephen A.; Cauraugh, James H.; Janelle, Christopher M.
2012-01-01
The accuracy and variability of a sustained low-level force contraction (2% of maximum voluntary contraction) was measured while participants viewed unpleasant, pleasant, and neutral images during a feedback occluded force control task. Exposure to pleasant and unpleasant images led to a relative increase in force production but did not alter the…
Advanced Instrumentation for Positron Emission Tomography [PET
DOE R&D Accomplishments Database
Derenzo, S. E.; Budinger, T. F.
1985-04-01
This paper summarizes the physical processes and medical science goals that underlie modern instrumentation design for Positron Emission Tomography. The paper discusses design factors such as detector material, crystal-phototube coupling, shielding geometry, sampling motion, electronics design, time-of-flight, and the interrelationships with quantitative accuracy, spatial resolution, temporal resolution, maximum data rates, and cost.
40 CFR 205.174 - Remedial orders.
Code of Federal Regulations, 2010 CFR
2010-07-01
... power (maximum rated RPM) is developed; and (B) Response characteristics such that, when closing RPM is...-state accuracy of within ±10% at 20 km/h (12.4 mph). (5) A microphone wind screen which does not affect... microphone wind screen must be used. The sound level meter must be calibrated with the acoustic calibrator as...
Error analysis of real time and post processed or bit determination of GFO using GPS tracking
NASA Technical Reports Server (NTRS)
Schreiner, William S.
1991-01-01
The goal of the Navy's GEOSAT Follow-On (GFO) mission is to map the topography of the world's oceans in both real time (operational) and post processed modes. Currently, the best candidate for supplying the required orbit accuracy is the Global Positioning System (GPS). The purpose of this fellowship was to determine the expected orbit accuracy for GFO in both the real time and post-processed modes when using GPS tracking. This report presents the work completed through the ending date of the fellowship.
Varying the valuating function and the presentable bank in computerized adaptive testing.
Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio
2011-05-01
In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
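A small sketch contrasting the two valuating functions for a two-parameter logistic (2PL) item bank: Fisher information at the current trait estimate versus the matching criterion (distance from the trait estimate to the item's maximum-information point, which for the 2PL is its difficulty). The item parameters and trait estimate are invented.

```python
# Contrast of the two valuating functions on a made-up 2PL item bank; not the
# authors' stratification or exposure-control procedure.
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 100)          # item discriminations
b = rng.normal(0.0, 1.0, 100)           # item difficulties
theta_hat = 0.4                         # current trait estimate

p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
fisher_info = a ** 2 * p * (1.0 - p)    # 2PL item information at theta_hat
matching = np.abs(theta_hat - b)        # matching criterion (smaller is better)

print("item selected by max information:   ", int(np.argmax(fisher_info)))
print("item selected by matching criterion:", int(np.argmin(matching)))
```

A mixed strategy, administering some items with each rule, trades a little measurement accuracy for markedly better bank security, which is the manipulation studied above.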
Gao, Yu-Fei; Li, Bi-Qing; Cai, Yu-Dong; Feng, Kai-Yan; Li, Zhan-Dong; Jiang, Yang
2013-01-27
Identification of catalytic residues plays a key role in understanding how enzymes work. Although numerous computational methods have been developed to predict catalytic residues and active sites, the prediction accuracy remains relatively low with high false positives. In this work, we developed a novel predictor based on the Random Forest algorithm (RF) aided by the maximum relevance minimum redundancy (mRMR) method and incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility to predict active sites of enzymes and achieved an overall accuracy of 0.885687 and MCC of 0.689226 on an independent test dataset. Feature analysis showed that every category of the features except disorder contributed to the identification of active sites. It was also shown via the site-specific feature analysis that the features derived from the active site itself contributed most to the active site determination. Our prediction method may become a useful tool for identifying the active sites and the key features identified by the paper may provide valuable insights into the mechanism of catalysis.
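A sketch of the general workflow only (rank features, then select incrementally by cross-validated accuracy), with a univariate mutual-information ranking standing in for mRMR and synthetic data in place of the authors' enzyme dataset.

```python
# Workflow sketch: feature ranking followed by incremental feature selection (IFS)
# with a random forest. Mutual information is a stand-in for mRMR; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]   # best first

best_k, best_acc = 0, 0.0
for k in range(1, 21):                                  # grow the feature subset
    acc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                          X[:, ranking[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best subset size: {best_k}, CV accuracy: {best_acc:.3f}")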
NASA Astrophysics Data System (ADS)
Yazdani, Mohammad Reza; Setayeshi, Saeed; Arabalibeik, Hossein; Akbari, Mohammad Esmaeil
2017-05-01
Intraoperative electron radiation therapy (IOERT), which uses electron beams for irradiating the target directly during surgery, has the advantage of delivering a homogeneous dose to a controlled layer of tissue. Since the dose falls off quickly below the target thickness, the underlying normal tissues are spared. In selecting the appropriate electron energy, the accuracy of the target tissue thickness measurement is critical. In contrast to other procedures applied in IOERT, the routine measurement method is considered to be completely traditional and approximate. In this work, a novel mechanism is proposed for measuring the target tissue thickness with an acceptable level of accuracy. An electronic system has been designed and manufactured with the capability of measuring the tissue thickness based on the recorded electron density under the target. The results indicated the possibility of thickness measurement with a maximum error of 2 mm for 91.35% of the data. Aside from the system's limitation in estimating the thickness of the 5 mm phantom, the maximum error is 1 mm for 88.94% of the data.
Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.
Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto
2010-05-01
A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar separable LG-CH functions span the same space as the 2-D Cartesian separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship between the two sets of expansion coefficients, maximum likelihood (ML) estimates of orientation and cross section parameters of 1-D patterns are obtained by projecting them onto a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
NASA Astrophysics Data System (ADS)
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, the approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
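A synthetic illustration of the comparison (not SURFRAD data): the (Tmax + Tmin)/2 estimate and means of N evenly spaced observations are compared against the "true" mean of 1440 one-minute values for a made-up diurnal cycle.

```python
# Synthetic example only: compare the (Tmax + Tmin)/2 estimate and means of N evenly
# spaced observations against the mean of 1440 one-minute temperatures for an assumed
# sinusoidal diurnal cycle plus noise. Amplitudes and noise level are invented.
import numpy as np

rng = np.random.default_rng(0)
minutes = np.arange(1440)
temps = 15.0 + 8.0 * np.sin(2.0 * np.pi * (minutes - 540) / 1440.0) \
        + rng.normal(0.0, 0.3, minutes.size)

true_mean = temps.mean()
print("Tmax/Tmin estimate bias:", round((temps.max() + temps.min()) / 2.0 - true_mean, 3))
for n_obs in (2, 4, 6, 12, 24):
    sample = temps[:: 1440 // n_obs]                  # N evenly spaced observations
    print(f"{n_obs:2d} obs bias:", round(sample.mean() - true_mean, 3))
```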
Seo, Jeong-Woo; Kang, Dong-Won; Kim, Ju-Young; Yang, Seung-Tae; Kim, Dae-Hyeok; Choi, Jin-Seung; Tack, Gye-Rae
2014-01-01
In this study, the accuracy of the inputs required for finite element analysis, which is mainly used for the biomechanical analysis of bones, was improved. To ensure muscle forces and joint contact forces similar to the actual values, a musculoskeletal model that was based on an actual gait experiment was used. Gait data were obtained from a healthy male adult aged 29 who had no history of musculoskeletal disease and walked normally (171 cm height and 72 kg weight), and were used as inputs for the musculoskeletal model simulation to determine the muscle force and joint contact force. Among the phases of gait, which is the most common activity in daily life, the stance phase is the most affected by the load. The results were extracted at five events in the stance phase: heel contact (ST1), loading response (ST2), early mid-stance (ST3), late mid-stance (ST4), and terminal stance (ST5). These results were used as the inputs for the finite element model, which was constructed from computed tomography (CT) images at 1.5 mm intervals, and the maximum von Mises stress and strain of the right femur were examined. The maximum stress and strain were lowest at ST4. The maximum values for the femur occurred in the medial part and then in the lateral part after mid-stance. In this study, the results of the musculoskeletal model simulation using inverse-dynamic analysis were utilized to improve the accuracy of the inputs, which affected the finite element analysis results, and the possibility of bone-specific analysis over time was examined.
Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich
2006-10-01
The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.
NASA Astrophysics Data System (ADS)
Takagi, Hiroshi; Wu, Wenjie
2016-03-01
Even though the maximum wind radius (R
Legal Professionals' Knowledge of Eyewitness Testimony in China: A Cross-Sectional Survey
Jiang, Lina; Luo, Dahua
2016-01-01
Purpose To examine legal professionals’ knowledge of a wide range of factors that affect eyewitness accuracy in China. Methods A total of 812 participants, including 210 judges, 244 prosecutors, 202 police officers, and 156 defense attorneys, were asked to respond to 12 statements about eyewitness testimony and 3 basic demographic questions (i.e., gender, age, and prior experience). Results Although the judges and the defense attorneys had a somewhat higher number of correct responses than the other two groups, all groups showed limited knowledge of eyewitness testimony. In addition, the participants’ responses to only four items (i.e., weapon focus, attitude and expectations, child suggestibility, and the impact of stress) were roughly unanimous within the four legal professional groups. Legal professionals’ gender showed no significant correlations with their knowledge of eyewitness testimony. Prior experiences were significantly and negatively correlated with the item on the knowledge of forgetting curve among judges but positively correlated with two items (i.e., attitudes and exposure time) among defense attorneys and with 4 statements (i.e., the knowledge of attitudes and expectations, impact of stress, child witness accuracy, and exposure time) among prosecutors. Conclusions The findings suggest that knowledge of the factors that influence eyewitness accuracy must be more effectively communicated to legal professionals in the future. PMID:26828933
Klink, P Christiaan; Jeurissen, Danique; Theeuwes, Jan; Denys, Damiaan; Roelfsema, Pieter R
2017-08-22
The richness of sensory input dictates that the brain must prioritize and select information for further processing and storage in working memory. Stimulus salience and reward expectations influence this prioritization but their relative contributions and underlying mechanisms are poorly understood. Here we investigate how the quality of working memory for multiple stimuli is determined by priority during encoding and later memory phases. Selective attention could, for instance, act as the primary gating mechanism when stimuli are still visible. Alternatively, observers might still be able to shift priorities across memories during maintenance or retrieval. To distinguish between these possibilities, we investigated how and when reward cues determine working memory accuracy and found that they were only effective during memory encoding. Previously learned, but currently non-predictive, color-reward associations had a similar influence, which gradually weakened without reinforcement. Finally, we show that bottom-up salience, manipulated through varying stimulus contrast, influences memory accuracy during encoding with a fundamentally different time-course than top-down reward cues. While reward-based effects required long stimulus presentation, the influence of contrast was strongest with brief presentations. Our results demonstrate how memory resources are distributed over memory targets and implicates selective attention as a main gating mechanism between sensory and memory systems.
NASA Astrophysics Data System (ADS)
Wang, Yuebing
2017-04-01
Based on Compass/GPS observation data collected at five stations over the period from July 1, 2014 to June 30, 2016, and using the PPP positioning model of the PANDA software developed by Wuhan University, we analyzed the positioning accuracy of single-system and combined Compass/GPS solutions and discussed the capability of the Compass navigation system for crustal motion monitoring. The results showed that the positioning accuracy of the Compass navigation system in the east-west direction is lower than that in the north-south direction (positioning accuracy taken as 3 times RMS); in general, the positioning accuracy is about 1-2 cm in the horizontal direction and about 5-6 cm in the vertical direction. The GPS positioning accuracy is better than 1 cm in the horizontal direction and about 1-2 cm in the vertical direction. The accuracy of the combined Compass/GPS solution is comparable to that of GPS. It is worth mentioning that although the precise point positioning accuracy of the Compass navigation system is lower than that of GPS, two sets of velocity fields were obtained by analyzing the Compass and GPS time series respectively with the Nikolaidis (2002) model; the results showed that the maximum difference between the two sets of horizontal velocity fields is 1.8 mm/a. The Compass navigation system can now be used to monitor crustal movement in areas of large deformation, based on the velocity field in the horizontal direction.
Apparatus for Teaching Physics: A Very Short, Portable Foucault Pendulum.
ERIC Educational Resources Information Center
Kruglak, Haym
1983-01-01
Describes the construction of a small (portable), inexpensive, and easy to build Foucault pendulum. Includes photographs of the apparatus, a schematic of the electrical circuit used, and discussion of the amount of accuracy to be expected during classroom investigation. (JM)
ERIC Educational Resources Information Center
Arnold, L. Eugene
1973-01-01
The hyperkinetic label can impair self-image and create negative expectation. It should be used only with confidence in its accuracy and when its use confers help not otherwise available. Guidelines for appropriate diagnosis include prevalence considerations, teacher reports, extent of home difficulties, psychological testing and clinical…
What Top Management Expects from the Communicator.
ERIC Educational Resources Information Center
Fegley, Robert L.
Top corporate management requires communications departments that maintain credibility with the public by developing the following qualities: integrity established through consistent and honest messages; accuracy based on solid research; authority derived from an understanding of the subject and from drawing on appropriate expertise; a…
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and True Skill Statistics (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with non-dispersal-limited SDMs; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences on range size changes over time, which is the most widely used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve predictive accuracy of temporal transference of SDMs and reduce uncertainties of extinction risk assessments from global change; (2) as geographical areas subjected to novel climates are expected to arise, they must be reported as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. Doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
Classification of right-hand grasp movement based on EMOTIV Epoc+
NASA Astrophysics Data System (ADS)
Tobing, T. A. M. L.; Prawito, Wijaya, S. K.
2017-07-01
Combinations of BCT elements for right-hand grasp movement have been obtained, providing the average value of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on an EEG headset, the EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The combinations of elements are the usage of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and also the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ± 83% for training and ± 57% for testing. To give a better understanding of the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping movement EEG signals (provided by PhysioNet) is also given, i.e., ± 85% for training and ± 70% for testing. The comparison of accuracy values from each combination, experimental condition, and the external EEG data is provided for the purpose of analyzing classification accuracy.
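A feature-extraction sketch on a synthetic signal rather than EMOTIV Epoc+ recordings: the maximum mu-band (8-13 Hz) and beta-band (13-30 Hz) powers and their frequencies are taken from an FFT power spectrum, mirroring the features described above; the 128 Hz sampling rate and band edges are assumptions.

```python
# Feature extraction on a synthetic one-channel signal (not EMOTIV data): maximum
# power in the mu (8-13 Hz) and beta (13-30 Hz) bands, plus the peak frequencies,
# as inputs for a classifier such as PNN or RBF. Sampling rate is assumed.
import numpy as np

fs = 128.0                                            # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
signal = (np.sin(2 * np.pi * 10 * t)                  # mu-band component
          + 0.5 * np.sin(2 * np.pi * 22 * t)          # beta-band component
          + 0.2 * np.random.randn(t.size))            # noise

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def band_peak(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    idx = np.argmax(power[mask])
    return power[mask][idx], freqs[mask][idx]

mu_power, mu_freq = band_peak(8.0, 13.0)
beta_power, beta_freq = band_peak(13.0, 30.0)
print("feature vector:", [mu_power, mu_freq, beta_power, beta_freq])
```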
Martial arts striking hand peak acceleration, accuracy and consistency.
Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A
2013-01-01
The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes' mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p=0.032) and accuracy (r(2)=0.621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
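The two metrics described above translate directly into a few lines of arithmetic; the strike coordinates and target below are made up.

```python
# Direct computation of the two metrics on hypothetical strike coordinates:
# accuracy    = radial distance between the strikes' centroid and the target,
# consistency = square root of the mean squared distance of strikes from their centroid.
import numpy as np

target = np.array([0.0, 0.0])                               # target location (arbitrary units)
strikes = np.array([[1.2, -0.5], [0.8, 0.1], [1.5, -0.2],
                    [0.9, -0.7], [1.1, 0.3], [1.3, -0.4]])  # hypothetical impact points

centroid = strikes.mean(axis=0)
accuracy = np.linalg.norm(centroid - target)
consistency = np.sqrt(np.mean(np.sum((strikes - centroid) ** 2, axis=1)))
print(f"accuracy={accuracy:.3f}, consistency={consistency:.3f}")
```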
Genetic determination of height-mediated mate choice.
Tenesa, Albert; Rawlik, Konrad; Navarro, Pau; Canela-Xandri, Oriol
2016-01-19
Numerous studies have reported positive correlations among couples for height. This suggests that humans find individuals of similar height attractive. However, the answer to whether the choice of a mate with a similar phenotype is genetically or environmentally determined has been elusive. Here we provide an estimate of the genetic contribution to height choice in mates in 13,068 genotyped couples. Using a mixed linear model we show that 4.1% of the variation in the mate height choice is determined by a person's own genotype, as expected in a model where one's height determines the choice of mate height. Furthermore, the genotype of an individual predicts their partners' height in an independent dataset of 15,437 individuals with 13% accuracy, which is 64% of the theoretical maximum achievable with a heritability of 0.041. Theoretical predictions suggest that approximately 5% of the heritability of height is due to the positive covariance between allelic effects at different loci, which is caused by assortative mating. Hence, the coupling of alleles with similar effects could substantially contribute to the missing heritability of height. These estimates provide new insight into the mechanisms that govern mate choice in humans and warrant the search for the genetic causes of choice of mate height. They have important methodological implications and contribute to the missing heritability debate.
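As a rough check of the quoted figures, assuming "accuracy" denotes the correlation between predicted and observed partner height and that its theoretical ceiling is the square root of the heritability:

```latex
% Back-of-envelope check under the stated assumptions:
\[
r_{\max} = \sqrt{h^2} = \sqrt{0.041} \approx 0.20,
\qquad
\frac{r_{\mathrm{obs}}}{r_{\max}} \approx \frac{0.13}{0.20} \approx 0.64 .
\]
```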
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Optimal Multiple Surface Segmentation With Shape and Context Priors
Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong
2014-01-01
Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that was not utilizing shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309
Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To compare the results of different methods for handling HIV viral load (VL) data under different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization algorithm (EM), the regression method, mean imputation, the deletion method, and Markov chain Monte Carlo (MCMC) were used to impute the missing data, respectively. The results of the different methods were compared with respect to distribution characteristics, accuracy, and precision. Results: HIV VL data could not be transformed into a normal distribution. All methods performed well for data that were missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases obtained with the different methods were all close to the original one. EM, the regression method, mean imputation, and the deletion method under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for estimating the mean HIV VL in the investigated population.
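A simulated MCAR example, not the HIV viral-load data or the SPSS workflow: listwise deletion, mean imputation, and a simple regression imputation with an auxiliary covariate are compared against the complete-data mean. The variable names and parameter values are invented.

```python
# Simulated MCAR comparison of simple missing-data strategies; synthetic data only,
# not the study's dataset, and MCMC multiple imputation is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
aux = rng.normal(0.0, 1.0, n)                        # auxiliary covariate
log_vl = 3.0 + 0.8 * aux + rng.normal(0.0, 1.0, n)   # synthetic log10 viral load
miss = rng.random(n) < 0.3                           # 30% missing completely at random

obs = log_vl.copy(); obs[miss] = np.nan
print("complete-data mean:   ", round(log_vl.mean(), 3))
print("listwise deletion:    ", round(np.nanmean(obs), 3))

mean_imp = np.where(miss, np.nanmean(obs), obs)      # fill gaps with the observed mean
print("mean imputation:      ", round(mean_imp.mean(), 3))

b1, b0 = np.polyfit(aux[~miss], obs[~miss], 1)       # regress observed values on aux
reg_imp = np.where(miss, b0 + b1 * aux, obs)         # fill gaps with predicted values
print("regression imputation:", round(reg_imp.mean(), 3))
```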
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.
2004-01-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
Flow in cerebral aneurysms: 4D Flow MRI measurements and CFD models
NASA Astrophysics Data System (ADS)
Rayz, Vitaliy; Schnell, Susanne
2017-11-01
4D Flow MRI is capable of measuring blood flow in vivo, providing time-resolved velocity fields in 3D. The dynamic range of the 4D Flow MRI is determined by a velocity sensitivity parameter (venc), set above the expected maximum velocity, which can result in noisy data for slow flow regions. A dual-venc 4D flow MRI technique, where both low- and high-venc data are acquired, can improve velocity-to-noise ratio and, therefore, quantification of clinically-relevant hemodynamic metrics. In this study, patient-specific CFD simulations were used to evaluate the advantages of the dual-venc approach for assessment of the flow in cerebral aneurysms. The flow in 2 cerebral aneurysms was measured in vivo with dual-venc 4D Flow MRI and simulated with CFD, using the MRI data to prescribe flow boundary conditions. The flow fields obtained with computations were compared to those measured with a single- and dual-venc 4D Flow MRI. The numerical models resolved small flow structures near the aneurysmal wall, that were not detected with a single-venc acquisition. Comparison of the numerical and imaging results shows that the dual-venc approach can improve the accuracy of the 4D Flow MRI measurements in regions characterized by high-velocity jets and slow recirculating flows.
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain an accurate CBF estimate, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces error, including error from the registration algorithm and from the imaging itself in the acquisition of the ASL and structural images. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and each tissue type is independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
Context-aware adaptive spelling in motor imagery BCI
NASA Astrophysics Data System (ADS)
Perdikis, S.; Leeb, R.; Millán, J. d. R.
2016-06-01
Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.
Guida, Pietro; Mastro, Florinda; Scrascia, Giuseppe; Whitlock, Richard; Paparella, Domenico
2014-12-01
A systematic review of the European System for Cardiac Operative Risk Evaluation (euroSCORE) II performance for prediction of operative mortality after cardiac surgery has not been performed. We conducted a meta-analysis of studies based on the predictive accuracy of the euroSCORE II. We searched the Embase and PubMed databases for all English-only articles reporting performance characteristics of the euroSCORE II. The area under the receiver operating characteristic curve, the observed/expected mortality ratio, and observed-expected mortality difference with their 95% confidence intervals were analyzed. Twenty-two articles were selected, including 145,592 procedures. Operative mortality occurred in 4293 (2.95%), whereas the expected events according to euroSCORE II were 4802 (3.30%). Meta-analysis of these studies provided an area under the receiver operating characteristic curve of 0.792 (95% confidence interval, 0.773-0.811), an estimated observed/expected ratio of 1.019 (95% confidence interval, 0.899-1.139), and observed-expected difference of 0.125 (95% confidence interval, -0.269 to 0.519). Statistical heterogeneity was detected among retrospective studies including less recent procedures. Subgroups analysis confirmed the robustness of combined estimates for isolated valve procedures and those combined with revascularization surgery. A significant overestimation of the euroSCORE II with an observed/expected ratio of 0.829 (95% confidence interval, 0.677-0.982) was observed in isolated coronary artery bypass grafting and a slight underestimation of predictions in high-risk patients (observed/expected ratio 1.253 and observed-expected difference 1.859). Despite the heterogeneity, the results from this meta-analysis show a good overall performance of the euroSCORE II in terms of discrimination and accuracy of model predictions for operative mortality. Validation of the euroSCORE II in prospective populations needs to be further studied for a continuous improvement of patients' risk stratification before cardiac surgery. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
As concerns about the sustainability of spaceflight increase, many decisions with widespread economic, political, and legal consequences are being considered on the basis of space debris simulations showing that Active Debris Removal (ADR) may be necessary. These debris environment predictions are based on low-accuracy ephemerides and propagators, which raises doubts about the accuracy of the prognoses themselves as well as of the potential ADR target lists that are produced. Target selection is considered highly important because removing many objects will increase the overall mission cost. Selecting the most likely candidates as early as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the detected conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Tool Kit, and a relative error smaller than 1.5% was achieved in the final maximum collision probability. Two target lists are produced by ranking the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and using estimates of the true collision probability computed with covariance information. The top-priority targets are compared, and the impacts of data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are also drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
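A minimal sketch of how per-conjunction collision probabilities can be accumulated into the per-object ranking metric the abstract describes, assuming independent conjunction events; the function name and the example probabilities are illustrative, and the paper's exact formulation may differ.

```python
# Hedged sketch: accumulating per-conjunction collision probabilities into the
# per-object ranking metric, assuming independent conjunction events.
import numpy as np

def accumulated_probability(conjunction_probs):
    """Probability that an object takes part in at least one collision
    over the simulated time window."""
    p = np.asarray(conjunction_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Example: three conjunctions with maximum collision probabilities of 1e-4, 5e-5 and 2e-4.
print(accumulated_probability([1e-4, 5e-5, 2e-4]))   # ≈ 3.5e-4
```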
Pregnancy intentions among expectant adolescent couples.
Lewin, Amy; Mitchell, Stephanie J; Hodgkinson, Stacy; Gilmore, Jasmine; Beers, Lee S
2014-06-01
To examine the self-reported pregnancy intentions of the male partners of expectant adolescent mothers, the accuracy of adolescent mothers' perceptions of their partners' pregnancy intentions, and the concordance between young mothers' and fathers' pregnancy intentions. This cross-sectional pilot study collected interview data from expectant adolescent mothers and their male partners. Data were collected in participants' homes. 35 expectant couples were interviewed separately. Most participants were African American (89% of mothers, 74% of fathers). 69% of mothers were 17-18 years old, and half of the fathers were ≥19. Parents responded to survey questions adapted from the Centers for Disease Control and Prevention's Pregnancy Risk Assessment Monitoring System Questionnaire. 44% of fathers reported wanting their partner to get pregnant. Another 15% were ambivalent. A kappa statistic of 0.12 (P = .33) indicated very little "accuracy" in mothers' perceptions of their partners' pregnancy intentions. Further, there was low concordance between the pregnancy intentions of mothers and fathers. Young fathers who wanted or were ambivalent about pregnancy were significantly more likely to use no contraception or withdrawal. For a notable number of minority couples, adolescent mothers do not have an accurate perception of their partners' pregnancy intentions and use contraceptive methods that are not within their control. These findings indicate that teen pregnancy prevention interventions must target young males in addition to females, and that sexually active adolescents should be encouraged to discuss pregnancy intentions with each other. Copyright © 2014 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Additive benefits of autonomy support and enhanced expectancies for motor learning.
Wulf, Gabriele; Chiviacowsky, Suzete; Cardozo, Priscila Lopes
2014-10-01
Two factors that have been shown to facilitate motor learning are autonomy support (AS) and enhanced expectancies (EE) for performance. We examined the individual and combined influences of these factors. In a 2 × 2 design, participants learning a novel motor skill (throwing with the non-dominant arm) were or were not provided a choice (AS) about the ball color on each of six 10-trial blocks during practice, and were or were not given bogus positive social-comparative feedback (EE). This resulted in four groups: AS/EE, AS, EE, and C (control). One day after the practice phase, participants completed 10 retention and 10 transfer trials. The distance to the target--a bull's eye with a 1-m radius and 10 concentric circles--was 7.5 m during practice and retention, and 8.5 m during transfer. Autonomy support and enhanced expectancies had additive advantages for learning, with both main effects being significant for retention and transfer. On both tests, the AS/EE group showed the greatest throwing accuracy. Also, the accuracy scores of the AS and EE groups were higher than those of the C group. Furthermore, self-efficacy, measured after practice and before retention and transfer, was increased by both AS and EE. Thus, supporting learners' need for autonomy by giving them a small choice--even though it was not directly related to task performance--and enhancing their performance expectancies appeared to independently influence learning. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
El-Zoghby, Helmy M.; Bendary, Ahmed F.
2016-10-01
Maximum Power Point Tracking (MPPT) is now a widely used method for increasing photovoltaic (PV) efficiency. Conventional MPPT methods have problems with accuracy, flexibility, and efficiency, since the MPP depends on the PV temperature and solar irradiation, both of which vary randomly. In this paper an artificial-intelligence-based controller is presented, implemented as an Adaptive Neuro-Fuzzy Inference System (ANFIS), to obtain maximum power from the PV array. The ANFIS inputs are the temperature and cell current, and the output is the optimal voltage at maximum power. During operation the trained ANFIS senses the PV current using a suitable sensor and also senses the temperature to determine the optimal operating voltage that corresponds to the current at the MPP. This voltage is used to control the boost converter duty cycle. MATLAB simulation results show the effectiveness of the ANFIS, using the sensed PV current, in obtaining MPPT from the PV array.
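A minimal sketch of how such a controller might close the loop between the trained model and the boost converter, assuming a fixed DC-link voltage so that raising the duty cycle lowers the PV-side voltage; anfis_model, gain, and the simple proportional correction are placeholders, not the paper's implementation.

```python
# Hedged sketch: one MPPT control step built around a trained model that maps
# (temperature, PV current) to the voltage at maximum power, as the abstract describes.
# `anfis_model` and `gain` are placeholders; a fixed DC-link voltage is assumed,
# so increasing the boost-converter duty cycle lowers the PV-side voltage.
def update_duty_cycle(duty, v_pv, i_pv, temperature, anfis_model, gain=0.01):
    v_ref = anfis_model(temperature, i_pv)    # optimal operating voltage predicted by the ANFIS
    error = v_ref - v_pv
    duty = duty - gain * error                # lowering the duty raises the PV voltage toward v_ref
    return min(max(duty, 0.0), 0.95)          # keep the duty cycle in a safe operating range
```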
Use of collateral information to improve LANDSAT classification accuracies
NASA Technical Reports Server (NTRS)
Strahler, A. H. (Principal Investigator)
1981-01-01
Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image database layers.
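A minimal sketch of approach (1), maximum likelihood classification with class priors drawn from collateral data; the multivariate normal class models follow the standard formulation, while the function name and interface are illustrative.

```python
# Hedged sketch of approach (1): per-pixel maximum likelihood classification in which
# the class prior probabilities come from discrete collateral data (e.g. a terrain stratum).
import numpy as np

def classify(x, class_means, class_covs, priors):
    """Return the index of the class maximizing log p(x | c) + log p(c)
    under multivariate normal class models."""
    scores = []
    for mu, cov, prior in zip(class_means, class_covs, priors):
        diff = x - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        log_lik = -0.5 * (diff @ inv @ diff + logdet)
        scores.append(log_lik + np.log(prior))
    return int(np.argmax(scores))
```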
Autonomous satellite navigation with the Global Positioning System
NASA Technical Reports Server (NTRS)
Fuchs, A. J.; Wooden, W. H., II; Long, A. C.
1977-01-01
This paper discusses the potential of using the Global Positioning System (GPS) to provide autonomous navigation capability to NASA satellites in the 1980 era. Some of the driving forces motivating autonomous navigation are presented. These include such factors as advances in attitude control systems, onboard science annotation, and onboard gridding of imaging data. Simulation results which demonstrate baseline orbit determination accuracies using GPS data on Seasat, Landsat-D, and the Solar Maximum Mission are presented. Emphasis is placed on identifying error sources such as GPS time, GPS ephemeris, user timing biases, and user orbit dynamics, and in a parametric sense on evaluating their contribution to the orbit determination accuracies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Garduno, O. A.; Larraga-Gutierrez, J. M.; Celis, M. A.
2006-09-08
An acrylic phantom was designed and constructed to assess the geometrical accuracy of CT, MRI, and PET images for stereotactic radiotherapy (SRT) and radiosurgery (SRS) applications. The phantom was adapted to each imaging modality with a specific tracer, and the resulting images were compared with CT images to measure the radial deviation between the reference marks in the phantom. It was found that for MRI the maximum mean deviation is 1.9 ± 0.2 mm, compared to the 2.4 ± 0.3 mm reported for PET. These results will be used for margin outlining in SRS and SRT treatment planning.
Is the Lamb shift chemically significant?
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Bauschlicher, Charles W., Jr.; Schwenke, David W.; Pyykko, Pekka; Arnold, James (Technical Monitor)
2001-01-01
The contribution of the Lamb shift to the atomization energies of some prototype molecules, BF3, AlF3, and GaF3, is estimated by a perturbation procedure. It is found to be in the range of 3-5% of the one-electron scalar relativistic contribution to the atomization energy. The maximum absolute value is 0.2 kcal/mol for GaF3. These sample calculations indicate that the Lamb shift is probably small enough to be neglected for energetics of molecules containing light atoms if the target accuracy is 1 kcal/mol, but for higher accuracy calculations and for molecules containing heavy elements it must be considered.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
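A minimal sketch of the equation-error idea, which reduces to ordinary linear least squares when the state and control time histories are measured directly; the variable names and the standard-error computation are generic, not the paper's notation.

```python
# Hedged sketch: the equation-error method as ordinary linear least squares,
# with generic variable names rather than the paper's notation.
import numpy as np

def equation_error_fit(X, y):
    """Estimate parameters theta in y ≈ X @ theta, where y holds measured state
    derivatives (or accelerations) and the columns of X hold measured states and
    control deflections; also return approximate parameter standard errors."""
    theta, residuals, _, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = max(len(y) - X.shape[1], 1)
    ss_res = residuals[0] if residuals.size else np.sum((y - X @ theta) ** 2)
    cov = (ss_res / dof) * np.linalg.inv(X.T @ X)
    return theta, np.sqrt(np.diag(cov))
```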
NASA Astrophysics Data System (ADS)
Hassan, Mahmoud A.
2004-02-01
Digital elevation models (DEMs) are important tools in the planning, design, and maintenance of mobile communication networks. This research paper proposes a method for generating high-accuracy DEMs based on SPOT satellite 1A stereo pair images, ground control points (GCPs), and Erdas OrthoBASE Pro image processing software. A DEM with a mean error of 0.2911 m was achieved for the hilly and heavily populated city of Amman. The generated DEM was used to design a mobile communication network, resulting in a minimum number of radio base transceiver stations, a maximum number of covered regions, and less than 2% dead zones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russo, James K.; Armeson, Kent E.; Richardson, Susan, E-mail: srichardson@radonc.wustl.edu
2012-05-01
Purpose: To evaluate bladder and rectal doses using two-dimensional (2D) and 3D treatment planning for vaginal cuff high-dose rate (HDR) in endometrial cancer. Methods and Materials: Ninety-one consecutive patients treated between 2000 and 2007 were evaluated. Seventy-one and 20 patients underwent 2D and 3D planning, respectively. Each patient received six fractions prescribed at 0.5 cm to the superior 3 cm of the vagina. International Commission on Radiation Units and Measurements (ICRU) doses were calculated for 2D patients. Maximum and 2-cc doses were calculated for 3D patients. Organ doses were normalized to prescription dose. Results: Bladder maximum doses were 178% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were no different than ICRU doses (p = 0.22). Two-cubic centimeter doses were 59% of maximum doses (p < 0.0001). Rectal maximum doses were 137% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were 87% of ICRU doses (p < 0.0001). Two-cubic centimeter doses were 64% of maximum doses (p < 0.0001). Using the first 1, 2, 3, 4 or 5 fractions, we predicted the final bladder dose to within 10% for 44%, 59%, 83%, 82%, and 89% of patients by using the ICRU dose, and for 45%, 55%, 80%, 85%, and 85% of patients by using the maximum dose, and for 37%, 68%, 79%, 79%, and 84% of patients by using the 2-cc dose. Using the first 1, 2, 3, 4 or 5 fractions, we predicted the final rectal dose to within 10% for 100%, 100%, 100%, 100%, and 100% of patients by using the ICRU dose, and for 60%, 65%, 70%, 75%, and 75% of patients by using the maximum dose, and for 68%, 95%, 84%, 84%, and 84% of patients by using the 2-cc dose. Conclusions: Doses to organs at risk vary depending on the calculation method. In some cases, final dose accuracy appears to plateau after the third fraction, indicating that simulation and planning may not be necessary in all fractions. A clinically relevant level of accuracy should be determined and further research conducted to address this issue.
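A minimal sketch of the prediction rule implied above, extrapolating the course total from the mean of the first k fraction doses and testing whether it falls within 10% of the delivered total; the function name and the simple scaling rule are assumptions, not necessarily the authors' exact procedure.

```python
# Hedged sketch: extrapolating the course total from the first k of six fraction doses
# and checking whether the prediction lands within 10% of the delivered total.
# The simple scaling rule is an assumption, not necessarily the authors' exact procedure.
def predicted_within_10_percent(fraction_doses, k):
    """fraction_doses: the six per-fraction organ doses (normalized or absolute)."""
    predicted_total = sum(fraction_doses[:k]) / k * len(fraction_doses)
    actual_total = sum(fraction_doses)
    return abs(predicted_total - actual_total) / actual_total <= 0.10
```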
Commercialization of an Advanced Gearless Midsize Wind Turbine
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKay, Chris; Ellis, Kyle
The objective of this project was the development and eventual commercialization of a Gearless Wind Turbine with a rated power of 450 kW. While the product was to be based on existing technology, a significant amount of new engineering effort was expected to be required to ensure maximum efficiency and realistic placement within the market. Expected benefits included a positive impact on green job creation in over 15 states as well as a strengthening of U.S. domestic capacity for turbine engineering.
New algorithms for optimal reduction of technical risks
NASA Astrophysics Data System (ADS)
Todinov, M. T.
2013-06-01
The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an optimal choice among competing risky prospects. The article demonstrates that the number of activities in a risky prospect is a key consideration in selecting the risky prospect. As a result, the maximum expected profit criterion, widely used for making risk decisions, is fundamentally flawed, because it does not consider the impact of the number of risk-reward activities in the risky prospects. A popular view, that if a single risk-reward bet with positive expected profit is unacceptable then a sequence of such identical risk-reward bets is also unacceptable, has been analysed and proved incorrect.
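A worked example, in the spirit of the last point, showing why rejecting a single positive-expectation bet does not force rejecting a sequence of them; the stake sizes and probabilities are illustrative.

```python
# Hedged sketch: a single bet that wins 200 with probability 0.5 and loses 100 otherwise
# has a positive expected profit (+50) yet loses half the time; a run of 100 such bets
# almost never ends in a net loss. Stakes and probabilities are illustrative.
from math import comb

def prob_net_loss(n, p_win=0.5, gain=200.0, loss=100.0):
    """Exact probability that n independent bets end with a negative net outcome."""
    total = 0.0
    for w in range(n + 1):
        if w * gain - (n - w) * loss < 0:
            total += comb(n, w) * p_win**w * (1 - p_win)**(n - w)
    return total

print(prob_net_loss(1))     # 0.5
print(prob_net_loss(100))   # ≈ 4e-4
```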