Science.gov

Sample records for achieve similar accuracies

  1. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  2. [Accuracy of apposition achieved by mandibular osteosyntheses. Stereophotogrammetric study].

    PubMed

    Randzio, J; Ficker, E; Wintges, T; Laser, S

    1989-01-01

    The accuracy of apposition achieved by wire and plate osteosyntheses was measured with the aid of close-range stereophotogrammetry in cadaveric mandibles. Both osteosynthesis methods are characterized by an increase in the intercondylar distance, which averages about 3.3 mm after plate osteosynthesis and about 1.9 mm after wiring. Moreover, osteosyntheses of the base of the mandible may involve a tendency of the condyle to become caudally dislocated.

  3. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.
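    A relative accuracy like 1:100000 fixes an absolute tolerance only once an object size is chosen; a one-line illustration (the object diameter and ratio below are examples, not values from the paper):

```python
def absolute_accuracy(object_diameter_m, relative_accuracy):
    """Convert a relative accuracy (e.g. 1:100000 -> 1e-5) into an
    absolute length tolerance for a given object diameter."""
    return object_diameter_m * relative_accuracy

# A 2 m workpiece measured at 1:100000 relative accuracy:
tol = absolute_accuracy(2.0, 1e-5)  # 2e-5 m, i.e. 20 micrometres
```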

  4. Different clinical electrodes achieve similar electrical nerve conduction block

    NASA Astrophysics Data System (ADS)

    Boger, Adam; Bhadra, Narendra; Gustafson, Kenneth J.

    2013-10-01

    Objective. We aim to evaluate the suitability of four electrodes previously used in clinical experiments for peripheral nerve electrical block applications. Approach. We evaluated peripheral nerve electrical block using three such clinical nerve cuff electrodes (the Huntington helix, the Case self-sizing Spiral and the flat interface nerve electrode) and one clinical intramuscular electrode (the Memberg electrode) in five cats. Amplitude thresholds for the block using 12 or 25 kHz voltage-controlled stimulation, onset response, and stimulation thresholds before and after block testing were determined. Main results. Complete nerve block was achieved reliably and the onset response to blocking stimulation was similar for all electrodes. Amplitude thresholds for the block were lowest for the Case Spiral electrode (4 ± 1 Vpp) and lower for the nerve cuff electrodes (7 ± 3 Vpp) than for the intramuscular electrode (26 ± 10 Vpp). A minor elevation in stimulation threshold and reduction in stimulus-evoked urethral pressure was observed during testing, but the effect was temporary and did not vary between electrodes. Significance. Multiple clinical electrodes appear suitable for neuroprostheses using peripheral nerve electrical block. The freedom to choose electrodes based on secondary criteria such as ease of implantation or cost should ease translation of electrical nerve block to clinical practice.

  5. Bounds on achievable accuracy in analog optical linear-algebra processors

    NASA Astrophysics Data System (ADS)

    Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.

    1990-07-01

    Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal-dependence of the noise at the output of the optical linear algebra processor (OLAP). One of the limiting factors in applying OLAPs to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective, which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal-dependency in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similarly to its definition for an analog signal input to an analog-to-digital (A/D) converter. The number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal, where m determines the error rate in the A/D conversion. The dynamic range can be expressed as the
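    The accuracy definition in the abstract can be written out directly as bits = log2(dynamic range / (m x sigma)); the numbers below are purely illustrative:

```python
import math

def achievable_bits(dynamic_range, sigma, m):
    """Bits of accuracy = log2(number of separable output levels),
    where levels = dynamic range / (m * signal standard deviation)."""
    levels = dynamic_range / (m * sigma)
    return math.log2(levels)

# Illustrative numbers: dynamic range of 1024 units, noise sigma of
# 1 unit, and m = 4 standard deviations per separable level:
bits = achievable_bits(1024.0, 1.0, 4)  # log2(256) = 8 bits
```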

  6. Creating Birds of Similar Feathers: Leveraging Similarity to Improve Teacher-Student Relationships and Academic Achievement

    ERIC Educational Resources Information Center

    Gehlbach, Hunter; Brinkworth, Maureen E.; King, Aaron M.; Hsu, Laura M.; McIntyre, Joseph; Rogers, Todd

    2016-01-01

    When people perceive themselves as similar to others, greater liking and closer relationships typically result. In the first randomized field experiment that leverages actual similarities to improve real-world relationships, we examined the affiliations between 315 9th grade students and their 25 teachers. Students in the treatment condition…

  7. Do you really understand? Achieving accuracy in interracial relationships.

    PubMed

    Holoien, Deborah Son; Bergsieker, Hilary B; Shelton, J Nicole; Alegre, Jan Marie

    2015-01-01

    Accurately perceiving whether interaction partners feel understood is important for developing intimate relationships and maintaining smooth interpersonal exchanges. During interracial interactions, when are Whites and racial minorities likely to accurately perceive how understood cross-race partners feel? We propose that participant race, desire to affiliate, and racial salience moderate accuracy in interracial interactions. Examination of cross-race roommates (Study 1) and interracial interactions with strangers (Study 2) revealed that when race is salient, Whites higher in desire to affiliate with racial minorities failed to accurately perceive the extent to which racial minority partners felt understood. Thus, although the desire to affiliate may appear beneficial, it may interfere with Whites' ability to accurately perceive how understood racial minorities feel. By contrast, racial minorities higher in desire to affiliate with Whites accurately perceived how understood White partners felt. Furthermore, participants' overestimation of how well they understood partners correlated negatively with partners' reports of relationship quality. Collectively, these findings indicate that racial salience and desire to affiliate moderate accurate perceptions of cross-race partners-even in the context of sustained interracial relationships-yielding divergent outcomes for Whites and racial minorities. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  8. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With Fortran 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
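    The restarted scheme the abstract describes can be sketched as follows. The adaptation criterion below (grow k whenever a restart cycle reduces the residual by less than a factor of ten) is an illustrative stand-in for the paper's convergence-rate estimate, not the authors' actual rule:

```python
import numpy as np

def gmres_cycle(A, b, x0, k):
    """One GMRES(k) cycle using modified Gram-Schmidt Arnoldi.
    Returns the updated iterate and its residual norm."""
    n = b.size
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0, 0.0
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / beta
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # happy breakdown
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    # Least-squares solve of the small Hessenberg system
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    x = x0 + Q[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k=5, k_max=30, tol=1e-8, max_cycles=50):
    """Increase-only adaptation: if a cycle shrinks the residual by
    less than 10x, enlarge the restart value before the next cycle."""
    x = np.zeros_like(b)
    res = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, new_res = gmres_cycle(A, b, x, k)
        if new_res < tol:
            break
        if new_res > 0.1 * res and k < k_max:   # slow convergence
            k = min(2 * k, k_max)
        res = new_res
    return x

# Quick check on a well-conditioned nonsymmetric system
rng = np.random.default_rng(0)
A = 2.0 * np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = adaptive_gmres(A, b)
```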

  9. A Novel Accuracy and Similarity Search Structure Based on Parallel Bloom Filters

    PubMed Central

    Yang, Hengcheng; Chen, Zheng

    2016-01-01

    In high-dimensional spaces, accuracy and similarity search by low computing and storage costs are always difficult research topics, and there is a balance between efficiency and accuracy. In this paper, we propose a new structure, Similar-PBF-PHT, to represent items of a set with high dimensions and retrieve accurate and similar items. The Similar-PBF-PHT contains three parts: parallel bloom filters (PBFs), parallel hash tables (PHTs), and a bitmatrix. Experiments show that the Similar-PBF-PHT is effective in membership query and K-nearest neighbors (K-NN) search. With accurate querying, the Similar-PBF-PHT has a low hit false positive probability (FPP) and acceptable memory costs. With K-NN querying, the average overall ratio and rank-i ratio of the Hamming distance are accurate, and the ratios of the Euclidean distance are acceptable. Retrieving accurate and similar items costs CPU time rather than I/O time, and the structure can handle data formats beyond numerical values. PMID:28053603
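    The abstract does not specify the hash functions used; the sketch below uses salted SHA-256 as an illustrative choice for a single Bloom filter, the building block that the Similar-PBF-PHT runs in parallel:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions derived from SHA-256
    with different salts. False positives are possible; false
    negatives are not."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("gene-A")
assert "gene-A" in bf   # membership query never misses inserted items
```

Running several such filters over different dimensions in parallel, as the paper does, lets partial matches across filters serve as a similarity signal rather than a strict membership test.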

  10. Personality Similarity between Teachers and Their Students Influences Teacher Judgement of Student Achievement

    ERIC Educational Resources Information Center

    Rausch, Tobias; Karing, Constance; Dörfler, Tobias; Artelt, Cordula

    2016-01-01

    This study examined personality similarity between teachers and their students and its impact on teacher judgement of student achievement in the domains of reading comprehension and mathematics. Personality similarity was quantified through intraclass correlations between personality characteristics of 409 dyads of German teachers and their…

  11. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal-to-noise ratio (SNR), the noise being determined by the LO power in the shot-noise-limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase-rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high-SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a
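    The stated scaling (frequency deviations of order the inverse pulse duration) translates into a single-shot velocity spread of roughly lambda * df / 2; the pulse duration and wavelength below are illustrative values, not taken from the paper:

```python
def single_shot_velocity_spread(pulse_duration_s, wavelength_m):
    """Order-of-magnitude velocity uncertainty from speckle-induced
    frequency deviations of order 1/tau: dv = lambda * df / 2."""
    df = 1.0 / pulse_duration_s          # ~inverse pulse duration
    return wavelength_m * df / 2.0

# Illustrative CO2-lidar-like numbers: 1 microsecond pulse, 10.6 um
dv = single_shot_velocity_spread(1e-6, 10.6e-6)   # ~5.3 m/s
```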

  12. Similarity

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1990-01-01

    In this 'Project Mathematics!' series, sponsored by the California Institute of Technology (Caltech), the mathematical concept of similarity is presented. The history of similarity and its real-life applications are discussed using actual film footage and computer animation. Terms used and various concepts of size, shape, ratio, area, and volume are demonstrated. The similarity of polygons, solids, congruent triangles, internal ratios, perimeters, and line segments is shown using the previously mentioned concepts.

  13. Similarity from Multi-Dimensional Scaling: Solving the Accuracy and Diversity Dilemma in Information Filtering

    PubMed Central

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

    Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increases the global diversification of the networks in the long term. PMID:25343243
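    The abstract does not say which MDS variant is used; a minimal sketch of classical (Torgerson) MDS followed by a cosine similarity over the embedding, assuming a precomputed symmetric distance matrix between nodes:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed points so that Euclidean
    distances approximate the input distance matrix D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]     # largest eigenvalues first
    L = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * L                 # n x dims coordinates

def cosine_similarity(X):
    """Pairwise cosine similarity of the embedded nodes."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.where(norms == 0, 1.0, norms)
    return Xn @ Xn.T

# Three collinear points at distances 1 apart embed exactly in 1-D
D = np.array([[0.0, 1, 2], [1, 0, 1], [2, 1, 0]])
X = classical_mds(D, dims=1)
```

The similarity matrix from `cosine_similarity` would then replace the raw similarity measure inside an item-based collaborative filtering step.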

  14. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    PubMed

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

    Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increases the global diversification of the networks in the long term.

  15. Achieving Full Dynamic Similarity with Small-Scale Wind Turbine Models

    NASA Astrophysics Data System (ADS)

    Miller, Mark; Kiefer, Janik; Westergaard, Carsten; Hultmark, Marcus

    2016-11-01

    Power and thrust data as a function of Reynolds number and Tip Speed Ratio are presented at conditions matching those of a full scale turbine. Such data has traditionally been very difficult to acquire due to the large length-scales of wind turbines, and the limited size of conventional wind tunnels. Ongoing work at Princeton University employs a novel, high-pressure wind tunnel (up to 220 atmospheres of static pressure) which uses air as the working fluid. This facility allows adjustment of the Reynolds number (via the fluid density) independent of the Tip Speed Ratio, up to a Reynolds number (based on chord and velocity at the tip) of over 3 million. Achieving dynamic similarity using this approach implies very high power and thrust loading, which results in mechanical loads greater than 200 times those experienced by a similarly sized model in a conventional wind tunnel. In order to accurately report the power coefficients, a series of tests were carried out on a specially designed model turbine drive-train using an external testing bench to replicate tunnel loading. An accurate map of the drive-train performance at various operating conditions was determined. Finally, subsequent corrections to the power coefficient are discussed in detail. Supported by: National Science Foundation Grant CBET-1435254 (program director Gregory Rorrer).
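    The density-based Reynolds scaling behind the high-pressure tunnel can be made concrete: Re = rho * V * c / mu, air density scales roughly linearly with static pressure, and the dynamic viscosity of air is nearly pressure-independent (the standard assumption behind such facilities). All numbers below are illustrative, not from the paper:

```python
def reynolds(rho, velocity, chord, mu):
    """Chord-based Reynolds number Re = rho * V * c / mu."""
    return rho * velocity * chord / mu

MU_AIR = 1.8e-5   # Pa*s, approximate for air at room temperature

# Same model (5 cm chord, 20 m/s) at ~1 atm vs ~220 atm:
re_1atm = reynolds(1.2, 20.0, 0.05, MU_AIR)
re_220atm = reynolds(1.2 * 220, 20.0, 0.05, MU_AIR)
# Pressurizing the working fluid multiplies Re by ~220 at fixed
# velocity, chord, and Tip Speed Ratio.
```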

  16. A Comparative Investigation of Several Methods of Aiding College Freshmen to Achieve Grammatical Accuracy in Written Composition.

    ERIC Educational Resources Information Center

    Essary, William Howard

    Two problems were investigated in this study: (1) Which (if any) method of teaching freshmen composition is most effective in helping college students achieve grammatical accuracy? (2) Is improvement in grammatical accuracy paralleled or contrasted with improvement in content? Relatively weak students (low C high-school average and a mean SAT…

  17. Mice and rats achieve similar levels of performance in an adaptive decision-making task.

    PubMed

    Jaramillo, Santiago; Zador, Anthony M

    2014-01-01

    Two opposing constraints exist when choosing a model organism for studying the neural basis of adaptive decision-making: (1) experimental access and (2) behavioral complexity. Available molecular and genetic approaches for studying neural circuits in the mouse fulfill the first requirement. In contrast, it is still under debate if mice can perform cognitive tasks of sufficient complexity. Here we compare learning and performance of mice and rats, the preferred behavioral rodent model, during an acoustic flexible categorization two-alternative choice task. The task required animals to switch between two categorization definitions several times within a behavioral session. We found that both species achieved similarly high performance levels. On average, rats learned the task faster than mice, although some mice were as fast as the average rat. No major differences in subjective categorization boundaries or the speed of adaptation between the two species were found. Our results demonstrate that mice are an appropriate model for the study of the neural mechanisms underlying adaptive decision-making, and suggest they might be suitable for other cognitive tasks as well.

  18. Socially Oriented Motivational Goals and Academic Achievement: Similarities between Native and Anglo Americans

    ERIC Educational Resources Information Center

    Ali, Jinnat; McInerney, Dennis M.; Craven, Rhonda G.; Yeung, Alexander Seeshing; King, Ronnel B.

    2014-01-01

    The authors examined the relations between two socially oriented dimensions of student motivation and academic achievement of Native (Navajo) American and Anglo American students. Using confirmatory factor analysis, a multidimensional and hierarchical model was found to explain the relations between performance and social goals. Four first-order…

  19. Male Learners' Vocabulary Achievement through Concept Mapping and Mind Mapping: Differences and Similarities

    ERIC Educational Resources Information Center

    Tarkashvand, Zahra

    2015-01-01

    While learning English plays an essential role in today's life, vocabulary achievement is helpful to overcome the difficulties of commanding the language. Drawing on data from three months experimental work, this article explores how two mapping strategies affect the learning vocabularies in EFL male learners. While females were studied before,…

  20. Finland and Singapore in PISA 2009: Similarities and Differences in Achievements and School Management

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2014-01-01

    In PISA 2009, Finland and Singapore were both ranked high among the participating nations and have caught much attention internationally. However, a secondary analysis of the means for Reading achievement show that the differences are rather small and are attributable to spurious precision. Hence, the two nations should be considered as being on…

  1. Mismatched partners that achieve postpairing behavioral similarity improve their reproductive success

    PubMed Central

    Laubu, Chloé; Dechaume-Moncharmont, François-Xavier; Motreuil, Sébastien; Schweitzer, Cécile

    2016-01-01

    Behavioral similarity between partners is likely to promote within-pair compatibility and to result in better reproductive success. Therefore, individuals are expected to choose a partner that is alike in behavioral type. However, mate searching is very costly and does not guarantee finding a matching partner. If mismatched individuals pair, they may benefit from increasing their similarity after pairing. We show in a monogamous fish species—the convict cichlid—that the behavioral similarity between mismatched partners can increase after pairing. This increase resulted from asymmetrical adjustment because only the reactive individual became more like its proactive partner, whereas the latter did not change its behavior. The mismatched pairs that increased their similarity not only improved their reproductive success but also raised it up to the level of matched pairs. While most studies assume that assortative mating results from mate choice, our study suggests that postpairing adjustment could be an alternative explanation for the high behavioral similarity between partners observed in the field. It also explains why interindividual behavioral differences can be maintained within a given population. PMID:26973869

  2. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  3. Accuracy of Teachers' Judgments of Students' Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Südkamp, Anna; Kaiser, Johanna; Möller, Jens

    2012-01-01

    This meta-analysis summarizes empirical results on the correspondence between teachers' judgments of students' academic achievement and students' actual academic achievement. The article further investigates theoretically and methodologically relevant moderators of the correlation between the two measures. Overall, 75 studies reporting…

  4. Accurate rotational constants for linear interstellar carbon chains: achieving experimental accuracy

    NASA Astrophysics Data System (ADS)

    Etim, Emmanuel E.; Arunan, Elangannan

    2017-01-01

    Linear carbon chain molecular species remain the dominant theme in interstellar chemistry. Their continuous astronomical observation depends on the availability of accurate spectroscopic parameters. Accurate rotational constants are reported for hundreds of molecular species of astrophysical, spectroscopic and chemical interest from the different linear carbon chains: C_{{n}}H, C_{{n}}H-, C_{{n}}N, C_{{n}}N-, C_{{n}}O, C_{{n}}S, HC_{{n}}S, C_{{n}}Si, CH3(CC)_{{n}}H, HC_{{n}}N, DC_{2{n}+1}N, HC_{2{n}}NC, and CH3(C≡C)_{{n}}CN, using three to four moments of inertia calculated from the experimental rotational constants coupled with those obtained from the optimized geometries at the Hartree-Fock level. The calculated rotational constants are obtained from the corrected moments of inertia at the Hartree-Fock geometries. The calculated rotational constants show an accuracy of a few kHz or better irrespective of the chain length and terminating groups. This accuracy makes these rotational constants excellent tools for both astronomical and laboratory detection of these molecular species of astrophysical interest. From the numerous unidentified lines from different astronomical surveys, transitions corresponding to known and new linear carbon chains could be found using these rotational constants. The astrophysical, spectroscopic and chemical implications of these results are discussed.
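    The rotational constant that such moments of inertia determine for a linear molecule follows B = h / (8 * pi^2 * I); a sketch with an illustrative moment of inertia of order 1e-46 kg m^2, typical of a light diatomic (the value is assumed, not from the paper):

```python
import math

H_PLANCK = 6.62607015e-34   # J*s (exact SI value)

def rotational_constant_mhz(moment_of_inertia_kg_m2):
    """Rotational constant B = h / (8 * pi^2 * I), returned in MHz."""
    b_hz = H_PLANCK / (8.0 * math.pi ** 2 * moment_of_inertia_kg_m2)
    return b_hz / 1e6

# Illustrative light-diatomic moment of inertia, ~1.46e-46 kg m^2,
# gives a B of tens of GHz:
b_mhz = rotational_constant_mhz(1.4565e-46)
```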

  5. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, D.P.; Storey, J.C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands—29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1σ) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1σ) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications.

  6. A Method to Achieve Large Volume, High Accuracy Photogrammetric Measurements through the Use of an Actively Deformable Sensor Mounting Platform

    NASA Astrophysics Data System (ADS)

    Sargeant, B.; Robson, S.; Szigeti, E.; Richardson, P.; El-Nounu, A.; Rafla, M.

    2016-06-01

    When using any optical measurement system, one important factor to consider is the placement of the sensors in relation to the workpiece being measured. When making decisions on sensor placement, compromises are necessary in selecting the best placement based on the shape and size of the object of interest and the desired resolution and accuracy. One such compromise is the distance at which the sensors are placed from the measurement surface: a smaller distance gives higher spatial resolution and local accuracy, while a greater distance reduces the number of measurements necessary to cover a large area, reducing the build-up of errors between measurements and increasing global accuracy. This paper proposes a photogrammetric approach whereby a number of sensors on a continuously flexible mobile platform are used to obtain local measurements while the position of the sensors is determined by a 6DoF tracking solution, and the results are combined to give a single set of measurement data within a continuous global coordinate system. The ability of this approach to achieve both high-accuracy measurement and results over a large volume is then tested, and areas of weakness to be improved upon are identified.

  7. Mechanized pivot shift test achieves greater accuracy than manual pivot shift test.

    PubMed

    Musahl, Volker; Voos, James; O'Loughlin, Padhraig F; Stueber, Volker; Kendoff, Daniel; Pearle, Andrew D

    2010-09-01

    The objective of this study was to design a navigated mechanized pivot shift test setup and evaluate its repeatability in the ACL-deficient knee. It was hypothesized that translations and rotations measured with the mechanized pivot shift would be more repeatable when compared to those obtained with a manual pivot shift. Twelve fresh frozen cadaveric hip-to-toe whole lower extremities were used for this study. A manual pivot shift test was performed in the intact knee and in the ACL-deficient knee and was repeated three times. A navigation system simultaneously recorded tibial translation and rotation. The mechanized pivot shift test consists of a modified continuous passive motion (CPM) machine and a custom-made foot holder to allow for the application of internal rotation moments at the knee. Valgus moments were achieved by a 45 degrees tilt of the CPM machine with respect to the supine position and a Velcro strap secured across the proximal tibia. The mechanized pivot shift was repeated three times. Repeated measures ANOVA was used to compare manual and mechanized pivot shift testing. An intra-class correlation coefficient (ICC) was used to determine variability within each knee at each testing condition. In the ACL-deficient knee, translation with manual pivot shift testing (11.7 +/- 2.6 mm) was significantly higher than with mechanized pivot shift testing (7.4 +/- 2.5 mm; p < 0.05). Rotation with the manual pivot shift testing (18.6 +/- 5.4 degrees) was also significantly higher than with mechanized pivot shift testing (11.0 +/- 2.3 degrees; p < 0.05). The intra-class ICC for translations was 0.76 for manual pivot shift and 0.92 for the mechanized pivot shift test. The intra-class ICC for rotations was 0.89 for manual pivot shift and 0.82 for the mechanized pivot shift test. This study introduced a modified CPM for mechanized pivot shift testing. Although recorded translations and rotations with the mechanized pivot shift test were lower than with manual
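    The abstract reports intra-class correlation coefficients without naming the ICC form; a sketch of the simplest one-way random-effects ICC(1,1), which may differ from the exact model the authors used:

```python
import numpy as np

def icc_1_1(data):
    """One-way random-effects ICC(1,1).
    data: n_subjects x k_repeats array of repeated measurements."""
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)   # between
    msw = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect repeatability (identical repeats per knee) yields ICC = 1
perfect = np.array([[1.0, 1, 1], [2, 2, 2], [3, 3, 3]])
```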

  8. Sling exercise and traditional warm-up have similar effects on the velocity and accuracy of throwing.

    PubMed

    Huang, Juliet S; Pietrosimone, Brian G; Ingersoll, Christopher D; Weltman, Arthur L; Saliba, Susan A

    2011-06-01

    Throwing is a complex motion that involves the entire body and often puts an inordinate amount of stress on the shoulder and the arm. Warm-up prepares the body for work and can enhance performance. Sling-based exercise (SE) has been theorized to activate muscles, particularly the stabilizers, in a manner beneficial for preactivity warm-up, yet this hypothesis has not been tested. Our purpose was to determine if a warm-up using SE would increase throwing velocity and accuracy compared to a traditional Thrower's 10 warm-up program. Division I baseball players (16 male nonpitchers, age: 19.6 ± 1.3 years, height: 184.2 ± 6.2 cm, mass: 76.9 ± 19.2 kg) volunteered to participate in this crossover study. All subjects underwent both a warm-up routine using a traditional method (Thrower's 10 exercises) and a warm-up routine using closed kinetic chain SE methods (RedCord) on different days separated by 72 hours. Ball velocity and accuracy measures were obtained on 10 throws after either the traditional or the SE warm-up regimen. Velocity was recorded using a standard Juggs radar gun (JUGS; Tualatin, OR, USA). Accuracy was recorded using a custom accuracy target. An analysis of covariance was performed, with the number of throws recorded before the testing used as a covariate; significance was set a priori at p < 0.05. There were no statistical differences between the SE warm-up and Thrower's 10 warm-up for throwing velocity (SE: 74.7 ± 7.5 mph, Thrower's 10: 74.6 ± 7.3 mph, p = 0.874) or accuracy (SE: 115.6 ± 53.7 cm, Thrower's 10: 91.8 ± 55 cm, p = 0.136). Warming up with SE produced equivalent throwing velocity and accuracy compared to the Thrower's 10 warm-up method. Thus, SE provides an alternative to traditional warm-up.

  9. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and for expanding the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
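
    The RBFNN stage of such a method amounts to fitting the output weights of a Gaussian radial-basis-function network over the selected descriptors, which is a linear least-squares problem once the centers and widths are fixed. A hedged sketch (the descriptor values, centers, and `gamma` width below are illustrative, not those of the paper):

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def rbf_fit(X, y, centers, gamma):
    """Solve for the output-layer weights by linear least squares."""
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, gamma), y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    """Evaluate the fitted RBF network at new descriptor vectors."""
    return rbf_design(X, centers, gamma) @ w
```

    In practice the centers would come from the SOFMNN clustering step rather than, as here, from the training points themselves.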

  10. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.

  11. Proficiency testing linked to the national reference system for the clinical laboratory: a proposal for achieving accuracy.

    PubMed

    Lasky, F D

    1992-07-01

    I propose using proficiency testing (PT) to achieve one of the important goals of CLIA: accurate and reliable clinical testing. Routine methods for the clinical laboratory are traceable to Definitive (DM) or Reference Methods (RM) or to Methodological Principles (MP) through a modification of the National Reference System for the Clinical Laboratory. PT is the link used to monitor consistent field performance. Although PT has been effective as a relative measure of laboratory performance, the technical limitations of PT fluids and of routine methods currently in use make it unlikely that PT alone can be used as a reliable measure of laboratory accuracy. Instead, I recommend calibration of routine systems through correlation to DM, RM, or MP with use of patients' specimens. The manufacturer is in the best position to assume this responsibility because it is also responsible for a consistent, reliable product. Analysis of different manufactured batches of reagent would be compared with predetermined goals for precision and accuracy, as illustrated with data from product testing of Kodak Ektachem clinical chemistry slides. Adoption of this proposal would give manufacturers of PT materials, manufacturers of analytical systems, PT providers, and government agencies time to understand and resolve sources of error that limit the utility of PT for the job required by law.

  12. Cognitive Processing Profiles of School-Age Children Who Meet Low-Achievement, IQ-Discrepancy, or Dual Criteria for Underachievement in Oral Reading Accuracy

    ERIC Educational Resources Information Center

    Van Santen, Frank W.

    2012-01-01

    The purpose of this study was to compare the cognitive processing profiles of school-age children (ages 7 to 17) who met criteria for underachievement in oral reading accuracy based on three different methods: 1) use of a regression-based IQ-achievement discrepancy only (REGonly), 2) use of a low-achievement cutoff only (LAonly), and 3) use of a…

  13. Similarities and Differences in Domain-Specific and Global Self-Evaluations of Learning-Disabled, Behaviorally Disordered, and Normally Achieving Adolescents.

    ERIC Educational Resources Information Center

    Harter, Susan; Whitesell, Nancy R.; Junkin, Loretta J.

    1998-01-01

    Documented similarities and differences in the domain-specific and global self-evaluations of 235 normally achieving, 118 learning disabled, and 70 behaviorally disordered adolescents. Factor analysis revealed eight discrete self-concept domains for each group. Discusses similarities and differences and within-group processes. Contains 46…

  14. Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

    ERIC Educational Resources Information Center

    Nikelshpur, Dmitry O.

    2014-01-01

    Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…

  15. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    PubMed

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy.

  16. Strategies for Achieving High Sequencing Accuracy for Low Diversity Samples and Avoiding Sample Bleeding Using Illumina Platform

    PubMed Central

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802

  17. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  18. Utilizing artificial neural networks in MATLAB to achieve parts-per-billion mass measurement accuracy with a fourier transform ion cyclotron resonance mass spectrometer.

    PubMed

    Williams, D Keith; Kovach, Alexander L; Muddiman, David C; Hanck, Kenneth W

    2009-07-01

    Fourier transform ion cyclotron resonance mass spectrometry has the ability to realize exceptional mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA significantly depends on the total number of ions trapped in the ICR cell for a particular measurement, as well as relative ion abundance of a given species. Artificial neural network calibration in conjunction with automatic gain control (AGC) is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. In addition, artificial neural network calibration is used to account for both differences in total ion population in the ICR cell as well as relative ion abundance of a given species, which also affords mean MMA values at the parts-per-billion level.

  19. Molecular similarity and property similarity.

    PubMed

    Barbosa, Frédérique; Horvath, Dragos

    2004-01-01

    This paper reviews the main efforts undertaken to date to understand, rationalize and apply the similarity principle (similar compounds => similar properties) as a computational tool in modern drug discovery. The best-suited mathematical expression of this classical working hypothesis of medicinal chemistry needs to be carefully chosen (out of the virtually infinite possible implementations in terms of molecular descriptors and molecular similarity metrics) in order to achieve an optimal validation of the hypothesis that molecules that are neighbors in the Structural Space will also display similar properties. This overview will show why no single "absolute" measure of molecular similarity can be conceived, and why molecular similarity scores should be considered tunable tools that need to be adapted to the problem at hand.
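
    The similarity principle itself can be sketched as a nearest-neighbor property estimate: rank library compounds by a chosen similarity metric and average the properties of the top k. The descriptor vectors, the cosine metric, and the `predict_property` helper below are illustrative choices on my part, not the review's prescription:

```python
import math

def cosine(x, y):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0

def predict_property(query, library, k=3):
    """'Similar compounds => similar properties': estimate a property as the
    average over the k library compounds nearest in descriptor space."""
    ranked = sorted(library, key=lambda item: cosine(query, item[0]), reverse=True)
    top = ranked[:min(k, len(ranked))]
    return sum(prop for _, prop in top) / len(top)
```

    Swapping the metric (Tanimoto, Euclidean, a kernel) is exactly the "tunable tool" point the review makes: each choice defines a different chemical neighborhood.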

  20. Similarity Learning of Manifold Data.

    PubMed

    Chen, Si-Bao; Ding, Chris H Q; Luo, Bin

    2015-09-01

    Without constructing an adjacency graph for the neighborhood, we propose a method to learn similarity among sample points of a manifold in Laplacian embedding (LE), based on adding constraints of linear reconstruction and least-absolute-shrinkage-and-selection-operator-type minimization. Two algorithms and corresponding analyses are presented to learn similarity for mixed-signed and nonnegative data, respectively. The similarity learning method is further extended to kernel spaces. Experiments on both synthetic and real-world benchmark data sets demonstrate that the proposed LE with the new similarity has better visualization and achieves higher accuracy in classification.

  1. Accuracy Test of Microsoft Kinect for Human Morphologic Measurements

    NASA Astrophysics Data System (ADS)

    Molnár, B.; Toth, C. K.; Detrekői, A.

    2012-08-01

    The Microsoft Kinect sensor, a popular gaming peripheral, is widely used in a large number of applications, including close-range 3D measurements. This low-end device is rather inexpensive compared to similar active imaging systems. The Kinect sensors include an RGB camera, an IR projector, an IR camera and an audio unit. Human morphologic measurements require high accuracy with a fast data acquisition rate. To achieve the highest accuracy, the depth sensor and the RGB camera should be calibrated and co-registered to obtain a high-quality 3D point cloud as well as optical imagery. Since this is a low-end sensor, developed for a different purpose, the accuracy could be critical for 3D measurement-based applications. Therefore, two types of accuracy tests were performed: (1) to describe the absolute accuracy, the ranging accuracy of the device in the range of 0.4 to 15 m was estimated, and (2) the relative accuracy of points depending on the range was characterized. For the accuracy investigation, a test field was created with two spheres, and the relative accuracy is described by sphere-fitting performance and the distance estimation between the sphere center points. Some other factors can also be considered, such as the angle of incidence or the material used in these tests. The non-ambiguity range of the sensor is from 0.3 to 4 m, but, based on our experience, it can be extended up to 20 m. Obviously, this methodology raises some accuracy issues which make accuracy testing really important.
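
    The sphere-fitting step used to characterize relative accuracy can be done with an algebraic least-squares fit, since x² + y² + z² = 2c·p + (r² − |c|²) is linear in the center c and the combined scalar term. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns (center, radius).

    Rearranging |p - c|^2 = r^2 gives 2 c . p + (r^2 - |c|^2) = |p|^2,
    which is linear in the unknowns, so one lstsq call recovers both.
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + (center ** 2).sum())
    return center, radius
```

    The residuals of the fit against the measured point cloud then serve as the relative-accuracy indicator, and the distance between the two fitted centers checks scale.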

  2. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically and with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. 
This finding has shown
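
    The check-point comparison described in this record, differencing photogrammetric XYZ against RTK GPS XYZ at common points, can be sketched as follows; the function name and the 95th-percentile reliability summary are illustrative, not the authors' exact workflow:

```python
import numpy as np

def checkpoint_accuracy(photo_xyz, gps_xyz):
    """Compare photogrammetric XYZ against RTK GPS XYZ at common check points.

    Returns (rmse_h, rmse_v, p95_h, p95_v) in the units of the inputs.
    """
    d = np.asarray(photo_xyz, float) - np.asarray(gps_xyz, float)
    horiz = np.hypot(d[:, 0], d[:, 1])   # horizontal error per check point
    vert = np.abs(d[:, 2])               # vertical error per check point
    rmse_h = np.sqrt((horiz ** 2).mean())
    rmse_v = np.sqrt((vert ** 2).mean())
    return rmse_h, rmse_v, np.percentile(horiz, 95), np.percentile(vert, 95)
```

    Feeding in 45 check-point pairs yields the kind of "95% reliably within X mm" statement quoted above.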

  3. Privacy-preserving matching of similar patients.

    PubMed

    Vatsalan, Dinusha; Christen, Peter

    2016-02-01

    The identification of similar entities represented by records in different databases has drawn considerable attention in many application areas, including in the health domain. One important type of entity matching application that is vital for quality healthcare analytics is the identification of similar patients, known as similar patient matching. A key component of identifying similar records is the calculation of similarity of the values in attributes (fields) between these records. Due to increasing privacy and confidentiality concerns, using the actual attribute values of patient records to identify similar records across different organizations is becoming non-trivial because the attributes in such records often contain highly sensitive information such as personal and medical details of patients. Therefore, the matching needs to be based on masked (encoded) values while being effective and efficient to allow matching of large databases. Bloom filter encoding has widely been used as an efficient masking technique for privacy-preserving matching of string and categorical values. However, no work on Bloom filter-based masking of numerical data, such as integer (e.g. age), floating point (e.g. body mass index), and modulus (numbers wrap around upon reaching a certain value, e.g. date and time), which are commonly required in the health domain, has been presented in the literature. We propose a framework with novel methods for masking numerical data using Bloom filters, thereby facilitating the calculation of similarities between records. We conduct an empirical study on publicly available real-world datasets which shows that our framework provides efficient masking and achieves similar matching accuracy compared to the matching of actual unencoded patient records.
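
    One way to read the proposed numeric masking: hash every value in a number's neighborhood into a Bloom filter, so that close values share bits, then compare filters with a set coefficient such as Dice. The parameters (filter size `m`, `k` hash functions, neighborhood `window`) and the salted-SHA-1 hashing below are illustrative assumptions on my part, not the paper's exact construction:

```python
import hashlib

def bloom_encode(value, m=100, k=2, step=1, window=3):
    """Mask a numeric value: hash each value in its +/- window neighbourhood
    into an m-bit Bloom filter (represented as the set of on-bit positions),
    using k salted SHA-1 hash functions. Nearby values then share bits."""
    bits = set()
    for n in range(-window, window + 1):
        token = str(value + n * step).encode()
        for i in range(k):
            digest = hashlib.sha1(token + bytes([i])).digest()
            bits.add(int.from_bytes(digest[:4], 'big') % m)
    return bits

def dice(b1, b2):
    """Dice coefficient between two Bloom-filter on-bit sets."""
    if not (b1 or b2):
        return 1.0
    return 2.0 * len(b1 & b2) / (len(b1) + len(b2))
```

    Two parties can then exchange only the bit sets: similar ages produce high Dice scores, dissimilar ones low scores, without revealing the raw values.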

  4. Reconstructing propagation networks with temporal similarity

    PubMed Central

    Liao, Hao; Zeng, An

    2015-01-01

    Node similarity significantly contributes to the growth of real networks. In this paper, based on the observed epidemic spreading results we apply the node similarity metrics to reconstruct the underlying networks hosting the propagation. We find that the reconstruction accuracy of the similarity metrics is strongly influenced by the infection rate of the spreading process. Moreover, there is a range of infection rate in which the reconstruction accuracy of some similarity metrics drops nearly to zero. To improve the similarity-based reconstruction method, we propose a temporal similarity metric which takes into account the time information of the spreading. The reconstruction results are remarkably improved with the new method. PMID:26086198

  5. Reconstructing propagation networks with temporal similarity.

    PubMed

    Liao, Hao; Zeng, An

    2015-06-18

    Node similarity significantly contributes to the growth of real networks. In this paper, based on the observed epidemic spreading results we apply the node similarity metrics to reconstruct the underlying networks hosting the propagation. We find that the reconstruction accuracy of the similarity metrics is strongly influenced by the infection rate of the spreading process. Moreover, there is a range of infection rate in which the reconstruction accuracy of some similarity metrics drops nearly to zero. To improve the similarity-based reconstruction method, we propose a temporal similarity metric which takes into account the time information of the spreading. The reconstruction results are remarkably improved with the new method.

  6. Diffusion-like recommendation with enhanced similarity of objects

    NASA Astrophysics Data System (ADS)

    An, Ya-Hui; Dong, Qiang; Sun, Chong-Jing; Nie, Da-Cheng; Fu, Yan

    2016-11-01

    In the last decade, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure will put the other one at risk, thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in resource transfer equations of diffusion-like models, by giving a tunable exponent to the RA similarity, and traversing the value of this exponent to achieve the optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix and RateYourMusic show that the modified models can yield remarkable performance improvement compared with the original ones.
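
    The idea of a tunable exponent in the resource-transfer step can be sketched on a user-object bipartite network; with theta = 1 the sketch reduces to standard mass diffusion (ProbS). This is a simplified illustration of the mechanism, not the authors' exact modified model:

```python
import numpy as np

def diffusion_scores(A, user, theta=1.0):
    """Mass-diffusion recommendation on a user-object bipartite adjacency
    matrix A (users x objects), with a tunable exponent theta applied to
    object degrees in the redistribution step; theta = 1 is standard ProbS.
    Raising theta shifts resource toward low-degree (unpopular) objects."""
    A = np.asarray(A, dtype=float)
    k_obj = np.where(A.sum(axis=0) > 0, A.sum(axis=0), 1.0)
    k_user = np.where(A.sum(axis=1) > 0, A.sum(axis=1), 1.0)
    f0 = A[user].copy()                    # unit resource on collected objects
    res_user = A @ (f0 / k_obj ** theta)   # step 1: objects -> users
    scores = A.T @ (res_user / k_user)     # step 2: users -> objects
    scores[A[user] > 0] = 0.0              # do not re-recommend collected objects
    return scores
```

    Sweeping theta and measuring ranking accuracy and diversity on held-out data is then the "traverse the exponent" experiment the abstract describes.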

  7. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  8. Transverse Mercator with an accuracy of a few nanometers

    NASA Astrophysics Data System (ADS)

    Karney, Charles F. F.

    2011-08-01

    Implementations of two algorithms for the transverse Mercator projection are described; these achieve accuracies close to machine precision. One is based on the exact equations of Thompson and Lee and the other uses an extension of Krüger's series for the mapping to higher order. The exact method provides an accuracy of 9 nm over the entire ellipsoid, while the errors in the series method are less than 5 nm within 3900 km of the central meridian. In each case, the meridian convergence and scale are also computed with similar accuracy. The speed of the series method is competitive with other less accurate algorithms and the exact method is about five times slower.
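
    For intuition, the transverse Mercator mapping has a simple closed form on a sphere; the paper's nanometer-accurate algorithms treat the ellipsoid via the Thompson/Lee exact equations and an extended Krüger series, so the spherical case below is only the textbook special case, not either of those methods:

```python
import math

def tmerc_sphere(lat_deg, lon_deg, lon0_deg=0.0, radius=6371000.0):
    """Exact transverse Mercator projection on a sphere.

    lon0_deg is the central meridian; returns (easting, northing) in metres.
    """
    phi = math.radians(lat_deg)
    lam = math.radians(lon_deg - lon0_deg)
    x = radius * math.atanh(math.cos(phi) * math.sin(lam))
    y = radius * math.atan2(math.tan(phi), math.cos(lam))
    return x, y
```

    On the central meridian the easting is zero and the northing is simply arc length along the meridian, which makes the formula easy to sanity-check.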

  9. Achieving Salary Equity

    ERIC Educational Resources Information Center

    Nevill, Dorothy D.

    1975-01-01

    Three techniques are outlined for use by higher education institutions to achieve salary equity: salary prediction (using various statistical procedures), counterparting (comparing salaries of persons of similar rank), and grievance procedures. (JT)

  10. Walking on a user similarity network towards personalized recommendations.

    PubMed

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both the recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.
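
    The random-walk-with-restart step on the adjusted user similarity network can be sketched with power iteration; the restart probability and convergence tolerance below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def random_walk_with_restart(W, seed, restart=0.15, tol=1e-10):
    """Random walk with restart on a weighted similarity network W.

    Power iteration on p = (1 - restart) * P @ p + restart * e, where P is
    the column-normalised transition matrix and e is the indicator vector of
    the seed user. Assumes every node has at least one positive edge weight.
    """
    W = np.asarray(W, dtype=float)
    P = W / W.sum(axis=0, keepdims=True)
    e = np.zeros(len(W))
    e[seed] = 1.0
    p = e.copy()
    while True:
        p_next = (1.0 - restart) * (P @ p) + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
```

    The stationary probabilities rank the other users by relevance to the seed; recommendations are then aggregated from the objects those users collected.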

  11. Walking on a User Similarity Network towards Personalized Recommendations

    PubMed Central

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both the recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance. PMID:25489942

  12. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed Central

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B.; Hartz, Sarah M.; Johnson, Eric O.; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L.

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen’s kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
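
    The contrast the study draws between concordance rate and the kappa-based IQS is easy to reproduce: for a rare variant, an imputation that always calls the major genotype looks excellent by concordance but is corrected to zero by Cohen's kappa. A minimal sketch (genotype strings are illustrative):

```python
def concordance(true_labels, pred_labels):
    """Raw concordance rate: fraction of agreeing genotype calls."""
    return sum(t == p for t, p in zip(true_labels, pred_labels)) / len(true_labels)

def cohens_kappa(true_labels, pred_labels):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(true_labels)
    p_obs = concordance(true_labels, pred_labels)
    labels = set(true_labels) | set(pred_labels)
    # Chance agreement from the marginal frequencies of each genotype.
    p_chance = sum(
        (true_labels.count(c) / n) * (pred_labels.count(c) / n) for c in labels
    )
    return (p_obs - p_chance) / (1.0 - p_chance) if p_chance < 1.0 else 1.0
```

    With a 2% minor-genotype frequency and an "always major" caller, concordance is 0.98 while kappa is 0, which is exactly the inflation-for-rare-variants effect the abstract reports.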

  13. Student Metacognitive Monitoring: Predicting Test Achievement from Judgment Accuracy

    ERIC Educational Resources Information Center

    Valdez, Alfred

    2013-01-01

    Metacognitive monitoring processes have been shown to be critical determinants of human learning. Metacognitive monitoring consist of various knowledge estimates that enable learners to engage in self-regulatory processes important for both the acquisition of knowledge and the monitoring of one's knowledge when engaged in assessment. This study…

  14. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  15. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
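
    The descriptive metrics named above can be sketched directly. Theil's U here is the U1 inequality coefficient (0 = perfect forecast); which Theil variant the study used is not specified, so that choice is an assumption:

```python
import math

def rmse(actual, forecast):
    """Root-mean-square forecast error."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

def theil_u1(actual, forecast):
    """Theil's U1 inequality coefficient: 0 = perfect forecast, 1 = worst case."""
    n = len(actual)
    denom = math.sqrt(sum(a * a for a in actual) / n) + \
            math.sqrt(sum(f * f for f in forecast) / n)
    return rmse(actual, forecast) / denom
```

    Computing these for the econometric model and for a random-walk benchmark over the same horizon gives the kind of head-to-head comparison the study performs.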

  16. Biosimilar Insulins: How Similar is Similar?

    PubMed Central

    Heinemann, Lutz; Hompesch, Marcus

    2011-01-01

    Biosimilar insulins (BIs) are viewed as commercially attractive products by a number of companies. In order to obtain approval in the European Union or the United States, where there is not a single BI currently on the market, a manufacturer needs to demonstrate that a given BI has a safety and efficacy profile that is similar to that of the “original” insulin formulation that is already on the market. As trivial as this may appear at first glance, it is not trivial at all for a good number of reasons that will be discussed in this commentary. Because, as with any manufactured protein, modifications in the structure of the insulin molecule can take place (with potentially serious consequences for the biological effects induced), a rigorous and careful assessment is absolutely necessary. The example of Marvel's failed application with the European Medicines Agency provides insights into the regulatory and clinical challenges surrounding BIs. Although a challenging BI approval process might be regarded as a hurdle to keep companies out of certain markets, it is fair to say that the potential safety and efficacy issues surrounding BIs are substantial and relevant and do warrant a careful and evidence-driven approval process. PMID:21722590

  17. Molecular similarity measures.

    PubMed

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
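
    As an example of a similarity measure of the kind described (a function mapping a pair of compatible representations onto the unit interval), here is a minimal Python sketch of the Tanimoto (Jaccard) coefficient on set-based fingerprints; the bit assignments below are invented for illustration:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) coefficient between two feature sets:
    |A intersect B| / |A union B|, always on the unit interval [0, 1]."""
    if not a and not b:
        return 1.0          # two empty fingerprints are identical by convention
    return len(a & b) / len(a | b)

# Toy structural fingerprints: each integer stands for one substructure bit.
mol1 = {1, 2, 3, 5, 8}
mol2 = {1, 2, 3, 7}

print(tanimoto(mol1, mol2))   # 3 shared bits / 6 total bits = 0.5
```

The corresponding dissimilarity is simply 1 minus the coefficient, which is the complementary relationship the abstract notes between similarity and diversity applications.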

  18. Molecular similarity measures.

    PubMed

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2004-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of a chemistry space. Although all three concepts - molecular similarity, molecular representation, and chemistry space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations, that is, representations of the same mathematical form, into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another.

  19. The Gender Similarities Hypothesis

    ERIC Educational Resources Information Center

    Hyde, Janet Shibley

    2005-01-01

    The differences model, which argues that males and females are vastly different psychologically, dominates the popular media. Here, the author advances a very different view, the gender similarities hypothesis, which holds that males and females are similar on most, but not all, psychological variables. Results from a review of 46 meta-analyses…

  20. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  1. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  2. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial also to include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  3. The gender similarities hypothesis.

    PubMed

    Hyde, Janet Shibley

    2005-09-01

    The differences model, which argues that males and females are vastly different psychologically, dominates the popular media. Here, the author advances a very different view, the gender similarities hypothesis, which holds that males and females are similar on most, but not all, psychological variables. Results from a review of 46 meta-analyses support the gender similarities hypothesis. Gender differences can vary substantially in magnitude at different ages and depend on the context in which measurement occurs. Overinflated claims of gender differences carry substantial costs in areas such as the workplace and relationships.

  4. Achievability for telerobotic systems

    NASA Astrophysics Data System (ADS)

    Kress, Reid L.; Draper, John V.; Hamel, William R.

    2001-02-01

    Methods are needed to improve the capabilities of autonomous robots to perform tasks that are difficult for contemporary robots, and to identify those tasks that robots cannot perform. Additionally, in the realm of remote handling, methods are needed to assess which tasks and/or subtasks are candidates for automation. We are developing a new approach to understanding the capability of autonomous robotic systems. This approach uses formalized methods for determining the achievability of tasks for robots, that is, the likelihood that an autonomous robot or telerobot can successfully complete a particular task. Any autonomous system may be represented in achievability space by the volume describing that system's capabilities within the 3-axis space delineated by perception, cognition, and action. This volume may be thought of as a probability density with achievability decreasing as the distance from the centroid of the volume increases. Similarly, any task may be represented within achievability space. However, as tasks have more finite requirements for perception, cognition, and action, each may be represented as a point (or, more accurately, as a small sphere) within achievability space. Analysis of achievability can serve to identify, a priori, the survivability of robotic systems and the likelihood of mission success; it can be used to plan a mission or portions of a mission; it can be used to modify a mission plan to accommodate unpredicted occurrences; it can also serve to identify needs for modifications to robotic systems or tasks to improve achievability.

  5. Additive Similarity Trees

    ERIC Educational Resources Information Center

    Sattath, Shmuel; Tversky, Amos

    1977-01-01

    Tree representations of similarity data are investigated. Hierarchical clustering is critically examined, and a more general procedure, called the additive tree, is presented. The additive tree representation is then compared to multidimensional scaling. (Author/JKS)

  6. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  7. Enhancing and evaluating diagnostic accuracy.

    PubMed

    Swets, J A; Getty, D J; Pickett, R M; D'Orsi, C J; Seltzer, S E; McNeil, B J

    1991-01-01

    Techniques that may enhance diagnostic accuracy in clinical settings were tested in the context of mammography. Statistical information about the relevant features among those visible in a mammogram and about their relative importances in the diagnosis of breast cancer was the basis of two decision aids for radiologists: a checklist that guides the radiologist in assigning a scale value to each significant feature of the images of a particular case, and a computer program that merges those scale values optimally to estimate a probability of malignancy. A test set of approximately 150 proven cases (including normals and benign and malignant lesions) was interpreted by six radiologists, first in their usual manner and later with the decision aids. The enhancing effect of these feature-analytic techniques was analyzed across subsets of cases that were restricted progressively to more and more difficult cases, where difficulty was defined in terms of the radiologists' judgements in the standard reading condition. Accuracy in both standard and enhanced conditions decreased regularly and substantially as case difficulty increased, but differentially, such that the enhancement effect grew regularly and substantially. For the most difficult case sets, the observed increases in accuracy translated into an increase of about 0.15 in sensitivity (true-positive proportion) for a selected specificity (true-negative proportion) of 0.85 or a similar increase in specificity for a selected sensitivity of 0.85. That measured accuracy can depend on case-set difficulty to different degrees for two diagnostic approaches has general implications for evaluation in clinical medicine. Comparative, as well as absolute, assessments of diagnostic performances--for example, of alternative imaging techniques--may be distorted by inadequate treatments of this experimental variable. Subset analysis, as defined and illustrated here, can be useful in alleviating the problem.
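
    The sensitivity/specificity trade described above is straightforward to compute from case counts; this illustrative Python sketch uses invented counts, not the study's 150-case data set:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true-positive proportion) and specificity
    (true-negative proportion) from diagnostic case counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts for a hypothetical test set (not the study's data):
# tp = malignant cases called malignant, fn = malignant cases missed,
# tn = benign/normal cases called negative, fp = benign cases called malignant.
sens, spec = sensitivity_specificity(tp=60, fn=15, tn=64, fp=11)
print(round(sens, 2), round(spec, 2))   # 0.8 0.85
```

An increase of 0.15 in sensitivity at fixed specificity, as reported for the most difficult case subsets, would correspond here to moving from 0.80 to 0.95 while holding specificity at 0.85.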

  8. The Qualitative Similarity Hypothesis

    ERIC Educational Resources Information Center

    Paul, Peter V.; Lee, Chongmin

    2010-01-01

    Evidence is presented for the qualitative similarity hypothesis (QSH) with respect to children and adolescents who are d/Deaf or hard of hearing. The primary focus is on the development of English language and literacy skills, and some information is provided on the acquisition of English as a second language. The QSH is briefly discussed within…

  9. Critical thinking and accuracy of nurses' diagnoses.

    PubMed

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.

  10. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature: high density low accuracy (HDLA) and low density high accuracy (LDHA) is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation mapping the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: plane, turbine blade and engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
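
    Estimating a transformation from corresponding point pairs and applying it to the whole cloud, as described above, is commonly done with the Kabsch algorithm; the sketch below is an illustrative Python implementation with synthetic marker pairs, not the authors' exact method or data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) such that
    R @ src_i + t approximates dst_i, via the Kabsch algorithm."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical marker pairs: five "contact" (CMM) reference points and the
# same points as seen by the optical scan, offset by a known rotation and
# translation standing in for the scanner's misalignment.
rng = np.random.default_rng(0)
contact = rng.random((5, 3))
a = np.radians(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
optical = contact @ R_true.T + np.array([0.1, -0.2, 0.05])

# Recover the transform from the marker pairs, then apply it to the cloud.
R, t = rigid_transform(optical, contact)
corrected = optical @ R.T + t
print(np.allclose(corrected, contact))   # True
```

In practice the transform would be estimated only from the sparse high-accuracy marker pairs and then applied to every point of the dense optical cloud.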

  11. A Data Transfer Fusion Method for Discriminating Similar Spectral Classes

    PubMed Central

    Wang, Qingyan; Zhang, Junping

    2016-01-01

    Hyperspectral data provide new capabilities for discriminating spectrally similar classes, but such class signatures can be difficult to analyze. Incorporating additional reliable information could help but may also increase the dimensionality of the feature vector, making the hyperspectral data larger than expected. It is challenging to apply discriminative information from training data to testing data that are not in the same feature space and that follow different data distributions. A data fusion method based on transfer learning is proposed, in which transfer learning is introduced into a boosting algorithm and out-of-date data from other sources are used to guide hyperspectral image classification. In order to validate the method, experiments are conducted on EO-1 Hyperion hyperspectral data and ROSIS hyperspectral data. Significant improvements have been achieved in terms of accuracy compared to the results generated by conventional classification approaches. PMID:27854238

  12. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from a standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transports in PSP and photodetector response to luminescence. This relation provides insights into physical origins of various elemental error sources and allows estimate of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors to meet required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.

  13. Integrating conventional classifiers with a GIS expert system to increase the accuracy of invasive species mapping

    NASA Astrophysics Data System (ADS)

    Masocha, Mhosisi; Skidmore, Andrew K.

    2011-06-01

    Mapping the cover of invasive species using remotely sensed data alone is challenging, because many invaders occur as mid-level canopy species or as subtle understorey species and therefore contribute little to the spectral signatures captured by passive remote sensing devices. In this study, two common non-parametric classifiers namely, the neural network and support vector machine were used to map four cover classes of the invasive shrub Lantana camara in a protected game reserve and the adjacent area under communal land management in Zimbabwe. These classifiers were each combined with a geographic information system (GIS) expert system, in order to test whether the new hybrid classifiers yielded significantly more accurate invasive species cover maps than the single classifiers. The neural network, when used on its own, mapped the cover of L. camara with an overall accuracy of 71% and a Kappa index of agreement of 0.61. When the neural network was combined with an expert system, the overall accuracy and Kappa index of agreement significantly increased to 83% and 0.77, respectively. Similarly, the support vector machine achieved an overall accuracy of 64% with a Kappa index of agreement of 0.52, whereas the hybrid support vector machine and expert system classifier achieved a significantly higher overall accuracy of 76% and a Kappa index of agreement of 0.67. These results suggest that integrating conventional image classifiers with an expert system increases the accuracy of invasive species mapping.
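
    The overall accuracy and Kappa index of agreement reported above can be computed from a classification confusion matrix as follows; this Python sketch uses an invented two-class matrix, not the study's data:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = mapped classes)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(k)) / n
    # Chance agreement: sum over classes of (row marginal * column marginal).
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# Illustrative 2-class confusion matrix (invented counts).
cm = [[40, 10],
      [ 5, 45]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))   # 0.85 0.7
```

Kappa discounts agreement expected by chance, which is why the study's hybrid classifier gains (e.g. 0.61 to 0.77) are reported alongside, rather than instead of, overall accuracy.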

  14. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

    Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
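
    The multilateration step described, estimating the reader position from reader-tag distances derived from RSSI, can be sketched as a linearized least-squares problem; the path-loss parameters and tag layout below are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    """Log-distance path-loss model: distance (m) from an RSSI reading.
    rssi0 (RSSI at 1 m) and n (path-loss exponent) are assumed values."""
    return 10 ** ((rssi0 - rssi) / (10 * n))

def multilaterate(anchors, dists):
    """Linearized least-squares position estimate from reference-tag
    positions (anchors, shape (N, 2)) and reader-tag distances.
    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in the reader position."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (x, y), d in zip(anchors[1:], dists[1:]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(d0**2 - d**2 + x**2 - x0**2 + y**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Four reference tags at the corners of a 4 m square and a reader inside it.
tags = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
reader = np.array([1.0, 2.0])
d = np.linalg.norm(tags - reader, axis=1)       # ideal (noise-free) distances
print(np.allclose(multilaterate(tags, d), reader))   # True
```

With distorted RSSI readings the distances `d` carry errors, which is exactly where the channel-impairment modeling and compensation studied in the paper enters.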

  15. Similar enzymes, different structures

    PubMed Central

    Tarasev, Michael; Kaddis, Catherine S.; Yin, Sheng; Loo, Joseph A.; Burgner, John; Ballou, David P.

    2007-01-01

    Phthalate dioxygenase (PDO) is a member of a class of bacterial oxygenases that contain both Rieske [2Fe-2S] and Fe(II) mononuclear centers. Recent crystal structures of several Rieske dioxygenases showed that they exist as α3β3 multimers with subunits arranged head-to-tail in stacked planar α3 and β3 rings; the structure of PDO, which consists of only α-subunits, remains to be solved. Although similar to other Rieske dioxygenases in many aspects, PDO was shown to differ in the mechanism of catalysis. Gel filtration and analytical centrifugation experiments, supplemented with mass spectrometric analysis (both ESI-MS and ESI-GEMMA), in this work showed a hexameric arrangement of subunits in the PDO multimer. Our proposed model for the subunit arrangement in PDO postulates two planar α3 rings, one on top of the other, similar to the α3β3 arrangement in other Rieske dioxygenases. Unlike in other Rieske dioxygenases, this arrangement brings two Rieske and two mononuclear centers, all on separate subunits, into proximity, allowing their cooperation for catalysis. Potential reasons necessitating this unusual structural arrangement are discussed. PMID:17764654

  16. Similarity transformed semiclassical dynamics

    NASA Astrophysics Data System (ADS)

    Van Voorhis, Troy; Heller, Eric J.

    2003-12-01

    In this article, we employ a recently discovered criterion for selecting important contributions to the semiclassical coherent state propagator [T. Van Voorhis and E. J. Heller, Phys. Rev. A 66, 050501 (2002)] to study the dynamics of many dimensional problems. We show that the dynamics are governed by a similarity transformed version of the standard classical Hamiltonian. In this light, our selection criterion amounts to using trajectories generated with the untransformed Hamiltonian as approximate initial conditions for the transformed boundary value problem. We apply the new selection scheme to some multidimensional Henon-Heiles problems and compare our results to those obtained with the more sophisticated Herman-Kluk approach. We find that the present technique gives near-quantitative agreement with the standard results, but that the amount of computational effort is less than Herman-Kluk requires even when sophisticated integral smoothing techniques are employed in the latter.

  17. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at...Using a modified version of Chebyshev’s inequality, a similar result is obtained for the upper bound of the confidence interval width for any

  18. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  19. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  20. Selection of USSR foreign similarity regions

    NASA Technical Reports Server (NTRS)

    Disler, J. M. (Principal Investigator)

    1982-01-01

    The similarity regions in the United States and Canada were selected to parallel the conditions that affect labeling and classification accuracies in the U.S.S.R. indicator regions. In addition to climate, a significant condition that affects labeling and classification accuracies in the U.S.S.R. is the proportion of barley and wheat grown in a given region (based on sown areas). The following regions in the United States and Canada were determined to be similar to the U.S.S.R. indicator regions: (1) Montana agrophysical unit (APU) 104 corresponds to the Belorussia high barley region; (2) North Dakota and Minnesota APU 20 and secondary region southern Manitoba and Saskatchewan correspond to the Ural RSFSR barley and spring wheat region; (3) Montana APU 23 corresponds to the North Caucasus barley and winter wheat region. Selection criteria included climates, crop type, crop distribution, growth cycles, field sizes, and field shapes.

  1. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has been recently achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1%, which surpasses average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons, more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, {370}, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. {135}, 034113 (2011).

  2. Similarity effects in visual working memory.

    PubMed

    Jiang, Yuhong V; Lee, Hyejin J; Asaad, Anthony; Remington, Roger

    2016-04-01

    Perceptual similarity is an important property of multiple stimuli. Its computation supports a wide range of cognitive functions, including reasoning, categorization, and memory recognition. It is important, therefore, to determine why previous research has found conflicting effects of inter-item similarity on visual working memory. Studies reporting a similarity advantage have used simple stimuli whose similarity varied along a featural continuum. Studies reporting a similarity disadvantage have used complex stimuli from either a single or multiple categories. To elucidate stimulus conditions for similarity effects in visual working memory, we tested memory for complex stimuli (faces) whose similarity varied along a morph continuum. Participants encoded 3 morphs generated from a single face identity in the similar condition, or 3 morphs generated from different face identities in the dissimilar condition. After a brief delay, a test face appeared at one of the encoding locations for participants to make a same/different judgment. Two experiments showed that similarity enhanced memory accuracy without changing the response criterion. These findings support previous computational models that incorporate featural variance as a component of working memory load. They delineate limitations of models that emphasize cortical resources or response decisions.

  3. Recognition of similar objects using simulated prosthetic vision.

    PubMed

    Hu, Jie; Xia, Peng; Gu, Chaochen; Qi, Jin; Li, Sheng; Peng, Yinghong

    2014-02-01

    Due to the limitations of existing techniques, even the most advanced visual prostheses, which use several hundred electrodes to transmit signals to the visual pathway, provide only restricted sensory function and visual information. To identify the bottlenecks and guide prosthesis design, psychophysical simulations of a visual prosthesis in normally sighted individuals are desirable. In this study, psychophysical experiments on discriminating objects with similar profiles were used to test the effects of phosphene array parameters (spatial resolution, gray scale, distortion, and dropout rate) on visual information using simulated prosthetic vision. The results showed that increasing the spatial resolution and the number of gray levels, and decreasing phosphene distortion and dropout rate, improved recognition performance; accuracy was 78.5% under the optimum condition (resolution: 32 × 32, gray levels: 8, distortion: k = 0, dropout: 0%). In combined parameter tests, significant facial recognition accuracy was achieved for all the images with k = 0.1 distortion and 10% dropout. Compared with other experiments, we find that different objects do not show specific sensitivity to parameter changes and that the visual information conveyed is far from sufficient even under the optimum condition. The results suggest that higher spatial resolution and more gray levels are required for visual prosthetic devices, and that further research on image processing strategies to improve prosthetic vision is necessary, especially when wearers must accomplish more than simple visual tasks.

  4. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified expansion solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. 
The boundary conditions can now include a potentially infinite number

  5. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  6. Using context and similarity for face and location identification

    NASA Astrophysics Data System (ADS)

    Davis, Marc; Smith, Michael; Stentiford, Fred; Bamidele, Adetokunbo; Canny, John; Good, Nathan; King, Simon; Janakiraman, Rajkumar

    2006-01-01

    This paper describes a new approach to the automatic detection of human faces and places depicted in photographs taken on cameraphones. Cameraphones offer a unique opportunity to pursue new approaches to media analysis and management: namely to combine the analysis of automatically gathered contextual metadata with media content analysis to fundamentally improve image content recognition and retrieval. Current approaches to content-based image analysis are not sufficient to enable retrieval of cameraphone photos by high-level semantic concepts, such as who is in the photo or what the photo is actually depicting. In this paper, new methods for determining image similarity are combined with analysis of automatically acquired contextual metadata to substantially improve the performance of face and place recognition algorithms. For faces, we apply Sparse-Factor Analysis (SFA) to both the automatically captured contextual metadata and the results of PCA (Principal Components Analysis) of the photo content to achieve a 60% face recognition accuracy of people depicted in our database of photos, which is 40% better than media analysis alone. For location, grouping visually similar photos using a model of Cognitive Visual Attention (CVA) in conjunction with contextual metadata analysis yields a significant improvement over color histogram and CVA methods alone. We achieve an improvement in location retrieval precision from 30% precision for color histogram and CVA image analysis, to 55% precision using contextual metadata alone, to 67% precision achieved by combining contextual metadata with CVA image analysis. The combination of context and content analysis produces results that can indicate the faces and places depicted in cameraphone photos significantly better than image analysis or context analysis alone. We believe these results indicate the possibilities of a new context-aware paradigm for image analysis.

  7. Lingos, finite state machines, and fast similarity searching.

    PubMed

    Grant, J Andrew; Haigh, James A; Pickup, Barry T; Nicholls, Anthony; Sayle, Roger A

    2006-01-01

    We apply a recently published method of text-based molecular similarity searching (LINGO) to standard data sets for the purpose of quantifying the accuracy of the approach. Our implementation is based on a pattern-matching finite state machine (FSM) which results in fast search times. The accuracy of LINGO is demonstrated to be comparable to that of a path-based fingerprint and offers a simple yet effective method for similarity searching.
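    The LINGO approach is easy to sketch: a SMILES string is fragmented into overlapping fixed-length substrings ("lingos"; the original method uses length 4), and two molecules are compared by the overlap of their lingo profiles. The sketch below uses a plain multiset Tanimoto coefficient as the similarity measure, which is a simplification of the published formula:

```python
from collections import Counter

def lingos(smiles: str, q: int = 4) -> Counter:
    """Break a SMILES string into overlapping q-character substrings ("lingos")."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingo_tanimoto(a: str, b: str, q: int = 4) -> float:
    """Multiset Tanimoto coefficient over the two lingo profiles."""
    ca, cb = lingos(a, q), lingos(b, q)
    inter = sum((ca & cb).values())   # element-wise minimum of counts
    union = sum((ca | cb).values())   # element-wise maximum of counts
    return inter / union if union else 0.0

print(round(lingo_tanimoto("CCO", "CCO", q=2), 2))  # identical strings -> 1.0
```

    An FSM implementation, as in the paper, recognizes all lingos of the query in a single streaming pass instead of materializing substrings.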

  8. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources.

  9. The Mechanics of Human Achievement.

    PubMed

    Duckworth, Angela L; Eichstaedt, Johannes C; Ungar, Lyle H

    2015-07-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers.
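    The multiplicative model can be made concrete with a toy discrete-time simulation (the function and parameter names are illustrative, not from the paper): each period, skill grows by talent × effort and achievement grows by skill × effort, so effort enters twice and its effect compounds over longer horizons, exactly the novel prediction above:

```python
def achievement(talent, effort, periods):
    """Toy discrete-time version of the model: per period, skill grows by
    talent * effort (learning) and achievement grows by skill * effort
    (productive use of skill)."""
    skill = achieved = 0.0
    for _ in range(periods):
        skill += talent * effort
        achieved += skill * effort
    return skill, achieved

# Doubling effort vs. doubling talent over a long horizon:
_, high_effort = achievement(talent=1.0, effort=2.0, periods=100)
_, high_talent = achievement(talent=2.0, effort=1.0, periods=100)
print(high_effort > high_talent)  # True: effort counts twice, talent once
```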

  10. The Mechanics of Human Achievement

    PubMed Central

    Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.

    2015-01-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers. PMID:26236393

  11. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  12. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  13. Graded Achievement, Tested Achievement, and Validity

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  14. SoRS: Social recommendation using global rating reputation and local rating similarity

    NASA Astrophysics Data System (ADS)

    Qian, Fulan; Zhao, Shu; Tang, Jie; Zhang, Yanping

    2016-11-01

    Recommendation is an important and also challenging problem in online social networks. It needs to consider not only users' personalized interests, but also social relations between users. Indeed, in practice, users are often inclined to accept recommendations from friends or opinion leaders (users with high reputations). In this paper, we present a novel recommendation framework, social recommendation using global rating reputation and local rating similarity (SoRS), which combines user reputation with rating-based social similarity. User reputation is obtained by iteratively computing the correlation between users' historical ratings and the intrinsic qualities of items. We view user reputation as a user's global influence and the rating-based similarity of social relations as a user's local influence, and introduce both into the basic social recommender model. Thus users with high reputation exert a strong influence on others, while the effect of users with low reputation is weakened. The recommendation accuracy of the proposed framework is improved by effectively removing the natural noise caused by less rigorous user ratings and by strengthening the influence of high-reputation users. We also improve the rating-based similarity by avoiding spuriously high similarity between friends with few common ratings. We evaluate our approach on three datasets: Movielens, Epinions and Douban. Empirical results demonstrate that the proposed framework achieves significant improvements in recommendation accuracy. User reputation and local similarity, both computed from ratings, contribute substantially to the improvement in prediction accuracy, and reputation also helps to improve recommendation precision with small training sets.
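    The reputation computation described above can be sketched as a simple fixed-point iteration (an illustrative reconstruction, not the paper's exact algorithm): item quality is estimated as a reputation-weighted mean rating, and a user's reputation rises as their ratings track the current quality estimates:

```python
from statistics import fmean

def iterate_reputation(ratings, n_iter=20):
    """Illustrative reputation/quality iteration: quality[i] is the
    reputation-weighted mean rating of item i, and rep[u] is the inverse
    of user u's mean squared deviation from the current quality estimates."""
    items = {i for r in ratings.values() for i in r}
    rep = {u: 1.0 for u in ratings}
    for _ in range(n_iter):
        quality = {
            i: sum(rep[u] * r[i] for u, r in ratings.items() if i in r)
               / sum(rep[u] for u, r in ratings.items() if i in r)
            for i in items
        }
        for u, r in ratings.items():
            err = fmean((r[i] - quality[i]) ** 2 for i in r)
            rep[u] = 1.0 / (err + 1e-6)  # small constant avoids divide-by-zero
    return rep, quality

ratings = {
    "reliable":  {"a": 5, "b": 1, "c": 3},
    "reliable2": {"a": 5, "b": 1, "c": 3},
    "noisy":     {"a": 1, "b": 5, "c": 3},
}
rep, quality = iterate_reputation(ratings)
print(rep["reliable"] > rep["noisy"])  # True: consensus raters gain reputation
```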

  15. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse-coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
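    The invariance of similar-triangle features follows because a triangle's similarity class is fixed by its side-length ratios, which survive translation, scaling, and in-plane rotation. A minimal sketch (the quantization scheme below is an assumption; TRIDEC's coarse coding differs in detail):

```python
from itertools import combinations
from math import dist

def triangle_features(points, bins=8):
    """For every 3-point combination, the side lengths normalized by the
    perimeter identify the triangle's similarity class; quantizing the two
    smallest normalized sides gives a translation-, scale-, and
    rotation-invariant feature."""
    features = set()
    for p, q, r in combinations(points, 3):
        sides = sorted([dist(p, q), dist(q, r), dist(r, p)])
        perim = sum(sides)
        if perim == 0:
            continue  # degenerate: coincident points
        features.add((int(bins * sides[0] / perim), int(bins * sides[1] / perim)))
    return features

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(10, 10), (30, 10), (30, 30), (10, 30)]  # translated + scaled
print(triangle_features(square) == triangle_features(big_square))  # True
```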

  16. A configurable-hardware document-similarity classifier to detect web attacks.

    SciTech Connect

    Ulmer, Craig D.; Gokhale, Maya

    2010-04-01

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires two orders of magnitude less memory than the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
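    The underlying TFIDF similarity classification is straightforward to sketch in software (this is the textbook metric; the paper's hardware version simplifies it further so the model can be represented compactly). The tiny training set and token choices below are illustrative only:

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weight vectors for a list of token lists."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}             # inverse doc frequency
    return idf, [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative training set: one benign and one attack HTTP request.
train = [["GET", "/index.html", "HTTP/1.1"],
         ["GET", "/../../etc/passwd", "HTTP/1.1"]]
labels = ["benign", "attack"]
idf, vecs = tfidf(train)

query = ["GET", "/../../etc/passwd"]
qvec = {t: c * idf.get(t, 0.0) for t, c in Counter(query).items()}
best = max(range(len(train)), key=lambda i: cosine(qvec, vecs[i]))
print(labels[best])  # -> attack
```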

  17. Investigating the Accuracy of Teachers' Word Frequency Intuitions

    ERIC Educational Resources Information Center

    McCrostie, James

    2007-01-01

    Previous research has found that native English speakers can judge, with a relatively high degree of accuracy, the frequency of words in the English language. However, there has been little investigation of the ability to judge the frequency of high and middle frequency words. Similarly, the accuracy of EFL teachers' frequency judgements remains…

  18. Accuracy of References in Ten Library Science Journals.

    ERIC Educational Resources Information Center

    Pope, Nancy N.

    1992-01-01

    A study of 100 article citations from 11 library science journals showed only 45 article citations that were completely free of errors, while 11 had major errors--i.e., errors preventing or hindering location of the reference--and the remaining 44 had minor errors. Citation accuracy in library science journals appears similar to accuracy in other…

  19. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  20. Mixed-List Phonological Similarity Effects in Delayed Serial Recall

    ERIC Educational Resources Information Center

    Farrell, Simon

    2006-01-01

    Recent experiments have shown that placing dissimilar items on lists of phonologically similar items enhances accuracy of ordered recall of the dissimilar items [Farrell, S., & Lewandowsky, S. (2003). Dissimilar items benefit from phonological similarity in serial recall. "Journal of Experimental Psychology: Learning, Memory, and Cognition," 29,…

  1. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    SciTech Connect

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.

  2. Feature matching algorithm based on spatial similarity

    NASA Astrophysics Data System (ADS)

    Tang, Wenjing; Hao, Yanling; Zhao, Yuxin; Li, Ning

    2008-10-01

    Features that represent the same real-world entities often differ across disparate sources, so the identification, or matching, of features is crucial to map conflation. Motivated by the way the eye identifies the same entities by integrating known information, a feature matching algorithm based on spatial similarity is proposed in this paper. Total similarity is obtained by integrating positional similarity, shape similarity and size similarity with a weighted-average algorithm, and matching entities are then identified according to the maximum total similarity. The matching of areal features is analyzed in detail. Regarding an areal feature as a whole, the proposed algorithm identifies matching areal features by their shape-center points in order to calculate positional similarity; shape similarity is given by a shape-description function, which keeps its precision from being affected by interference and avoids loss of shape information; and the size of areal features is measured by their covered areas. Test results show the stability and reliability of the proposed algorithm, whose precision and recall are higher than those of other matching algorithms.
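    The weighted-average combination step can be sketched directly (the weights and feature names below are illustrative; the abstract does not give the paper's values):

```python
def total_similarity(pos_sim, shape_sim, size_sim, weights=(0.5, 0.3, 0.2)):
    """Weighted-average total similarity of two candidate features.
    Component similarities are assumed to lie in [0, 1]."""
    wp, wsh, wsz = weights
    return wp * pos_sim + wsh * shape_sim + wsz * size_sim

def best_match(candidates):
    """Pick the candidate entity with maximum total similarity."""
    return max(candidates, key=lambda c: total_similarity(*c[1]))

# (name, (positional, shape, size)) similarity triples for two candidates:
matches = [("road_17", (0.9, 0.8, 0.95)), ("road_22", (0.4, 0.9, 0.6))]
print(best_match(matches)[0])  # -> road_17
```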

  3. Predicting missing links via structural similarity

    NASA Astrophysics Data System (ADS)

    Lyu, Guo-Dong; Fan, Chang-Jun; Yu, Lian-Fei; Xiu, Bao-Xin; Zhang, Wei-Ming

    2015-04-01

    Predicting missing links in networks plays a significant role in modern science. On the basis of structural similarity, our paper proposes a new node-similarity-based measure called biased resource allocation (BRA), which is motivated by the resource allocation (RA) measure. Comparisons between BRA and nine well-known node-similarity-based measures on five real networks indicate that BRA performs no worse than RA, previously the best node-similarity-based index. Afterwards, based on the localPath (LP) and Katz measures, we propose two further improved measures, named Im-LP and Im-Katz respectively. Numerical results show that the prediction accuracy of both Im-LP and Im-Katz improves on the original LP and Katz measures. Finally, a new path-similarity-based measure and its improved version, called the LYU and Im-LYU measures, are proposed; Im-LYU in particular is shown to perform more remarkably than the other measures mentioned.
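    The baseline RA measure that BRA builds on is simple to state: a candidate link (x, y) is scored by the common neighbours of x and y, each contributing the reciprocal of its degree, so low-degree common neighbours count for more. A minimal sketch (BRA's bias term is not reproduced here):

```python
def resource_allocation(adj, x, y):
    """Resource-allocation (RA) index for the candidate link (x, y):
    sum of 1/degree(z) over common neighbours z."""
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])

# Small undirected graph as a dict of neighbour sets.
adj = {
    "a": {"c", "d"},
    "b": {"c", "d"},
    "c": {"a", "b"},
    "d": {"a", "b", "e"},
    "e": {"d"},
}
# a and b share neighbours c (degree 2) and d (degree 3).
print(round(resource_allocation(adj, "a", "b"), 3))  # 1/2 + 1/3 -> 0.833
```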

  4. Bilateral Trade Flows and Income Distribution Similarity

    PubMed Central

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462

  5. Children with Autism Detect Targets at Very Rapid Presentation Rates with Similar Accuracy as Adults

    ERIC Educational Resources Information Center

    Hagmann, Carl Erick; Wyble, Bradley; Shea, Nicole; LeBlanc, Megan; Kates, Wendy R.; Russo, Natalie

    2016-01-01

    Enhanced perception may allow for visual search superiority by individuals with Autism Spectrum Disorder (ASD), but does it occur over time? We tested high-functioning children with ASD, typically developing (TD) children, and TD adults in two tasks at three presentation rates (50, 83.3, and 116.7 ms/item) using rapid serial visual presentation.…

  6. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  7. Accuracy of distance measurements in biplane angiography

    NASA Astrophysics Data System (ADS)

    Toennies, Klaus D.; Oishi, Satoru; Koster, David; Schroth, Gerhard

    1997-05-01

    Distance measurements of the vascular system of the brain can be derived from biplanar digital subtraction angiography (2p-DSA). The measurements are used for planning of minimal invasive surgical procedures. Our 90 degree-fixed-angle G- ring angiography system has the potential of acquiring pairs of such images with high geometric accuracy. The sizes of vessels and aneurysms are estimated applying a fast and accurate extraction method in order to select an appropriate surgical strategy. Distance computation from 2p-DSA is carried out in three steps. First, the boundary of the structure to be measured is detected based on zero-crossings and closeness to user-specified end points. Subsequently, the 3D location of the center of the structure is computed from the centers of gravity of its two projections. This location is used to reverse the magnification factor caused by the cone-shaped projection of the x-rays. Since exact measurements of possibly very small structures are crucial to the usefulness in surgical planning, we identified mechanical and computational influences on the geometry which may have an impact on the measurement accuracy. A study with phantoms is presented distinguishing between the different effects and enabling computation of an optimal overall accuracy. Comparing this optimum with results of distance measurements on phantoms whose exact size and shape are known, we found that the measurement error for structures of size of 20 mm was less than 0.05 mm on average and 0.50 mm at maximum. The maximum achievable accuracy of 0.15 mm was in most cases exceeded by less than 0.15 mm. This accuracy surpasses by far the requirements for the above mentioned surgery application. The mechanical accuracy of the fixed-angle biplanar system meets the requirements for computing a 3D reconstruction of the small vessels of the brain. It also indicates that simple measurements will be possible on systems being less accurate.
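    The magnification-reversal step in the measurement pipeline is the standard cone-beam relation: a structure at distance d_obj from the x-ray source projects onto the detector magnified by d_det / d_obj. A minimal sketch with illustrative distances (the paper's system geometry is not given in the abstract):

```python
def true_size(measured_mm, source_to_object_mm, source_to_detector_mm):
    """Reverse cone-beam magnification: projected size scales with
    d_det / d_obj, so the true size is measured * d_obj / d_det."""
    return measured_mm * source_to_object_mm / source_to_detector_mm

# A 20 mm vessel halfway between source and detector projects at 40 mm.
print(true_size(40.0, source_to_object_mm=500.0, source_to_detector_mm=1000.0))  # 20.0
```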

  8. The Effect of Moderate and High-Intensity Fatigue on Groundstroke Accuracy in Expert and Non-Expert Tennis Players

    PubMed Central

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on performance in

  9. The effect of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players.

    PubMed

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate- and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. It also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test, with moderate (70%) and high (90%) intensities set as percentages of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to heart-rate monitoring. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport to examine whether this personality characteristic provides insight into how players perform under moderate- and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions than the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared with performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender; no fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on performance in tennis

  10. Leader as achiever.

    PubMed

    Dienemann, Jacqueline

    2002-01-01

    This article examines one outcome of leadership: productive achievement. Without achievement one is judged to not truly be a leader. Thus, the ideal leader must be a visionary, a critical thinker, an expert, a communicator, a mentor, and an achiever of organizational goals. This article explores the organizational context that supports achievement, measures of quality nursing care, fiscal accountability, leadership development, rewards and punishments, and the educational content and teaching strategies to prepare graduates to be achievers.

  11. Improving Human-Machine Cooperative Classification Via Cognitive Theories of Similarity.

    PubMed

    Roads, Brett D; Mozer, Michael C

    2016-07-22

    Acquiring perceptual expertise is slow and effortful. However, untrained novices can accurately make difficult classification decisions (e.g., skin-lesion diagnosis) by reformulating the task as similarity judgment. Given a query image and a set of reference images, individuals are asked to select the best matching reference. When references are suitably chosen, the procedure yields an implicit classification of the query image. To optimize reference selection, we develop and evaluate a predictive model of similarity-based choice. The model builds on existing psychological literature and accommodates stochastic, dynamic shifts of attention among visual feature dimensions. We perform a series of human experiments with two stimulus types (rectangles, faces) and nine classification tasks to validate the model and to demonstrate the model's potential to boost performance. Our system achieves high accuracy for participants who are naive as to the classification task, even when the classification task switches from trial to trial.
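
    The reference-matching procedure described above can be sketched in a few lines. The toy below is illustrative, not the authors' model: a hypothetical `classify_by_reference` labels a query with the class of its most similar reference, using plain Euclidean distance as a crude stand-in for the paper's psychological similarity model with shifting attention weights.

```python
import math

def classify_by_reference(query, references):
    """Implicitly classify `query` by picking the most similar reference.

    `references` is a list of (feature_vector, label) pairs; similarity is
    negative Euclidean distance here, a placeholder for a richer
    psychological similarity model."""
    best_vec, best_label = min(references, key=lambda r: math.dist(query, r[0]))
    return best_label

# Two hypothetical reference rectangles (width, height), one per class:
refs = [((2.0, 1.0), "wide"), ((1.0, 2.0), "tall")]
print(classify_by_reference((1.8, 0.9), refs))  # -> wide
```

    When the references are well chosen, selecting the best match yields a classification without the naive participant ever seeing class labels, which is the core of the human-machine cooperative scheme.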

  12. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law governing the Monte Carlo ray-tracing algorithms commonly used to calculate radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors) or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities, and some issues related to implementing such a model in a software tool are discussed. Although only the relative error is considered in the discussion, similar results can be derived for the absolute error.
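
    The 1/sqrt(N) character of such a Monte Carlo accuracy law can be illustrated with a minimal ray-counting sketch (hypothetical function and parameter names; a fixed hit probability stands in for the true configuration factor):

```python
import random

def estimate_view_factor(p_true, n_rays, seed=0):
    """Monte Carlo estimate of a configuration factor: fire n_rays from the
    source surface and count the fraction that hit the target (the hit
    probability p_true stands in for the true factor)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(n_rays))
    p_hat = hits / n_rays
    # Binomial statistics: the relative standard error shrinks like 1/sqrt(N)
    rel_err = ((1 - p_hat) / (n_rays * p_hat)) ** 0.5 if p_hat > 0 else float("inf")
    return p_hat, rel_err

for n in (1_000, 100_000):
    p_hat, rel = estimate_view_factor(0.3, n)
    print(n, round(p_hat, 3), round(rel, 4))
```

    Increasing the ray count a hundredfold shrinks the relative error by roughly a factor of ten, the behavior such an accuracy model formalizes for view factors and absorbed fluxes alike.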

  13. NoisyGOA: Noisy GO annotations prediction using taxonomic and semantic similarity.

    PubMed

    Lu, Chang; Wang, Jun; Zhang, Zili; Yang, Pengyi; Yu, Guoxian

    2016-12-01

    Gene Ontology (GO) provides GO annotations (GOA) that associate gene products with GO terms summarizing their cellular, molecular and functional aspects in the context of biological pathways. The GO Consortium (GOC) resorts to various quality assurances to ensure the correctness of annotations. Due to resource limitations, only a small portion of annotations are manually added or checked by GO curators, and a large portion of available annotations are computationally inferred. While computationally inferred annotations provide greater coverage of known genes, they may also introduce annotation errors (noise) that could mislead the interpretation of gene functions and their roles in cellular and biological processes. In this paper, we investigate how to identify noisy annotations, a rarely addressed problem, and propose a novel approach called NoisyGOA. NoisyGOA first measures taxonomic similarity between ontological terms using the GO hierarchy and semantic similarity between genes. Next, it leverages the taxonomic and semantic similarities to predict noisy annotations. We compare NoisyGOA with alternative methods on identifying noisy annotations under different simulated cases of noisy annotations, and on archived GO annotations. NoisyGOA achieved higher accuracy than the alternative methods in these comparisons. These results demonstrate that both taxonomic similarity and semantic similarity contribute to the identification of noisy annotations. Our study shows that annotation errors are predictable and that removing noisy annotations improves the performance of gene function prediction. This study can prompt the community to study methods for removing inaccurate annotations, a critical step in annotating gene and pathway functions.

  14. Learning similarity with multikernel method.

    PubMed

    Tang, Yi; Li, Luoqing; Li, Xuelong

    2011-02-01

    In the field of machine learning, it is a key issue to learn and represent similarity. This paper focuses on the problem of learning similarity with a multikernel method. Motivated by geometric intuition and computability, similarity between patterns is proposed to be measured by their included angle in a kernel-induced Hilbert space. Having noticed that the cosine of such an included angle can be represented by a normalized kernel, it can be said that the task of learning similarity is equivalent to learning an appropriate normalized kernel. In addition, an error bound is also established for learning similarity with the multikernel method. Based on this bound, a boosting-style algorithm is developed. The preliminary experiments validate the effectiveness of the algorithm for learning similarity.
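
    The normalized-kernel idea is compact enough to sketch. Similarity is the cosine of the included angle in the kernel-induced Hilbert space, s(x, y) = k(x, y) / sqrt(k(x, x) * k(y, y)), and a convex combination of base kernels plays the role of the multikernel. The function names and weights below are illustrative assumptions, not the paper's learned combination:

```python
import math

def rbf(x, y, gamma=0.5):
    # Gaussian (RBF) base kernel
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2):
    # Inhomogeneous polynomial base kernel
    return (1 + sum(a * b for a, b in zip(x, y))) ** degree

def multikernel(x, y, weights=(0.5, 0.5)):
    # A convex combination of valid kernels is itself a valid kernel
    return weights[0] * rbf(x, y) + weights[1] * poly(x, y)

def similarity(x, y, k=multikernel):
    # Cosine of the included angle in the kernel-induced Hilbert space:
    # the normalized kernel k(x, y) / sqrt(k(x, x) * k(y, y))
    return k(x, y) / math.sqrt(k(x, x) * k(y, y))

print(round(similarity((1.0, 0.0), (1.0, 0.0)), 3))  # identical patterns -> 1.0
print(round(similarity((1.0, 0.0), (0.0, 1.0)), 3))
```

    Learning similarity then amounts to learning the combination weights, which is where the boosting-style algorithm and the error bound enter.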

  15. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
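
    The accuracy measures named above are straightforward to compute from a confusion matrix. The sketch below uses a Wilson interval for the confidence bounds, which is one common choice; the paper's exact interval construction may differ, and the two-class matrix is invented for illustration:

```python
import math

def accuracy_measures(cm):
    """Thematic accuracy measures from a confusion matrix.

    cm[i][j] = number of checkpoints of reference class j mapped to class i
    (rows: map classes; columns: reference classes)."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    diag = sum(cm[i][i] for i in range(k))
    overall = diag / n
    users = [cm[i][i] / sum(cm[i]) for i in range(k)]                 # per map row
    producers = [cm[i][i] / sum(r[i] for r in cm) for i in range(k)]  # per ref column
    # Cohen's kappa: agreement beyond chance expectation
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / n ** 2
    kappa = (overall - pe) / (1 - pe)
    return overall, users, producers, kappa

def wilson_ci(p, n, z=1.96):
    """95% Wilson confidence interval for a proportion such as user's accuracy."""
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

cm = [[48, 2], [5, 45]]  # two classes, 100 checkpoints (made-up numbers)
overall, users, producers, kappa = accuracy_measures(cm)
print(round(overall, 2), round(kappa, 2))
print([round(v, 3) for v in wilson_ci(users[0], sum(cm[0]))])
```

    The Wilson interval stays inside [0, 1] even for accuracies near 100%, which matters for classes like 'building' reported at 99%.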

  16. Inferring Trust Based on Similarity with TILLIT

    NASA Astrophysics Data System (ADS)

    Tavakolifard, Mozhgan; Herrmann, Peter; Knapskog, Svein J.

    A network of people having established trust relations and a model for propagation of related trust scores are fundamental building blocks in many of today's most successful e-commerce and recommendation systems. However, the web of trust is often too sparse to predict trust values between non-familiar people with high accuracy. Trust inferences are transitive associations among users in the context of an underlying social network and may provide additional information to alleviate the consequences of the sparsity and possible cold-start problems. Such approaches are helpful, provided that a complete trust path exists between the two users. An alternative approach to the problem is advocated in this paper. Based on collaborative filtering, one can exploit the like-mindedness, or similarity, of individuals to infer trust in yet unknown parties, which increases the trust relations in the web. For instance, if one knows that, with respect to a specific property, two parties are trusted alike by a large number of different trusters, one can assume that they are similar. Thus, if one has a certain degree of trust in the one party, one can safely assume a very similar trustworthiness of the other. In an attempt to provide high-quality recommendations and proper initial trust values even when no complete trust propagation path or user profile exists, we propose TILLIT, a model based on a combination of trust inferences and user similarity. The similarity is derived from the structure of the trust graph and users' trust behavior, as opposed to other collaborative-filtering based approaches which use ratings of items or users' profiles. We describe an algorithm realizing the approach and validate it using a real large-scale data-set.

  17. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  18. High Accuracy Time Transfer Synchronization

    DTIC Science & Technology

    1994-12-01

    HIGH ACCURACY TIME TRANSFER SYNCHRONIZATION. Paul Wheeler, Paul Koppang, David Chalmers, Angela Davis, Anthony Kubik and William Powell, U.S. Naval Observatory, Washington, DC 20392. Abstract: In July 1994, the US Naval Observatory (USNO) Time Service System Engineering Division conducted a ... field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance

  19. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    THE NATIONAL SHIPBUILDING RESEARCH PROGRAM, February 1982: Process Analysis Via Accuracy Control. U.S. Department of Transportation, Maritime ... Examples are contained in Appendix C, including examples of how "A/C" process analysis leads to design improvement and how a change in sequence can

  20. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  1. Does language about similarity play a role in fostering similarity comparison in children?

    PubMed

    Ozçalişkan, Seyda; Goldin-Meadow, Susan; Gentner, Dedre; Mylander, Carolyn

    2009-08-01

    Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input--deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (point to cat+point to tiger) resembling those conveyed by 40 hearing children in early gesture+speech combinations (cat+point to tiger). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.

  2. Modeling individual differences in response time and accuracy in numeracy.

    PubMed

    Ratcliff, Roger; Thompson, Clarissa A; McKoon, Gail

    2015-04-01

    In the study of numeracy, some hypotheses have been based on response time (RT) as a dependent variable and some on accuracy, and considerable controversy has arisen about the presence or absence of correlations between RT and accuracy, between RT or accuracy and individual differences like IQ and math ability, and between various numeracy tasks. In this article, we show that an integration of the two dependent variables is required, which we accomplish with a theory-based model of decision making. We report data from four tasks: numerosity discrimination, number discrimination, memory for two-digit numbers, and memory for three-digit numbers. Accuracy correlated across tasks, as did RTs. However, the negative correlations that might be expected between RT and accuracy were not obtained; if a subject was accurate, it did not mean that they were fast (and vice versa). When the diffusion decision-making model was applied to the data (Ratcliff, 1978), we found significant correlations across the tasks between the quality of the numeracy information (drift rate) driving the decision process and between the speed/accuracy criterion settings, suggesting that similar numeracy skills and similar speed-accuracy settings are involved in the four tasks. In the model, accuracy is related to drift rate and RT is related to speed-accuracy criteria, but drift rate and criteria are not related to each other across subjects. This provides a theoretical basis for understanding why negative correlations were not obtained between accuracy and RT. We also manipulated criteria by instructing subjects to maximize either speed or accuracy, but still found correlations between the criteria settings between and within tasks, suggesting that the settings may represent an individual trait that can be modulated but not equated across subjects. Our results demonstrate that a decision-making model may provide a way to reconcile inconsistent and sometimes contradictory results in numeracy
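
    The diffusion-model logic, with accuracy governed by drift rate and RT largely by boundary (criterion) settings, can be illustrated with a minimal simulation. The parameter values below are hypothetical, not fits to the reported data:

```python
import random

def diffusion_trial(drift, boundary, rng, dt=0.002, noise=1.0):
    """One diffusion-model decision: evidence accumulates with constant
    drift plus Gaussian noise until it hits +boundary (correct) or
    -boundary (error). Returns (correct?, decision time in seconds)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x > 0, t

def summarize(drift, boundary, trials=500, seed=1):
    rng = random.Random(seed)
    results = [diffusion_trial(drift, boundary, rng) for _ in range(trials)]
    accuracy = sum(c for c, _ in results) / trials
    mean_rt = sum(t for _, t in results) / trials
    return accuracy, mean_rt

# Drift rate drives accuracy; widening the boundary trades speed for accuracy.
print(summarize(drift=1.0, boundary=1.0))
print(summarize(drift=1.0, boundary=2.0))
```

    Because accuracy and mean RT depend on two separate parameters, subjects can vary independently on each, which is the model's explanation for the missing accuracy-RT correlation.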

  3. Does Language about Similarity Play a Role in Fostering Similarity Comparison in Children?

    ERIC Educational Resources Information Center

    Ozcaliskan, Seyda; Goldin-Meadow, Susan; Gentner, Dedre; Mylander, Carolyn

    2009-01-01

    Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of…

  4. Discuss Similarity Using Visual Intuition

    ERIC Educational Resources Information Center

    Cox, Dana C.; Lo, Jane-Jane

    2012-01-01

    The change in size from a smaller shape to a larger similar shape (or vice versa) is created through continuous proportional stretching or shrinking in every direction. Students cannot solve similarity tasks simply by iterating or partitioning a composed unit, strategies typically used on numerical proportional tasks. The transition to thinking…

  5. Dynamic similarity in erosional processes

    USGS Publications Warehouse

    Scheidegger, A.E.

    1963-01-01

    A study is made of the dynamic similarity conditions obtaining in a variety of erosional processes. The pertinent equations for each type of process are written in dimensionless form; the similarity conditions can then easily be deduced. The processes treated are: raindrop action, slope evolution and river erosion. ?? 1963 Istituto Geofisico Italiano.

  6. Limits on the Accuracy of Linking. Research Report. ETS RR-10-22

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2010-01-01

    Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…

  7. TRASYS: Checkout of accuracy of direct irradiation calculations for discs, trapezoids, cones, and circular paraboloids

    NASA Technical Reports Server (NTRS)

    Mackeen, R. C.

    1977-01-01

    Results of the direct irradiation link of the TRASYS program are evaluated. Several surface configurations were investigated. The accuracy of the results was examined for simple cases where the answers were analytically known. By varying an accuracy factor in the program, the amount of computer time needed to achieve different degrees of accuracy was determined.

  8. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, the present MESA algorithm can be implemented in one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.

  9. Towards personalized medicine: leveraging patient similarity and drug similarity analytics.

    PubMed

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR data to tailor treatments to individual patients based on their likelihood of responding to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel label propagation procedure that spreads label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. In particular, by leveraging drug similarity in combination with patient similarity, our method performs well even on new or rarely used drugs for which there are few records of known past performance.
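
    Label propagation of this kind is easy to sketch in a simplified, single-domain form. The toy below spreads known drug-response scores over a patient-similarity graph only; the paper's method operates on the full heterogeneous patient-drug graph, and all numbers here are invented:

```python
def propagate_labels(sim, labels, alpha=0.8, iters=50):
    """Spread known response scores over a patient-similarity graph.

    sim[i][j] : similarity between patients i and j (symmetric, in [0, 1])
    labels[i] : observed response score, or None if unknown
    alpha     : how strongly observed labels are clamped to their values"""
    n = len(sim)
    f = [l if l is not None else 0.0 for l in labels]
    for _ in range(iters):
        new_f = []
        for i in range(n):
            w = sum(sim[i])
            spread = sum(sim[i][j] * f[j] for j in range(n)) / w if w else 0.0
            if labels[i] is not None:
                # Pull labeled patients back toward their observed scores
                new_f.append(alpha * labels[i] + (1 - alpha) * spread)
            else:
                new_f.append(spread)
        f = new_f
    return f

# Patients 0 and 1 are very similar; 0 responded well (1.0), 2 poorly (0.0)
sim = [[0.0, 0.9, 0.1],
       [0.9, 0.0, 0.1],
       [0.1, 0.1, 0.0]]
scores = propagate_labels(sim, [1.0, None, 0.0])
print([round(s, 2) for s in scores])  # the unlabeled patient 1 scores high
```

    The unlabeled patient inherits a high predicted response because nearly all of their similarity mass points at a known good responder.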

  10. Comparing Science Achievement Constructs: Targeted and Achieved

    ERIC Educational Resources Information Center

    Ferrara, Steve; Duncan, Teresa

    2011-01-01

    This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…

  11. Which Achievement Gap?

    ERIC Educational Resources Information Center

    Anderson, Sharon; Medrich, Elliott; Fowler, Donna

    2007-01-01

    From the halls of Congress to the local elementary school, conversations on education reform have tossed around the term "achievement gap" as though people all know precisely what that means. As it's commonly used, "achievement gap" refers to the differences in scores on state or national achievement tests between various…

  12. Renewing the respect for similarity.

    PubMed

    Edelman, Shimon; Shahbazi, Reza

    2012-01-01

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction-critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience. In support of this stance, the present paper (1) offers a discussion of conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (2) mentions some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (3) briefly states a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (4) presents new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing (LSH) and to concomitant statistics, (5) introduces a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation, (6) describes its implementation and testing, and finally (7) suggests possible directions in which the present research program can be
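
    The locality-sensitive hashing mentioned in point (4) is commonly built from random hyperplanes: each hash bit records which side of a random hyperplane a vector falls on, so vectors with a small included angle agree on most bits. The sketch below is this standard construction in miniature, not the Chorus implementation, and the test vectors are invented:

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    """Draw n_bits random hyperplane normals with Gaussian coordinates."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(v, planes):
    """Random-hyperplane LSH: bit b is 1 iff v lies on the positive side of
    hyperplane b, so similar vectors collide on most bits."""
    return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0) for p in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

planes = random_hyperplanes(dim=3, n_bits=32)
close_a = lsh_signature((1.0, 0.2, 0.1), planes)
close_b = lsh_signature((0.9, 0.25, 0.05), planes)
far = lsh_signature((-1.0, 0.8, -0.3), planes)
print(hamming(close_a, close_b), hamming(close_a, far))
```

    The expected fraction of disagreeing bits equals the included angle divided by pi, which is exactly the sense in which the hashing "respects local similarity."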

  13. Renewing the respect for similarity

    PubMed Central

    Edelman, Shimon; Shahbazi, Reza

    2012-01-01

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction—critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience. In support of this stance, the present paper (1) offers a discussion of conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (2) mentions some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (3) briefly states a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (4) presents new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing (LSH) and to concomitant statistics, (5) introduces a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation, (6) describes its implementation and testing, and finally (7) suggests possible directions in which the present research program can be

  14. Self-similar aftershock rates

    NASA Astrophysics Data System (ADS)

    Davidsen, Jörn; Baiesi, Marco

    2016-08-01

    In many important systems exhibiting crackling noise—an intermittent avalanchelike relaxation response with power-law and, thus, self-similar distributed event sizes—the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is particularly true for the case of seismicity, and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high-resolution earthquake data from Southern California we find excellent agreement, providing particularly clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved framework for time-dependent seismic hazard assessment and earthquake forecasting.

  15. Self-similar aftershock rates.

    PubMed

    Davidsen, Jörn; Baiesi, Marco

    2016-08-01

    In many important systems exhibiting crackling noise-an intermittent avalanchelike relaxation response with power-law and, thus, self-similar distributed event sizes-the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is particularly true for the case of seismicity, and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high-resolution earthquake data from Southern California we find excellent agreement, providing particularly clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved framework for time-dependent seismic hazard assessment and earthquake forecasting.

  16. Earthquake detection through computationally efficient similarity search.

    PubMed

    Yoon, Clara E; O'Reilly, Ossian; Bergen, Karianne J; Beroza, Gregory C

    2015-12-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection-identification of seismic events in continuous data-is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact "fingerprints" of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes.
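
    The fingerprint-and-threshold idea can be illustrated with a deliberately crude toy: sign-of-difference fingerprints of sliding windows, compared pairwise. FAST's actual pipeline uses wavelet-based discriminative features and database indexing for scalable search; none of that is shown here, and the synthetic trace is invented:

```python
def fingerprint(window):
    """Compact binary fingerprint of a waveform window: the sign of each
    successive sample difference (a crude stand-in for FAST's features)."""
    return tuple(int(b > a) for a, b in zip(window, window[1:]))

def similar_pairs(waveform, width=8, threshold=0.9):
    """Compare fingerprints of all sliding windows; report window-start
    pairs whose fraction of matching bits meets `threshold`."""
    wins = [waveform[i:i + width] for i in range(len(waveform) - width + 1)]
    fps = [fingerprint(w) for w in wins]
    pairs = []
    for i in range(len(fps)):
        for j in range(i + width, len(fps)):  # skip trivially overlapping windows
            match = sum(a == b for a, b in zip(fps[i], fps[j])) / (width - 1)
            if match >= threshold:
                pairs.append((i, j))
    return pairs

# A repeating "event" buried at two offsets in an otherwise flat trace
event = [0, 3, 7, 2, -4, -1, 5, 0]
trace = [0] * 10 + event + [0] * 10 + event + [0] * 10
pairs = similar_pairs(trace)
print((10, 28) in pairs)  # True: the repeated event at offsets 10 and 28 is found
```

    The flat stretches also match one another, a toy analogue of the false detections the abstract notes; FAST's discriminative fingerprints and thresholding are designed to keep such matches rare.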

  18. Accuracy of Stokes integration for geoid computation

    NASA Astrophysics Data System (ADS)

    Ismail, Zahra; Jamet, Olivier; Altamimi, Zuheir

    2014-05-01

    Geoid determination by the remove-compute-restore (RCR) technique involves applying Stokes's integral to reduced gravity anomalies. Reduced gravity anomalies are obtained through interpolation after removing the low-degree gravity signal, given by a spaceborne spherical harmonic model, and the high-frequency signal from topographical effects; they cover a spectrum beginning around degree 150-200. Stokes's integral is truncated to a limited region around the computation point, producing an error that can be reduced by a modification of Stokes's kernel. We study the accuracy of Stokes integration on synthetic signals of various frequency ranges, produced with EGM2008 spherical harmonic coefficients up to degree 2000. We analyse the integration error according to the frequency range of the signal, the resolution of the gravity anomaly grid, and the radius of Stokes integration. The study shows that the behaviour of the relative errors is frequency independent. The standard Stokes kernel is thus insufficient to produce 1 cm geoid accuracy without removal of the major part of the gravity signal up to degree 600. Integration over an area of radius greater than 3 degrees does not improve the accuracy. The results are compared to a similar experiment using the modified Stokes kernel formula (Ellmann 2004, Sjöberg 2003). References: Ellmann, A. (2004). The geoid for the Baltic countries determined by least-squares modification of Stokes formula. Sjöberg, L. E. (2003). A general model of modifying Stokes formula and its least-squares solution. Journal of Geodesy, 77, 459-464.
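
    For illustration, the unmodified Stokes kernel and its truncated cap integration can be sketched as follows. The kernel formula is the standard Stokes function; the integration grid and the reduction to the azimuthally symmetric part of the integral are simplifying assumptions of the sketch.

```python
import numpy as np

def stokes_kernel(psi):
    """Standard (unmodified) Stokes function S(psi); psi in radians."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))

def truncated_integral(psi0, n=20000):
    """Azimuthally symmetric part of the Stokes integral over a cap of
    radius psi0; truncating at psi0 instead of integrating over the whole
    sphere is the error source studied in the abstract."""
    psi = np.linspace(1e-6, psi0, n)
    f = stokes_kernel(psi) * np.sin(psi)
    h = psi[1] - psi[0]
    return float(np.sum((f[:-1] + f[1:]) * h / 2.0))  # trapezoid rule
```

    Over the full sphere the weighted kernel integrates to zero (S contains no degree-0 or degree-1 terms), so any finite cap radius leaves a truncation residual; modified kernels are designed to shrink that residual for a given radius.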

  19. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high-density marker data. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
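
    The ridge-regression prediction scheme the abstract analyzes can be illustrated with a toy simulation. All sizes, the ridge parameter, and the LD configuration (QTLs placed directly on markers) are assumptions of the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_markers, n_qtl = 200, 100, 500, 20

# SNP genotypes coded 0/1/2; QTL effects sit on a subset of the markers
X = rng.choice([0.0, 1.0, 2.0], size=(n_train + n_test, n_markers))
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0.0, 1.0, n_qtl)
g = X @ beta                                        # true breeding values
y = g + rng.normal(0.0, g.std(), n_train + n_test)  # phenotypes, heritability ~ 0.5

# ridge estimates of marker effects from the training set: (X'X + lam*I)^-1 X'y
lam = 10.0
Xtr, ytr = X[:n_train], y[:n_train]
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ ytr)

# accuracy = correlation between predicted and true breeding values (test set)
accuracy = float(np.corrcoef(X[n_train:] @ beta_hat, g[n_train:])[0, 1])
```

    The paper's contribution is a closed-form expression (and a QTL-free proxy) for exactly this correlation, so that the accuracy can be anticipated without simulating it.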

  20. Similarity of the Velocity Profile

    DTIC Science & Technology

    2014-10-01

    su x (with 0 constantb = ) is the empirically derived velocity scale developed by Zagarola and Smits [5] for turbulent boundary layer flow...Zagarola and Smits and others have shown that the velocity scaling factor given by Eq. 5 with sδ as the boundary layer thickness can collapse certain...and Smits , it is important to point out that the fact that the similarity length scale factor and the similarity velocity scale factor must follow

  1. Investigations of dipole localization accuracy in MEG using the bootstrap.

    PubMed

    Darvas, F; Rautiainen, M; Pantazis, D; Baillet, S; Benali, H; Mosher, J C; Garnero, L; Leahy, R M

    2005-04-01

    We describe the use of the nonparametric bootstrap to investigate the accuracy of current dipole localization from magnetoencephalography (MEG) studies of event-related neural activity. The bootstrap is well suited to the analysis of event-related MEG data since the experiments are repeated tens or even hundreds of times and averaged to achieve acceptable signal-to-noise ratios (SNRs). The set of repetitions or epochs can be viewed as a set of independent realizations of the brain's response to the experiment. Bootstrap resamples can be generated by sampling with replacement from these epochs and averaging. In this study, we applied the bootstrap resampling technique to MEG data from a somatotopic experiment and to simulated data. Four fingers of the right and left hand of a healthy subject were electrically stimulated, and about 400 trials per stimulation were recorded and averaged in order to measure the somatotopic mapping of the fingers in the S1 area of the brain. Based on single-trial recordings for each finger we performed 5000 bootstrap resamples. We reconstructed dipoles from these resampled averages using the Recursively Applied and Projected (RAP)-MUSIC source localization algorithm. We also performed a simulation for two dipolar sources with overlapping time courses embedded in realistic background brain activity generated using the prestimulus segments of the somatotopic data. To find correspondences between multiple sources in each bootstrap sample, dipoles with similar time series and forward fields were assumed to represent the same source. These dipoles were then clustered by a Gaussian Mixture Model (GMM) clustering algorithm using their combined normalized time series and topographies as feature vectors. The mean and standard deviation of the dipole position and the dipole time series in each cluster were computed to provide estimates of the accuracy of the reconstructed source locations and time series.
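
    A minimal sketch of the epoch-level bootstrap, assuming a synthetic evoked response in place of real MEG recordings and peak latency as a stand-in for the dipole parameters estimated by RAP-MUSIC:

```python
import numpy as np

rng = np.random.default_rng(42)
n_epochs, n_times, n_boot = 400, 200, 500
t = np.arange(n_times)
evoked = np.exp(-0.5 * ((t - 120) / 8.0) ** 2)               # response peaking at sample 120
epochs = evoked + rng.normal(0.0, 2.0, (n_epochs, n_times))  # noisy single trials

peaks = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n_epochs, n_epochs)       # resample epochs with replacement
    peaks[b] = np.argmax(epochs[idx].mean(axis=0))  # re-estimate the parameter on the resampled average

mean_peak, sd_peak = float(peaks.mean()), float(peaks.std())
```

    The spread of the bootstrap estimates (here `sd_peak`) plays the role of the position and time-series standard deviations computed per cluster in the study.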

  2. Improving Student Achievement: A Study of High-Poverty Schools with Higher Student Achievement Outcomes

    ERIC Educational Resources Information Center

    Butz, Stephen D.

    2012-01-01

    This research examined the education system at high-poverty schools that had significantly higher student achievement levels as compared to similar schools with lower student achievement levels. A multischool qualitative case study was conducted of the educational systems where there was a significant difference in the scores achieved on the…

  3. Quantifying Similarity in Seismic Polarizations

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.; Jones, J. P.; Caffagni, E.

    2015-12-01

    Measuring similarity in seismic attributes can help identify tremor, low S/N signals, and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in signal-to-noise (S/N) ratio. Using records of the Mw=8.3 Sea of Okhotsk earthquake from CNSN broadband sensors in British Columbia and Yukon Territory, Canada, and vertical borehole array data from a monitoring experiment at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Because histogram distance metrics are bounded in [0, 1], clustering allows empirical time-frequency separation of seismic phase arrivals on single-station three-component records. Array processing for automatic seismic phase classification may be possible using subspace clustering of polarization similarity, but efficient algorithms are required to reduce the dimensionality.
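
    A toy version of the weighted-histogram comparison, assuming a histogram-intersection distance (one of several [0, 1]-bounded metrics; the abstract does not specify which is used), amplitude weighting as the S/N proxy, and synthetic two-component records:

```python
import numpy as np

def azimuth_histogram(east, north, bins=18, eps=1e-12):
    """Amplitude-weighted histogram of horizontal polarization azimuth,
    folded to [0, 180) to handle the 180-degree ambiguity."""
    az = np.degrees(np.arctan2(east, north)) % 180.0
    w = np.hypot(east, north)  # low-amplitude (noisy) samples get low weight
    h, _ = np.histogram(az, bins=bins, range=(0.0, 180.0), weights=w)
    return h / (h.sum() + eps)

def hist_distance(p, q):
    """Histogram-intersection distance, bounded in [0, 1]."""
    return 1.0 - float(np.minimum(p, q).sum())

rng = np.random.default_rng(3)
sig = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 500))
# two records of the same linearly polarized arrival (~30 deg azimuth) ...
e1, n1 = 0.5 * sig + 0.05 * rng.normal(size=500), 0.87 * sig + 0.05 * rng.normal(size=500)
e2, n2 = 0.5 * sig + 0.05 * rng.normal(size=500), 0.87 * sig + 0.05 * rng.normal(size=500)
# ... and a third record polarized at ~120 deg
e3, n3 = 0.87 * sig + 0.05 * rng.normal(size=500), -0.5 * sig + 0.05 * rng.normal(size=500)

d_same = hist_distance(azimuth_histogram(e1, n1), azimuth_histogram(e2, n2))
d_diff = hist_distance(azimuth_histogram(e1, n1), azimuth_histogram(e3, n3))
```

    Because the distance is bounded, thresholds and clustering cutoffs carry over between stations, which is what makes the clustering step in the abstract practical.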

  4. What difference reveals about similarity.

    PubMed

    Sagi, Eyal; Gentner, Dedre; Lovett, Andrew

    2012-08-01

    Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this dissociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We describe four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison.

  5. Teachers' Judgements of Students' Foreign-Language Achievement

    ERIC Educational Resources Information Center

    Zhu, Mingjing; Urhahne, Detlef

    2015-01-01

    Numerous studies have been conducted on the accuracy of teacher judgement in different educational areas such as mathematics, language arts and reading. Teacher judgement of students' foreign-language achievement, however, has been rarely investigated. The study aimed to examine the accuracy of teacher judgement of students' foreign-language…

  6. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients
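
    The verification bookkeeping in the study (systematic offset between measured and predicted dose, and flagging of points beyond the ±3% tolerance) reduces to a few lines. The dose values below are illustrative, not the study's measurements:

```python
import numpy as np

predicted = np.array([100.0, 150.0, 200.0, 120.0, 90.0])  # TPS doses (cGy), made-up values
measured = np.array([98.3, 148.1, 197.4, 118.5, 84.0])    # detector readings (cGy), made-up

pct_err = 100.0 * (measured - predicted) / predicted  # per-detector percent error
systematic = float(pct_err.mean())                    # overall offset (study reports -1.4%)
flagged = np.where(np.abs(pct_err) > 3.0)[0]          # detectors outside the 3% tolerance
```

    In the study, points flagged this way were traced back to detectors sitting in beam penumbrae, where tiny positional errors produce large dose discrepancies.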

  7. A new hybrid coding for protein secondary structure prediction based on primary structure similarity.

    PubMed

    Li, Zhong; Wang, Jing; Zhang, Shunpu; Zhang, Qifeng; Wu, Wuming

    2017-03-16

    The coding pattern of protein can greatly affect the prediction accuracy of protein secondary structure. In this paper, a novel hybrid coding method based on the physicochemical properties of amino acids and tendency factors is proposed for the prediction of protein secondary structure. The principal component analysis (PCA) is first applied to the physicochemical properties of amino acids to construct a 3-bit-code, and then the 3 tendency factors of amino acids are calculated to generate another 3-bit-code. Two 3-bit-codes are fused to form a novel hybrid 6-bit-code. Furthermore, we make a geometry-based similarity comparison of the protein primary structure between the reference set and the test set before the secondary structure prediction. We finally use the support vector machine (SVM) to predict those amino acids which are not detected by the primary structure similarity comparison. Experimental results show that our method achieves a satisfactory improvement in accuracy in the prediction of protein secondary structure.
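
    The hybrid coding construction can be sketched as follows. The property table and tendency factors below are random placeholders (the paper uses real physicochemical properties and computed tendency factors of the 20 amino acids), so only the structure of the 6-value code is illustrated:

```python
import numpy as np

rng = np.random.default_rng(7)
amino_acids = list("ARNDCQEGHILKMFPSTWYV")
props = rng.normal(size=(20, 10))  # placeholder: 20 amino acids x 10 physicochemical properties
tendency = rng.random((20, 3))     # placeholder: 3 structural tendency factors per amino acid

# PCA via SVD on the standardized property matrix -> first 3 principal components
Z = (props - props.mean(0)) / props.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
pca3 = Z @ Vt[:3].T  # 3-value code from the physicochemical properties

# fuse the two 3-value codes into the hybrid 6-value code per amino acid
code = {aa: np.concatenate([pca3[i], tendency[i]]) for i, aa in enumerate(amino_acids)}
```

    Each residue in a sequence is then encoded with its 6-value vector before being passed to the similarity comparison and, where needed, the SVM predictor.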

  8. Astronomic Position Accuracy Capability Study.

    DTIC Science & Technology

    1979-10-01

    portion of F. E. Warren AFB, Wyoming. The three points were called THEODORE ECC, TRACY, and JIM and consisted of metal tribrachs plastered to cinder...sets were computed as a deviation from the standard. Accuracy figures were determined from these residuals. Homogeneity of variances was tested using

  9. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  10. Inventory accuracy in 60 days!

    PubMed

    Miller, G J

    1997-08-01

    Despite great advances in manufacturing technology and management science, thousands of organizations still don't have a handle on basic inventory accuracy. Many companies don't even measure it properly, or at all, and lack corrective action programs to improve it. This article offers an approach that has proven successful a number of times, when companies were quite serious about making improvements. Not only can it be implemented, but also it can likely be implemented within 60 days per area, if properly managed. The hardest part is selling people on the need to improve and then keeping them motivated. The net cost of such a program? Probably less than nothing, since the benefits gained usually far exceed the costs. Improved inventory accuracy can aid in enhancing customer service, determining purchasing and manufacturing priorities, reducing operating costs, and increasing the accuracy of financial records. This article also addresses the gap in contemporary literature regarding accuracy program features for repetitive, JIT, cellular, and process- and project-oriented environments.

  11. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
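
    The quoted angular-to-metric conversions follow from the small-angle relation (position error ≈ range × angle in radians); a quick check, assuming a nominal geosynchronous range:

```python
import math

GEO_RANGE_M = 35_786_000                 # nominal geosynchronous altitude (m), an assumption
ARCSEC_RAD = math.radians(1.0 / 3600.0)  # one arc second in radians

def cross_track_error_m(arcsec, range_m=GEO_RANGE_M):
    """Small-angle approximation: cross-track error = range x angle (rad)."""
    return range_m * arcsec * ARCSEC_RAD

err_050 = cross_track_error_m(0.5)   # ~87 m, consistent with the quoted "100 m"
err_005 = cross_track_error_m(0.05)  # ~8.7 m, consistent with "10 m at synchronous altitude"
```

    The rounded figures in the abstract (100 m and 10 m) are order-of-magnitude statements of exactly this product.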

  12. Comparison of hydrological similarity measures

    NASA Astrophysics Data System (ADS)

    Rianna, Maura; Ridolfi, Elena; Manciola, Piergiorgio; Napolitano, Francesco; Russo, Fabio

    2016-04-01

    The use of a traditional at-site approach for the statistical characterization and simulation of spatio-temporal precipitation fields has a major recognized drawback: the estimation of rare events suffers from the uncertainty of at-site sample statistical inference, because of the limited length of records. In order to overcome the lack of at-site observations, the regional frequency approach uses the idea of substituting space for time to estimate design floods. Conventional regional frequency analysis estimates quantile values at a specific site from multi-site analysis. The main idea is that homogeneous sites, once pooled together, have similar probability distribution curves of extremes, except for a scaling factor. The method for pooling groups of sites can be based on geographical or climatological considerations. In this work the region of influence (ROI) pooling method is compared with an entropy-based one. The ROI is a flexible pooling-group approach which defines for each site its own "region" formed by a unique set of similar stations. The similarity is found through the Euclidean distance metric in the attribute space. Here an alternative approach based on entropy is introduced to cluster homogeneous sites. The core idea is that homogeneous sites share a redundant (i.e. similar) amount of information. Homogeneous sites are pooled through a hierarchical selection based on the mutual information index (i.e. a measure of redundancy). The method is tested on precipitation data in the central Italy area.
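
    A minimal sketch of the entropy-based redundancy measure, using a plug-in histogram estimate of mutual information on synthetic precipitation series (the binning, series lengths, and gamma-distributed rainfall are assumptions of the sketch):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information (nats) between two series --
    the redundancy measure used to judge whether two sites are homogeneous."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(5)
base = rng.gamma(2.0, 1.0, 1000)             # shared regional precipitation signal
site_a = base + 0.3 * rng.normal(size=1000)
site_b = base + 0.3 * rng.normal(size=1000)  # redundant with site_a (homogeneous pair)
site_c = rng.gamma(2.0, 1.0, 1000)           # independent site

mi_ab = mutual_information(site_a, site_b)
mi_ac = mutual_information(site_a, site_c)
```

    Sites with high mutual information (like `site_a` and `site_b`) would be pooled into the same homogeneous group, while a nearly independent site would not, mirroring the hierarchical selection described above.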

  13. What Difference Reveals about Similarity

    ERIC Educational Resources Information Center

    Sagi, Eyal; Gentner, Dedre; Lovett, Andrew

    2012-01-01

    Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe "how" two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this…

  14. Phylogenetic metrics of community similarity.

    PubMed

    Ives, Anthony R; Helmus, Matthew R

    2010-11-01

    We derive a new metric of community similarity that takes into account the phylogenetic relatedness among species. This metric, phylogenetic community dissimilarity (PCD), can be partitioned into two components, a nonphylogenetic component that reflects shared species between communities (analogous to Sørensen's similarity metric) and a phylogenetic component that reflects the evolutionary relationships among nonshared species. Therefore, even if a species is not shared between two communities, it will increase the similarity of the two communities if it is phylogenetically related to species in the other community. We illustrate PCD with data on fish and aquatic macrophyte communities from 59 temperate lakes. Dissimilarity between fish communities associated with environmental differences between lakes often has a phylogenetic component, whereas this is not the case for macrophyte communities. With simulations, we then compare PCD with two other metrics of phylogenetic community similarity, Π(ST) and UniFrac. Of the three metrics, PCD was best at identifying environmental drivers of community dissimilarity, showing lower variability and greater statistical power. Thus, PCD is a statistically powerful metric that separates the effects of environmental drivers on compositional versus phylogenetic components of community structure.
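
    The nonphylogenetic component of PCD, the Sørensen-style term based on shared species, is straightforward to compute. The full PCD additionally requires a phylogeny to score the nonshared species, which this sketch omits; the species lists are invented:

```python
def sorensen_dissimilarity(comm_a, comm_b):
    """Nonphylogenetic component of PCD: 1 - Sorensen similarity,
    i.e. 1 - 2|A & B| / (|A| + |B|), based only on shared species."""
    a, b = set(comm_a), set(comm_b)
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))

lake1 = {"perch", "pike", "walleye", "bluegill"}
lake2 = {"perch", "pike", "bass"}
d = sorensen_dissimilarity(lake1, lake2)  # 1 - 2*2/(4+3) = 3/7
```

    PCD then adds a second term that discounts this dissimilarity when the nonshared species (here "walleye", "bluegill", "bass") are close relatives of species in the other community.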

  15. Ignore Similarity If You Can: A Computational Exploration of Exemplar Similarity Effects on Rule Application

    PubMed Central

    Brumby, Duncan P.; Hahn, Ulrike

    2017-01-01

    It is generally assumed that when making categorization judgments the cognitive system learns to focus on stimuli features that are relevant for making an accurate judgment. This is a key feature of hybrid categorization systems, which selectively weight the use of exemplar- and rule-based processes. In contrast, Hahn et al. (2010) have shown that people cannot help but pay attention to exemplar similarity, even when doing so leads to classification errors. This paper tests, through a series of computer simulations, whether a hybrid categorization model developed in the ACT-R cognitive architecture (by Anderson and Betz, 2001) can account for the Hahn et al. dataset. This model implements Nosofsky and Palmeri’s (1997) exemplar-based random walk model as its exemplar route, and combines it with an implementation of Nosofsky et al.’s (1994) rule-based model RULEX. A thorough search of the model’s parameter space showed that while the presence of an exemplar-similarity effect on response times was associated with classification errors, it was possible to fit both measures to the observed data for an unsupervised version of the task (i.e., in which no feedback on accuracy was given). Difficulties arose when the model was applied to a supervised version of the task in which explicit feedback on accuracy was given. Modeling results show that the exemplar-similarity effect is diminished by feedback as the model learns to avoid the error-prone exemplar route, taking instead the accurate rule route. In contrast to the model, Hahn et al. found that people continue to exhibit robust exemplar-similarity effects even when given feedback. This work highlights a challenge for understanding how and why people combine rules and exemplars when making categorization decisions. PMID:28377739

  16. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  17. What causes similarity in catchments?

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert

    2014-05-01

    One of the biggest issues in hydrology is how to handle the heterogeneity of catchment properties at different scales. But is this really such a big issue? Is this problem not merely the consequence of how we conceptualise and how we model catchments? Is there not far more similarity than we observe? Maybe we are not looking at the right things, or at the right scale, to see the similarity. The identity of catchments is largely determined by: the landscape, the ecosystem living on the landscape, and the geology, in that order. Soils, which are often seen as a crucial aspect of hydrological behaviour, are far less important, as will be demonstrated. The main determinants of hydrological behaviour are: the landscape composition, the rooting depth and the phenology. These determinants are a consequence of landscape and ecosystem evolution, which, in turn, are the manifestations of entropy production. There are striking similarities between catchments. The different runoff processes from hillslopes are linked and similar in different environments (McDonnell, 2013). Wetlands behave similarly all over the world. The key is to classify landscapes and to link the ecosystems living on them to climate. The ecosystem then is the main controller of hydrological behaviour. Besides phenology, the rooting depth is key in determining runoff behaviour. Both are strongly linked to climate and much less to soil properties. An example is given of how rooting depth is determined by climate, and how rooting depth can be predicted without calibration, providing strong constraints on the prediction of rainfall partitioning and catchment runoff.

  18. 'No delays achiever'.

    PubMed

    2007-05-01

    The latest version of the NHS Institute for Innovation and Improvement's 'no delays achiever', a web based tool created to help NHS organisations achieve the 18-week target for GP referrals to first treatment, is available at www.nodelaysachiever.nhs.uk.

  19. Vicarious Achievement Orientation.

    ERIC Educational Resources Information Center

    Leavitt, Harold J.; And Others

    This study tests hypotheses about achievement orientation, particularly vicarious achievement. Undergraduate students (N=437) completed multiple-choice questionnaires, indicating likely responses of one person to the success of another. The sex of succeeder and observer, closeness of relationship, and setting (medical school or graduate school of…

  20. Heritability of Creative Achievement

    ERIC Educational Resources Information Center

    Piffer, Davide; Hur, Yoon-Mi

    2014-01-01

    Although creative achievement is a subject of much attention to lay people, the origin of individual differences in creative accomplishments remains poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…

  1. Confronting the Achievement Gap

    ERIC Educational Resources Information Center

    Gardner, David

    2007-01-01

    This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…

  2. Achievement-Based Resourcing.

    ERIC Educational Resources Information Center

    Fletcher, Mike; And Others

    1992-01-01

    This collection of seven articles examines achievement-based resourcing (ABR), the concept that the funding of educational institutions should be linked to their success in promoting student achievement, with a focus on the application of ABR to postsecondary education in the United Kingdom. The articles include: (1) "Introduction" (Mick…

  3. States Address Achievement Gaps.

    ERIC Educational Resources Information Center

    Christie, Kathy

    2002-01-01

    Summarizes 2 state initiatives to address the achievement gap: North Carolina's report by the Advisory Commission on Raising Achievement and Closing Gaps, containing an 11-point strategy, and Kentucky's legislation putting in place 10 specific processes. The North Carolina report is available at www.dpi.state.nc.us.closingthegap; Kentucky's…

  4. Improving classification accuracy and causal knowledge for better credit decisions.

    PubMed

    Wu, Wei-Wen

    2011-08-01

    Numerous studies have contributed to efforts to boost the accuracy of the credit scoring model. Especially interesting are recent studies which have successfully developed the hybrid approach, which advances classification accuracy by combining different machine learning techniques. However, to achieve better credit decisions, it is not enough merely to increase the accuracy of the credit scoring model. It is necessary to conduct meaningful supplementary analyses in order to obtain knowledge of causal relations, particularly in terms of significant conceptual patterns or structures involving attributes used in the credit scoring model. This paper proposes a solution of integrating data preprocessing strategies and the Bayesian network classifier with the tree-augmented Naïve Bayes search algorithm, in order to improve classification accuracy and to obtain improved knowledge of causal patterns, thus enhancing the validity of credit decisions.

  5. Climate Change Accuracy: Requirements and Economic Value

    NASA Astrophysics Data System (ADS)

    Wielicki, B. A.; Cooke, R.; Mlynczak, M. G.; Lukashin, C.; Thome, K. J.; Baize, R. R.

    2014-12-01

    Higher than normal accuracy is required to rigorously observe decadal climate change. But what level is needed? How can this be quantified? This presentation will summarize a new more rigorous and quantitative approach to determining the required accuracy for climate change observations (Wielicki et al., 2013, BAMS). Most current global satellite observations cannot meet this accuracy level. A proposed new satellite mission to resolve this challenge is CLARREO (Climate Absolute Radiance and Refractivity Observatory). CLARREO is designed to achieve advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra (Wielicki et al., Oct. 2013 BAMS). The CLARREO spectrometers are designed to serve as SI traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and to greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar passive satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, SPOT, etc.). Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A study has been carried out to quantify the economic benefits of such an advance as part of a rigorous and complete climate observing system. The study concludes that the economic value is $12 trillion U.S. dollars in Net Present Value for a nominal discount rate of 3% (Cooke et al. 2013, J. Env. Sys. Dec.). A brief summary of these two studies and their implications for the future of climate science will be presented.

  6. Similarity considerations in one-component two-phase flow

    SciTech Connect

    Maeder, P.F.; DiPippo, R.; Dickinson, D.A.; Nikitopoulos, D.E.

    1984-07-01

    The simplified model fluid presented here for two-phase flow can serve as a basis for the similarity analysis of a variety of substance flows. For the special case of water and R114, it is seen that exact similarity does not exist in the range of interest for geothermal applications, but that conditions can be found for reasonable similarity which permit one to replace water with R114 in laboratory-size apparatus. Thus experimental data and results obtained using R114 in a properly scaled laboratory setup can be converted with reasonable accuracy to those for water.

  7. A Dimensionality Reduction Technique for Efficient Time Series Similarity Analysis

    PubMed Central

    Wang, Qiang; Megalooikonomou, Vasileios

    2008-01-01

    We propose a dimensionality reduction technique for time series analysis that significantly improves the efficiency and accuracy of similarity searches. In contrast to piecewise constant approximation (PCA) techniques that approximate each time series with constant-value segments, the proposed method, Piecewise Vector Quantized Approximation, uses the closest (based on a distance measure) codeword from a codebook of key-sequences to represent each segment. The new representation is symbolic, and it allows for the application of text-based retrieval techniques to time series similarity analysis. Experiments on real and simulated datasets show that the proposed technique generally outperforms PCA techniques in clustering and similarity searches.
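    The vector-quantized encoding step described above can be sketched as follows; the three-codeword codebook here is invented for illustration (the paper's codebook of key-sequences would be learned from training segments):

```python
import numpy as np

def pvqa_encode(series, codebook, seg_len):
    """Encode a time series as a sequence of codeword indices: each
    fixed-length segment is replaced by the index of the closest
    codeword (Euclidean distance), giving a symbolic representation."""
    n_seg = len(series) // seg_len
    symbols = []
    for i in range(n_seg):
        seg = series[i * seg_len:(i + 1) * seg_len]
        dists = np.linalg.norm(codebook - seg, axis=1)
        symbols.append(int(np.argmin(dists)))
    return symbols

# toy codebook of 3 key-sequences (segment length 4)
codebook = np.array([
    [0.0, 0.0, 0.0, 0.0],   # flat
    [0.0, 1.0, 2.0, 3.0],   # rising
    [3.0, 2.0, 1.0, 0.0],   # falling
])
series = np.array([0.1, 1.1, 2.0, 2.9, 3.0, 2.1, 1.0, 0.1])
print(pvqa_encode(series, codebook, seg_len=4))  # → [1, 2]
```

Once every series is a string of symbols, standard text-retrieval machinery (inverted indexes, edit distances) applies directly.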

  8. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

    To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of the V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm line pair was increased from 0.946 (without the cavity) to 0.981 (with the cavity); similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis of low-concentration elements in steel samples was improved, because the plasma became more uniform under spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis by LIBS.

  9. Accuracy of implant impression techniques.

    PubMed

    Assif, D; Marshak, B; Schmidt, A

    1996-01-01

    Three impression techniques were assessed for accuracy in a laboratory cast that simulated clinical practice. The first technique used autopolymerizing acrylic resin to splint the transfer copings. The second involved splinting of the transfer copings directly to an acrylic resin custom tray. In the third, only impression material was used to orient the transfer copings. The accuracy of stone casts with implant analogs was measured against a master framework. The fit of the framework on the casts was tested using strain gauges. The technique using acrylic resin to splint transfer copings in the impression material was significantly more accurate than the two other techniques. Stresses observed in the framework are described and discussed with suggestions to improve clinical and laboratory techniques.

  10. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head, and the operation of the sensor electronics. Tests on a development model show that the main aim of 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor's sensitivity to illumination-level variations, the shielding of the detector, and the test and calibration equipment.

  11. Use Of Image Similarity For The Selection Or Synthesis Of Projections For Subtraction Radiography

    NASA Astrophysics Data System (ADS)

    Ruttimann, Urs E.; van der Stelt, Paul F.; Webber, Richard L.

    1986-06-01

    The use of subtraction radiography in dentistry is impeded by the necessity to physically couple the x-ray source, the patient, and the film in order to achieve a reproducible projection geometry. This need can be obviated by the ability to synthesize arbitrary projection images from a basis set of projections bearing a known geometric relationship to each other. Implementation of this method requires knowledge of the projection angle of the desired projection image relative to the basis set. This investigation explores the feasibility of using the gray-level standard deviations of corresponding subtraction images as similarity measures, in order to determine retrospectively the projection angle of a radiograph of interest with respect to the set of basis projections. An iterative coordinate estimation procedure incorporating this technique is developed, and its accuracy is evaluated using radiographs obtained from dry skull specimens.
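    The similarity measure described, the gray-level standard deviation of the subtraction image, can be sketched as follows; the "projections" here are synthetic shifted images standing in for radiographs taken at different angles:

```python
import numpy as np

def dissimilarity(img_a, img_b):
    """Gray-level standard deviation of the subtraction image.
    A low value means the two projections are geometrically similar:
    their difference image is nearly uniform."""
    return float(np.std(img_a.astype(float) - img_b.astype(float)))

def closest_projection(target, basis_set):
    """Index of the basis projection most similar to the target."""
    scores = [dissimilarity(target, b) for b in basis_set]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(32, 32))
# column shifts stand in for different projection angles
basis = [np.roll(base, s, axis=1) for s in (0, 2, 4)]
target = np.roll(base, 2, axis=1) + rng.normal(0, 1, size=(32, 32))
print(closest_projection(target, basis))  # → 1
```

An iterative scheme like the paper's would interpolate between basis projections and repeat this minimization to refine the angle estimate.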

  12. Culture and Achievement Motivation

    ERIC Educational Resources Information Center

    Maehr, Martin L.

    1974-01-01

    A framework is suggested for the cross-cultural study of motivation that stresses the importance of contextual conditions in eliciting achievement motivation and emphasizes cultural relativity in the definition of the concept. (EH)

  13. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost, easy-to-use solution that enables data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) carrying a digital camera. Using the Photoscan software and a photogrammetric (Structure From Motion) workflow, a DSM and an orthomosaic were produced. The DSM accuracy was estimated against GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution can reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. Accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the error by 4 cm); the required accuracy should depend on the research question. Finally, in this particular environment, the presence of very small water surfaces on the sand bank prevents the accuracy from improving when the spatial resolution of the images is decreased.
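    The DSM-versus-GNSS comparison in this record reduces to computing vertical error statistics at check points; a minimal sketch, with invented elevations:

```python
import numpy as np

def vertical_accuracy(dsm_z, gnss_z):
    """RMSE and mean bias (m) between DSM elevations sampled at check
    points and the corresponding surveyed GNSS elevations."""
    diff = np.asarray(dsm_z, float) - np.asarray(gnss_z, float)
    return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(diff))

dsm_z  = [4.03, 3.98, 5.12, 2.95]   # elevations read from the DSM (m)
gnss_z = [4.00, 4.00, 5.05, 3.00]   # surveyed GNSS elevations (m)
rmse, bias = vertical_accuracy(dsm_z, gnss_z)
print(f"RMSE {rmse:.3f} m, bias {bias:.4f} m")
```

The < 10 cm figure quoted above is a statistic of exactly this kind, computed over the full set of GNSS check points.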

  14. Mathematical Thinking of Kindergarten Boys and Girls: Similar Achievement, Different Contributing Processes

    ERIC Educational Resources Information Center

    Klein, Pnina S.; Adi-Japha, Esther; Hakak-Benizri, Simcha

    2010-01-01

    The objective of this study was to examine gender differences in the relations between verbal, spatial, mathematics, and teacher-child mathematics interaction variables. Kindergarten children (N = 80) were videotaped playing games that require mathematical reasoning in the presence of their teachers. The children's mathematics, spatial, and verbal…

  15. Methods to Calculate Spectrum Similarity.

    PubMed

    Yilmaz, Şule; Vandermarliere, Elien; Martens, Lennart

    2017-01-01

    Scoring functions that assess spectrum similarity play a crucial role in many computational mass spectrometry algorithms. These functions are used to compare an experimentally acquired fragmentation (MS/MS) spectrum against two different types of target MS/MS spectra: either against a theoretical MS/MS spectrum derived from a peptide from a sequence database, or against another, previously acquired MS/MS spectrum. The former is typically encountered in database searching, while the latter is used in spectrum clustering and spectral library searching. The comparison between acquired and theoretical MS/MS spectra is most commonly performed using cross-correlations or probability-derived scoring functions, while the comparison of two acquired MS/MS spectra typically makes use of a normalized dot product, especially in spectrum library search algorithms. In addition to these scoring functions, Pearson's or Spearman's correlation coefficients, mean squared error, or median absolute deviation scores can also be used for the same purpose. Here, we describe and evaluate these scoring functions with regard to their ability to assess spectrum similarity for theoretical versus acquired, and acquired versus acquired spectra.
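    The normalized dot product mentioned for spectrum-versus-spectrum comparison is the cosine of the angle between the two binned intensity vectors; a minimal sketch with toy spectra:

```python
import numpy as np

def normalized_dot_product(spec_a, spec_b):
    """Cosine similarity between two intensity vectors binned on the
    same m/z axis: 1.0 means identical peak patterns, 0.0 no overlap."""
    a, b = np.asarray(spec_a, float), np.asarray(spec_b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy spectra binned onto a shared m/z grid
acquired = [0.0, 5.0, 1.0, 0.0, 3.0]
library  = [0.0, 4.0, 1.5, 0.0, 3.5]
print(round(normalized_dot_product(acquired, library), 3))  # → 0.979
```

In practice library search tools often compare square-root- or rank-transformed intensities rather than raw ones, which damps the dominance of the tallest peaks.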

  16. Mechanisms for similarity based cooperation

    NASA Astrophysics Data System (ADS)

    Traulsen, A.

    2008-06-01

    Cooperation based on similarity has been discussed since Richard Dawkins introduced the term “green beard” effect. In these models, individuals cooperate based on an arbitrary signal (or tag) such as the famous green beard. Here, two different models for such tag-based cooperation are analysed. As neutral drift is important in both models, a finite population framework is applied. The first model, which we term “cooperative tags”, considers a situation in which groups of cooperators are formed by some joint signal. Defectors adopting the signal and exploiting the group can lead to a breakdown of cooperation. In this case, conditions are derived under which the average abundance of the more cooperative strategy exceeds 50%. The second model considers a situation in which individuals start defecting towards others that are not similar to them. This situation is termed “defective tags”. It is shown that in this case, individuals using tags to cooperate exclusively with their own kind dominate over unconditional cooperators.

  17. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure (EE) and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity on each day of the study. On day four, participants came to the lab for a basal metabolic rate (BMR) assessment and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to the differences in inclination. The vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity.

  18. Interneurons targeting similar layers receive synaptic inputs with similar kinetics.

    PubMed

    Cossart, Rosa; Petanjek, Zdravko; Dumitriu, Dani; Hirsch, June C; Ben-Ari, Yehezkel; Esclapez, Monique; Bernard, Christophe

    2006-01-01

    GABAergic interneurons play diverse and important roles in controlling neuronal network dynamics. They are characterized by an extreme heterogeneity morphologically, neurochemically, and physiologically, but a functionally relevant classification is still lacking. Present taxonomy is essentially based on their postsynaptic targets, but a physiological counterpart to this classification has not yet been determined. Using a quantitative analysis based on multidimensional clustering of morphological and physiological variables, we now demonstrate a strong correlation between the kinetics of glutamate and GABA miniature synaptic currents received by CA1 hippocampal interneurons and the laminar distribution of their axons: neurons that project to the same layer(s) receive synaptic inputs with similar kinetics distributions. In contrast, the kinetics distributions of GABAergic and glutamatergic synaptic events received by a given interneuron do not depend upon its somatic location or dendritic arborization. Although the mechanisms responsible for this unexpected observation are still unclear, our results suggest that interneurons may be programmed to receive synaptic currents with specific temporal dynamics depending on their targets and the local networks in which they operate.

  19. Phylogenetically related and ecologically similar carnivores harbour similar parasite assemblages.

    PubMed

    Huang, Shan; Bininda-Emonds, Olaf R P; Stephens, Patrick R; Gittleman, John L; Altizer, Sonia

    2014-05-01

    Most parasites infect multiple hosts, but what factors determine the range of hosts a given parasite can infect? Understanding the broad-scale determinants of parasite distributions across host lineages is important for predicting pathogen emergence in new hosts and for estimating pathogen diversity in understudied host species. In this study, we used a new data set on 793 parasite species reported from free-ranging populations of 64 carnivore species to examine the factors that influence parasite sharing between host species. Our results showed that parasites are more commonly shared between phylogenetically related host species pairs. Additionally, host species with higher similarity in biological traits and greater geographic range overlap were also more likely to share parasite species. Of three measures of phylogenetic relatedness considered here, the number of divergence events separating host species pairs most strongly influenced the likelihood of parasite sharing. We also showed that viruses and helminths tend to infect carnivore hosts within more restricted phylogenetic ranges than expected by chance. Overall, our results underscore the importance of host evolutionary history in determining parasite host range, even when simultaneously considering other factors such as host ecology and geographic distribution.

  20. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principle Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  1. Path similarity skeleton graph matching.

    PubMed

    Bai, Xiang; Latecki, Longin Jan

    2008-07-01

    This paper presents a novel framework for shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the shortest paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we completely ignore the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of shortest paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with Discrete Curve Evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and occlusion.

  2. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-10-09

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.

  3. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich ³⁴Ne, the fission barrier of ²⁴⁰Pu, and isotopic chains around Z = 50. Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made the achievable accuracy is well below the model uncertainties of mean-field methods.
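    The record's point that accuracy is governed by mesh spacing and the derivative scheme can be illustrated with a generic second-order central difference (not the authors' actual solver): halving the spacing cuts the error roughly fourfold.

```python
import numpy as np

def central_diff_error(f, df, x, dx):
    """Max error of the 3-point central difference (f(x+dx) - f(x-dx)) / (2 dx)
    against the exact derivative df, evaluated on the points x."""
    approx = (f(x + dx) - f(x - dx)) / (2 * dx)
    return float(np.max(np.abs(approx - df(x))))

x = np.linspace(0.0, 2 * np.pi, 50)
err_coarse = central_diff_error(np.sin, np.cos, x, 0.10)
err_fine = central_diff_error(np.sin, np.cos, x, 0.05)
print(err_coarse, err_fine)  # error drops ~4x: the scheme is second order
```

Higher-order stencils buy accuracy per mesh point at the cost of wider coupling, the same trade-off the paper studies for its 3D mesh.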

  4. Voxel similarity measures for automated image registration

    NASA Astrophysics Data System (ADS)

    Hill, Derek L.; Studholme, Colin; Hawkes, David J.

    1994-09-01

    We present the concept of the feature space sequence: 2D distributions of voxel features of two images generated at registration and a sequence of misregistrations. We provide an explanation of the structure seen in these images. Feature space sequences have been generated for a pair of MR image volumes identical apart from the addition of Gaussian noise to one, MR image volumes with and without Gadolinium enhancement, MR and PET-FDG image volumes and MR and CT image volumes, all of the head. The structure seen in the feature space sequences was used to devise two new measures of similarity which in turn were used to produce plots of cost versus misregistration for the 6 degrees of freedom of rigid body motion. One of these, the third order moment of the feature space histogram, was used to register the MR image volumes with and without Gadolinium enhancement. These techniques have the potential for registration accuracy to within a small fraction of a voxel or resolution element and therefore interpolation errors in image transformation can be the dominant source of error in subtracted images. We present a method for removing these errors using sinc interpolation and show how interpolation errors can be reduced by over two orders of magnitude.
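    A sketch of a feature-space (joint intensity) histogram, together with one plausible reading of a third-order-moment clustering score, here simply the sum of cubed normalized bin counts (not necessarily the paper's exact definition): alignment concentrates the histogram and raises the score.

```python
import numpy as np

def feature_space(img_a, img_b, bins=32):
    """2D joint histogram of corresponding voxel intensities."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return hist

def third_moment_score(hist):
    """Sum of cubed normalized bin counts: larger when the feature
    space is tightly clustered, as for well-registered images."""
    p = hist / hist.sum()
    return float(np.sum(p ** 3))

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
aligned_score = third_moment_score(feature_space(img, img))
shifted_score = third_moment_score(feature_space(img, np.roll(img, 1, axis=0)))
print(aligned_score > shifted_score)  # → True
```

Plotting such a score against each of the six rigid-body parameters gives the cost-versus-misregistration curves the abstract describes.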

  5. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  6. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six-channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used, and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
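    The Sagnac correction mentioned above accounts for Earth's rotation during signal flight. A sketch of the standard 2ωA/c² form, where A is the area swept by the signal path projected onto the equatorial plane; the station and satellite geometry below is invented for illustration:

```python
import math

OMEGA = 7.2921150e-5   # Earth rotation rate (rad/s)
C = 299_792_458.0      # speed of light (m/s)

def sagnac_correction(path):
    """One-way Sagnac correction (seconds) for a signal path given as
    ECEF positions (x, y, z) in metres, transmitter first. The
    correction is 2*omega*A/c^2, with A the equatorial-plane projected
    area swept between Earth's centre and the path segments."""
    area = 0.0
    for (x1, y1, _), (x2, y2, _) in zip(path, path[1:]):
        area += 0.5 * (x1 * y2 - x2 * y1)
    return 2.0 * OMEGA * area / C**2

# hypothetical geometry: equatorial ground station, GEO satellite 45° east
R_E, R_GEO = 6.378e6, 4.2164e7
station = (R_E, 0.0, 0.0)
sat = (R_GEO * math.cos(math.radians(45)), R_GEO * math.sin(math.radians(45)), 0.0)
print(sagnac_correction([station, sat]) * 1e9, "ns")
```

In two-way transfer the corrections for the two directions are equal and opposite, which is why the technique largely cancels path delay but still needs the Sagnac term when comparing against one-way links.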

  7. Prediction of protein structural classes for low-similarity sequences using reduced PSSM and position-based secondary structural features.

    PubMed

    Wang, Junru; Wang, Cong; Cao, Jiajia; Liu, Xiaoqing; Yao, Yuhua; Dai, Qi

    2015-01-10

    Many efficient methods have been proposed to advance protein structural class prediction, but there are still some challenges where additional insight or technology is needed for low-similarity sequences. In this work, we devised a new prediction method for low-similarity datasets using reduced PSSM and position-based secondary structural features. We evaluated the proposed method with four experiments and compared it with the available competing prediction methods. The results indicate that the proposed method achieved the best performance among the evaluated methods, with overall accuracy 3-5% higher than the existing best-performing method. This paper also found that reduced alphabets of size 13 simplify PSSM structures efficiently while preserving their maximal information. This understanding can be used to design more powerful methods for protein structural class prediction.

  8. Finding clusters of similar events within clinical incident reports: a novel methodology combining case based reasoning and information retrieval

    PubMed Central

    Tsatsoulis, C; Amthauer, H

    2003-01-01

    A novel methodological approach for identifying clusters of similar medical incidents by analyzing large databases of incident reports is described. The discovery of similar events allows the identification of patterns and trends, and makes possible the prediction of future events and the establishment of barriers and best practices. Two techniques from the fields of information science and artificial intelligence have been integrated—namely, case based reasoning and information retrieval—and very good clustering accuracies have been achieved on a test data set of incident reports from transfusion medicine. This work suggests that clustering should integrate the features of an incident captured in traditional form based records together with the detailed information found in the narrative included in event reports. PMID:14645892

  9. The application of similar image retrieval in electronic commerce.

    PubMed

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platform (OSP), which searches product information by keywords, faces three problems: indirect search mode, large search space, and inaccuracy in search results. For solving these problems, we discuss and research the application of similar image retrieval in electronic commerce. Aiming at improving the network customers' experience and providing merchants with the accuracy of advertising, we design a reasonable and extensive electronic commerce application system, which includes three subsystems: image search display subsystem, image search subsystem, and product information collecting subsystem. This system can provide seamless connection between information platform and OSP, on which consumers can automatically and directly search similar images according to the pictures from information platform. At the same time, it can be used to provide accuracy of internet marketing for enterprises. The experiment shows the efficiency of constructing the system.

  11. SALT and Spelling Achievement.

    ERIC Educational Resources Information Center

    Nelson, Joan

    A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…

  12. Iowa Women of Achievement.

    ERIC Educational Resources Information Center

    Ohrn, Deborah Gore, Ed.

    1993-01-01

    This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…

  13. Schools Achieving Gender Equity.

    ERIC Educational Resources Information Center

    Revis, Emma

    This guide is designed to assist teachers presenting the Schools Achieving Gender Equity (SAGE) curriculum for vocational education students, which was developed to align gender equity concepts with the Kentucky Education Reform Act (KERA). Included in the guide are lesson plans for classes on the following topics: legal issues of gender equity,…

  14. Achieving Peace through Education.

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…

  15. Explorations in achievement motivation

    NASA Technical Reports Server (NTRS)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  16. Increasing Male Academic Achievement

    ERIC Educational Resources Information Center

    Jackson, Barbara Talbert

    2008-01-01

    The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the Unites States,…

  17. Appraising Reading Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…

  18. Cognitive Processes and Achievement.

    ERIC Educational Resources Information Center

    Hunt, Dennis; Randhawa, Bikkar S.

    For a group of 165 fourth- and fifth-grade students, four achievement test scores were correlated with success on nine tests designed to measure three cognitive functions: sustained attention, successive processing, and simultaneous processing. This experiment was designed in accordance with Luria's model of the three functional units of the…

  19. Graders' Mathematics Achievement

    ERIC Educational Resources Information Center

    Bond, John B.; Ellis, Arthur K.

    2013-01-01

    The purpose of this experimental study was to investigate the effects of metacognitive reflective assessment instruction on student achievement in mathematics. The study compared the performance of 141 students who practiced reflective assessment strategies with students who did not. A posttest-only control group design was employed, and results…

  20. Achieving All Our Ambitions

    ERIC Educational Resources Information Center

    Hartley, Tricia

    2009-01-01

    National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…

  1. Improving Educational Achievement.

    ERIC Educational Resources Information Center

    New York University Education Quarterly, 1979

    1979-01-01

    This is a slightly abridged version of the report of the National Academy of Education panel, convened at the request of HEW Secretary Joseph Califano and Assistant Secretary for Education Mary F. Berry, to study recent declines in student achievement and methods of educational improvement. (SJL)

  2. The Achievement Club

    ERIC Educational Resources Information Center

    Rogers, Ibram

    2009-01-01

    When Gabrielle Carpenter became a guidance counselor in Northern Virginia nine years ago, she focused on the academic achievement gap and furiously tried to close it. At first, she was compelled by tremendous professional interest. However, after seeing her son lose his zeal for school, Carpenter joined forces with other parents to form an…

  3. Achievement in Problem Solving

    ERIC Educational Resources Information Center

    Friebele, David

    2010-01-01

    This Action Research Project is meant to investigate the effects of incorporating research-based instructional strategies into instruction and their subsequent effect on student achievement in the area of problem-solving. The two specific strategies utilized are the integration of manipulatives and increased social interaction on a regular basis.…

  4. Essays on Educational Achievement

    ERIC Educational Resources Information Center

    Ampaabeng, Samuel Kofi

    2013-01-01

    This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…

  5. Advancing Student Achievement

    ERIC Educational Resources Information Center

    Walberg, Herbert J.

    2010-01-01

    For the last half century, higher spending and many modern reforms have failed to raise the achievement of students in the United States to the levels of other economically advanced countries. A possible explanation, says Herbert Walberg, is that much current education theory is ill informed about scientific psychology, often drawing on fads and…

  6. NCLB: Achievement Robin Hood?

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    2008-01-01

    In his "Wall Street Journal" op-ed on the 25th of anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the…

  7. Evaluating arguments during instigations of defence motivation and accuracy motivation.

    PubMed

    Liu, Cheng-Hong

    2017-05-01

    When people evaluate the strength of an argument, their motivations are likely to influence the evaluation. However, few studies have specifically investigated the influences of motivational factors on argument evaluation. This study examined the effects of defence and accuracy motivations on argument evaluation. According to the compatibility between the advocated positions of arguments and participants' prior beliefs and the objective strength of arguments, participants evaluated four types of arguments: compatible-strong, compatible-weak, incompatible-strong, and incompatible-weak arguments. Experiment 1 revealed that participants possessing a high defence motivation rated compatible-weak arguments as stronger and incompatible-strong ones as weaker than participants possessing a low defence motivation. However, the strength ratings between the high and low defence groups regarding both compatible-strong and incompatible-weak arguments were similar. Experiment 2 revealed that when participants possessed a high accuracy motivation, they rated compatible-weak arguments as weaker and incompatible-strong ones as stronger than when they possessed a low accuracy motivation. However, participants' ratings on both compatible-strong and incompatible-weak arguments were similar when comparing high and low accuracy conditions. The results suggest that defence and accuracy motivations are two major motives influencing argument evaluation. However, they primarily influence the evaluation results for compatible-weak and incompatible-strong arguments, but not for compatible-strong and incompatible-weak arguments.

  8. Quantifying Differences and Similarities in Whole-Brain White Matter Architecture Using Local Connectome Fingerprints

    PubMed Central

    Singh, Aarti; Poczos, Barnabas; Erickson, Kirk I.; Tseng, Wen-Yih I.; Verstynen, Timothy D.

    2016-01-01

    Quantifying differences or similarities in connectomes has been a challenge due to the immense complexity of global brain networks. Here we introduce a noninvasive method that uses diffusion MRI to characterize whole-brain white matter architecture as a single local connectome fingerprint that allows for a direct comparison between structural connectomes. In four independently acquired data sets with repeated scans (total N = 213), we show that the local connectome fingerprint is highly specific to an individual, allowing for an accurate self-versus-others classification that achieved 100% accuracy across 17,398 identification tests. The estimated classification error was approximately one thousand times smaller than fingerprints derived from diffusivity-based measures or region-to-region connectivity patterns for repeat scans acquired within 3 months. The local connectome fingerprint also revealed neuroplasticity within an individual reflected as a decreasing trend in self-similarity across time, whereas this change was not observed in the diffusivity measures. Moreover, the local connectome fingerprint can be used as a phenotypic marker, revealing 12.51% similarity between monozygotic twins, 5.14% between dizygotic twins, and 4.51% between non-twin siblings, relative to differences between unrelated subjects. This novel approach opens a new door for probing the influence of pathological, genetic, social, or environmental factors on the unique configuration of the human connectome. PMID:27846212

  9. Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.

    PubMed

    Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan

    2015-11-01

    Wireless sensor networks are engaged in various data gathering applications. The major bottleneck in wireless data gathering systems is the finite energy of the sensor nodes. By conserving the on-board energy, the life span of a wireless sensor network can be well extended. Since data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction serves well in conserving nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce the data communications. Forming data-similar clusters is an effective way to exploit spatial correlation among neighboring sensors. Sending only a subset of the data and estimating the rest from that subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. The intra-cluster communication is reduced using an adaptive normalized least-mean-squares based dual prediction framework. The cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both the intra-cluster and the inter-cluster communications while preserving the accuracy of the collected data.
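The dual-prediction idea in record 9 can be sketched as follows: sensor and sink run identical adaptive predictors, and a reading is transmitted only when the shared prediction misses it by more than a threshold, so the sink's reconstruction error stays bounded. The plain NLMS filter below is a generic textbook sketch, not the paper's exact adaptive-NLMS variant; the tap count, step size, and threshold are illustrative assumptions.

```python
def dual_prediction(readings, taps=3, mu=0.5, thresh=0.2, eps=1e-6):
    """Dual prediction with a shared NLMS filter: the sensor transmits a
    reading only when the prediction both sides can compute misses it by
    more than `thresh`; otherwise the sink keeps its own prediction.
    Returns the sink's reconstructed series and the number of sends."""
    w = [0.0] * taps          # shared filter weights
    hist = [0.0] * taps       # shared history of reconstructed values
    recon, sends = [], 0
    for d in readings:
        y = sum(wi * xi for wi, xi in zip(w, hist))   # shared prediction
        if abs(d - y) > thresh:
            sends += 1
            val = d           # transmit the real sample
        else:
            val = y           # sink keeps the prediction (error <= thresh)
        # identical NLMS update on both sensor and sink, driven by `val`
        e = val - y
        norm = eps + sum(xi * xi for xi in hist)
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, hist)]
        hist = [val] + hist[:-1]
        recon.append(val)
    return recon, sends

recon, sends = dual_prediction([5.0] * 30)
```

On a steady signal like this, only the first few samples are transmitted while the filter converges; every reconstructed value is within the threshold of the true reading by construction.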

  10. A Guaranteed Similarity Metric Learning Framework for Biological Sequence Comparison.

    PubMed

    Hua, Keru; Yu, Qin; Zhang, Ruiming

    2016-01-01

    Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. The distance and similarity between two sequences are very important and widely studied. During the last decades, similarity (distance) metric learning has been one of the hottest topics in machine learning and data mining, as well as in their applications to bioinformatics. It is feasible to introduce machine learning technology to learn a similarity metric from biological data. In this paper, we propose a novel framework of guaranteed similarity metric learning (GMSL) to perform alignment of biological sequences in any feature vector space. It introduces the (ϵ, γ, τ)-goodness similarity theory to Mahalanobis metric learning. As a theoretically guaranteed similarity metric learning approach, GMSL guarantees that the learned similarity function performs well in classification and clustering. Our experiments on the most widely used datasets demonstrate that our approach outperforms the state-of-the-art biological sequence alignment methods and other similarity metric learning algorithms in both accuracy and stability.

  11. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, the statistical features in the time domain, the structural graph similarity, and the K-means algorithm (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features, and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time-domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
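As a rough illustration of the first stage described in record 11 (partitioning each EEG segment into sub-segments and extracting time-domain statistics), the sketch below uses a generic feature set; the sub-segment count and the specific statistics are assumptions for illustration, not the paper's empirically chosen ones.

```python
import statistics

def segment_features(segment, n_sub=4):
    """Partition one EEG segment into n_sub sub-segments and extract a
    few time-domain statistics (mean, population std dev, peak-to-peak)
    from each, concatenated into a single feature vector."""
    size = len(segment) // n_sub
    feats = []
    for k in range(n_sub):
        sub = segment[k * size:(k + 1) * size]
        feats.extend([statistics.fmean(sub),      # mean amplitude
                      statistics.pstdev(sub),     # variability
                      max(sub) - min(sub)])       # peak-to-peak range
    return feats

feats = segment_features([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0], n_sub=2)
```

Feature vectors of this kind would then be forwarded to a clusterer (K-means in the paper's SGSKM pipeline) rather than used directly.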

  12. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.
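For context on record 12: a lowest-order perturbative master equation of the kind discussed there can be written, in its textbook second-order time-convolutionless form (interaction picture, ħ = 1, assuming the first-order term vanishes), as below. This is the generic expression, not one taken from the paper:

```latex
\frac{d\rho(t)}{dt} \;=\; -\int_{0}^{t} d\tau\,
  \operatorname{Tr}_E\!\left[\, H_I(t),\ \left[\, H_I(t-\tau),\ \rho(t)\otimes\rho_E \,\right]\right]
```

The abstract's point is that full-time solutions of such a second-order equation are not guaranteed accurate at second order: in their counting, order-2n accuracy in the solution requires the order-(2n+2) master equation.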

  13. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  14. Solving the stability-accuracy-diversity dilemma of recommender systems

    NASA Astrophysics Data System (ADS)

    Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong

    2017-02-01

    Recommender systems are of great significance in predicting potentially interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, due to the unstable quantification of item similarities, which is defined as the recommendation stability problem. Improving similarity stability and recommendation stability is crucial for enhancing the user experience and for a better understanding of user interests. While the stability as well as accuracy of recommendation could be guaranteed by recommending only popular items, studies have been addressing the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm could significantly improve the recommendation stability and accuracy simultaneously and still retain the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The result suggests that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity triple dilemma of recommender systems.

  15. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
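The core idea of referential compression in record 15 (store only where the target agrees with a reference, plus the deviations) can be sketched as a toy greedy matcher. This illustrates the general scheme only; it is not FRESCO's actual algorithm or on-disk format, and real implementations use indexed matching rather than this quadratic scan.

```python
def ref_compress(reference, target):
    """Encode `target` as (ref_offset, match_len, next_char) triples
    against `reference`, greedily taking the longest match each time."""
    out, i = [], 0
    while i < len(target):
        best_off, best_len = 0, 0
        for off in range(len(reference)):        # O(n*m) toy scan
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_off, best_len = off, l
        nxt = target[i + best_len] if i + best_len < len(target) else ''
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def ref_decompress(reference, triples):
    """Invert ref_compress: splice reference slices and literal chars."""
    return ''.join(reference[o:o + l] + c for o, l, c in triples)

ref = "ACGTACGTGGTT"
tgt = "ACGTACGAGGTT"     # one substitution relative to the reference
enc = ref_compress(ref, tgt)
```

For highly similar sequences the triple list is far shorter than the target, which is exactly the regime the paper targets.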

  16. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor-controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements with ~0.3 km/s accuracy. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  17. Some aspects of achieving an ultimate accuracy during insertion device magnetic measurements by a Hall probe.

    PubMed

    Vasserman, I B; Strelnikov, N O; Xu, J Z

    2013-02-01

    An extensive test of a new Senis 2-axis Hall probe was done at the Advanced Photon Source using the Undulator A device and calibration system. This new probe has clear advantages compared with previously used Bell and Sentron Hall probes: very stable zero offset (less than the noise of 0.026 G) and compensated planar Hall effect. It can be used with proper calibration even for first and second field integral measurements. A comparison with reference measurements by long stretched coil shows that the difference in the first field integral measurement results for a 2.4-m-long Undulator A device is between 17 G cm for the best of four Hall probes used for the test and 51 G cm for the worst of them for all gap ranges from 10.5 mm to 150 mm.

  18. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  19. Faculty achievement tracking tool.

    PubMed

    Pettus, Sarah; Reifschneider, Ellen; Burruss, Nancy

    2009-03-01

    Faculty development and scholarship is an expectation of nurse educators. Accrediting institutions, such as the Commission on Collegiate Nursing Education, the National League for Nursing Accrediting Commission, and the Higher Learning Commission, all have criteria regarding faculty achievement. A faculty achievement tracking tool (FATT) was developed to facilitate documentation of accreditation criteria attainment. Based on criteria from accrediting organizations, the roles that are addressed include scholarship, service, and practice. Definitions and benchmarks for the faculty as an aggregate are included. Undergoing reviews from different accrediting organizations, the FATT has been used once for accreditation of the undergraduate program and once for accreditation of the graduate program. The FATT is easy to use and has become an excellent adjunct for the preparation for accreditation reports. In addition, the FATT may be used for yearly evaluations, advancement, and merit.

  20. Project ACHIEVE final report

    SciTech Connect

    1997-06-13

    Project ACHIEVE was a math/science academic enhancement program aimed at first year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.

  1. Advancing the Accuracy of Protein Fold Recognition by Utilizing Profiles From Hidden Markov Models.

    PubMed

    Lyons, James; Dehzangi, Abdollah; Heffernan, Rhys; Yang, Yuedong; Zhou, Yaoqi; Sharma, Alok; Paliwal, Kuldip

    2015-10-01

    Protein fold recognition is an important step towards solving protein function and tertiary structure prediction problems. Among a wide range of approaches proposed to solve this problem, pattern recognition based techniques have achieved the best results. The most effective pattern recognition-based techniques for solving this problem have been based on extracting evolutionary-based features. Most studies have relied on the Position Specific Scoring Matrix (PSSM) to extract these features. However it is known that profile-profile sequence alignment techniques can identify more remote homologs than sequence-profile approaches like PSIBLAST. In this study we use a profile-profile sequence alignment technique, namely HHblits, to extract HMM profiles. We will show that unlike previous studies, using the HMM profile to extract evolutionary information can significantly enhance the protein fold prediction accuracy. We develop a new pattern recognition based system called HMMFold which extracts HMM based evolutionary information and captures remote homology information better than previous studies. Using HMMFold we achieve up to 93.8% and 86.0% prediction accuracies when the sequential similarity rates are less than 40% and 25%, respectively. These results are up to 10% better than previously reported results for this task. Our results show significant enhancement especially for benchmarks with sequential similarity as low as 25%, which highlights the effectiveness of HMMFold to address this problem and its superiority over previously proposed approaches found in the literature. HMMFold is available online at: http://sparks-lab.org/pmwiki/download/index.php?Download =HMMFold.tar.bz2.

  2. Impact of CCD camera SNR on polarimetric accuracy.

    PubMed

    Chen, Zhenyue; Wang, Xia; Pacheco, Shaun; Liang, Rongguang

    2014-11-10

    A comprehensive charge-coupled device (CCD) camera noise model is employed to study the impact of CCD camera signal-to-noise ratio (SNR) on polarimetric accuracy. The study shows that the standard deviations of the measured degree of linear polarization (DoLP) and angle of linear polarization (AoLP) are mainly dependent on the camera SNR. With increase in the camera SNR, both the measurement errors and the standard deviations caused by the CCD camera noise decrease. When the DoLP of the incident light is smaller than 0.1, the camera SNR should be at least 75 to achieve a measurement error of less than 0.01. When the input DoLP is larger than 0.5, a SNR of 15 is sufficient to achieve the same measurement accuracy. An experiment is carried out to verify the simulation results.
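The polarization quantities in record 2 follow from the standard Stokes-parameter definitions, DoLP = sqrt(Q² + U²)/I and AoLP = ½·atan2(U, Q). The helper below simply encodes these textbook formulas; it is not code from the paper, whose contribution is the CCD noise model around such measurements.

```python
import math

def dolp_aolp(i, q, u):
    """Degree and angle of linear polarization from Stokes parameters:
    DoLP = sqrt(Q^2 + U^2) / I, AoLP = 0.5 * atan2(U, Q) (radians)."""
    return math.sqrt(q * q + u * u) / i, 0.5 * math.atan2(u, q)

# Fully linearly polarized light along +Q: DoLP = 1, AoLP = 0
d, a = dolp_aolp(1.0, 1.0, 0.0)
```

Because DoLP is a ratio of noisy quantities, its standard deviation shrinks as the camera SNR grows, which is the relationship the study quantifies.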

  3. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
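A minimal sketch of the trilateration step mentioned in record 3, assuming a 2-D least-squares position fix from range estimates. The anchor geometry here is hypothetical, and the paper additionally derives the ranges from a radio-propagation model and refines the fix with extended Kalman filtering, neither of which is shown.

```python
import math

def trilaterate(anchors, dists):
    """Least-squares 2-D position fix from >= 3 (anchor, range) pairs.
    Linearizes the range equations by subtracting the last one, then
    solves the resulting 2x2 normal equations directly."""
    (x0, y0), d0 = anchors[-1], dists[-1]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[:-1], dists[:-1]):
        rows.append((2 * (x0 - xi), 2 * (y0 - yi)))
        rhs.append(di**2 - d0**2 - xi**2 + x0**2 - yi**2 + y0**2)
    # Normal equations: (A^T A) p = A^T b, solved by 2x2 Cramer's rule
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # hypothetical layout
truth = (1.0, 1.0)
dists = [math.dist(a, truth) for a in anchors]   # noise-free ranges
est = trilaterate(anchors, dists)
```

With noise-free ranges the fix recovers the true position exactly; with noisy radio-derived ranges the least-squares solution is only the starting point for the Kalman filter.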

  4. Bioturbo similarity searching: combining chemical and biological similarity to discover structurally diverse bioactive molecules.

    PubMed

    Wassermann, Anne Mai; Lounkine, Eugen; Glick, Meir

    2013-03-25

    Virtual screening using bioactivity profiles has become an integral part of currently applied hit finding methods in the pharmaceutical industry. However, a significant drawback of this approach is that it is only applicable to compounds that have been biologically tested in the past and have sufficient activity annotations for meaningful profile comparisons. Although bioactivity data generated in pharmaceutical institutions are growing on an unprecedented scale, the number of biologically annotated compounds still covers only a minuscule fraction of chemical space. For a newly synthesized compound or an isolated natural product to be biologically characterized across multiple assays, it may take a considerable amount of time. Consequently, this chemical matter will not be included in virtual screening campaigns based on bioactivity profiles. To overcome this problem, we herein introduce bioturbo similarity searching, which uses chemical similarity to map molecules without biological annotations into bioactivity space and then searches for biologically similar compounds in this reference system. In benchmark calculations on primary screening data, we demonstrate that our approach generally achieves higher hit rates and identifies structurally more diverse compounds than approaches using chemical information only. Furthermore, our method is able to discover hits with novel modes of inhibition that traditional 2D and 3D similarity approaches are unlikely to discover. Test calculations on a set of natural products reveal the practical utility of the approach for identifying novel and synthetically more accessible chemical matter.

  5. Empathic accuracy for happiness in the daily lives of older couples: Fluid cognitive performance predicts pattern accuracy among men.

    PubMed

    Hülür, Gizem; Hoppmann, Christiane A; Rauers, Antje; Schade, Hannah; Ram, Nilam; Gerstorf, Denis

    2016-08-01

    Correctly identifying others' emotional states is a central cognitive component of empathy. We examined the role of fluid cognitive performance in empathic accuracy for happiness in the daily lives of 86 older couples (mean relationship length = 45 years; mean age = 75 years) on up to 42 occasions over 7 consecutive days. Men performing better on the Digit Symbol test were more accurate in identifying the ups and downs of their partner's happiness. A similar association was not found for women. We discuss the potential role of fluid cognitive performance and other individual, partner, and situation characteristics in empathic accuracy.

  6. Accuracy potentials for large space antenna reflectors with passive structure

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1982-01-01

    Analytical results indicate that a careful selection of materials and truss design, combined with accurate manufacturing techniques, can result in very accurate surfaces for large space antennas. The purpose of this paper is to examine these relationships for various types of structural configurations. Comparisons are made of the accuracy achievable by truss- and dome-type structures for a wide range of diameter and focal length of the antenna and wavelength of the radiated signal.

  7. Determination of elemental composition of volatile organic compounds from Chinese rose oil by spectral accuracy and mass accuracy.

    PubMed

    Zhou, Wei; Zhang, Yaheng; Xu, Hongliang; Gu, Ming

    2011-10-30

    Elemental composition determination of volatile organic compounds through high mass accuracy and isotope pattern matching could not be routinely achieved with a unit-mass resolution mass spectrometer until the recent development of comprehensive instrument line-shape calibration technology. Through this unique technology, both m/z values and mass spectral peak shapes are calibrated simultaneously. Of fundamental importance is that calibrated mass spectra have symmetric and mathematically known peak shapes, which makes it possible to deconvolute overlapped monoisotopes and their (13)C-isotope peaks and achieve accurate mass measurements. The key experimental requirements are to acquire true raw data in a profile or continuum mode with the acquisition threshold set to zero. A total of 13 ions from Chinese rose oil were analyzed with internal calibration. Most of the ions produced high mass accuracy of better than 5 mDa and high spectral accuracy of better than 99%. These results allow five of the tested ions to be identified with unique elemental compositions and the other eight ions to be determined as a top match from multiple candidates based on spectral accuracy. One of them, a coeluted component (nerol) with m/z 154, could not be identified by conventional GC/MS (gas chromatography/mass spectrometry) and library search. Such effective determination of the elemental compositions of volatile organic compounds with a unit-mass resolution quadrupole system is attributable to the significant improvement in mass accuracy. More importantly, the high spectral accuracy available through instrument line-shape calibration enables highly accurate isotope pattern recognition for unknown identification.

  8. Martial arts striking hand peak acceleration, accuracy and consistency.

    PubMed

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p =0.032) and accuracy (r(2)=0. 621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
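
    The two dispersion metrics defined in this abstract are directly computable: accuracy is the radial distance from the centroid of a subject's strikes to the target, and consistency is the root-mean-square distance of the strikes from their own centroid. A minimal sketch (coordinates and target are hypothetical):

```python
import math

def accuracy_and_consistency(strikes, target):
    """strikes: list of (x, y) impact points; target: (x, y) aim point.
    Accuracy    = radial distance from the strike centroid to the target.
    Consistency = RMS distance of the strikes from their own centroid."""
    n = len(strikes)
    cx = sum(x for x, _ in strikes) / n
    cy = sum(y for _, y in strikes) / n
    accuracy = math.hypot(cx - target[0], cy - target[1])
    consistency = math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in strikes) / n)
    return accuracy, consistency

# Hypothetical example: four strikes centered on (1, 0), target at the origin.
acc, con = accuracy_and_consistency([(1, 1), (1, -1), (0, 0), (2, 0)], (0, 0))
```

    Lower values of both metrics are better; note that a tight cluster far from the target scores well on consistency but poorly on accuracy.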

  9. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work has two purposes: first, to relate the point cloud accuracy to data-processing parameters and, second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor are based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost, light-weight MEMS-based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
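
    The "local surface smoothness on planar surfaces" metric used above can be illustrated as the RMS residual of a least-squares plane fit to a patch of points; a self-contained sketch (the point data are hypothetical):

```python
import math

def plane_rms_residual(points):
    """Fit z = a*x + b*y + c by least squares and return the RMS residual,
    a simple proxy for local surface smoothness on a planar patch."""
    # Build the 3x3 normal equations A^T A [a b c]^T = A^T z.
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    n = len(points)
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    b = [Sxz, Syz, Sz]
    # Solve by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    a, bb, c = coef
    return math.sqrt(sum((z - (a * x + bb * y + c)) ** 2 for x, y, z in points) / n)

# Points lying exactly on the plane z = x + 2y + 1 give a zero residual.
flat = plane_rms_residual([(0, 0, 1), (1, 0, 2), (0, 1, 3), (1, 1, 4)])
```

    Sensor noise and navigation errors raise the residual, so lower values indicate a smoother (better-registered) point cloud.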

  10. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  11. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy, or spectropolarimetry, combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on the infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap through the design, development, calibration, and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to previous-generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter, and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high-performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  12. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  13. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can also be determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  14. [Accuracy of HDL cholesterol measurements].

    PubMed

    Niedmann, P D; Luthe, H; Wieland, H; Schaper, G; Seidel, D

    1983-02-01

    The widespread use of different methods for the determination of HDL cholesterol (in Europe, sodium phosphotungstic acid/MgCl2 in connection with enzymatic procedures; in the USA, heparin/MnCl2 followed by the Liebermann-Burchard method) but common reference values makes it necessary to evaluate not only the accuracy, specificity, and precision of the precipitation step but also those of the subsequent cholesterol determination. A high ratio of serum to concentrated precipitation reagent (10:1 v/v) leads to the formation of variable amounts of Δ3,5-cholestadiene. This substance is not recognized by cholesterol oxidase but leads to a 1.6-fold overestimation by the Liebermann-Burchard method. Therefore, errors in HDL cholesterol determination should be considered, and differences of up to 30% may occur between HDL cholesterol values determined by the different techniques (heparin/MnCl2 with Liebermann-Burchard versus NaPW/MgCl2 with CHOD-PAP).

  15. Comparison of Explicitly Correlated Methods for Computing High-Accuracy Benchmark Energies for Noncovalent Interactions.

    PubMed

    Sirianni, Dominic A; Burns, Lori A; Sherrill, C David

    2017-01-10

    The reliability of explicitly correlated methods for providing benchmark-quality noncovalent interaction energies was tested at various levels of theory and compared to estimates of the complete basis set (CBS) limit. For all systems of the A24 test set, computations were performed using both aug-cc-pVXZ (aXZ; X = D, T, Q, 5) basis sets and specialized cc-pVXZ-F12 (XZ-F12; X = D, T, Q, 5) basis sets paired with explicitly correlated coupled cluster singles and doubles [CCSD-F12n (n = a, b, c)] with triple excitations treated by the canonical perturbative method and scaled to compensate for their lack of explicit correlation [(T**)]. Results show that aXZ basis sets produce smaller errors versus the CBS limit than XZ-F12 basis sets. The F12b ansatz results in the lowest average errors for aTZ and larger basis sets, while F12a is best for double-ζ basis sets. When using aXZ basis sets (X ≥ 3), convergence is achieved from above for the F12b and F12c ansätze and from below for F12a. The CCSD(T**)-F12b/aXZ approach converges more quickly with respect to basis set than any other combination, although the performance of CCSD(T**)-F12c/aXZ is very similar. Both CCSD(T**)-F12b/aTZ and focal point schemes employing density-fitted, frozen natural orbital [DF-FNO] CCSD(T)/aTZ exhibit similar accuracy and computational cost, and both are much more computationally efficient than large-basis conventional CCSD(T) computations of similar accuracy.

  16. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  17. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  18. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the BeiDou Navigation Satellite System (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially with the global BDS constellation to be established in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structural detail that includes a conical shadow factor for the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model yields higher accuracy for precise orbit determination (POD) and orbit prediction of the GPS IIF satellites than the Bern empirical model, with a 3D RMS orbit error of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m, respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied this approach to BDS, derived an SRP model for the BeiDou satellites, and then tested the model with one month of BeiDou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which estimates forces only in the along-track and cross-track directions plus a y-bias, but the orbit overlap and SLR observation evaluations show some improvement: the remaining empirical force is reduced significantly for the present BeiDou constellation.

  19. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
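
    The agreement statistics reported above can be reproduced with short routines; as one example, a minimal sketch of unweighted Cohen's κ (the two raters' category judgments below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical judgments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical exposure-category judgments (0-3 scale) from two assessors.
kappa = cohens_kappa([0, 1, 2, 2, 3, 1, 0, 2], [0, 1, 2, 3, 3, 1, 0, 2])
```

    Weighted κ additionally discounts disagreements by how far apart the categories are, which is why the study reports both variants.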

  20. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than those in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an enhancement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
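
    The degree asymmetry described above can be made concrete with a toy directed similarity in which the overlap of two users' item sets is normalized by the degree of the source user only; this illustrates the general idea, not necessarily the exact HDCF weighting:

```python
def directed_similarity(items_u, items_v):
    """Illustrative asymmetric similarity s(u -> v): overlap of the two users'
    item sets normalized by the degree of the *source* user u. A small-degree
    user thus appears more similar to a large-degree one than vice versa."""
    overlap = len(set(items_u) & set(items_v))
    return overlap / len(set(items_u))

small = {1, 2}              # small-degree user (2 rated items)
large = {1, 2, 3, 4, 5, 6}  # large-degree user (6 rated items)
s_small_to_large = directed_similarity(small, large)
s_large_to_small = directed_similarity(large, small)
```

    Because s(small → large) exceeds s(large → small), a symmetric CF algorithm that ignores direction systematically over-recommends the large-degree (mainstream) user's selections, which is the bias the directed variant depresses.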

  1. Achieving closure at Fernald

    SciTech Connect

    Bradburne, John; Patton, Tisha C.

    2001-02-25

    When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and ''business as usual'' has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.

  2. Increasing accuracy and precision of digital image correlation through pattern optimization

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Hochhalter, J. D.; Ruggles, T. J.; Cannon, A. H.

    2017-04-01

    The accuracy and precision of digital image correlation (DIC) are based on three primary components: image acquisition, image analysis, and the subject of the image. Focus on the third component, the image subject, has been relatively limited and primarily concerned with comparing pseudo-random surface patterns. In the current work, a strategy is proposed for the creation of optimal DIC patterns. In this strategy, a pattern quality metric is developed as a combination of quality metrics from the literature rather than optimization based on any single one of them. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. Specifically, sum of square of subset intensity gradients (SSSIG) was found to be the metric most strongly correlated to DIC accuracy and thus is the main component of the newly proposed pattern quality metric. A term related to the secondary auto-correlation peak height is also part of the proposed quality metric which effectively acts as a constraint upon SSSIG, ensuring that a regular (e.g., checkerboard-type) pattern is not achieved. The combined pattern quality metric is used to generate a pattern that was on average 11.6% more accurate than a randomly generated pattern in a suite of numerical experiments. Furthermore, physical experiments were performed which confirm that there is indeed improvement of a similar magnitude in DIC measurements for the optimized pattern compared to a random pattern.
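
    The SSSIG metric referenced above can be sketched as a sum of squared intensity gradients over a subset; definitions in the literature vary in detail, so the version below (central differences in both directions, interior pixels only) is an illustrative assumption:

```python
def sssig(subset):
    """Sum of square of subset intensity gradients (SSSIG) for a 2D grayscale
    subset, using central finite differences on interior pixels only."""
    rows, cols = len(subset), len(subset[0])
    total = 0.0
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (subset[i][j + 1] - subset[i][j - 1]) / 2.0
            gy = (subset[i + 1][j] - subset[i - 1][j]) / 2.0
            total += gx * gx + gy * gy
    return total

# A featureless subset scores zero; any intensity variation scores higher.
flat = sssig([[5.0] * 5 for _ in range(5)])
ramp = sssig([[float(j) for j in range(5)] for _ in range(5)])
```

    Higher SSSIG generally correlates with lower DIC matching error, which is why the combined metric constrains it with an auto-correlation term rather than maximizing it alone (a checkerboard maximizes SSSIG but creates ambiguous matches).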

  3. Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures

    ERIC Educational Resources Information Center

    Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John

    2013-01-01

    The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…

  4. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. The experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy.
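
    The atom-correspondence step can be illustrated with a brute-force search over permutations that minimizes RMSD; the Hungarian assignment mentioned in the abstract solves the same matching problem in polynomial time. The 2D coordinates below are hypothetical, and the rigid-alignment estimation is omitted for brevity:

```python
from itertools import permutations
import math

def best_atom_matching(coords_a, coords_b):
    """Exhaustively search atom correspondences between two small molecules
    (same atom count) and return (matching, rmsd) with the lowest RMSD.
    Brute force stands in for the Hungarian assignment used at scale."""
    n = len(coords_a)
    best = (None, float("inf"))
    for perm in permutations(range(n)):
        rmsd = math.sqrt(sum(
            (ax - coords_b[p][0]) ** 2 + (ay - coords_b[p][1]) ** 2
            for (ax, ay), p in zip(coords_a, perm)) / n)
        if rmsd < best[1]:
            best = (perm, rmsd)
    return best

# Hypothetical 2D coordinates: molecule B is molecule A with atoms reordered.
match, rmsd = best_atom_matching([(0, 0), (1, 0), (0, 1)],
                                 [(1, 0), (0, 1), (0, 0)])
```

    The brute force costs O(n!), which is exactly why pruning candidates with local rotation-invariant descriptors, as the method does, matters before the assignment step.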

  5. DDE Transposases: Structural Similarity and Diversity

    PubMed Central

    Nesmelova, Irina V.; Hackett, Perry B.

    2010-01-01

    DNA transposons are mobile DNA elements that can move from one DNA molecule to another and thereby deliver genetic information into human chromosomes in order to confer a new function or replace a defective gene. This process requires a transposase enzyme. During transposition DD[E/D]-transposases undergo a series of conformational changes. We summarize the structural features of DD[E/D]-transposases for which three-dimensional structures are available and that relate to transposases, which are being developed for use in mammalian cells. Similar to other members of the polynucleotidyl transferase family, the catalytic domains of DD[E/D]-transposases share a common feature: an RNase H-like fold that draws three catalytically active residues, the DDE motif, into close proximity. Beyond this fold, the structures of catalytic domains vary considerably, and the DD[E/D]-transposases display marked structural diversity within their DNA-binding domains. Yet despite such structural variability, essentially the same end result is achieved. PMID:20615441

  6. Effect of transportation and storage using sorbent tubes of exhaled breath samples on diagnostic accuracy of electronic nose analysis.

    PubMed

    van der Schee, M P; Fens, N; Brinkman, P; Bos, L D J; Angelo, M D; Nijsen, T M E; Raabe, R; Knobel, H H; Vink, T J; Sterk, P J

    2013-03-01

    Many (multi-centre) breath-analysis studies require transport and storage of samples. We aimed to test the effect of transportation and storage of exhaled breath samples in sorbent tubes on the diagnostic accuracy of eNose and GC-MS analysis. As a reference standard for diagnostic accuracy, breath samples of asthmatic patients and healthy controls were analysed by three eNose devices. Samples were analysed by GC-MS and eNose after 1, 7 and 14 days of transportation and storage using sorbent tubes. The diagnostic accuracy for eNose and GC-MS after storage was compared to the reference standard. As a validation, the stability was assessed of 15 compounds known to be related to asthma, abundant in breath or related to sampling and analysis. The reference test discriminated asthma and healthy controls with a median AUC (range) of 0.77 (0.72-0.76). Similar accuracies were achieved at t1 (AUC eNose 0.78; GC-MS 0.84), t7 (AUC eNose 0.76; GC-MS 0.79) and t14 (AUC eNose 0.83; GC-MS 0.84). The GC-MS analysis showed adequate stability for all 15 compounds during the 14-day period. Short-term transportation and storage of breath samples using sorbent tubes does not influence the diagnostic accuracy of discrimination between asthma and health by eNose and GC-MS.
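
    The AUC values reported above can be computed without constructing an ROC curve via the Mann-Whitney interpretation: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (scores are hypothetical):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly,
    with ties counting half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical discriminant scores for asthma vs. healthy breath samples.
a = auc([0.9, 0.8, 0.6, 0.55], [0.7, 0.5, 0.4, 0.3])
```

    An AUC of 0.5 corresponds to chance-level discrimination, so the study's values around 0.77-0.84 indicate moderate diagnostic accuracy preserved across storage times.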

  7. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This problem is due to the fact that most of the proposed methods in the literature are usually tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation in terms of accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions with large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.

  8. Achievement Goals and Achievement Emotions: A Meta-Analysis

    ERIC Educational Resources Information Center

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  9. Influence of Mothers' Education on Children's Maths Achievement in Kenya

    ERIC Educational Resources Information Center

    Abuya, Benta A.; Oketch, Moses; Mutisya, Maurice; Ngware, Moses; Ciera, James

    2013-01-01

    Research shows that fathers' level of education predicts achievement of both boys and girls, with significantly greater effect for boys. Similarly, mothers' level of education predicts the achievement of girls but not boys. This study tests the mother-child education achievement hypothesis, by examining the effect of mothers' education on the…

  10. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by a between-subjects factorial design involving accuracy motivation (incentive or no incentive) and peer performance anchor (95%, 55%, or no anchor). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of these findings are discussed.

  11. Melody Alignment and Similarity Metric for Content-Based Music Retrieval

    NASA Astrophysics Data System (ADS)

    Zhu, Yongwei; Kankanhalli, Mohan S.

    2003-01-01

    Music query-by-humming has attracted much research interest recently. It is a challenging problem since the hummed query inevitably contains much variation and inaccuracy. Furthermore, the similarity computation between the query tune and the reference melody is not easy due to the difficulty in ensuring proper alignment: the query tune can be rendered at an unknown speed and is usually an arbitrary subsequence of the target reference melody. Many of the previous methods, which adopt note segmentation and string matching, suffer drastically from errors in the note segmentation, which affect retrieval accuracy and efficiency. Some methods solve the alignment issue by controlling the speed of articulation of queries, which is inconvenient because it forces users to hum along with a metronome. Other techniques introduce arbitrary rescaling in time, but this is computationally very inefficient. In this paper, we introduce a melody alignment technique that addresses the robustness and efficiency issues. We also present a new melody similarity metric, which operates directly on the melody contours of the query data. This approach cleanly separates alignment and similarity measurement in the search process. We show how to robustly and efficiently align the query melody with the reference melodies and how to measure the similarity subsequently. We have carried out extensive experiments. Our melody alignment method can reduce the matching candidates to 1.7% with a 95% correct alignment rate. The overall retrieval system achieved 80% recall in the top-10 rank list. The results demonstrate the robustness and effectiveness of the proposed methods.
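
    The alignment-then-similarity idea can be sketched in a toy form: slide the query's pitch contour along the reference and score each offset with a transposition-invariant (mean-centred) distance. This is an illustrative reduction, not the paper's actual alignment technique or similarity metric.

```python
# Toy contour alignment: slide the query pitch contour along the
# reference and score each offset by mean absolute difference after
# mean-centring (so transposition does not matter). An illustrative
# reduction, not the paper's actual alignment or similarity metric.

def best_alignment(query, reference):
    def centred(seq):
        m = sum(seq) / len(seq)
        return [v - m for v in seq]
    q = centred(query)
    best_off, best_cost = 0, float("inf")
    for off in range(len(reference) - len(query) + 1):
        r = centred(reference[off:off + len(query)])
        cost = sum(abs(a - b) for a, b in zip(q, r)) / len(q)
        if cost < best_cost:
            best_off, best_cost = off, cost
    return best_off, best_cost

reference = [60, 62, 64, 65, 67, 65, 64, 62, 60]   # MIDI pitch contour
query = [69, 71, 72, 74]  # same shape as reference[1:5], transposed up
print(best_alignment(query, reference))
```

    Mean-centring handles the unknown key of the hummed query; handling unknown tempo, as the paper does, additionally requires alignment in time rather than a fixed-length window.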

  12. Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu

    2015-04-01

    Rapid development of unmanned aerial vehicles (UAVs) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified images, point clouds and 3D models). Two different autonomous fixed-wing UAV systems were used to collect the aerial photographs: one a mass-produced commercial UAV system, the other a similar state-of-the-art UAV system. Three study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and freeware software packages, and the performance of each was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. The results achieved when the only georeferencing data originate from the UAV system's on-board GNSS and inertial measurement unit are also investigated. Problems regarding the alignment of certain types of aerial photos (e.g. captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with control measurements made with GNSS on the ground, as well as high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large-scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time consumption while still ensuring the quality and accuracy of the UAV data products. Also the

  13. Conceptual similarity effects on working memory in sentence contexts: testing a theory of anaphora.

    PubMed

    Cowles, H Wind; Garnham, Alan; Simner, Julia

    2010-06-01

    The degree of semantic similarity between an anaphoric noun phrase (e.g., the bird) and its antecedent (e.g., a robin) is known to affect the anaphor resolution process, but the mechanisms that underlie this effect are not known. One proposal (Almor, 1999) is that semantic similarity triggers interference effects in working memory and makes two crucial assumptions: First, semantic similarity impairs working memory just as phonological similarity does (e.g., Baddeley, 1992), and, second, this impairment interferes with processes of sentence comprehension. We tested these assumptions in two experiments that compared recall accuracy between phonologically similar, semantically similar, and control words in sentence contexts. Our results do not provide support for Almor's claims: Phonological overlap decreased recall accuracy in sentence contexts, but semantic similarity did not. These results cast doubt on the idea that semantic interference in working memory is an underlying mechanism in anaphor resolution.

  14. Spacecraft attitude determination accuracy from mission experience

    NASA Astrophysics Data System (ADS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-10-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  15. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  16. Three-dimensional accuracy of different impression techniques for dental implants

    PubMed Central

    Nakhaei, Mohammadreza; Madani, Azam S; Moraditalab, Azizollah; Haghi, Hamidreza Rajati

    2015-01-01

    Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes) and the linear displacement (ΔR) were evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of the TC and TC-Cap techniques (P < 0.001). No significant differences were found between closed and open impression techniques for STL in Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Considering the limitations of this study, the snap-on implant-level impression technique resulted in greater three-dimensional accuracy than the TC and TC-Cap techniques and was similar to the open-tray technique. PMID:26604956
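
    The linear displacement ΔR above combines the per-axis deviations of an implant analog. Assuming the usual Euclidean combination of Δx, Δy and Δz (the abstract does not spell out the formula), a one-line sketch with made-up deviations:

```python
import math

# Hypothetical per-axis deviations of one implant analog, combined into
# the linear displacement via a Euclidean norm. The formula is an
# assumption; the abstract only names the quantity ΔR.

def linear_displacement(dx, dy, dz):
    return math.sqrt(dx ** 2 + dy ** 2 + dz ** 2)

print(linear_displacement(0.03, 0.04, 0.12))   # deviations in mm
```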

  17. Investigation of psychophysical similarity measures for selection of similar images in the diagnosis of clustered microcalcifications on mammograms.

    PubMed

    Muramatsu, Chisako; Li, Qiang; Schmidt, Robert; Shiraishi, Junji; Doi, Kunio

    2008-12-01

    The presentation of images with lesions of known pathology that are similar to an unknown lesion may be helpful to radiologists in the diagnosis of challenging cases for improving the diagnostic accuracy and also for reducing variation among different radiologists. The authors have been developing a computerized scheme for automatically selecting similar images with clustered microcalcifications on mammograms from a large database. For similar images to be useful, they must be similar from the point of view of the diagnosing radiologists. In order to select such images, subjective similarity ratings were obtained for a number of pairs of clustered microcalcifications by breast radiologists for establishment of a "gold standard" of image similarity, and the gold standard was employed for determination and evaluation of the selection of similar images. The images used in this study were obtained from the Digital Database for Screening Mammography developed by the University of South Florida. The subjective similarity ratings for 300 pairs of images with clustered microcalcifications were determined by ten breast radiologists. The authors determined a number of image features which represent the characteristics of clustered microcalcifications that radiologists would use in their diagnosis. For determination of objective similarity measures, an artificial neural network (ANN) was employed. The ANN was trained with the average subjective similarity ratings as teacher and selected image features as input data. The ANN was trained to learn the relationship between the image features and the radiologists' similarity ratings; therefore, once the training was completed, the ANN was able to determine the similarity, called a psychophysical similarity measure, which was expected to be close to radiologists' impressions, for an unknown pair of clustered microcalcifications. By use of a leave-one-out test method, the best combination of features was selected. The correlation

  18. Investigation of psychophysical similarity measures for selection of similar images in the diagnosis of clustered microcalcifications on mammograms

    SciTech Connect

    Muramatsu, Chisako; Li Qiang; Schmidt, Robert; Shiraishi, Junji; Doi, Kunio

    2008-12-15

    The presentation of images with lesions of known pathology that are similar to an unknown lesion may be helpful to radiologists in the diagnosis of challenging cases for improving the diagnostic accuracy and also for reducing variation among different radiologists. The authors have been developing a computerized scheme for automatically selecting similar images with clustered microcalcifications on mammograms from a large database. For similar images to be useful, they must be similar from the point of view of the diagnosing radiologists. In order to select such images, subjective similarity ratings were obtained for a number of pairs of clustered microcalcifications by breast radiologists for establishment of a "gold standard" of image similarity, and the gold standard was employed for determination and evaluation of the selection of similar images. The images used in this study were obtained from the Digital Database for Screening Mammography developed by the University of South Florida. The subjective similarity ratings for 300 pairs of images with clustered microcalcifications were determined by ten breast radiologists. The authors determined a number of image features which represent the characteristics of clustered microcalcifications that radiologists would use in their diagnosis. For determination of objective similarity measures, an artificial neural network (ANN) was employed. The ANN was trained with the average subjective similarity ratings as teacher and selected image features as input data. The ANN was trained to learn the relationship between the image features and the radiologists' similarity ratings; therefore, once the training was completed, the ANN was able to determine the similarity, called a psychophysical similarity measure, which was expected to be close to radiologists' impressions, for an unknown pair of clustered microcalcifications. By use of a leave-one-out test method, the best combination of features was selected. The correlation

  19. An Experimental Study on the Iso-Content-Based Angle Similarity Measure.

    ERIC Educational Resources Information Center

    Zhang, Jin; Rasmussen, Edie M.

    2002-01-01

    Retrieval performance of the iso-content-based angle similarity measure within the angle, distance, conjunction, disjunction, and ellipse retrieval models is compared with retrieval performance of the distance similarity measure and the angle similarity measure. Results show the iso-content-based angle similarity measure achieves satisfactory…

  20. Perceptual similarity affects the learning curve (but not necessarily learning).

    PubMed

    Wifall, Tim; McMurray, Bob; Hazeltine, Eliot

    2014-02-01

    What role does item similarity play in motor skill acquisition? To examine this question, we used a modified version of the chord learning task (Seibel, 1963) that entails producing simultaneous finger key presses, similar to playing a chord on a piano. In Experiment 1, difficulty, as indexed by response time (RT) to a particular chord on the first session, was held constant, and chords that were similar to other chords had longer RTs after practice than dissimilar chords. In Experiment 2, we used chords that produced different initial RTs to show that similarity affected asymptotic RT rather than the size of RT decrement achieved with practice. In Experiment 3, we eliminated differences in perceptual similarity by using Chinese characters for stimuli while retaining differences in motoric similarity, which resulted in nearly identical asymptotes for similar and dissimilar chords. Thus, the density effect observed in Experiments 1 and 2 appears to stem from competition triggered by similar stimuli. Because performance differences were immediately re-established when stimulus similarity was introduced in Experiment 3 during transfer sessions, competition appears to emerge among learned, central representations that can be coactivated by multiple stimuli.

  1. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a…

  2. SSL: Signal Similarity-Based Localization for Ocean Sensor Networks

    PubMed Central

    Chen, Pengpeng; Ma, Honglu; Gao, Shouwan; Huang, Yan

    2015-01-01

    Nowadays, wireless sensor networks are often deployed on the sea surface for ocean scientific monitoring. One of the important challenges is to localize the nodes’ positions. Existing localization schemes can be roughly divided into two types: range-based and range-free. The range-based localization approaches heavily depend on extra hardware capabilities, while range-free ones often suffer from poor accuracy and low scalability, falling short of the demands of practical ocean monitoring applications. In response to these limitations, this paper proposes a novel signal similarity-based localization (SSL) technology, which localizes the nodes’ positions by fully utilizing the similarity of received signal strength and the open-air characteristics of the sea surface. In the localization process, we first estimate the relative distance between neighboring nodes by comparing the similarity of received signal strength and then calculate the relative distances for non-neighboring nodes with a shortest-path algorithm. After that, the relative relation map of the whole network can be obtained. Given at least three anchors, the physical locations of nodes can be finally determined based on multi-dimensional scaling (MDS). The design is evaluated by two types of ocean experiments: a zonal network and a non-regular network using 28 nodes. Results show that the proposed design improves localization accuracy compared with typical connectivity-based approaches, and also confirm its effectiveness for large-scale ocean sensor networks. PMID:26610520
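
    The final step described above, turning a pairwise relative-distance map into node coordinates, is classically done with metric MDS. A minimal sketch of classical MDS on an exact distance matrix (the anchor-based alignment to absolute positions is omitted):

```python
import numpy as np

# Classical (metric) MDS: recover relative 2D coordinates from a matrix
# of pairwise distances, as in the relative-map stage described above.
# The anchor-based alignment to absolute positions is omitted.

def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]         # keep the largest eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Four nodes on a unit square; exact distances here, noisy in practice.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
# The recovered map reproduces the distances up to rotation/reflection:
print(np.linalg.norm(X[:, None] - X[None, :], axis=-1).round(6))
```

    With at least three anchors of known position, the remaining rotation, reflection and translation ambiguity of the relative map can be resolved by a rigid alignment to the anchors.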

  3. Sensitivity to Phonological Similarity Within and Across Languages

    PubMed Central

    Blumenfeld, Henrike K.; Boukrina, Olga V.

    2009-01-01

    The influence of phonological similarity on bilingual language processing was examined within and across languages in three experiments. Phonological similarity was manipulated within a language by varying neighborhood density, and across languages by varying extent of cross-linguistic overlap between native and non-native languages. In Experiment 1, speed and accuracy of bilinguals’ picture naming were susceptible to phonological neighborhood density in both the first and the second language. In Experiment 2, eye-movement patterns indicated that the time-course of language activation varied across phonological neighborhood densities and across native/non-native language status. In Experiment 3, speed and accuracy of bilingual performance in an auditory lexical decision task were influenced by degree of cross-linguistic phonological overlap. Together, the three experiments confirm that bilinguals are sensitive to phonological similarity within and across languages and suggest that this sensitivity is asymmetrical across native and non-native languages and varies along the timecourse of word processing. PMID:18041587

  4. Beyond Literal Similarity. Technical Report No. 105.

    ERIC Educational Resources Information Center

    Ortony, Andrew

    Hitherto, theories of similarity have restricted themselves to judgments of what might be called literal similarity. A central thesis of this paper is that a complete account of similarity needs also to be sensitive to nonliteralness, or metaphoricity, an aspect of similarity statements that is most evident in similes, but that actually underlies…

  5. Does aging impair first impression accuracy? Differentiating emotion recognition from complex social inferences.

    PubMed

    Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini

    2014-09-01

    Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging.

  6. Comparison of similarity measures for rigid-body CT/Dual X-ray image registrations.

    PubMed

    Kim, Jinkoo; Li, Shidong; Pradhan, Deepak; Hammoud, Rabih; Chen, Qing; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho; Movsas, Benjamin

    2007-08-01

    A set of experiments was conducted to evaluate six similarity measures for intensity-based rigid-body 3D/2D image registration. A similarity measure is an index that quantifies the similarity between a digitally reconstructed radiograph (DRR) and an x-ray planar image. The registration is accomplished by maximizing the sum of the similarity measures between biplane x-ray images and the corresponding DRRs in an iterative fashion. We have evaluated the accuracy and attraction ranges of the registrations using six different similarity measures on phantom experiments for head, thorax, and pelvis. The images were acquired using the Varian Medical Systems On-Board Imager. Our results indicated that normalized cross correlation and entropy of difference showed a wide attraction range (62 deg and 83 mm mean attraction range, omega(mean)), but the worst accuracy (4.2 mm maximum error, e(max)). The gradient-based similarity measures, gradient correlation and gradient difference, and the pattern intensity showed sub-millimeter accuracy, but narrow attraction ranges (omega(mean) = 29 deg, 31 mm). Mutual information fell between these two groups (e(max) = 2.5 mm, omega(mean) = 48 deg, 52 mm). On data from 120 x-ray pairs from eight IRB-approved prostate patients, the gradient difference showed the best accuracy. In clinical applications, registrations starting with mutual information followed by gradient difference may provide the best accuracy and the most robustness.
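
    Of the six measures compared, normalized cross correlation is the simplest to write down: correlate the mean-centred intensities of the two images. A minimal sketch with synthetic arrays standing in for the DRR and the x-ray image (the iterative loop that maximises this score over pose parameters is not shown):

```python
import numpy as np

# Normalized cross correlation (NCC) between two images: correlation of
# mean-centred intensities. Synthetic arrays stand in for the DRR and
# the x-ray image; the pose-optimisation loop is not shown.

def ncc(a, b):
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

img = np.arange(16.0).reshape(4, 4)
print(ncc(img, 2 * img + 5))   # linearly related images: NCC is 1.0
```

    The mean-centring and normalisation make NCC insensitive to global brightness and contrast differences between the DRR and the x-ray, which helps explain its wide attraction range.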

  7. On the geolocation accuracy of COSMO-SkyMed products

    NASA Astrophysics Data System (ADS)

    Nitti, Davide O.; Nutricato, Raffaele; Lorusso, Rino; Lombardi, Nunzia; Bovenga, Fabio; Bruno, Maria F.; Chiaradia, Maria T.; Milillo, Giovanni

    2015-10-01

    Accurate geolocation of SAR data is nowadays strongly required because of the increasing number of high resolution SAR sensors available, for instance from the TerraSAR-X / TanDEM-X and COSMO-SkyMed space-borne missions. Both stripmap and spotlight acquisition modes provide metric to sub-metric spatial resolution, which demands the ability to ensure a geolocation accuracy of the same order of magnitude. Geocoding quality depends on several factors, in particular on knowledge of the actual satellite position along the orbit and of the delay introduced by the additional path induced by changes in the refractivity index due to the presence of the atmosphere (the so-called Atmospheric Path Delay, or APD). No definitive results have yet been reported in the scientific literature concerning the best performance achievable by the COSMO-SkyMed constellation in terms of geolocation accuracy. Preliminary studies have shown that sub-pixel geolocation accuracies are hardly achievable with COSMO-SkyMed data. The present work aims to inspect the origins of the geolocation error sources in COSMO-SkyMed Single-look Complex Slant (SCS) products and to investigate possible strategies for their compensation or mitigation. Five different test sites have been selected in Italy and Argentina, where up to 30 corner reflectors are installed, pointing towards ascending or descending passes. Experimental results are presented and discussed.

  8. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
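
    The scale-drift fix described above exploits the fact that monocular SFM is metric only up to an unknown scale: if the camera's true height above the ground plane is known, the ratio of known to estimated height rescales the whole trajectory. A toy sketch with invented numbers (the paper's actual per-frame cue-combination machinery is far richer):

```python
# Monocular SFM recovers translation only up to scale; the known camera
# mounting height above the detected ground plane fixes the metric
# scale. All numbers here are illustrative, not from the paper.

KNOWN_CAM_HEIGHT_M = 1.5            # assumed measured mounting height

def metric_scale(estimated_ground_height):
    """Factor converting SFM units to metres."""
    return KNOWN_CAM_HEIGHT_M / estimated_ground_height

# Suppose SFM places the ground plane 0.5 SFM-units below the camera:
s = metric_scale(0.5)
translation_sfm = [0.2, 0.0, 1.0]                 # arbitrary SFM units
translation_m = [s * t for t in translation_sfm]  # rescaled to metres
print(s, translation_m)
```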

  9. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    PubMed

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods only use 2D chemical structures and protein sequences to measure the similarity of drugs and targets respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solutions to handle possibly missing interactions which affect the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/∼liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html.
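
    The neighbour-based idea, scoring a candidate pair by the known interactions of similar drugs, can be sketched as a similarity-weighted vote. This is a generic illustration of the approach the paper builds on, not the authors' enhanced measures or super-target clustering; all names and numbers below are hypothetical.

```python
# Similarity-weighted neighbour vote for drug-target interaction:
# score a candidate pair by the known interactions of the drug's most
# similar drugs. Generic illustration only; the drug/target names,
# similarity values and scoring rule are all hypothetical.

def predict_score(drug, target, similarity, interactions):
    """similarity[drug] maps neighbour drugs to similarity weights;
    interactions[d] is the set of targets known to interact with d."""
    num = den = 0.0
    for other, sim in similarity[drug].items():
        num += sim * (target in interactions[other])
        den += sim
    return num / den if den else 0.0

similarity = {"d_new": {"d1": 0.9, "d2": 0.3}}
interactions = {"d1": {"t1", "t2"}, "d2": {"t3"}}
print(predict_score("d_new", "t1", similarity, interactions))
```

    The "missing interaction" problem discussed above is visible even in this toy: an unrecorded true interaction in `interactions` silently lowers the score, which is what the super-target grouping is designed to mitigate.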

  10. A Comparison of Accelerometer Accuracy in Older Adults.

    PubMed

    Phillips, Lorraine J; Petroski, Gregory F; Markis, Natalie E

    2015-01-01

    Older adults' gait disorders present challenges for accurate activity monitoring. The current study compared the accuracy of accelerometer-detected steps against hand-tallied steps in 50 residential care/assisted living residents. Participants completed two walking trials wearing a Fitbit® Tracker and waist-, wrist-, and ankle-mounted Actigraph GT1M devices. Agreement between accelerometer and observed counts was calculated using concordance correlation coefficients (CCCs), accelerometer-to-observed count ratios, accelerometer and observed count differences, and Bland-Altman plots. Classification and Regression Tree analysis identified minimum gait speed thresholds needed to achieve accelerometer accuracy ≥0.80. Participants' mean age was 84.2 years and mean gait speed was 0.64 m/s. All accelerometers underestimated true steps. Only the ankle-mounted GT1M demonstrated positive agreement with observed counts (CCC = 0.205). Thresholds for 0.80 accuracy were gait speeds ≥0.56 m/s for the Fitbit and ≥0.71 m/s for the ankle-mounted GT1M. Gait speed and accelerometer placement affected activity monitor accuracy in older adults.
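
    The agreement statistic above is Lin's concordance correlation coefficient, which penalises both weak correlation and systematic offset between device and observed counts. A minimal sketch with invented step counts:

```python
import numpy as np

# Lin's concordance correlation coefficient (CCC): agreement between
# two raters/devices, penalising both weak correlation and systematic
# offset. Step counts below are invented for illustration.

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()           # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

observed = [100, 120, 90, 110, 105]   # hand-tallied steps
device = [80, 100, 70, 95, 85]        # accelerometer undercounting
print(round(ccc(observed, device), 3))
```

    Note how a consistent undercount depresses CCC even when the two series are perfectly correlated; ordinary Pearson correlation would miss that bias.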

  11. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    SciTech Connect

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-03-15

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role of one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and 6-year client follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled across the radiographers, screen-reading accuracy was estimated as 82.2% sensitivity and 89.5% specificity. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting achieved adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
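
    Sensitivity and specificity as used above reduce to simple ratios over confusion counts once each read is compared with the pathology/follow-up gold standard; a sketch with fabricated counts for a hypothetical 500-image test set:

```python
# Sensitivity and specificity as ratios over confusion-matrix counts.
# The counts below are fabricated for a hypothetical 500-image test
# set, not taken from the study.

def sens_spec(tp, fn, tn, fp):
    """(sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# 100 cancers, of which the reader recalls 85; 42 of 400 normals are
# falsely recalled:
sensitivity, specificity = sens_spec(tp=85, fn=15, tn=358, fp=42)
print(sensitivity, specificity)
```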

  12. Dosimetry and mechanical accuracy of the first rotating gamma system installed in North America.

    PubMed

    Kubo, Hideo D; Araki, Fujio

    2002-11-01

    The purpose of this paper is to present the dosimetry and mechanical accuracy of the first rotating gamma system (RGS) installed in North America for stereotactic radiosurgery. The data were obtained during the installation, acceptance test procedure, and commissioning of the unit. The RGS unit installed at UC Davis Cancer Center (RGSu) has modified source and collimator bodies relative to the earlier Chinese version (RGSc); the differences between the two systems are presented. The absolute dose at the focal point was measured in a 16-cm-diameter acrylic phantom using a small-volume chamber calibrated at the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UW-ADCL); the dose in acrylic was then converted to dose in water. Output factors for each of the four collimator sizes (4, 8, 14, and 18 mm) were measured with (1) a small-volume chamber and (2) approximately 3.0 mm x 3.0 mm x 1.0 mm TLD chips in the same acrylic phantom. GafChromic films were used for the dose-profile, collimator-output-factor, and mechanical/radiation field isocentricity measurements. The TLD chips were processed in-house, whereas the films were processed both at the UW-ADCL and in-house. Timer error, timer accuracy, and timer linearity were also determined. The dose profiles were found to be similar between RGSc and RGSu. The 4 mm collimator output factor of the RGSu was approximately 0.6, similar to that of the RGSc, compared with 0.8 reported for a Leksell Model U Gamma Knife. The mechanical/radiation field isocentricity of RGSc and RGSu is similar and within 0.3 mm in both the X and Y directions. In the Z direction, the beam center of the RGSu is shifted toward the sources by 0.75 mm from the mechanical isocenter, whereas no data are available for RGSc. Little dosimetric difference is found between RGSu and RGSc. It is reported that RGSc has the same dosimetric and mechanical…

  13. Semantic similarity measure in biomedical domain leverage web search engine.

    PubMed

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

    Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research on semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q, and P AND Q. Moreover, we propose a novel approach to computing semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated using support vector machines, to leverage the robustness of the semantic similarity measures. Experimental results achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
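    The abstract defines similarity scores over the page counts returned for P, Q, and P AND Q. One classic score in this family, sketched here with a Jaccard-style normalization (the paper's exact scores and their SVM integration are not reproduced), is:

```python
def web_jaccard(count_p, count_q, count_pq, min_cooccurrence=5):
    """Jaccard-style similarity from search-engine page counts.

    count_p, count_q: hits for each term alone; count_pq: hits for "P AND Q".
    Very small co-occurrence counts are treated as noise and scored 0.
    """
    if count_pq < min_cooccurrence:
        return 0.0
    return count_pq / (count_p + count_q - count_pq)
```

The `min_cooccurrence` cutoff is a common guard in page-count measures, since a handful of accidental co-occurrences would otherwise yield spurious similarity.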

  14. Improving structural similarity based virtual screening using background knowledge

    PubMed Central

    2013-01-01

    Background: Virtual screening in the form of similarity rankings is often applied in the early drug discovery process to rank and prioritize compounds from a database. This similarity ranking can be achieved with structural similarity measures. However, their general nature can lead to insufficient performance in some application cases. In this paper, we provide a link between ranking-based virtual screening and fragment-based data mining methods. The inclusion of binding-relevant background knowledge into a structural similarity measure improves the quality of the similarity rankings. This background knowledge, in the form of binding-relevant substructures, can be derived either by hand selection or by automated fragment-based data mining methods. Results: In virtual screening experiments we show that both variants of our approach clearly improve enrichment factors: extending the structural similarity measure with background knowledge in the form of a hand-selected relevant substructure, or extending it with background knowledge derived by data mining methods. Conclusion: Our study shows that adding binding-relevant background knowledge can lead to significantly improved similarity rankings in virtual screening, and that even basic data mining approaches can produce competitive results, making hand selection of the background knowledge less crucial. This is especially important in drug discovery and development projects where no receptor structure is available or, more frequently, no verified binding mode is known, and mostly ligand-based approaches can be applied to generate hit compounds. PMID:24341870

  15. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  16. [On the scheme of scientific accuracy in the clinical specialties].

    PubMed

    Ortega Calvo, M

    2006-11-01

    Will medical specialties become sciences in the future? Yes, progressively they will. Accuracy in the clinical specialties will differ in the future because of formal-logic mathematics, advances in quantum physics, and applications of relativity theory. Evidence-based medicine is now helping the clinical specialties toward scientific accuracy by way of decision theory.

  17. Sound source localization identification accuracy: bandwidth dependencies.

    PubMed

    Yost, William A; Zhong, Xuan

    2014-11-01

    Sound source localization accuracy using a sound source identification task was measured in the front, right quarter of the azimuth plane as rms (root-mean-square) error (degrees) for stimulus conditions in which the bandwidth (1/20 to 2 octaves wide) and center frequency (250, 2000, 4000 Hz) of 200-ms noise bursts were varied. Tones of different frequencies (250, 2000, 4000 Hz) were also used. As stimulus bandwidth increases, there is an increase in sound source localization identification accuracy (i.e., rms error decreases). Wideband stimuli (>1 octave wide) produce best sound source localization accuracy (~6°-7° rms error), and localization accuracy for these wideband noise stimuli does not depend on center frequency. For narrow bandwidths (<1 octave) and tonal stimuli, accuracy does depend on center frequency such that highest accuracy is obtained for low-frequency stimuli (centered on 250 Hz), worse accuracy for mid-frequency stimuli (centered on 2000 Hz), and intermediate accuracy for high-frequency stimuli (centered on 4000 Hz).
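    The rms error metric used above can be computed directly from paired response and target azimuths. A minimal sketch (the example angles are hypothetical):

```python
import math

def rms_error_deg(responses, targets):
    """Root-mean-square localization error (degrees) over paired trials."""
    squared_errors = [(r - t) ** 2 for r, t in zip(responses, targets)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Two hypothetical trials: responded 10° and 20° to targets at 4° and 28°
err = rms_error_deg([10.0, 20.0], [4.0, 28.0])
```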

  18. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  19. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in deception-detection experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency through strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  20. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  1. Evaluating the accuracy of selenodesic reference grids

    NASA Technical Reports Server (NTRS)

    Koptev, A. A.

    1974-01-01

    Estimates were made of the accuracy of reference point grids using the technique of calculating the errors from theoretical analysis. Factors taken into consideration were: telescope accuracy, number of photographs, and libration amplitude. To solve the problem, formulas were used for the relationship between the coordinates of lunar surface points and their images on the photograph.

  2. Accuracy of multi-look geo-coding

    NASA Astrophysics Data System (ADS)

    Weidaw, E. M.; Roth, M. W.; Brown, M. Z.; Scheck, A. E.

    2010-04-01

    Accurate geo-location (geo-coding) of imagery taken at long range is a major challenge. Although GPS can supply an accurate sensor position, the hardware required for precision pointing can be very costly. Roth et al. (2005) showed that, because of the accuracy of lidar range data, a tri-lateration method (called Multi-Look Lidar or Multi-Look Geo-Coding) can achieve very accurate geo-coding at long range and low cost by using data-driven processing. This paper presents extensive flight-test results using a commercial airborne lidar. Because the tri-lateration method produces a large number of control points, the resulting accuracy of the geo-coded lidar data is somewhat better than that predicted for a single control point, owing to control-point averaging.

  3. Calibration, linearity, precision, and accuracy of a PIXE system

    NASA Astrophysics Data System (ADS)

    Richter, F.-W.; Wätjen, U.

    1984-04-01

    An accuracy and precision of better than 10% each can be achieved with PIXE analysis, with both thin and thick samples. Measures we took to obtain these values for routine analyses in the Marburg PIXE system are discussed. The advantages of an experimental calibration procedure, using thin evaporated standard foils, over the "absolute" method of employing X-ray production cross sections are outlined. The importance of X-ray line intensity ratios, even of weak transitions, for the accurate analysis of interfering elements of low mass content is demonstrated for the Se Kα-Pb Lη line overlap. Matrix effects including secondary excitation can be corrected for very well without degrading accuracy under certain conditions.

  4. High-accuracy user identification using EEG biometrics.

    PubMed

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
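    The abstract compares several dimensionality-reduction and classification combinations without fully specifying them; as an illustrative stand-in only, a nearest-centroid classifier over per-subject mean ERP feature vectors shows the identification step (subject names and feature vectors below are hypothetical):

```python
def identify_user(centroids, epoch):
    """Nearest-centroid identification: return the subject whose mean
    feature vector is closest (squared Euclidean distance) to the epoch."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda subject: sq_dist(centroids[subject], epoch))

# Hypothetical 2-D feature centroids learned from training epochs
centroids = {"subj_a": (1.0, 0.0), "subj_b": (0.0, 1.0)}
```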

  5. Simultaneously improving the sensitivity and absolute accuracy of CPT magnetometer.

    PubMed

    Liang, Shang-Qing; Yang, Guo-Qing; Xu, Yun-Fei; Lin, Qiang; Liu, Zhi-Heng; Chen, Zheng-Xiang

    2014-03-24

    A new method to simultaneously improve the sensitivity and absolute accuracy of a coherent population trapping (CPT) magnetometer, based on differential detection, is presented. Two modulated optical beams with orthogonal circular polarizations are applied; in one of them, two magnetic resonances are excited simultaneously by modulating a 3.4 GHz microwave at the Larmor frequency. When a microwave frequency shift is introduced, the difference between the powers transmitted through the cell in each beam shows a low-noise resonance. A sensitivity of 2 pT/√Hz at 10 Hz is achieved. Meanwhile, an absolute accuracy of ±0.5 nT is realized for magnetic fields ranging from 20,000 nT to 100,000 nT.

  6. The Role of Feedback on Studying, Achievement and Calibration.

    ERIC Educational Resources Information Center

    Chu, Stephanie T. L.; Jamieson-Noel, Dianne L.; Winne, Philip H.

    One set of hypotheses examined in this study was that various types of feedback (outcome, process, and corrective) supply different information about performance and have different effects on studying processes and on achievement. Another set of hypotheses concerned students' calibration, their accuracy in predicting and postdicting achievement…

  7. Lifting Minority Achievement: Complex Answers. The Achievement Gap.

    ERIC Educational Resources Information Center

    Viadero, Debra; Johnston, Robert C.

    2000-01-01

    This fourth in a four-part series on why academic achievement gaps exist describes the Minority Achievement Committee scholars program at Shaker Heights High School in Cleveland, Ohio, a powerful antidote to the achievement gap between minority and white and Asian American students. It explains the need to break down stereotypes about academic…

  8. Achievement Motivation of Women: Effects of Achievement and Affiliation Arousal.

    ERIC Educational Resources Information Center

    Gama, Elizabeth Maria Pinheiro

    1985-01-01

    Assigned 139 Brazilian women to neutral, affiliation arousal, and achievement arousal conditions based on their levels of achievement (Ach) and affiliative (Aff) needs. Results of story analyses revealed that achievement arousal increased scores of high Ach subjects and that high Aff subjects obtained higher scores than low Aff subjects. (BL)

  9. Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement

    ERIC Educational Resources Information Center

    Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina

    2015-01-01

    The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…

  10. The Impact of Reading Achievement on Overall Academic Achievement

    ERIC Educational Resources Information Center

    Churchwell, Dawn Earheart

    2009-01-01

    This study examined the relationship between reading achievement and achievement in other subject areas. The purpose of this study was to determine if there was a correlation between reading scores as measured by the Standardized Test for the Assessment of Reading (STAR) and academic achievement in language arts, math, science, and social studies…

  11. Perceptual similarity in visual search for multiple targets.

    PubMed

    Gorbunova, Elena S

    2017-02-01

    Visual search for multiple targets can cause errors called subsequent search misses (SSM): a decrease in accuracy at detecting a second target after a first target has been found. One possible explanation of SSM errors is perceptual set: after the first target has been found, subjects become biased toward perceptually similar targets, and are therefore more likely to find targets that resemble the first and less likely to find targets that are perceptually dissimilar. This study investigated the role of perceptual similarity in SSM errors. The search array in each trial consisted of 20 stimuli (ellipses and crosses, black and white, small and big, oriented horizontally and vertically) and could contain one, two or no targets. When two targets were present, they could share two, three or four features (in the last case the targets were identical). The error rate decreased as the similarity between the targets increased. These results support the role of perceptual similarity and have implications for the perceptual-set theory.

  12. Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories

    NASA Technical Reports Server (NTRS)

    Kay-Bunnell, Linda

    2004-01-01

    The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.

  13. Accuracy of rainfall measurement for scales of hydrological interest

    NASA Astrophysics Data System (ADS)

    Wood, S. J.; Jones, D. A.; Moore, R. J.

    The dense network of 49 raingauges over the 135 km2 Brue catchment in Somerset, England is used to examine the accuracy of rainfall estimates obtained from raingauges and from weather radar. Methods for data quality control and classification of precipitation types are first described. A super-dense network comprising eight gauges within a 2 km grid square is employed to obtain a "true value" of rainfall against which the 2 km radar grid and a single "typical gauge" estimate can be compared. Accuracy is assessed as a function of rainfall intensity, for different periods of time-integration (15 minutes, 1 hour and 1 day) and for two 8-gauge networks in areas of low and high relief. In a similar way, the catchment gauge network is used to provide the "true catchment rainfall", and the accuracies of a radar estimate (an area-weighted average of radar pixel values) and of a single "typical gauge" estimate of catchment rainfall are evaluated as functions of rainfall intensity. A single gauge gives a standard error of estimate for rainfall in a 2 km square and over the catchment of 33% and 65% respectively, at rain rates of 4 mm in 15 minutes. Radar data at 2 km resolution give corresponding errors of 50% and 55%. This illustrates the benefit of using radar when estimating catchment-scale rainfall. A companion paper (Wood et al., 2000) considers the accuracy of rainfall estimates obtained using raingauge and radar in combination.

  14. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  15. The limitations of wind measurement accuracy for balloon systems

    NASA Technical Reports Server (NTRS)

    Luers, J. K.; Macarthur, C. D.

    1974-01-01

    The error in horizontal wind field measurements as computed from the trajectory of balloons with linear and quadratic rise rates (as functions of altitude) has been derived. Balloon trajectories through light, moderate, and severe wind fields have been considered. Figures are presented which show the wind error vs altitude for various rise rates in each wind field, assuming linear smoothing of the trajectory data. The rise rate profile of the Jimsphere is analyzed as a special case. The results and figures presented are useful in determining the ultimate capability of rising balloon systems in general and for the Jimsphere system in particular for measuring wind from the surface to 18 km. Using the figures presented, it is possible to estimate the wind accuracy that can be achieved by any type of rising balloon by knowing only its rise rate behavior vs altitude. In addition, the results can be used in balloon design to determine what rise rate function is needed to achieve specified wind accuracies. A table is presented which shows the balloon radius for smooth and roughened spheres needed to achieve 2 to 20 m/sec rise rates at 10 and 14 km altitudes.

  16. A Distance and Angle Similarity Measure Method.

    ERIC Educational Resources Information Center

    Zhang, Jin; Korfhage, Robert R.

    1999-01-01

    Discusses similarity measures that are used in information retrieval to improve precision and recall ratios and presents a combined vector-based distance and angle measure to make similarity measurement more scientific and accurate. Suggests directions for future research. (LRW)

  17. A vertex similarity index for better personalized recommendation

    NASA Astrophysics Data System (ADS)

    Chen, Ling-Jiao; Zhang, Zi-Ke; Liu, Jin-Hu; Gao, Jian; Zhou, Tao

    2017-01-01

    Recommender systems benefit us in tackling the problem of information overload by predicting our potential choices among diverse niche objects. So far, a variety of personalized recommendation algorithms have been proposed and most of them are based on similarities, such as collaborative filtering and mass diffusion. Here, we propose a novel vertex similarity index named CosRA, which combines advantages of both the cosine index and the resource-allocation (RA) index. By applying the CosRA index to real recommender systems including MovieLens, Netflix and RYM, we show that the CosRA-based method has better performance in accuracy, diversity and novelty than some benchmark methods. Moreover, the CosRA index is free of parameters, which is a significant advantage in real applications. Further experiments show that introducing two tunable parameters does not remarkably improve the overall performance of the CosRA index.
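    The abstract names the two ingredients of CosRA, the cosine index and the resource-allocation (RA) index, but not their exact combination. A sketch of the two constituent vertex-similarity indices on an adjacency-set graph (the toy graph below is hypothetical):

```python
from math import sqrt

def cosine_index(neighbors, x, y):
    """Salton cosine: shared neighbors normalized by the degree product."""
    common = neighbors[x] & neighbors[y]
    return len(common) / sqrt(len(neighbors[x]) * len(neighbors[y]))

def ra_index(neighbors, x, y):
    """Resource allocation: each shared neighbor z contributes 1/degree(z)."""
    return sum(1 / len(neighbors[z]) for z in neighbors[x] & neighbors[y])

# Toy undirected graph as adjacency sets
g = {"x": {"a", "b"}, "y": {"b", "c"}, "a": {"x"}, "b": {"x", "y"}, "c": {"y"}}
```

Note the complementary behavior: the cosine index rewards overlap relative to the vertices' own degrees, while the RA index discounts shared neighbors that are themselves high-degree hubs.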

  18. Thematic Relations Affect Similarity via Commonalities

    ERIC Educational Resources Information Center

    Golonka, Sabrina; Estes, Zachary

    2009-01-01

    Thematic relations are an important source of perceived similarity. For instance, the "rowing" theme of boats and oars increases their perceived similarity. The mechanism of this effect, however, has not been specified previously. The authors investigated whether thematic relations affect similarity by increasing commonalities or by…

  19. Achievements in Stratospheric Ozone Protection

    EPA Pesticide Factsheets

    This report describes achievements in protecting the ozone layer, the benefits of these achievements, and strategies involved (e.g., using alternatives to ozone-depleting substances, phasing out harmful substances, and creating partnerships).

  20. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
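    The standard Dice coefficient and Jaccard index that the authors extend can be written over pixel sets; one plausible anatomy-aware weighting (a sketch, not the paper's exact extension) multiplies pixels in critical zones by a zone weight:

```python
def dice(seg, truth):
    """Dice coefficient between two pixel sets."""
    return 2 * len(seg & truth) / (len(seg) + len(truth))

def jaccard(seg, truth):
    """Jaccard index between two pixel sets."""
    return len(seg & truth) / len(seg | truth)

def weighted_dice(seg, truth, zone_weight):
    """Sketch of an anatomy-aware Dice: pixels in relevant zones
    carry a weight > 1 (default weight 1 elsewhere)."""
    def w(pixels):
        return sum(zone_weight.get(p, 1.0) for p in pixels)
    return 2 * w(seg & truth) / (w(seg) + w(truth))
```

With an empty weight map, `weighted_dice` reduces to the ordinary Dice coefficient, so the zone weights only re-rank segmentations that differ inside anatomically significant regions.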

  1. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  2. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    NASA Astrophysics Data System (ADS)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the spatial constellation during the observation period.
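    To first order, the pseudorange error contributed by an ephemeris error is its projection onto the receiver-satellite line of sight, which is why an error along the LOS direction is the worst case. A minimal sketch of that projection:

```python
def los_range_error(ephemeris_error, los_unit):
    """First-order pseudorange error: dot product of the ephemeris error
    vector (meters, ECEF) with the receiver-to-satellite unit vector."""
    return sum(e * u for e, u in zip(ephemeris_error, los_unit))
```

An error of 3 m along the LOS maps fully into the pseudorange, while the same 3 m perpendicular to the LOS contributes nothing at first order.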

  3. ESG: extended similarity group method for automated protein function prediction

    PubMed Central

    Chitale, Meghana; Hawkins, Troy; Park, Changsoon; Kihara, Daisuke

    2009-01-01

    Motivation: The importance of accurate automatic protein function prediction is ever increasing in the face of a large number of newly sequenced genomes and proteomics data that are awaiting biological interpretation. Conventional methods have focused on annotation transfer based on high sequence similarity, which relies on the concept of homology. However, many cases have been reported in which simple transfer of function from the top hits of a homology search causes erroneous annotation. New methods are required to handle sequence similarity in a more robust way, combining signals from strongly and weakly similar proteins to predict function for unknown proteins with high reliability. Results: We present the extended similarity group (ESG) method, which performs iterative sequence database searches and annotates a query sequence with Gene Ontology terms. Each annotation is assigned a probability based on its relative similarity score with the multiple-level neighbors in the protein similarity graph. We show how the statistical framework of ESG improves prediction accuracy by iteratively taking into account the neighborhood of the query protein in the sequence similarity space. ESG outperforms conventional PSI-BLAST and the protein function prediction (PFP) algorithm. The iterative search is found to be effective in capturing multiple domains in a query protein, enabling accurate prediction of several functions that originate from different domains. Availability: The ESG web server is available for automated protein function prediction at http://dragon.bio.purdue.edu/ESG/ Contact: cspark@cau.ac.kr; dkihara@purdue.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19435743

  4. Accuracy of Small Base Metal Dental Castings,

    DTIC Science & Technology

    1980-07-10

    Huget, E. A.; Vermilyea, S. G.; Kuffler, M. J. (July 1980). The advantage of base metal alloys is countered by their inadequate casting accuracy. Until this problem can be overcome, the acceptance of such alloys for routine use…

  5. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
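    Gamma here is the Goodman-Kruskal rank correlation between judgment confidence and correctness. A sketch, assuming each judgment is a (confidence rating, correct 0/1) pair and tied pairs are skipped:

```python
def goodman_kruskal_gamma(judgments):
    """Gamma = (C - D) / (C + D), where C and D count concordant and
    discordant pairs of (confidence, correct) judgments; ties are skipped."""
    concordant = discordant = 0
    n = len(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            sign = (judgments[i][0] - judgments[j][0]) * (judgments[i][1] - judgments[j][1])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

Gamma reaches +1 when higher confidence always accompanies correct answers and -1 when the ordering is fully reversed; because it ignores tied pairs, it can miss the bias information that the discrimination and bias indexes capture.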

  6. Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement

    PubMed Central

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach goals predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836

  7. Accuracy assessment of the integration of GNSS and a MEMS IMU in a terrestrial platform.

    PubMed

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A

    2014-11-04

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing the cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angle, can be achieved even considering that the terrestrial platform is operating in less than favorable environments.

  8. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate errors in earthquake location and velocity inversion results when we perturb the models and try to invert to recover them. We can create as many stations as desired and a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the "true" location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion.

  9. Accuracy of direct genomic values in Holstein bulls and cows using subsets of SNP markers

    PubMed Central

    2010-01-01

    Background At the current price, the use of high-density single nucleotide polymorphism (SNP) genotyping assays in genomic selection of dairy cattle is limited to applications involving elite sires and dams. The objective of this study was to evaluate the use of low-density assays to predict direct genomic value (DGV) for five milk production traits, an overall conformation trait, a survival index, and two profit index traits (APR, ASI). Methods Dense SNP genotypes were available for 42,576 SNP for 2,114 Holstein bulls and 510 cows. A subset of 1,847 bulls born between 1955 and 2004 was used as a training set to fit models with various sets of pre-selected SNP. A group of 297 bulls born between 2001 and 2004 and all cows born between 1992 and 2004 were used to evaluate the accuracy of DGV prediction. Ridge regression (RR) and partial least squares regression (PLSR) were used to derive prediction equations and to rank SNP based on the absolute values of the regression coefficients. Four alternative strategies were applied to select subsets of SNP, namely: subsets of the highest ranked SNP for each individual trait, or a single subset of evenly spaced SNP, where SNP were selected based on their rank for ASI, APR or minor allele frequency within intervals of approximately equal length. Results RR and PLSR performed very similarly in predicting DGV, with PLSR performing better for low-density assays and RR for higher-density SNP sets. When using all SNP, DGV predictions for production traits, which have a higher heritability, were more accurate (0.52-0.64) than for survival (0.19-0.20), which has a low heritability. The gain in accuracy using subsets that included the highest ranked SNP for each trait was marginal (5-6%) over a common set of evenly spaced SNP when at least 3,000 SNP were used. Subsets containing 3,000 SNP provided more than 90% of the accuracy that could be achieved with a high-density assay for cows, and 80% of the high-density assay accuracy for young bulls
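The ridge-regression ranking step mentioned in the Methods can be sketched in a few lines. The toy genotype matrix, phenotype vector, and penalty below are invented for illustration (the study fitted tens of thousands of SNP with dedicated software); the point is only the mechanics: solve the penalized normal equations, then rank SNPs by absolute coefficient.

```python
# Minimal sketch of ridge-regression-based SNP ranking (toy data, plain Python).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ridge_coefficients(X, y, lam=0.1):
    """Solve the penalized normal equations (X'X + lam*I) beta = X'y."""
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Toy genotypes (0/1/2 allele counts) for 3 SNPs; only SNP 0 drives the trait.
X = [[2, 0, 1], [1, 1, 0], [0, 2, 1], [2, 1, 2], [0, 0, 0], [1, 2, 1]]
y = [2.1, 0.9, 0.1, 1.8, 0.0, 1.1]
beta = ridge_coefficients(X, y)
ranking = sorted(range(3), key=lambda i: -abs(beta[i]))
```

Selecting the top-ranked SNPs from such a list is exactly the "highest ranked SNP per trait" strategy the abstract compares against evenly spaced subsets.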

  10. Tracking gaze while walking on a treadmill: spatial accuracy and limits of use of a stationary remote eye-tracker.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2014-01-01

    Inaccurate visual sampling and foot placement may lead to unsafe walking. Virtual environments, challenging obstacle negotiation, may be used to investigate the relationship between the point of gaze and stepping accuracy. A measurement of the point of gaze during walking can be obtained using a remote eye-tracker. The assessment of its performance and limits of applicability is essential to define the areas of interest in a virtual environment and to collect information for the analysis of the visual strategy. The current study aims at characterizing a remote eye-tracker in static and dynamic conditions. Three different conditions were analyzed: a) looking at a single stimulus during selected head movements; b) looking at multiple stimuli distributed on the screen from different distances; c) looking at multiple stimuli distributed on the screen while walking. The eye-tracker was able to measure the point of gaze during head motion along the medio-lateral and vertical directions consistently with the device specifications, while tracking during head motion along the anterior-posterior direction was poorer than the device specifications. During head rotation around the vertical direction, the error of the point of gaze was lower than 23 mm. The best accuracy (10 mm) was achieved, consistent with the device specifications, in the static condition performed at 650 mm from the eye-tracker, while point of gaze data were lost when getting closer to the eye-tracker. In general, the accuracy and precision of the point of gaze were not related to the stimulus position. During fast walking (1.1 m/s), the eye-tracker did not lose any data, since the head range of motion was always within the ranges of trackability. The values of accuracy and precision during walking were similar to those resulting from static conditions. These values will be considered in the definition of the size and shape of the areas of interest in the virtual environment.

  11. Accuracy of five electronic pedometers for measuring distance walked.

    PubMed

    Bassett, D R; Ainsworth, B E; Leggett, S R; Mathien, C A; Main, J A; Hunter, D C; Duncan, G E

    1996-08-01

    This is a three-part study that examined the accuracy of five brands of electronic pedometers (Freestyle Pacer, Eddie Bauer, L.L. Bean, Yamax, and Accusplit) under a variety of different conditions. In Part I, 20 subjects walked a 4.88-km sidewalk course while wearing two devices of the same brand (on the left and right side of the body) for each of five different trials. There were significant differences among pedometers (P < 0.05), with the Yamax, Pacer, and Accusplit approximating the actual distance more closely than the other models. The Yamax pedometers showed close agreement, but the left and right Pacer pedometers differed significantly (P = 0.0003) and the Accusplit displayed a similar trend (P = 0.0657). In Part II, the effects of walking surface on pedometer accuracy were examined. Ten of the original subjects completed an additional five trials around a 400-m rubberized outdoor track. The devices showed similar values for sidewalk and track surfaces. In Part III, the effects of walking speed on pedometer accuracy were examined. Ten different subjects walked on a treadmill at various speeds (54, 67, 80, 94, and 107 m.min-1). Pedometers that displayed both distance and number of steps were examined. The Yamax was more accurate than the Pacer and Eddie Bauer at slow-to-moderate speeds (P < 0.05), though no significant differences were seen at the fastest speed. While there are variations among brands in terms of accuracy, electronic pedometers may prove useful in recording walking activities in free-living populations.
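The core accuracy comparison in Part I reduces to percent error of each device's reported distance against the measured 4.88 km course. The sketch below shows that computation; the individual readings are invented for illustration (the study reports only aggregate comparisons), and only the course length comes from the abstract.

```python
# Percent-error comparison against the measured 4.88 km sidewalk course.
# Readings below are hypothetical, not the study's data.

COURSE_KM = 4.88

def percent_error(reported_km):
    return 100.0 * abs(reported_km - COURSE_KM) / COURSE_KM

readings = {"Yamax": 4.91, "Pacer": 4.80, "Eddie Bauer": 5.40}  # invented
errors = {name: percent_error(km) for name, km in readings.items()}
best = min(errors, key=errors.get)
```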

  12. Similarity increases altruistic punishment in humans.

    PubMed

    Mussweiler, Thomas; Ockenfels, Axel

    2013-11-26

    Humans are attracted to similar others. As a consequence, social networks are homogeneous in sociodemographic, intrapersonal, and other characteristics--a principle called homophily. Despite abundant evidence showing the importance of interpersonal similarity and homophily for human relationships, their behavioral correlates and cognitive foundations are poorly understood. Here, we show that perceived similarity substantially increases altruistic punishment, a key mechanism underlying human cooperation. We induced (dis)similarity perception by manipulating basic cognitive mechanisms in an economic cooperation game that included a punishment phase. We found that similarity-focused participants were more willing to punish others' uncooperative behavior. This influence of similarity is not explained by group identity, which has the opposite effect on altruistic punishment. Our findings demonstrate that pure similarity promotes reciprocity in ways known to encourage cooperation. At the same time, the increased willingness to punish norm violations among similarity-focused participants provides a rationale for why similar people are more likely to build stable social relationships. Finally, our findings show that altruistic punishment is differentially involved in encouraging cooperation under pure similarity vs. in-group conditions.

  13. Surface-Based Protein Binding Pocket Similarity

    PubMed Central

    Spitzer, Russell; Cleves, Ann E.; Jain, Ajay N.

    2011-01-01

    Protein similarity comparisons may be made on a local or global basis and may consider sequence information or differing levels of structural information. We present a local 3D method that compares protein binding site surfaces in full atomic detail. The approach is based on the morphological similarity method, which has been widely applied for global comparison of small molecules. We apply the method to all-by-all comparisons of two sets of human protein kinases, a very diverse set of ATP-bound proteins from multiple species, and three heterogeneous benchmark protein binding site data sets. Cases of disagreement between sequence-based similarity and binding site similarity yield informative examples. Where sequence similarity is very low, high pocket similarity can reliably identify important binding motifs. Where sequence similarity is very high, significant differences in pocket similarity are related to ligand binding specificity and similarity. Local protein binding pocket similarity provides qualitatively complementary information to other approaches, and it can yield quantitative information in support of functional annotation. PMID:21769944

  14. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.

  15. Improving Delivery Accuracy of Stereotactic Body Radiotherapy to a Moving Tumor Using Simplified Volumetric Modulated Arc Therapy

    PubMed Central

    Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong

    2016-01-01

    Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199
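The gamma criteria (2%/2 mm, 2%/1 mm) used to compare DrMAT and VMAT delivery can be illustrated with a simplified 1-D version of the gamma analysis applied to film dosimetry. The profiles and criteria values below are illustrative; the clinical analysis is 2-D and this sketch is not the authors' implementation.

```python
# Simplified 1-D gamma analysis: a measured point passes if some reference
# point is within the combined dose-difference / distance-to-agreement ellipse.

def gamma_pass_rate(positions, measured, planned, dose_tol=0.02, dist_tol=2.0):
    """positions in mm; doses normalized so max(planned) sets the 2% scale."""
    dmax = max(planned)
    passed = 0
    for xi, mi in zip(positions, measured):
        gamma = min(
            ((mi - pj) / (dose_tol * dmax)) ** 2 + ((xi - xj) / dist_tol) ** 2
            for xj, pj in zip(positions, planned)
        ) ** 0.5
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / len(measured)

# Toy planned dose profile (normalized); identical measurement passes fully.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
planned = [0.2, 0.6, 1.0, 0.6, 0.2]
rate_identical = gamma_pass_rate(x, planned, planned)
```

Tightening `dist_tol` from 2.0 to 1.0 mm shrinks the acceptance ellipse, which is why the abstract's 2%/1 mm criterion discriminates between the two delivery techniques while 2%/2 mm does not.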

  16. Fitting magnetic field gradient with Heisenberg-scaling accuracy

    PubMed Central

    Zhang, Yong-Liang; Wang, Huan; Jing, Li; Mu, Liang-Zhu; Fan, Heng

    2014-01-01

    The linear function is possibly the simplest and most used relation appearing in various areas of our world. A linear relation can generally be determined by the least square linear fitting (LSLF) method using several measured quantities depending on variables. This is the case, for example, when detecting the gradient of a magnetic field. Here, we propose a quantum fitting scheme to estimate the magnetic field gradient with N atomic spins prepared in a W state. Our scheme combines quantum multi-parameter estimation and the least square linear fitting method to achieve the quantum Cramér-Rao bound (QCRB). We show that the estimated quantity achieves Heisenberg-scaling accuracy. Our scheme of quantum metrology combined with data fitting provides a new method for fast, high-precision measurements. PMID:25487218
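The classical LSLF baseline the paper builds on is just a least-squares slope estimate: measure the field at known positions and fit B(x) = B0 + g·x to extract the gradient g. The numbers below are synthetic; this illustrates only the classical fitting step, not the quantum scheme.

```python
# Classical least-square linear fitting (LSLF): estimate a field gradient g
# from field values measured at known positions, B(x) = B0 + g*x.

def lslf_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

positions = [0.0, 1.0, 2.0, 3.0, 4.0]      # spin positions (arbitrary units)
fields = [1.00, 1.21, 1.39, 1.62, 1.80]    # synthetic field readings
gradient = lslf_slope(positions, fields)
```

The quantum advantage claimed in the abstract is that the uncertainty of this slope estimate scales as 1/N (Heisenberg scaling) with N entangled spins, versus 1/sqrt(N) for independent classical measurements.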

  17. Techniques for improving overlay accuracy by using device correlated metrology targets as reference

    NASA Astrophysics Data System (ADS)

    Tzai, Wei Jhe; Hsu, Simon C. C.; Chen, Howard; Chen, Charlie; Pai, Yuan Chi; Yu, Chun-Chi; Lin, Chia Ching; Itzkovich, Tal; Yap, Lipkong; Amit, Eran; Tien, David; Huang, Eros; Kuo, Kelly T. L.; Amir, Nuriel

    2014-10-01

    The performance of overlay metrology, in terms of total measurement uncertainty, design rule compatibility, device correlation, and measurement accuracy, has been challenged at the 2× nm node and below. The process impact on overlay metrology is becoming critical, and techniques to improve measurement accuracy become increasingly important. We present a methodology for improving overlay accuracy. A proprietary quality metric, Qmerit, is used to identify overlay metrology measurement settings with the least process impact and reliable accuracy. Using the quality metric, a calibration method, Archer self-calibration, is then used to remove the inaccuracies. Accuracy validation can be achieved by correlation to reference overlay data from another independent metrology source, such as critical dimension-scanning electron microscopy data collected on a device correlated metrology hybrid target, or by electrical testing. Additionally, reference metrology can also be used to verify which measurement conditions are the most accurate. We provide an example of such a case.

  18. Innovative techniques for improving overlay accuracy by using DCM (device correlated metrology) targets as reference

    NASA Astrophysics Data System (ADS)

    Tzai, Wei-Jhe; Hsu, Simon C. C.; Chen, Howard; Chen, Charlie; Pai, Yuan Chi; Yu, Chun-Chi; Lin, Chia Ching; Itzkovich, Tal; Yap, Lipkong; Amit, Eran; Tien, David; Huang, Eros; Kuo, Kelly T. L.; Amir, Nuriel

    2014-04-01

    Overlay metrology performance metrics such as Total Measurement Uncertainty (TMU), design rule compatibility, device correlation and measurement accuracy are being challenged at the 2x nm node and below. The process impact on overlay metrology is becoming critical, and techniques to improve measurement accuracy become increasingly important. In this paper, we present an innovative methodology for improving overlay accuracy. A proprietary quality metric, Qmerit, is used to identify overlay metrology measurement settings with the least process impact and reliable accuracy. Using the quality metric, an innovative calibration method, ASC (Archer Self Calibration), is then used to remove the inaccuracies. Accuracy validation can be achieved by correlation to reference overlay data from another independent metrology source, such as CDSEM data collected on a DCM (Device Correlated Metrology) hybrid target, or by electrical testing. Additionally, reference metrology can also be used to verify which measurement conditions are the most accurate. In this paper we present an example of such a use case.

  19. Towards J/mol Accuracy for the Cohesive Energy of Solid Argon.

    PubMed

    Schwerdtfeger, Peter; Tonner, Ralf; Moyano, Gloria E; Pahl, Elke

    2016-09-26

    The cohesive energies of argon in its cubic and hexagonal close-packed structures are computed with an unprecedented accuracy of about 5 J mol(-1) (corresponding to 0.05% of the total cohesive energy). The same relative accuracy with respect to experimental data is also found for the face-centered cubic lattice constant, deviating by ca. 0.003 Å. This level of accuracy was enabled by using high-level theoretical, wave-function-based methods within a many-body decomposition of the interaction energy. Static contributions of two-, three-, and four-body fragments of the crystal are all individually converged to sub-J mol(-1) accuracy and complemented by harmonic and anharmonic vibrational corrections. Computational chemistry is thus achieving or even surpassing experimental accuracy for the solid-state rare gases.

  20. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM.

    PubMed

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2015-01-01

    Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for prediction of protein structural classes; it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on the PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is constructed and the dimension is decreased to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with the existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.

  1. Large-scale chemical similarity networks for target profiling of compounds identified in cell-based chemical screens.

    PubMed

    Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z

    2015-03-01

    Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis, which restricts accurate deconvolution of large numbers of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60-70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomics, genetics, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/).
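The similarity-network idea behind CSNAP can be sketched conceptually: compute pairwise Tanimoto coefficients between compound fingerprints and connect compounds whose similarity passes a threshold, so that clusters in the network correspond to chemotypes. The fingerprints, names, and threshold below are invented; the real system uses 2-D chemical fingerprints and large annotated databases, and this is not the authors' code.

```python
# Conceptual sketch of a chemical similarity network (fingerprints invented).

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprint bit sets."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def similarity_network(fingerprints, threshold=0.5):
    """Edges connect compound pairs whose Tanimoto similarity meets threshold."""
    names = list(fingerprints)
    return {(a, b)
            for i, a in enumerate(names) for b in names[i + 1:]
            if tanimoto(fingerprints[a], fingerprints[b]) >= threshold}

fps = {
    "cmpd1": {1, 2, 3, 4},
    "cmpd2": {1, 2, 3, 5},    # shares most bits with cmpd1 -> same chemotype
    "cmpd3": {7, 8, 9},       # unrelated scaffold -> isolated node
}
edges = similarity_network(fps)
```

In the full method, the consensus annotation of each connected cluster, rather than a single nearest neighbor, drives the target prediction, which is what gives the network approach its robustness over sequential ligand analysis.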

  2. Brain CT image similarity retrieval method based on uncertain location graph.

    PubMed

    Pan, Haiwei; Li, Pengyuan; Li, Qing; Han, Qilong; Feng, Xiaoning; Gao, Linlin

    2014-03-01

    The large numbers of brain computed tomography (CT) images stored in hospitals contain valuable information and should be shared to support computer-aided diagnosis systems. Finding similar brain CT images in a brain CT image database can effectively help doctors diagnose based on earlier cases. However, similarity retrieval for brain CT images requires much higher accuracy than for general images. In this paper, a new model of uncertain location graph (ULG) is presented for brain CT image modeling and similarity retrieval. According to the characteristics of brain CT images, we propose a novel method to model a brain CT image as a ULG based on brain CT image texture. Then, a scheme for ULG similarity retrieval is introduced. Furthermore, an effective index structure is applied to reduce the searching time. Experimental results reveal that our method performs well on brain CT image similarity retrieval, with higher accuracy and efficiency.

  3. Neural Mechanisms of Speed-Accuracy Tradeoff

    PubMed Central

    Heitz, Richard P.; Schall, Jeffrey D.

    2012-01-01

    SUMMARY Intelligent agents balance speed of responding with accuracy of deciding. Stochastic accumulator models commonly explain this speed-accuracy tradeoff by strategic adjustment of response threshold. Several laboratories identify specific neurons in prefrontal and parietal cortex with this accumulation process, yet no neurophysiological correlates of speed-accuracy tradeoff have been described. We trained macaque monkeys to trade speed for accuracy on cue during visual search and recorded the activity of neurons in the frontal eye field. Unpredicted by any model, we discovered that speed-accuracy tradeoff is accomplished through several distinct adjustments. Visually responsive neurons modulated baseline firing rate, sensory gain, and the duration of perceptual processing. Movement neurons triggered responses with activity modulated in a direction opposite of model predictions. Thus, current stochastic accumulator models provide an incomplete description of the neural processes accomplishing speed-accuracy tradeoffs. The diversity of neural mechanisms was reconciled with the accumulator framework through an integrated accumulator model constrained by requirements of the motor system. PMID:23141072

  4. How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy.

    PubMed

    Weaver, Steven A; Ucar, Zennure; Bettinger, Pete; Merry, Krista

    2015-01-01

    understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection.

  5. Fuzzy similarity measures for detection and classification of defects in CFRP.

    PubMed

    Pellicanó, Diego; Palamara, Isabella; Cacciola, Matteo; Calcagno, Salvatore; Versaci, Mario; Morabito, Francesco Carlo

    2013-09-01

    The systematic use of nondestructive testing assumes a remarkable importance where on-line manufacturing quality control is associated with the maintenance of complex equipment. For this reason, nondestructive testing and evaluation (NDT/NDE), together with accuracy and precision in measurements of the specimen, is a strategic activity in many fields of industrial and civil interest. It is well known that nondestructive research methodologies are able to provide information on the state of a manufacturing process without compromising its integrity and functionality. Moreover, the exploitation of algorithms with low computational complexity for assessing the integrity of a specimen plays a crucial role in real-time work. In such a context, the production of carbon fiber resin epoxy (CFRP) is a complex process that is not free from defects and faults that could compromise the integrity of the manufactured specimen. Ultrasonic tests provide an effective contribution in identifying the presence of a defect. In this work, a fuzzy similarity approach is proposed with the goal of localizing and classifying defects in CFRP in terms of a sort of distance among signals (measures of ultrasonic echoes). A field-programmable gate array (FPGA)-based board is also presented which implements the described algorithms on a hardware device. The good detection and classification performance achieved makes the results comparable with those obtained using heuristic techniques with a higher computational load.
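One common textbook form of a fuzzy similarity measure between two signals compares their membership values point-wise; the sketch below uses that definition as a stand-in for the "sort of distance among signals" the abstract describes. The definition, echo values, and decision threshold are all assumptions for illustration, not the paper's exact measure.

```python
# Hypothetical fuzzy-similarity check between two ultrasonic echo signals:
# samples are assumed already mapped to [0, 1] membership values.

def fuzzy_similarity(sig_a, sig_b):
    """S(A, B) = 1 - mean |muA(i) - muB(i)| (one textbook definition)."""
    diffs = [abs(a - b) for a, b in zip(sig_a, sig_b)]
    return 1.0 - sum(diffs) / len(diffs)

reference_echo = [0.1, 0.8, 0.9, 0.3, 0.1]   # echo from a known-sound region
test_echo      = [0.1, 0.5, 0.4, 0.6, 0.2]   # echo over a suspected defect
similarity = fuzzy_similarity(reference_echo, test_echo)
defect_flag = similarity < 0.9               # assumed decision threshold
```

Because the measure is a fixed-length pass of additions and comparisons, it maps naturally onto the FPGA pipeline mentioned in the abstract.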

  6. Notions of similarity for systems biology models.

    PubMed

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2016-10-14

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may vary greatly. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are: underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters, and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases.
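The proposed combination of aspect-wise similarities can be sketched as a weighted average over whichever aspects a tool can actually compute for a given model pair. The aspect names follow the abstract, but the scores, weights, and the averaging rule are invented here to illustrate the idea of a flexible, problem-specific combination.

```python
# Sketch of a combined model-similarity score: per-aspect similarities in
# [0, 1] merged with user-chosen weights, ignoring aspects not computed.

def combined_similarity(aspect_scores, weights):
    """Weighted average of available per-aspect similarities."""
    total_w = sum(weights[a] for a in aspect_scores)
    return sum(weights[a] * s for a, s in aspect_scores.items()) / total_w

# Hypothetical weighting over the six aspects named in the abstract.
weights = {"encoding": 0.1, "bio_entities": 0.3, "quantitative": 0.2,
           "qualitative": 0.1, "equations": 0.2, "network": 0.1}
# Suppose only three aspects could be computed for this model pair.
scores = {"bio_entities": 0.9, "network": 0.5, "equations": 0.7}
overall = combined_similarity(scores, weights)
```

Re-weighting (e.g. emphasizing network structure for topology searches) changes the ranking without changing the per-aspect measures, which is the flexibility the authors argue for.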

  7. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code have been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving a sensitivity of 95% and a specificity of 82.8%.
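The reported sensitivity and specificity follow directly from a confusion matrix over the test files. The counts below are invented to make the arithmetic concrete (the paper reports only the final percentages); the formulas themselves are standard.

```python
# Standard confusion-matrix metrics; the counts are hypothetical.

def confusion_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)              # detected fraction of malicious files
    specificity = tn / (tn + fp)              # correctly passed fraction of benign files
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical test set: 100 malicious and 100 benign samples.
sens, spec, acc = confusion_metrics(tp=95, fn=5, tn=83, fp=17)
```

Note that overall accuracy sits between sensitivity and specificity when the two classes are balanced, consistent with the pattern of figures quoted in the abstract.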

  8. The energy-speed-accuracy tradeoff in sensory adaptation

    PubMed Central

    Lan, Ganhui; Sartori, Pablo; Neumann, Silke; Sourjik, Victor; Tu, Yuhai

    2012-01-01

    Adaptation is the essential process by which an organism becomes better suited to its environment. The benefits of adaptation are well documented, but the cost it incurs remains poorly understood. Here, by analysing a stochastic model of a minimum feedback network underlying many sensory adaptation systems, we show that adaptive processes are necessarily dissipative, and continuous energy consumption is required to stabilize the adapted state. Our study reveals a general relation among energy dissipation rate, adaptation speed and the maximum adaptation accuracy. This energy-speed-accuracy relation is tested in the Escherichia coli chemosensory system, which exhibits near-perfect chemoreceptor adaptation. We identify key requirements for the underlying biochemical network to achieve accurate adaptation with a given energy budget. Moreover, direct measurements confirm the prediction that adaptation slows down as cells gradually de-energize in a nutrient-poor medium without compromising adaptation accuracy. Our work provides a general framework to study cost-performance tradeoffs for cellular regulatory functions and information processing. PMID:22737175

  9. Evaluation of registration accuracy between Sentinel-2 and Landsat 8

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia

    2016-08-01

    Starting from June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 m), providing global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for the many monitoring applications that require high resolution multi-temporal information, including agriculture, water bodies and natural hazards. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed around the world. Images were compared with digital correlation techniques for image matching, yielding an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band 3 and the 15 m resolution Landsat panchromatic band 8.
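
    The pixel-to-pixel comparison underlying this evaluation rests on digital image correlation. A minimal sketch of the idea (NumPy assumed): phase correlation recovering a whole-pixel shift between two synthetic bands, whereas the study's matching, sub-pixel refinement and affine fitting are more elaborate.

```python
import numpy as np

def integer_shift(ref, moving):
    """Estimate the integer (row, col) shift of `moving` relative to `ref`
    from the peak of the FFT-based circular cross-correlation."""
    cross = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(ref)))
    return np.unravel_index(np.argmax(np.abs(cross)), cross.shape)

rng = np.random.default_rng(0)
band = rng.random((64, 64))                         # stand-in for an image chip
shifted = np.roll(band, shift=(2, 3), axis=(0, 1))  # simulated misregistration
shift = integer_shift(band, shifted)                # recovers the (2, 3) shift
```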

  10. Accuracy of temperature measurement in the cardiopulmonary bypass circuit.

    PubMed

    Newland, Richard F; Sanderson, Andrew J; Baker, Robert A

    2005-03-01

    Oxygenator arterial outlet blood temperature is routinely measured in the cardiopulmonary bypass (CPB) circuit as a surrogate for the temperature of the arterial blood delivered to sensitive organs such as the brain. The aim of this study was to evaluate the accuracy of the temperature thermistors used in the Terumo Capiox SX25 oxygenator and to compare the temperature measured at the outlet of the oxygenator using the Capiox CX*TL Luer Thermistor with temperatures measured at distal sites. Five experimental stages were performed in vitro to achieve this aim. Under our experimental conditions, the luer thermistors accurately measured the temperature as referenced by a precision thermometer. In the CPB circuit, the difference between arterial outlet and reference thermometer temperature varied with temperature, the outlet over-reading at low temperatures and under-reading at high temperatures. There was negligible heat loss (-0.4 ± 0.1 °C) measured at 4.5 m from the arterial outlet. The Terumo Capiox CX*TL Luer Thermistor is an accurate and reliable instrument for measuring temperature when incorporated into the Capiox oxygenator. The accuracy of temperature measurement using these thermistors is affected by the thermistor immersion depth. Under-reading of the arterial blood temperature by approximately 0.5 °C should be considered at normothermic temperatures, to avoid exceeding the maximum arterial blood temperature described by institutional protocols. The accuracy of blood temperature measurements should be considered for all oxygenator arterial outlet temperature probes.

  11. Effect of traffic self-similarity on network performance

    NASA Astrophysics Data System (ADS)

    Park, Kihong; Kim, Gitae; Crovella, Mark E.

    1997-10-01

    Recent measurements of network traffic have shown that self-similarity is a ubiquitous phenomenon, present in both local area and wide area traffic traces. In previous work, we demonstrated a simple, robust application-layer causal mechanism of traffic self-similarity: the transfer of files in a network system where the file size distributions are heavy-tailed. In this paper, we study the effect of scale-invariant burstiness on network performance when the functionality of the transport layer and the interaction of traffic sources sharing bounded network resources are incorporated. First, we show that transport layer mechanisms are important factors in translating the application-layer causality into link traffic self-similarity. Network performance as captured by throughput, packet loss rate, and packet retransmission rate degrades gradually with increased heavy-tailedness, while queueing delay, response time, and fairness deteriorate more drastically. The degree to which heavy-tailedness affects self-similarity is determined by how well congestion control is able to shape source traffic into an on-average constant output stream while conserving information. Second, we show that increasing network resources such as link bandwidth and buffer capacity results in a superlinear improvement in performance. When large file transfers occur with nonnegligible probability, the incremental improvement in throughput achieved for large buffer sizes is accompanied by long queueing delays vis-à-vis the case when the file size distribution is not heavy-tailed. Buffer utilization remains high, implying that further improvement in throughput is achieved only at the expense of a disproportionate increase in queueing delay. A similar trade-off relationship exists between queueing delay and packet loss rate, the curvature of the performance curve being highly sensitive to the degree of self-similarity. Third, we investigate the effect of congestion
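
    Heavy-tailed file sizes, the causal mechanism cited above, are easy to simulate. A hedged sketch (not the authors' traffic generator) drawing Pareto-distributed sizes by inverse-CDF sampling; the hallmark of heavy tails is a sample mean far above the median:

```python
import random

def pareto_sample(alpha, xm, n, seed=42):
    """Draw n Pareto(alpha, xm) file sizes via inverse-CDF sampling:
    F^-1(u) = xm / (1 - u)^(1/alpha)."""
    rng = random.Random(seed)
    return [xm / (1.0 - rng.random()) ** (1.0 / alpha) for _ in range(n)]

# alpha <= 2 gives infinite variance: a few huge transfers dominate the load.
sizes = sorted(pareto_sample(alpha=1.2, xm=1.0, n=100_000))
median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)   # mean >> median signals heavy-tailedness
```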

  12. Stability of similarity measurements for bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrices of two data samples randomly divided from the original data sets. Results show that the fifteen measurements can be well classified into three clusters according to their stabilities, and that measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation and find that unstable similarities recommend false information to users, whereas recommendation performance is largely improved by using stable similarity measurements. This work provides a novel dimension along which to analyze and evaluate similarity measurements, with further applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on.
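
    The stability protocol described above is simple to sketch. A hedged illustration (not the authors' code): split synthetic bipartite data into two random halves, compute an item-item cosine similarity matrix for each half, and take the correlation of the two matrices as a stability score.

```python
import numpy as np

def cosine_sim(m):
    """Item-item cosine similarity of a users x items 0/1 matrix."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    norms[norms == 0] = 1.0          # guard empty columns
    unit = m / norms
    return unit.T @ unit

rng = np.random.default_rng(1)
ratings = (rng.random((200, 20)) < 0.3).astype(float)  # synthetic bipartite data

# Randomly split the users into two samples, as in the paper's protocol.
perm = rng.permutation(200)
s1 = cosine_sim(ratings[perm[:100]])
s2 = cosine_sim(ratings[perm[100:]])

iu = np.triu_indices(20, k=1)
stability = np.corrcoef(s1[iu], s2[iu])[0, 1]  # closer to 1 = more stable
```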

  13. Stability of similarity measurements for bipartite networks

    PubMed Central

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrices of two data samples randomly divided from the original data sets. Results show that the fifteen measurements can be well classified into three clusters according to their stabilities, and that measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation and find that unstable similarities recommend false information to users, whereas recommendation performance is largely improved by using stable similarity measurements. This work provides a novel dimension along which to analyze and evaluate similarity measurements, with further applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on. PMID:26725688

  14. Documents Similarity Measurement Using Field Association Terms.

    ERIC Educational Resources Information Center

    Atlam, El-Sayed; Fuketa, M.; Morita, K.; Aoe, Jun-ichi

    2003-01-01

    Discussion of text analysis and information retrieval and measurement of document similarity focuses on a new text manipulation system called FA (field association)-Sim that is useful for retrieving information in large heterogeneous texts and for recognizing content similarity in text excerpts. Discusses recall and precision, automatic indexing…

  15. Stability of similarity measurements for bipartite networks.

    PubMed

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-04

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrices of two data samples randomly divided from the original data sets. Results show that the fifteen measurements can be well classified into three clusters according to their stabilities, and that measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation and find that unstable similarities recommend false information to users, whereas recommendation performance is largely improved by using stable similarity measurements. This work provides a novel dimension along which to analyze and evaluate similarity measurements, with further applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on.

  16. Guaranteed classification via regularized similarity learning.

    PubMed

    Guo, Zheng-Chu; Ying, Yiming

    2014-03-01

    Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Because the techniques there depend on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound they obtained holds true only for Frobenius matrix-norm regularization. Our techniques, using the Rademacher complexity (Bartlett & Mendelson, 2002) and a related Khinchin-type inequality, enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.

  17. Structure Mapping in Analogy and Similarity.

    ERIC Educational Resources Information Center

    Gentner, Dedre; Markman, Arthur B.

    1997-01-01

    It is suggested that both similarity and analogy involve a process of structural alignment and mapping. The structure mapping process is described as it has been worked out for analogy, and this view is then extended to similarity and used to generate new predictions. (SLD)

  18. Perceived Similarity, Proactive Adjustment, and Organizational Socialization

    ERIC Educational Resources Information Center

    Kammeyer-Mueller, John D.; Livingston, Beth A.; Liao, Hui

    2011-01-01

    The present study explores how perceived demographic and attitudinal similarity can influence proactive behavior among organizational newcomers. We propose that newcomers who perceive themselves as similar to their co-workers will be more willing to seek new information or build relationships, which in turn will lead to better long-term…

  19. Marking Student Programs Using Graph Similarity

    ERIC Educational Resources Information Center

    Naude, Kevin A.; Greyling, Jean H.; Vogts, Dieter

    2010-01-01

    We present a novel approach to the automated marking of student programming assignments. Our technique quantifies the structural similarity between unmarked student submissions and marked solutions, and is the basis by which we assign marks. This is accomplished through an efficient novel graph similarity measure ("AssignSim"). Our experiments…

  20. Mining Diagnostic Assessment Data for Concept Similarity

    ERIC Educational Resources Information Center

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…

  1. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
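
    The thermodynamic perturbation step amounts to Boltzmann reweighting of the classical cluster populations with QM-minus-MM energy differences. A hedged sketch with invented numbers (not the cyanin data):

```python
import math

KT = 0.593  # kcal/mol at ~298 K

def reweighted_populations(mm_counts, delta_e):
    """Correct classical conformer populations with QM-MM energy differences:
    w_i proportional to n_i * exp(-(E_QM,i - E_MM,i) / kT)."""
    weights = [n * math.exp(-de / KT) for n, de in zip(mm_counts, delta_e)]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical conformers: cluster sizes from classical MD and
# per-conformer DFT-minus-force-field energy offsets (kcal/mol).
pops = reweighted_populations([6000, 3000, 1000], [0.0, -0.5, 1.0])
```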

  2. Criteria for dynamic similarity in bouncing gaits.

    PubMed

    Bullimore, Sharon R; Donelan, J Maxwell

    2008-01-21

    Animals of different sizes tend to move in a dynamically similar manner when travelling at speeds corresponding to equal values of a dimensionless parameter (DP) called the Froude number. Consequently, the Froude number has been widely used for defining equivalent speeds and predicting speeds of locomotion by extinct species and on other planets. However, experiments using simulated reduced gravity have demonstrated that equality of the Froude number does not guarantee dynamic similarity. This has cast doubt upon the usefulness of the Froude number in locomotion research. Here we use dimensional analysis of the planar spring-mass model, combined with Buckingham's Pi-Theorem, to demonstrate that four DPs must be equal for dynamic similarity in bouncing gaits such as trotting, hopping and bipedal running. This can be reduced to three DPs by applying the constraint of maintaining a constant average speed of locomotion. Sensitivity analysis indicates that all of these DPs are important for predicting dynamic similarity. We show that the reason humans do not run in a dynamically similar manner at equal Froude number in different levels of simulated reduced gravity is that dimensionless leg stiffness decreases as gravity increases. The reason that the Froude number can predict dynamic similarity in Earth gravity is that dimensionless leg stiffness and dimensionless vertical landing speed are both independent of size. In conclusion, although equal Froude number is not sufficient for dynamic similarity, it is a necessary condition. Therefore, to detect fundamental differences in locomotion, animals of different sizes should be compared at equal Froude number, so that they can be as close to dynamic similarity as possible. More generally, the concept of dynamic similarity provides a powerful framework within which similarities and differences in locomotion can be interpreted.
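
    The Froude number itself is a one-line computation; the sketch below (illustrative values, standard spring-mass symbols) also shows the dimensionless leg stiffness grouping k·L/(m·g) that the analysis identifies as an additional required DP:

```python
def froude(speed, leg_length, gravity=9.81):
    """Froude number Fr = v^2 / (g * L) for speed v and leg length L."""
    return speed ** 2 / (gravity * leg_length)

def dimensionless_leg_stiffness(k, mass, leg_length, gravity=9.81):
    """k_hat = k * L / (m * g), one of the extra DPs required for similarity."""
    return k * leg_length / (mass * gravity)

# A 70 kg human (leg 0.9 m) and a smaller animal (leg 0.3 m) travel at equal
# Froude number when speed scales as sqrt(leg length).
fr_human = froude(3.0, 0.9)
fr_animal = froude(3.0 * (0.3 / 0.9) ** 0.5, 0.3)
k_hat = dimensionless_leg_stiffness(20_000, 70, 0.9)  # roughly size-independent in humans
```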

  3. Accuracy of stream habitat interpolations across spatial scales

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart

    2013-01-01

    Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting insight gained from spatial analysis of stream habitat data, and considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale stream habitat dataset used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 m cells over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values for interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased but was maintained up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
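
    The decay of accuracy with coarsening can be reproduced qualitatively with a toy experiment: block-average a fine baseline grid, expand it back, and compute RMSE against the baseline (NumPy assumed; the study used natural neighbor interpolation, which this sketch replaces with nearest-neighbor expansion).

```python
import numpy as np

def rmse_at_scale(fine, factor):
    """Coarsen a fine grid by block-averaging factor x factor cells,
    expand back by nearest neighbor, and return RMSE vs the baseline."""
    h, w = fine.shape
    coarse = fine.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    rebuilt = np.kron(coarse, np.ones((factor, factor)))
    return float(np.sqrt(np.mean((fine - rebuilt) ** 2)))

rng = np.random.default_rng(7)
x, y = np.meshgrid(np.linspace(0, 4, 96), np.linspace(0, 4, 96))
depth = np.sin(x) * np.cos(y) + 0.05 * rng.standard_normal(x.shape)  # synthetic surface

errors = [rmse_at_scale(depth, f) for f in (2, 4, 8, 16)]  # grows as cells coarsen
```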

  4. Biosocial Influences on Sex Differences in School Achievement.

    ERIC Educational Resources Information Center

    Fischbein, Siv

    Biosocial influences on sex differences, found for school achievement test results in grades 3 and 6, have been studied by means of opposite-sex twin pairs and singleton controls attending the same classes as the twins. As expected, the opposite-sex twin pairs tend to be more similar in achievement test results in Swedish and mathematics than…

  5. Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy.

    PubMed

    Aljabar, P; Heckemann, R A; Hammers, A; Hajnal, J V; Rueckert, D

    2009-07-01

    Quantitative research in neuroimaging often relies on anatomical segmentation of human brain MR images. Recent multi-atlas based approaches provide highly accurate structural segmentations of the brain by propagating manual delineations from multiple atlases in a database to a query subject and combining them. The atlas databases which can be used for these purposes are growing steadily. We present a framework to address the consequent problems of scale in multi-atlas segmentation. We show that selecting a custom subset of atlases for each query subject provides more accurate subcortical segmentations than those given by non-selective combination of random atlas subsets. Using a database of 275 atlases, we tested an image-based similarity criterion as well as a demographic criterion (age) in a leave-one-out cross-validation study. Using a custom ranking of the database for each subject, we combined a varying number n of atlases from the top of the ranked list. The resulting segmentations were compared with manual reference segmentations using Dice overlap. Image-based selection provided better segmentations than random subsets (mean Dice overlap 0.854 vs. 0.811 for the estimated optimal subset size, n=20). Age-based selection resulted in a similar marked improvement. We conclude that selecting atlases from large databases for atlas-based brain image segmentation improves the accuracy of the segmentations achieved. We show that image similarity is a suitable selection criterion and give results based on selecting atlases by age that demonstrate the value of meta-information for selection.
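
    Segmentation agreement above is scored with the Dice overlap, which is straightforward to compute. A minimal sketch on toy binary masks (NumPy assumed; not the study's registration pipeline):

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "structure" masks: a reference vs an automatic segmentation
# of the same size shifted by one voxel.
ref = np.zeros((10, 10), dtype=bool)
ref[2:7, 2:7] = True              # 25 voxels
auto = np.roll(ref, 1, axis=0)    # overlap is 4 of 5 rows: 20 voxels
score = dice(ref, auto)           # 2*20 / (25+25) = 0.8
```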

  6. Data supporting the high-accuracy haplotype imputation using unphased genotype data as the references.

    PubMed

    Li, Wenzhi; Xu, Wei; He, Shaohua; Ma, Li; Song, Qing

    2016-09-01

    The data presented in this article relate to the research article entitled "High-accuracy haplotype imputation using unphased genotype data as the references," which reports that unphased genotype data can be used as a reference for haplotype imputation [1]. This article reports the generation pipelines for the different implementations, the results of performance comparisons between implementations (A, B, and C), and comparisons between HiFi and three major imputation software tools. Our data showed that the three implementations achieve similar accuracy, with implementation B slightly but consistently more accurate than A and C. HiFi performed better on haplotype imputation accuracy, while the three other software tools performed slightly better on genotype imputation accuracy. These data may provide a strategy for choosing an optimal phasing pipeline and software for different studies.

  7. Accuracy of stone casts obtained by different impression materials.

    PubMed

    Faria, Adriana Cláudia Lapria; Rodrigues, Renata Cristina Silveira; Macedo, Ana Paula; Mattos, Maria da Gloria Chiarello de; Ribeiro, Ricardo Faria

    2008-01-01

    Several impression materials are available in the Brazilian marketplace for use in oral rehabilitation. The aim of this study was to compare the accuracy of different impression materials used for fixed partial dentures following the manufacturers' instructions. A master model representing a partially edentulous mandibular right hemi-arch segment whose teeth were prepared to receive full crowns was used. Custom trays were prepared with auto-polymerizing acrylic resin and impressions were performed with a dental surveyor, standardizing the path of insertion and removal of the tray. Alginate and elastomeric materials were used and stone casts were obtained after the impressions. For the silicones, impression techniques were also compared. To determine the impression materials' accuracy, digital photographs of the master model and of the stone casts were taken and the discrepancies between them were measured. The data were subjected to analysis of variance and Duncan's complementary test. Polyether and addition silicone following the single-phase technique were statistically different from alginate, condensation silicone and addition silicone following the double-mix technique (p ≤ 0.05), presenting smaller discrepancies. However, condensation silicone was similar (p ≥ 0.05) to alginate and addition silicone following the double-mix technique, but different from polysulfide. The results led to the conclusion that different impression materials and techniques influenced the stone casts' accuracy, such that polyether, polysulfide and addition silicone following the single-phase technique were more accurate than the other materials.

  8. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most transponder tracking systems are still in an early stage of development and not yet ready for clinical use, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy, which includes transponder technology. However, it has not been used for computer-assisted interventions (CAI) in general, nor assessed for accuracy in a standardized manner, so far. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides precision and accuracy below 1 mm in ideal clinical environments, comparable to other EM tracking systems. As with other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of wireless transponder tracking for many future CAI applications can be regarded as extremely high.

  9. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested on both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. Accuracy of the StepWatch on the stairs was significantly less accurate (p < 0.001) when comparing surfaces using repeated measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use.

  10. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  11. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  12. Evaluation of impression accuracy for a four-implant mandibular model--a digital approach.

    PubMed

    Stimmelmayr, Michael; Erdelt, Kurt; Güth, Jan-Frederik; Happe, Arndt; Beuer, Florian

    2012-08-01

    Implant-supported prosthodontics requires precise impressions to achieve a passive fit. Since the early 1990s, in vitro studies comparing different implant impression techniques have been performed, capturing the data mostly mechanically. The purpose of this study was to evaluate the accuracy of three different impression techniques digitally. Dental implants were inserted bilaterally in ten polymer lower-arch models at the positions of the first molars and canines. From each original model, three different impressions (A, transfer; B, pick-up; and C, splinted pick-up) were taken. Scan-bodies were mounted on the implants of the polymer models and on the lab analogues of the stone models and digitized. The scan-bodies in position 36 (FDI) of the digitized original model and casts were superimposed, and the deviations of the remaining three scan-bodies were measured three-dimensionally. The systematic error of digitizing the models was 13 μm for the polymer model and 5 μm for the stone model. The mean discrepancies between the original model and the stone casts were 124 (±34) μm for the transfer technique, 116 (±46) μm for the pick-up technique, and 80 (±25) μm for the splinted pick-up technique. There were statistically significant discrepancies between the evaluated impression techniques (p ≤ 0.025; ANOVA test). The splinted pick-up impression showed the least deviation between original and stone model; the transfer and pick-up techniques showed similar results. For better accuracy of implant-supported prosthodontics, the splinted pick-up technique should be used for impressions of four implants evenly spread in edentulous jaws.

  13. Improvements in ECG accuracy for diagnosis of left ventricular hypertrophy in obesity

    PubMed Central

    Rider, Oliver J; Ntusi, Ntobeko; Bull, Sacha C; Nethononda, Richard; Ferreira, Vanessa; Holloway, Cameron J; Holdsworth, David; Mahmod, Masliza; Rayner, Jennifer J; Banerjee, Rajarshi; Myerson, Saul; Watkins, Hugh; Neubauer, Stefan

    2016-01-01

    Objectives The electrocardiogram (ECG) is the most commonly used tool to screen for left ventricular hypertrophy (LVH), and yet current diagnostic criteria are insensitive in a modern, increasingly overweight society. We propose a simple adjustment to improve diagnostic accuracy across body weights and improve the sensitivity of this universally available technique. Methods Overall, 1295 participants were included—821 with a wide range of body mass index (BMI 17.1–53.3 kg/m2) initially underwent cardiac magnetic resonance evaluation of anatomical left ventricular (LV) axis, LV mass and 12-lead surface ECG in order to generate an adjustment factor applied to the Sokolow–Lyon criteria. This factor was then validated in a second cohort (n=520, BMI 15.9–63.2 kg/m2). Results When matched for LV mass, the combination of leftward anatomical axis deviation and increased BMI resulted in a reduction of the Sokolow–Lyon index, by 4 mm in overweight and 8 mm in obesity. After adjusting for this in the initial cohort, the sensitivity of the Sokolow–Lyon index increased (overweight: 12.8% to 30.8%, obese: 3.1% to 27.2%), approaching that seen in normal weight (37.8%). Similar results were achieved in the validation cohort (sensitivity increased in overweight: 8.3% to 39.1%, obese: 9.4% to 25.0%), again approaching normal weight (39.0%). Importantly, specificity remained excellent (>93.1%). Conclusions Adjusting the Sokolow–Lyon index for BMI (overweight +4 mm, obesity +8 mm) improves the diagnostic accuracy for detecting LVH. As the ECG remains the most widely used screening tool for LVH worldwide, implementing these findings should translate into significant clinical benefit. PMID:27486142

  14. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the differences in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and an under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or of the prevalence of true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
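
    The two-phase idea above can be sketched in a few lines. The following is a hypothetical toy simulation, not the authors' code: the population size, prevalence, error rates, and sampling fractions are all invented, and the paper's variance-optimal allocation is replaced by a simple heuristic (sample the discordant strata at a much higher rate), with Horvitz-Thompson weighting to estimate the difference in sensitivities.

```python
import random

random.seed(0)

# Phase 1: observe classifications (a, b) from two rules for all N units.
N = 20000
population = []
for _ in range(N):
    truth = random.random() < 0.3          # true-positive prevalence (invented)
    a = (random.random() < 0.9) if truth else (random.random() < 0.1)
    b = (random.random() < 0.8) if truth else (random.random() < 0.1)
    population.append((a, b, truth))

# Phase 2: ascertain the gold standard with stratum-specific sampling rates,
# over-sampling the discordant (a != b) strata as the paper recommends.
rates = {(True, True): 0.05, (False, False): 0.05,
         (True, False): 0.5, (False, True): 0.5}

# Horvitz-Thompson estimates of each rule's sensitivity: weight every
# verified true positive by the inverse of its sampling probability.
num_a = num_b = den = 0.0
for a, b, truth in population:
    r = rates[(a, b)]
    if random.random() < r and truth:
        w = 1.0 / r
        den += w
        num_a += w * a
        num_b += w * b

# Estimated difference in sensitivities, close to the true 0.9 - 0.8.
print(round(num_a / den - num_b / den, 3))
```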

  15. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
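
    The low-precision half of the experiment can be caricatured in a few lines. The sketch below is an illustration only, not the authors' code: it integrates the one-level Lorenz '96 model, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, once in double precision and once with every tendency rounded to half precision to emulate inexact arithmetic, then compares the resulting large-scale state. The paper's scale separation and bit-flip fault model are not reproduced, and the step size and run length are invented.

```python
import numpy as np

def tendency(x, F=8.0):
    # One-level Lorenz '96 tendencies on a periodic ring of variables.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, steps, dt=0.005, precision=np.float64):
    # Forward-Euler integration; each tendency is truncated to the
    # chosen precision before the update, emulating inexact hardware.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * tendency(x).astype(precision).astype(np.float64)
    return x

rng = np.random.default_rng(1)
x0 = 8.0 + rng.standard_normal(40)

x64 = integrate(x0, 2000)                        # reference, double precision
x16 = integrate(x0, 2000, precision=np.float16)  # emulated low precision

# Individual trajectories diverge (the system is chaotic), but the
# low-precision run stays in the same regime as the reference.
print(abs(x64.mean() - x16.mean()))
```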

  16. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  17. Educational Achievement and the Navajo.

    ERIC Educational Resources Information Center

    Haas, John; Melville, Robert

    A study was devised to appraise the academic achievement of Navajo students living in dormitories away from the Indian reservation. The following seven factors were chosen to be investigated as being directly related to achievement--(1) intelligence, (2) reading ability, (3) anxiety, (4) self-concept, (5) motivation, (6) verbal development, (7)…

  18. Sociocultural Origins of Achievement Motivation

    ERIC Educational Resources Information Center

    Maehr, Martin L.

    1977-01-01

    Presents a theoretical review of work on sociocultural influences on achievement, focusing on a critical evaluation of the work of David McClelland. Offers an alternative conception of achievement motivation which stresses the role of contextual and situational factors in addition to personality factors. Available from: Transaction Periodicals…

  19. Raising Boys' Achievement in Schools.

    ERIC Educational Resources Information Center

    Bleach, Kevan, Ed.

    This book offers insights into the range of strategies and good practice being used to raise the achievement of boys. Case studies by school-based practitioners suggest ideas and measures to address the issue of achievement by boys. The contributions are: (1) "Why the Likely Lads Lag Behind" (Kevan Bleach); (2) "Helping Boys Do…

  20. Teaching the Low Level Achiever.

    ERIC Educational Resources Information Center

    Salomone, Ronald E., Ed.

    1986-01-01

    Intended for teachers of the English language arts, the articles in this issue offer suggestions and techniques for teaching the low level achiever. Titles and authors of the articles are as follows: (1) "A Point to Ponder" (Rachel Martin); (2) "Tracking: A Self-Fulfilling Prophecy of Failure for the Low Level Achiever" (James Christopher Davis);…

  1. Early Intervention and Student Achievement

    ERIC Educational Resources Information Center

    Hormes, Mridula T.

    2009-01-01

    The United States Department of Education has been rigorous in holding all states accountable with regard to student achievement. The No Child Left Behind Act of 2001 clearly laid out federal mandates for all schools to follow. K-12 leaders of public schools are very aware of the fact that results in terms of student achievement need to improve…

  2. Parental Involvement and Academic Achievement

    ERIC Educational Resources Information Center

    Goodwin, Sarah Christine

    2015-01-01

    This research study examined the correlation between student achievement and parent's perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated parents of students with higher level of achievement perceived less demand or invitations…

  3. Asperger Syndrome and Academic Achievement.

    ERIC Educational Resources Information Center

    Griswold, Deborah E.; Barnhill, Gena P.; Myles, Brenda Smith; Hagiwara, Taku; Simpson, Richard L.

    2002-01-01

    A study focused on identifying the academic characteristics of 21 children and youth who have Asperger syndrome. Students had an extraordinary range of academic achievement scores, extending from significantly above average to far below grade level. Lowest achievement scores were shown for numerical operations, listening comprehension, and written…

  4. Perils of Standardized Achievement Testing

    ERIC Educational Resources Information Center

    Haladyna, Thomas M.

    2006-01-01

    This article argues that the validity of standardized achievement test-score interpretation and use is problematic; consequently, confidence and trust in such test scores may often be unwarranted. The problem is particularly severe in high-stakes situations. This essay provides a context for understanding standardized achievement testing, then…

  5. Stress Correlates and Academic Achievement.

    ERIC Educational Resources Information Center

    Bentley, Donna Anderson; And Others

    An ongoing concern for educators is the identification of factors that contribute to or are associated with academic achievement; one such group of variables that has received little attention are those involving stress. The relationship between perceived sources of stress and academic achievement was examined to determine if reactions to stress…

  6. School Size and Student Achievement

    ERIC Educational Resources Information Center

    Riggen, Vicki

    2013-01-01

    This study examined whether a relationship between high school size and student achievement exists in Illinois public high schools in reading and math, as measured by the Prairie State Achievement Exam (PSAE), which is administered to all Illinois 11th-grade students. This study also examined whether the factors of socioeconomic status, English…

  7. The baryonic self similarity of dark matter

    SciTech Connect

    Alard, C.

    2014-06-20

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by baryonic feedback. Using momentum-driven winds as a model of baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark-matter- and baryon-dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^(1/4). These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  8. Measure of Node Similarity in Multilayer Networks

    PubMed Central

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H.; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceship. PMID:27300084

  9. Gait Signal Analysis with Similarity Measure

    PubMed Central

    Shin, Seungsoo

    2014-01-01

    Human gait decisions were made with the help of a purpose-designed similarity measure. The gait signal was acquired with a hardware setup comprising an all-in-one sensor, a control unit, and a notebook with connector. Each gait signal was treated as high-dimensional data, which was therefore analysed with a heuristic technique, the similarity measure. Patterns such as walking, sitting, standing, and stepping up were obtained experimentally. From the results of the analysis, we identified the relation between overlapped and non-overlapped data, illustrated the similarity measure analysis, and compared it with a conventional similarity measure. The non-overlapped data similarity analysis provided a clue to solving the similarity of high-dimensional data. The high-dimensional data analysis was designed with consideration of neighborhood information. The proposed similarity measure was applied to identify the behavior patterns of different persons, and different behaviors of the same person. The analysis can be extended to a health-monitoring system, especially for elderly persons. PMID:25110724

  10. Improved personalized recommendation based on a similarity network

    NASA Astrophysics Data System (ADS)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms allocate resources uniformly. That is not reasonable, because uniform allocation reflects neither a user's choice preference nor the influence between users, which leads to non-personalized recommendation results. We propose a personalized recommendation approach that combines a similarity function with the bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system, making the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms on three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
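
    The baseline resource-allocation scheme the paper improves on, network-based inference (ProbS), can be sketched on a toy bipartite network. The matrix, users, and items below are invented for illustration, and the paper's similarity-weighted allocation is not reproduced: this is only the uniform allocation that their similarity network replaces.

```python
import numpy as np

# Toy user-item bipartite adjacency matrix (rows: users, cols: items).
A = np.array([[1, 1, 0, 0],    # user 0 collected items 0 and 1
              [1, 0, 1, 0],    # user 1 collected items 0 and 2
              [0, 1, 1, 1.]])  # user 2 collected items 1, 2 and 3

k_user = A.sum(axis=1)   # user degrees
k_item = A.sum(axis=0)   # item degrees

# Resource spreads item -> user -> item, divided equally at each step:
#   W[i, j] = (1 / k_item[j]) * sum_u A[u, i] * A[u, j] / k_user[u]
W = (A.T @ (A / k_user[:, None])) / k_item[None, :]

# Recommendation scores for user 0: spread resource from the items the
# user already collected, then rank the items not yet collected.
scores = W @ A[0]
best = int(np.argmax(np.where(A[0] == 0, scores, -np.inf)))
print(best)  # item 2 (shared with user 1, who also collected item 0)
```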

  11. An automated method for the evaluation of the pointing accuracy of sun-tracking devices

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dietmar J.; Rieder, Harald E.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz

    2016-04-01

    The accuracy of measurements of solar radiation (direct and diffuse radiation) depends significantly on the accuracy of the operational sun-tracking device. Rigid targets for instrument performance and operation are therefore specified for international monitoring networks, such as the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices fulfilling these accuracy targets are available from various instrument manufacturers; however, none of the commercially available systems includes a secondary accuracy control system that would allow platform operators to independently validate the pointing accuracy of sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent and cost-effective method for evaluating the pointing accuracy of sun-tracking devices. We detail the monitoring system setup, its design and specifications, and results from its application to the sun-tracking system operated at the Austrian RADiation network (ARAD) site Kanzelhöhe Observatory (KSO). Results from KSO-STREAMS (for mid-March to mid-June 2015) show that the tracking accuracy of the device operated at KSO lies well within BSRN specifications (i.e. 0.1 degree accuracy). We contrast results during clear-sky and partly cloudy conditions, documenting sun-tracking performance at manufacturer-specified accuracies for active tracking (0.02 degrees), and highlight accuracies achieved during passive tracking, i.e. periods with less than 300 W m-2 direct radiation. Furthermore we detail limitations to tracking surveillance during overcast conditions and periods of partial solar limb coverage by clouds.

  12. Measurement accuracies in band-limited extrapolation

    NASA Technical Reports Server (NTRS)

    Kritikos, H. N.

    1982-01-01

    The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. For sampling in the interval |x| ≤ L/b, b ≥ 1, the signal must be known within an error ε_N given approximately by ε_N² ≈ (1/4)(2kL')³ [(e/8b)(L/L')]^(2kL'), where L is the physical aperture, L' is the extrapolated aperture, and k = 2π/λ.

  13. The measurement accuracy of passive radon instruments.

    PubMed

    Beck, T R; Foerster, E; Buchröder, H; Schmidt, V; Döring, J

    2014-01-01

    This paper analyses the data gathered from interlaboratory comparisons of passive radon instruments over 10 y with respect to measurement accuracy, discussed in terms of the systematic and the random measurement error. The analysis shows that the systematic measurement error of most instruments issued by professional laboratory services can be within a range of ±10 % of the true value. A single radon measurement has an additional random measurement error, which is in the range of up to ±15 % for high exposures to radon (>2000 kBq h m(-3)); the random measurement error increases for lower exposures. The analysis applies especially to instruments with solid-state nuclear track detectors and results in proposed criteria for testing measurement accuracy. Instruments with electrets and charcoal have also been considered, but the small amount of data permits only a qualitative discussion.

  14. Why do delayed summaries improve metacomprehension accuracy?

    PubMed

    Anderson, Mary C M; Thiede, Keith W

    2008-05-01

    We showed that metacomprehension accuracy improved when participants (N=87 college students) wrote summaries of texts prior to judging their comprehension; however, accuracy only improved when summaries were written after a delay, not when written immediately after reading. We evaluated two hypotheses proposed to account for this delayed-summarization effect (the accessibility hypothesis and the situation model hypothesis). The data suggest that participants based metacomprehension judgments more on the gist of texts when they generated summaries after a delay, whereas they based judgments more on details when they generated summaries immediately after reading. Focusing on information relevant to the situation model of a text (its gist) produced higher levels of metacomprehension accuracy, which is consistent with the situation model hypothesis.

  15. Increasing Accuracy and Increasing Tension in Ho

    NASA Astrophysics Data System (ADS)

    Freedman, Wendy L.

    2017-01-01

    The Hubble Constant, Ho, provides a measure of the current expansion rate of the universe. In recent decades, there has been a huge increase in the accuracy with which extragalactic distances, and hence Ho, can be measured. While the historical factor-of-two uncertainty in Ho has been resolved, a new discrepancy has arisen between the values of Ho measured in the local universe, and that estimated from cosmic microwave background measurements, assuming a Lambda cold dark matter model. I will review the advances that have led to the increase in accuracy in measurements of Ho, as well as describe exciting future prospects with the James Webb Space Telescope (JWST) and Gaia, which will make it feasible to measure extragalactic distances at percent-level accuracy in the next decade.

  16. Identification and classification of similar looking food grains

    NASA Astrophysics Data System (ADS)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples and to extract 18 color Hue-Saturation-Value (HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet-texture, and their combined model. An SVM model is designed for the color-HSV features on the same set of samples. For the ANN-based models, classification accuracies range from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model. The color-HSV-based SVM model achieves a classification accuracy ranging from 80% to 90%. The training time required for the SVM-based model is substantially less than for the ANN on the same set of images.

  17. Accuracy assessment of seven global land cover datasets over China

    NASA Astrophysics Data System (ADS)

    Yang, Yongke; Xiao, Pengfeng; Feng, Xuezhi; Li, Haixing

    2017-03-01

    Land cover (LC) is a vital foundation of Earth science. Up to now, several global LC datasets have arisen through the efforts of many scientific communities. To provide guidelines for data usage over China, nine LC maps from seven global LC datasets (IGBP DISCover, UMD, GLC, MCD12Q1, GLCNMO, CCI-LC, and GlobeLand30) were evaluated in this study. First, we compared their similarities and discrepancies in both area and spatial patterns, and analysed their inherent relations to data sources and classification schemes and methods. Next, five sets of validation sample units (VSUs) were collected to calculate their accuracy quantitatively. Further, we built a spatial analysis model and depicted their spatial variation in accuracy based on the five sets of VSUs. The results show that there are evident discrepancies among these LC maps in both area and spatial patterns. For LC maps produced by different institutes, GLC 2000 and CCI-LC 2000 have the highest overall spatial agreement (53.8%). For LC maps produced by the same institutes, the overall spatial agreement of CCI-LC 2000 and 2010, and of MCD12Q1 2001 and 2010, reaches up to 99.8% and 73.2%, respectively; more effort is still needed, however, if these LC maps are to be used as time-series model input, since both CCI-LC and MCD12Q1 fail to represent the rapidly changing trends of several key LC classes in the early 21st century, in particular urban and built-up, snow and ice, water bodies, and permanent wetlands. With the highest spatial resolution, the overall accuracy of GlobeLand30 2010 is 82.39%. For the other six LC datasets with coarse resolution, CCI-LC 2010/2000 has the highest overall accuracy, followed in turn by MCD12Q1 2010/2001, GLC 2000, GLCNMO 2008, IGBP DISCover, and UMD. While all maps exhibit high accuracy in homogeneous regions, local accuracies in other regions are quite different, particularly in the Farming-Pastoral Zone of North China, the mountains in Northeast China, and the Southeast Hills. Special

  18. Similarities between catalase and cytosolic epoxide hydrolase.

    PubMed

    Guenthner, T M; Qato, M; Whalen, R; Glomb, S

    1989-01-01

    Cytosolic epoxide hydrolase, measured as trans-stilbene oxide hydrolase activity, was isolated and purified from human and guinea pig liver cytosol. Antiserum to the guinea pig liver preparation reacted strongly with bovine liver catalase. We determined that this lack of selectivity of the antiserum was due to catalase contamination of the epoxide hydrolase preparation. We also determined that several commercial catalase preparations are contaminated with cytosolic epoxide hydrolase. Our human epoxide hydrolase preparation contained no detectable catalase contamination, yet antiserum to this protein also cross-reacted slightly with catalase, indicating some intrinsic similarity between the two enzymes. We conclude that catalase and cytosolic epoxide hydrolase contain some similar immunogenic epitopes, and we surmise that similarities between the subunits of these two enzymes may lead to their partial copurification. Functional similarities between the two enzymes are also demonstrated, as several compounds that inhibit catalase are also shown to inhibit cytosolic epoxide hydrolase activity in the same concentration range and rank order.

  19. Evaluating Similarity Measures for Brain Image Registration.

    PubMed

    Razlighi, Q R; Kehtarnavaz, N; Yousefi, S

    2013-10-01

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration.
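
    The mutual-information family of similarity measures that studies like this evaluate can be illustrated with a minimal plug-in estimator. This is a hedged sketch of classic mutual information only; the paper's normalized spatial mutual information additionally encodes spatial structure and is not reproduced here, and the images below are random test data.

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    # Plug-in mutual information from the joint intensity histogram:
    # MI = sum_xy p(x, y) * log( p(x, y) / (p(x) * p(y)) ).
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # avoid log(0) on empty bins
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# An image is maximally informative about itself, and carries almost
# no information about unrelated noise.
mi_self = mutual_information(img, img)
mi_noise = mutual_information(img, rng.random((64, 64)))
print(mi_self > mi_noise)  # True
```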

  20. Self-similarity in Laplacian growth

    SciTech Connect

    Mineev-weinstein, Mark; Zabrodin, Anton; Abanov, Artem

    2008-01-01

    We consider Laplacian Growth of self-similar domains in different geometries. Self-similarity determines the analytic structure of the Schwarz function of the moving boundary. The knowledge of this analytic structure allows us to derive the integral equation for the conformal map. It is shown that solutions to the integral equation obey also a second-order differential equation which is the 1D Schrödinger equation with the sinh^(-2) potential. The solutions, which are expressed through the Gauss hypergeometric function, characterize the geometry of self-similar patterns in a wedge. We also find the potential for the Coulomb gas representation of the self-similar Laplacian growth in a wedge and calculate the corresponding free energy.

  1. Heat transfer in geometrically similar cylinders

    NASA Technical Reports Server (NTRS)

    Riekert, P; Held, A

    1941-01-01

    The power and heat-stress conditions of geometrically similar engines are discussed. The advantages accruing from smaller cylinder dimensions are higher specific horsepower, lower weight per horsepower, lower piston temperature, and less frontal area, with reduced detonation tendency.

  2. Self-similarity in active colloid motion

    NASA Astrophysics Data System (ADS)

    Constant, Colin; Sukhov, Sergey; Dogariu, Aristide

    The self-similarity of displacements among randomly evolving systems has been used to describe the foraging patterns of animals and predict the growth of financial systems. At micron scales, the motion of colloidal particles can be analyzed by sampling their spatial displacement in time. For self-similar systems in equilibrium, the mean squared displacement increases linearly in time. However, external forces can take the system out of equilibrium, creating active colloidal systems, and making this evolution more complex. A moment scaling spectrum of the distribution of particle displacements quantifies the degree of self-similarity in the colloid motion. We will demonstrate that, by varying the temporal and spatial characteristics of the external forces, one can control the degree of self-similarity in active colloid motion.

  3. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    EPA Science Inventory

    Multilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...

  4. Breastfeeding and educational achievement at age 5.

    PubMed

    Heikkilä, Katriina; Kelly, Yvonne; Renfrew, Mary J; Sacker, Amanda; Quigley, Maria A

    2014-01-01

    Our aim was to investigate whether the duration of breastfeeding, at all or exclusively, is associated with educational achievement at age 5. We used data from a prospective, population-based UK cohort study, the Millennium Cohort Study (MCS). 5489 children from a White ethnic background born at term in 2000-2001, attending school in England in 2006, were included in our analyses. Educational achievement was measured using the Foundation Stage Profile (FSP), a statutory assessment undertaken by teachers at the end of the child's first school year. Breastfeeding duration was ascertained from interviews with the mother when the child was 9 months old. We used modified Poisson regression to model the association of breastfeeding duration with having reached a good level of achievement overall (≥78 overall points and ≥6 in 'personal, social and emotional development' and 'communication, language and literacy' points) and in specific areas (≥6 points) of development. Children who had been breastfed for up to 2 months were more likely to have reached a good level of overall achievement [adjusted rate ratio (RR): 1.09, 95% confidence interval (CI): 1.01, 1.19] than never breastfed children. This association was more marked in children breastfed for 2-4 months (adjusted RR: 1.17, 95% CI: 1.07, 1.29) and in those breastfed for longer than 4 months (adjusted RR: 1.16, 95% CI: 1.07, 1.26). The associations of exclusive breastfeeding with the educational achievement were similar. Our findings suggest that longer duration of breastfeeding, at all or exclusively, is associated with better educational achievement at age 5.
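
    The adjusted rate ratios in this study come from modified Poisson regression; the unadjusted analogue can be sketched from a simple 2x2 table. A minimal illustration with hypothetical counts (not the MCS data), using a Wald confidence interval on the log scale:

```python
import math

def rate_ratio_ci(events_exp, n_exp, events_ref, n_ref, z=1.96):
    """Crude risk ratio with a Wald 95% CI on the log scale.
    (The study used modified Poisson regression with covariate
    adjustment; this is only the unadjusted analogue.)"""
    rr = (events_exp / n_exp) / (events_ref / n_ref)
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ref - 1 / n_ref)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 600/1000 breastfed vs 500/1000 never-breastfed
# children reaching a good level of achievement.
rr, lo, hi = rate_ratio_ci(600, 1000, 500, 1000)
```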

  5. Energy expenditure prediction via a footwear-based physical activity monitor: Accuracy and comparison to other devices

    NASA Astrophysics Data System (ADS)

    Dannecker, Kathryn

    2011-12-01

    Accurately estimating free-living energy expenditure (EE) is important for monitoring or altering energy balance and quantifying levels of physical activity. The use of accelerometers to monitor physical activity and estimate physical activity EE is common in both research and consumer settings. Recent advances in physical activity monitors include the ability to identify specific activities (e.g. stand vs. walk), which has resulted in improved EE estimation accuracy. Recently, a multi-sensor footwear-based physical activity monitor that is capable of achieving 98% activity identification accuracy has been developed. However, no study has evaluated the EE estimation accuracy of this monitor or compared it to other similar devices. Purpose. To determine the accuracy of physical activity EE estimation of a footwear-based physical activity monitor that uses an embedded accelerometer and insole pressure sensors, and to compare this accuracy against a variety of research and consumer physical activity monitors. Methods. Nineteen adults (10 male, 9 female), mass: 75.14 (17.1) kg, BMI: 25.07 (4.6) kg/m^2 (mean (SD)), completed a four hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as three physical activity monitoring devices used in research: hip-mounted Actical and Actigraph accelerometers and a multi-accelerometer IDEEA device with sensors secured to the limb and chest. In addition, participants wore two consumer devices: Philips DirectLife and Fitbit. Each individual performed a series of randomly assigned and ordered postures/activities including lying, sitting (quietly and using a computer), standing, walking, stepping, cycling, sweeping, as well as a period of self-selected activities. We developed branched (i.e. activity specific) linear regression models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Results. The shoe

  6. Magnus expansion and in-medium similarity renormalization group

    NASA Astrophysics Data System (ADS)

    Morris, T. D.; Parzuchowski, N. M.; Bogner, S. K.

    2015-09-01

    We present an improved variant of the in-medium similarity renormalization group (IM-SRG) based on the Magnus expansion. In the new formulation, one solves flow equations for the anti-Hermitian operator that, upon exponentiation, yields the unitary transformation of the IM-SRG. The resulting flow equations can be solved using a first-order Euler method without any loss of accuracy, resulting in substantial memory savings and modest computational speedups. Since one obtains the unitary transformation directly, the transformation of additional operators beyond the Hamiltonian can be accomplished with little additional cost, in sharp contrast to the standard formulation of the IM-SRG. Ground state calculations of the homogeneous electron gas (HEG) and 16O nucleus are used as test beds to illustrate the efficacy of the Magnus expansion.
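
    The flow-equation idea can be illustrated on a toy 2x2 Hamiltonian integrated with plain first-order Euler stepping, as the abstract suggests suffices. This is a sketch of a generic SRG-style flow with the Wegner generator, not the IM-SRG Magnus machinery itself; the matrix, step size, and generator choice are illustrative:

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def srg_flow(h, ds=1e-3, steps=20000):
    """First-order Euler integration of the toy SRG flow
    dH/ds = [[Hd, H], H], where Hd is the diagonal part of H
    (Wegner generator). The flow suppresses off-diagonal coupling
    while (approximately, under Euler) preserving the spectrum."""
    for _ in range(steps):
        hd = [[h[0][0], 0.0], [0.0, h[1][1]]]
        eta = comm(hd, h)  # antisymmetric generator
        dh = comm(eta, h)
        h = [[h[i][j] + ds * dh[i][j] for j in range(2)] for i in range(2)]
    return h

# A toy 2x2 "Hamiltonian" with off-diagonal coupling.
h_final = srg_flow([[1.0, 0.5], [0.5, 3.0]])
```

    The trace is exactly conserved by the commutator structure, while the off-diagonal element flows to zero.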

  7. A novel similarity comparison approach for dynamic ECG series.

    PubMed

    Yin, Hong; Zhu, Xiaoqian; Ma, Shaodong; Yang, Shuqiang; Chen, Liqian

    2015-01-01

    The heart sound signal is a reflection of heart and vascular system motion. Long-term continuous electrocardiogram (ECG) contains important information which can be helpful to prevent heart failure. A single piece of a long-term ECG recording usually consists of more than one hundred thousand data points in length, making it difficult to derive hidden features that may be reflected through dynamic ECG monitoring, which is also very time-consuming to analyze. In this paper, a Dynamic Time Warping based on MapReduce (MRDTW) is proposed to make prognoses of possible lesions in patients. Through comparison of a real-time ECG of a patient with the reference sets of normal and problematic cardiac waveforms, the experimental results reveal that our approach not only retains high accuracy, but also greatly improves the efficiency of the similarity measure in dynamic ECG series.
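
    The core similarity measure here, Dynamic Time Warping, is a short dynamic program; the paper's contribution is parallelizing it with MapReduce, which this sketch does not attempt. A minimal DTW over numeric series:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two numeric series via the
    standard O(len(a) * len(b)) dynamic program with absolute-difference
    local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-shifted copy of a waveform stays close under DTW, while a
# flat signal does not (toy stand-ins for ECG beats).
ref     = [0, 1, 2, 1, 0, 0, 0]
shifted = [0, 0, 1, 2, 1, 0, 0]
flat    = [0, 0, 0, 0, 0, 0, 0]
```

    DTW's tolerance to time shifts is what makes it suitable for comparing beats that are morphologically similar but not aligned sample-by-sample.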

  8. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends in the main on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Considering the 420 comets that had been seen before Halley's era of orbital calculation (1695 - 1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: first, do all the orbits listed by Halley have the same accuracy? And, secondly, how accurate was Halley's method of calculation?

  9. Size-Dependent Accuracy of Nanoscale Thermometers.

    PubMed

    Alicki, Robert; Leitner, David M

    2015-07-23

    The accuracy of two classes of nanoscale thermometers is estimated in terms of size and system-dependent properties using the spin-boson model. We consider solid state thermometers, where the energy splitting is tuned by thermal properties of the material, and fluorescent organic thermometers, in which the fluorescence intensity depends on the thermal population of conformational states of the thermometer. The results of the theoretical model compare well with the accuracy reported for several nanothermometers that have been used to measure local temperature inside living cells.

  10. Estimation and Accuracy after Model Selection

    PubMed Central

    Efron, Bradley

    2013-01-01

    Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the Cp criterion, and a Lasso-based estimation problem. PMID:25346558
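
    Bootstrap smoothing itself is easy to sketch: average a discontinuous, selection-based estimator over bootstrap resamples. The estimator and data below are illustrative stand-ins, not Efron's Cp or Lasso examples, and the paper's accuracy formula for the smoothed estimator is not reproduced here:

```python
import random
import statistics

def select_and_estimate(sample):
    """A discontinuous selection-based estimator: report the sample mean
    only if it clears a crude 2-standard-error threshold, else report 0."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m if abs(m) > 2 * se else 0.0

def bagged(sample, n_boot=500, seed=0):
    """Bootstrap smoothing (bagging): average the selection-based
    estimator over bootstrap resamples to tame its discontinuity."""
    rng = random.Random(seed)
    n = len(sample)
    reps = [select_and_estimate([rng.choice(sample) for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.mean(reps)

# Illustrative data chosen to sit near the selection threshold.
data = [0.1, 0.5, -0.2, 0.9, 0.3, -0.1, 0.4, 0.2]
raw = select_and_estimate(data)   # jumps discontinuously with the data
smooth = bagged(data)             # smoothed (bagged) estimate
```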

  11. Predictive accuracy in the neuroprediction of rearrest

    PubMed Central

    Aharoni, Eyal; Mallett, Joshua; Vincent, Gina M.; Harenski, Carla L.; Calhoun, Vince D.; Sinnott-Armstrong, Walter; Gazzaniga, Michael S.; Kiehl, Kent A.

    2014-01-01

    A recently published study by the present authors (Aharoni et al., 2013) reported evidence that functional changes in the anterior cingulate cortex (ACC) within a sample of 96 criminal offenders who were engaged in a Go/No-Go impulse control task significantly predicted their rearrest following release from prison. In an extended analysis, we use discrimination and calibration techniques to test the accuracy of these predictions relative to more traditional models and their ability to generalize to new observations in both full and reduced models. Modest to strong discrimination and calibration accuracy were found, providing additional support for the utility of neurobiological measures in predicting rearrest. PMID:24720689

  12. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  13. On distributional assumptions and whitened cosine similarities.

    PubMed

    Loog, Marco

    2008-06-01

    Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures,'' IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007). This communication makes the observation that some of the distributional assumptions made to derive this measure are very restrictive and, considered simultaneously, even inconsistent.

  14. Interlinguistic similarity and language death dynamics

    NASA Astrophysics Data System (ADS)

    Mira, J.; Paredes, Á.

    2005-03-01

    We analyze the time evolution of a system of two coexisting languages (Castillian Spanish and Galician, both spoken in northwest Spain) in the framework of a model given by Abrams and Strogatz (Nature 424 (2003) 900). It is shown that, contrary to the model's initial prediction, a stable bilingual situation is possible if the languages in competition are similar enough. Similarity is described with a simple parameter, whose value can be estimated from fits of the data.
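
    The underlying Abrams-Strogatz dynamics is a one-variable ODE that is straightforward to integrate. The sketch below shows the original model's extinction outcome; Mira and Paredes's bilingual/similarity extension is not reproduced, and the parameter values are illustrative:

```python
def abrams_strogatz(x0, s, a=1.31, dt=0.01, steps=100000):
    """Forward-Euler integration of the Abrams-Strogatz language model:
        dx/dt = (1 - x) * s * x**a - x * (1 - s) * (1 - x)**a
    where x is the fraction speaking language X and s its relative
    status (a = 1.31 is the exponent fitted by Abrams and Strogatz)."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x ** a - x * (1 - s) * (1 - x) ** a
        x += dt * dx
    return x

# With unequal status, the lower-status language approaches extinction;
# Mira and Paredes show similarity between the languages can stabilize
# coexistence instead.
x_final = abrams_strogatz(x0=0.5, s=0.6)
```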

  15. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    PubMed

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems in its generic framework for domain-independent quiz generation as well as in the ability of controlling the difficulty level of the generated quizzes. Difficulty scaling is non-trivial, and it is fundamentally related to cognitive science. We approach the problem with a new angle by perceiving the level of knowledge difficulty as a similarity measure problem and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47 % gain in clustering accuracy. In addition, we discovered in the human quiz test that the model accuracy indeed shows a strong correlation with the pairwise quiz similarity.

  16. Similarity searching in large combinatorial chemistry spaces

    NASA Astrophysics Data System (ADS)

    Rarey, Matthias; Stahl, Martin

    2001-06-01

    We present a novel algorithm, called Ftrees-FS, for similarity searching in large chemistry spaces based on dynamic programming. Given a query compound, the algorithm generates sets of compounds from a given chemistry space that are similar to the query. The similarity search is based on the feature tree similarity measure representing molecules by tree structures. This descriptor allows handling combinatorial chemistry spaces as a whole instead of looking at subsets of enumerated compounds. Within few minutes of computing time, the algorithm is able to find the most similar compound in very large spaces as well as sets of compounds at an arbitrary similarity level. In addition, the diversity among the generated compounds can be controlled. A set of 17 000 fragments of known drugs, generated by the RECAP procedure from the World Drug Index, was used as the search chemistry space. These fragments can be combined to more than 1018 compounds of reasonable size. For validation, known antagonists/inhibitors of several targets including dopamine D4, histamine H1, and COX2 are used as queries. Comparison of the compounds created by Ftrees-FS to other known actives demonstrates the ability of the method to jump between structurally unrelated molecule classes.

  17. Distorting limb design for dynamically similar locomotion.

    PubMed Central

    Bullimore, Sharon R.; Burn, Jeremy F.

    2004-01-01

    Terrestrial mammals of different sizes tend to move in a dynamically similar manner when travelling at speeds corresponding to equal values of the Froude number. This means that certain dimensionless locomotor parameters, including peak vertical ground reaction force relative to body weight, stride length relative to leg length and duty factor, are independent of animal size. The Froude number is consequently used to define equivalent speeds for mammals of different sizes. However, most musculoskeletal-tissue properties, including tendon elastic modulus, do not scale in a dynamically similar manner. Therefore, mammals could not be completely dynamically similar, even if perfectly geometrically similar. We argue that, for mammals to move in a dynamically similar manner, they must exhibit systematic 'distortions' of limb structure with size that compensate for the size independence of the tendon elastic modulus. An implication of this is that comparing mammals at equal Froude numbers cannot remove all size-dependent effects. We show that the previously published allometry of limb moment arms is sufficient to compensate for size-independent tendon properties. This suggests that it is an important factor in allowing mammals of different sizes to move in a dynamically similar manner. PMID:15058440
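
    The Froude number used here is the standard locomotion form Fr = v^2 / (g * L), so the "equivalent speed" for a different leg length follows from equating Froude numbers. A small sketch (the animals and numbers are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude(speed, leg_length):
    """Froude number Fr = v^2 / (g * L) for speed v (m/s) and leg length L (m)."""
    return speed ** 2 / (G * leg_length)

def equivalent_speed(speed, leg_length, other_leg_length):
    """Speed at which an animal with a different leg length moves at the
    same Froude number (i.e. the dynamically similar speed)."""
    return speed * math.sqrt(other_leg_length / leg_length)

# Illustrative: a 0.5 m-legged animal at 2 m/s and a 1.0 m-legged animal
# at the equivalent speed share the same Froude number.
v_dog = 2.0
v_horse = equivalent_speed(v_dog, 0.5, 1.0)
```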

  18. Description of interest regions with oriented local self-similarity

    NASA Astrophysics Data System (ADS)

    Liu, Jingneng; Zeng, Guihua

    2012-05-01

    Two novel approaches for extracting distinctive invariant features from interest regions are presented in this paper: Oriented Local Self-Similarities (OLSS,C) and Simplified and Oriented Local Self-Similarities (SOLSS,C), based on a Cartesian location grid and gradient orientation for binning, which are modified versions of the well-known Local Self-Similarities (LSS,LP) feature based on a Log-Polar location grid. They combine the strengths of two well-known approaches, SIFT and LSS (LP), by adopting the SIFT algorithm and using the novel LSS and the proposed simplified LSS features in place of the original gradient feature used in SIFT. Furthermore, a new binning strategy for creating the feature histogram is proposed, in which the gradient orientation for binning is calculated from a larger patch in the diagonal direction. The performance of the oriented OLSS (C) and SOLSS (C) descriptors for image matching is studied through extensive experiments on the INRIA Oxford Affine dataset. Empirical results indicate that the proposed OLSS (C) and SOLSS (C) descriptors yield more stable and robust results, significantly outperform the original LSS (LP) descriptor, and also perform better than SIFT in evaluations with various geometric and photometric transformations.

  19. The Accuracy of Data Collected by Surgical Residents

    PubMed Central

    Shetty, Vivek; Murphy, Debra A.; Zigler, Cory; Resell, Judith; Yamashita, Dennis Duke

    2008-01-01

    BACKGROUND Clinician records are the primary information source for assessing the quality of facial injury care, billing, risk management, planning of health services, and health-system management and reporting. Inaccuracies obscure outcomes assessment and affect the planning of health services. OBJECTIVES We sought to determine the accuracy of clinician-collected data by comparing them to similar information elicited by professional interviewers. METHODS We abstracted admissions data from the medical records of 185 patients treated for orofacial injury between January 2005 and January 2007. Clinician data on sociodemographics and substance use were compared to similar information elicited by trained research staff as part of a prospective study. RESULTS The accuracy of the clinician data sets varied considerably depending on the variable. Concordance with the interviewer data sets was highest for age (paired t-test p=.09), sex (κ = 1) and ethnicity (κ = .84) but dropped off considerably for marital status (κ = .22) and alcohol (κ = .18) and drug use (κ = .16). The missing data per variable ranged from 4.5% (gender) to 46.9% (employment and education). CONCLUSIONS Although more research is needed to evaluate the cause of inaccuracies and the relative contributions of patient, provider, and system level effects, it appears that significant inaccuracies in administrative data are common. Interventions aimed at identifying the sources and correcting these errors are necessary.

  20. Theoretical Accuracy of Along-Track Displacement Measurements from Multiple-Aperture Interferometry (MAI)

    PubMed Central

    Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei

    2014-01-01

    Precise along-track displacement measurements can be made with multiple-aperture interferometry (MAI). The empirical accuracies of the MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, these empirical accuracies cannot be generalized to any interferometric pair because they largely depend on the processing parameters and coherence of the SAR data used. A theoretical formula is given to calculate an expected MAI measurement accuracy according to the system and processing parameters and the interferometric coherence. In this paper, we have investigated the expected MAI measurement accuracy on the basis of the theoretical formula for the existing X-, C- and L-band satellite SAR systems. The similarity between the expected and empirical MAI measurement accuracies has been tested as well. Expected accuracies of about 2-3 cm and 3-4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3-4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated by using the theoretical formula. PMID:25251408

  1. Semantic similarity between ontologies at different scales

    SciTech Connect

    Zhang, Qingpeng; Haglin, David J.

    2016-04-01

    In the past decade, existing and new knowledge and datasets have been encoded in different ontologies for semantic web and biomedical research. The size of ontologies is often very large in terms of number of concepts and relationships, which makes the analysis of ontologies and the represented knowledge graph computationally expensive and time-consuming. As the ontologies of various semantic web and biomedical applications usually show explicit hierarchical structures, it is interesting to explore the trade-offs between ontological scales and preservation/precision of results when we analyze ontologies. This paper presents a first effort to examine this idea by studying the relationship between scaling biomedical ontologies at different levels and the resulting semantic similarity values. We evaluate the semantic similarity between three Gene Ontology slims (Plant, Yeast, and Candida, among which the latter two belong to the same kingdom, Fungi) using four popular measures commonly applied to biomedical ontologies (Resnik, Lin, Jiang-Conrath, and SimRel). The results of this study demonstrate that with proper selection of scaling levels and similarity measures, we can significantly reduce the size of ontologies without losing substantial detail. In particular, the performance of Jiang-Conrath and Lin is more reliable and stable than that of the other two in this experiment, as proven by (a) consistently showing that Yeast and Candida are more similar (as compared to Plant) at different scales, and (b) small deviations of the similarity values after excluding a majority of nodes from several lower scales. This study provides a deeper understanding of the application of semantic similarity to biomedical ontologies, and sheds light on how to choose appropriate semantic similarity measures for biomedical engineering.
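
    Of the four measures, Lin's is the simplest to state: it compares the information content (IC, the negative log of a concept's annotation probability) of two terms against that of their most informative common ancestor. A sketch with hypothetical probabilities (not actual GO slim values):

```python
import math

def ic(prob):
    """Information content of a concept with corpus probability prob."""
    return -math.log(prob)

def lin_similarity(p_a, p_b, p_lcs):
    """Lin semantic similarity: 2 * IC(lcs) / (IC(a) + IC(b)), where lcs
    is the most informative common ancestor of concepts a and b."""
    return 2 * ic(p_lcs) / (ic(p_a) + ic(p_b))

# Hypothetical annotation probabilities for two terms and their most
# informative common ancestor.
sim = lin_similarity(p_a=0.001, p_b=0.004, p_lcs=0.02)
```

    The measure lies in (0, 1], reaching 1 when a term is compared with itself.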

  2. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  3. Activity-relevant similarity values for fingerprints and implications for similarity searching

    PubMed Central

    Jasial, Swarit; Hu, Ye; Vogt, Martin; Bajorath, Jürgen

    2016-01-01

    A largely unsolved problem in chemoinformatics is the issue of how calculated compound similarity relates to activity similarity, which is central to many applications. In general, activity relationships are predicted from calculated similarity values. However, there is no solid scientific foundation to bridge between calculated molecular and observed activity similarity. Accordingly, the success rate of identifying new active compounds by similarity searching is limited. Although various attempts have been made to establish relationships between calculated fingerprint similarity values and biological activities, none of these has yielded generally applicable rules for similarity searching. In this study, we have addressed the question of molecular versus activity similarity in a more fundamental way. First, we have evaluated if activity-relevant similarity value ranges could in principle be identified for standard fingerprints and distinguished from similarity resulting from random compound comparisons. Then, we have analyzed if activity-relevant similarity values could be used to guide typical similarity search calculations aiming to identify active compounds in databases. It was found that activity-relevant similarity values can be identified as a characteristic feature of fingerprints. However, it was also shown that such values cannot be reliably used as thresholds for practical similarity search calculations. In addition, the analysis presented herein helped to rationalize differences in fingerprint search performance. PMID:27127620
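
    Fingerprint similarity values of the kind analyzed here are typically Tanimoto coefficients, which reduce to a one-line set calculation over on-bit positions. A sketch (the bit positions below are made up for illustration, not a real fingerprint type):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as collections
    of on-bit positions: |A & B| / |A | B|."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b)

# Toy fingerprints as sets of hypothetical on-bit positions.
query = {1, 4, 9, 16, 25}
hit   = {1, 4, 9, 16, 36}
decoy = {2, 3, 5, 7, 11}
```

    The study's point is that although activity-relevant ranges of such values exist, no single threshold reliably separates actives from inactives in practical searches.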

  4. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  5. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are requested for high precision results; however, real life situations do not always let us collect 24 h data. Thus repeated GPS surveys of 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently, some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now usually regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL's GIPSY/OASIS II software and globally distributed GPS stations' data of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, incorporating two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.

  6. Adult Metacomprehension: Judgment Processes and Accuracy Constraints

    ERIC Educational Resources Information Center

    Zhao, Qin; Linderholm, Tracy

    2008-01-01

    The objective of this paper is to review and synthesize two interrelated topics in the adult metacomprehension literature: the bases of metacomprehension judgment and the constraints on metacomprehension accuracy. Our review shows that adult readers base their metacomprehension judgments on different types of information, including experiences…

  7. Task Speed and Accuracy Decrease When Multitasking

    ERIC Educational Resources Information Center

    Lin, Lin; Cockerham, Deborah; Chang, Zhengsi; Natividad, Gloria

    2016-01-01

    As new technologies increase the opportunities for multitasking, the need to understand human capacities for multitasking continues to grow stronger. Is multitasking helping us to be more efficient? This study investigated the multitasking abilities of 168 participants, ages 6-72, by measuring their task accuracy and completion time when they…

  8. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  9. Least squares polynomial fits and their accuracy

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1977-01-01

    Equations are presented which attempt to fit least squares polynomials to tables of data. It is concluded that much data are needed to reduce the measurement-error standard deviation by a significant amount; however, at certain points great accuracy is attained.
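
    The point that much data is needed to beat down measurement error can be seen in the closed-form least-squares fit for the simplest case, a degree-1 polynomial (the true coefficients and noise level below are illustrative):

```python
import random

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

random.seed(1)
xs = [i / 10 for i in range(100)]
# Noisy observations of the true line y = 3 + 2x (noise sd = 0.5);
# with 100 points the coefficient standard errors shrink well below
# the per-measurement noise.
ys = [3 + 2 * x + random.gauss(0, 0.5) for x in xs]
b0, b1 = fit_line(xs, ys)
```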

  10. A microwave position sensor with submillimeter accuracy

    NASA Astrophysics Data System (ADS)

    Stelzer, A.; Diskus, C. G.; Lubke, K.; Thim, H. W.

    1999-12-01

    Design and characteristics of a prototype distance sensor are presented in this paper. The radar front-end operates at 35 GHz and applies six-port technology and direct frequency measurement. The sensor makes use of both frequency-modulated continuous wave and interferometer principles and is capable of measuring distance with a very high accuracy of ±0.1 mm.

  11. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  12. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  13. Direct Behavior Rating: Considerations for Rater Accuracy

    ERIC Educational Resources Information Center

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  14. Accuracy of Depth to Water Measurements

    EPA Pesticide Factsheets

    Accuracy of depth to water measurements is an issue identified by the Forum as a concern of Superfund decision-makers as they attempt to determine directions of ground-water flow, areas of recharge or discharge, the hydraulic characteristics of...

  15. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement.
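The ellipse method mentioned above conventionally infers the angle of incidence from the aspect ratio of the elliptical defect. A minimal sketch using the textbook relation (not this paper's measured data):

```python
import math

def ellipse_method_angle(defect_width_mm: float, defect_length_mm: float) -> float:
    """Angle of incidence (degrees from the target surface) estimated from an
    elliptical bullet defect via sin(alpha) = width / length."""
    return math.degrees(math.asin(defect_width_mm / defect_length_mm))

# A circular defect (width == length) indicates a perpendicular (90 degree) impact;
# a 4.5 mm x 9.0 mm ellipse indicates a 30 degree angle of incidence.
angle = ellipse_method_angle(4.5, 9.0)
```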

  16. Bayesian Methods for Medical Test Accuracy

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

Bayesian methods for medical test accuracy are presented, beginning with the basic measures for tests with binary scores: true positive fraction, false positive fraction, positive predictive value, and negative predictive value. The Bayesian approach is taken because of its efficient use of prior information, and the analysis is executed with the Bayesian software package WinBUGS®. The ROC (receiver operating characteristic) curve gives the intrinsic accuracy of medical tests that have ordinal or continuous scores, and the Bayesian approach is illustrated with many examples from cancer and other diseases. Medical tests include X-ray, mammography, ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine and tests based on biomarkers, such as blood glucose values for diabetes. The presentation continues with more specialized methods suitable for measuring the accuracies of clinical studies that have verification bias, and medical tests without a gold standard. Lastly, the review is concluded with Bayesian methods for measuring the accuracy of the combination of two or more tests. PMID:26859485
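For the binary-score measures above, the simplest Bayesian treatment is the conjugate Beta-Binomial model. A sketch with a uniform prior and invented counts (the review's own examples use WinBUGS®, not this code):

```python
# With a uniform Beta(1, 1) prior, the posterior after s "successes" in
# n trials is Beta(1 + s, 1 + n - s); its mean is computed below.
def beta_posterior_mean(successes: int, trials: int, a: float = 1.0, b: float = 1.0) -> float:
    return (a + successes) / (a + b + trials)

tpf = beta_posterior_mean(90, 100)   # true positive fraction: 90/100 diseased test positive
fpf = beta_posterior_mean(20, 200)   # false positive fraction: 20/200 healthy test positive
```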

  17. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  18. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
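The drift mechanism described above follows from first-order two-body dynamics: a semi-major axis difference Δa produces an along-track drift of roughly -3πΔa per orbit. A sketch of that rule of thumb (not necessarily the paper's exact guideline):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def along_track_drift_per_orbit(delta_a_m: float) -> float:
    """Along-track drift per orbit (m) from a semi-major axis difference,
    using the first-order two-body result drift = -3*pi*delta_a."""
    return -3.0 * math.pi * delta_a_m

def drift_rate(a_m: float, delta_a_m: float) -> float:
    """The same drift expressed as a rate (m/s) using the orbital period."""
    period_s = 2.0 * math.pi * math.sqrt(a_m ** 3 / MU_EARTH)
    return along_track_drift_per_orbit(delta_a_m) / period_s

# A 10 m semi-major axis error in LEO drifts roughly 94 m per orbit.
drift = along_track_drift_per_orbit(10.0)
```

Inverting the same relation gives the semi-major axis (hence navigation) accuracy needed to cap drift at a specified rate.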

  19. Method for measuring centroid algorithm accuracy

    NASA Technical Reports Server (NTRS)

    Klein, S.; Liewer, K.

    2002-01-01

This paper will describe a method for measuring the accuracy of centroid algorithms using a relatively inexpensive setup consisting of a white light source, lenses, a CCD camera, an electro-strictive actuator, and a DAC (Digital-to-Analog Converter), and employing embedded PowerPC, VxWorks, and Solaris based software.

  20. High accuracy gaseous x-ray detectors

    SciTech Connect

    Smith, G.C.

    1983-11-01

    An outline is given of the design and operation of high accuracy position-sensitive x-ray detectors suitable for experiments using synchrotron radiation. They are based on the gas proportional detector, with position readout using a delay line; a detailed examination is made of factors which limit spatial resolution. Individual wire readout may be used for extremely high counting rates.

  1. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  2. Childhood Obesity and Cognitive Achievement.

    PubMed

    Black, Nicole; Johnston, David W; Peeters, Anna

    2015-09-01

    Obese children tend to perform worse academically than normal-weight children. If poor cognitive achievement is truly a consequence of childhood obesity, this relationship has significant policy implications. Therefore, an important question is to what extent can this correlation be explained by other factors that jointly determine obesity and cognitive achievement in childhood? To answer this question, we exploit a rich longitudinal dataset of Australian children, which is linked to national assessments in math and literacy. Using a range of estimators, we find that obesity and body mass index are negatively related to cognitive achievement for boys but not girls. This effect cannot be explained by sociodemographic factors, past cognitive achievement or unobserved time-invariant characteristics and is robust to different measures of adiposity. Given the enormous importance of early human capital development for future well-being and prosperity, this negative effect for boys is concerning and warrants further investigation.

  3. Using Design To Achieve Sustainability

    EPA Science Inventory

Sustainability is defined as meeting the needs of this generation without compromising the ability of future generations to meet their needs. This is a conditional statement that places the responsibility for achieving sustainability squarely in the hands of designers and planners....

  4. Similarity Metrics for Closed Loop Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.

    2008-01-01

To what extent and in what ways can two closed-loop dynamic systems be said to be "similar?" This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and the attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than are robust control analogies in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification which is nonetheless focused on obtaining open-loop plant models from closed-loop data).
Moreover the objectives of system identification are not the same as a flight test and

  5. Absolute distance measurement with micrometer accuracy using a Michelson interferometer and the iterative synthetic wavelength principle.

    PubMed

    Alzahrani, Khaled; Burton, David; Lilley, Francis; Gdeisat, Munther; Bezombes, Frederic; Qudeisat, Mohammad

    2012-02-27

We present a novel system that can measure absolute distances of up to 300 mm with an uncertainty of the order of one micrometer, within a timeframe of 40 seconds. The proposed system uses a Michelson interferometer, a tunable laser, a wavelength meter and a computer for analysis. The principle of synthetic wave creation is used in a novel way in that the system employs an initial low-precision estimate of the distance, obtained using a triangulation or time-of-flight laser system, or similar, and then iterates through a sequence of progressively smaller synthetic wavelengths until it reaches micrometer uncertainties in the determination of the distance. A further novel feature of the system is its use of Fourier transform phase analysis techniques to achieve sub-wavelength accuracy. This method has the major advantages of being relatively simple to realize, offering demonstrated relative precisions better than 5 × 10⁻⁵. Finally, the fact that this device does not require a continuous line-of-sight to the target as is the case with other configurations offers significant advantages.
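The iterative synthetic-wavelength principle can be sketched in two steps: two nearby optical wavelengths beat to a much longer synthetic wavelength, and each iteration uses the previous, coarser estimate to resolve the integer fringe order of the next, shorter wavelength. A minimal sketch under those standard relations (not the paper's implementation):

```python
def synthetic_wavelength(lam1: float, lam2: float) -> float:
    """Synthetic (beat) wavelength of two nearby wavelengths (same units)."""
    return lam1 * lam2 / abs(lam1 - lam2)

def refine_distance(coarse_m: float, lam_m: float, phase_frac: float) -> float:
    """One iteration: choose the integer fringe order of wavelength lam_m
    closest to the coarse estimate, then add the measured fractional phase."""
    order = round(coarse_m / lam_m - phase_frac)
    return (order + phase_frac) * lam_m

# 1550.0 nm and 1550.1 nm beat to a ~24 mm synthetic wavelength.
lam_s_nm = synthetic_wavelength(1550.0, 1550.1)
```

Each pass shrinks the uncertainty by the ratio of successive synthetic wavelengths, which is why a crude triangulation estimate suffices to seed the chain.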

  6. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E. )

    1994-09-01

The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had a 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

  7. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  8. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy.

    PubMed

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2014-01-20

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.

  9. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.

  10. 3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

    PubMed Central

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2016-01-01

    An algorithm for intensity-based 3D–2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°–180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°–20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration. PMID:24351769

  11. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification.
The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
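The error-matrix layout described above (rows = interpretation, columns = verification, diagonal = correct) yields the standard summary statistics directly. A sketch with invented counts:

```python
# Overall accuracy plus per-category commission (row-wise) and omission
# (column-wise) error rates from a classification error matrix.
def error_matrix_summary(m):
    n = sum(sum(row) for row in m)
    correct = sum(m[i][i] for i in range(len(m)))
    overall = correct / n
    commission = [1 - m[i][i] / sum(m[i]) for i in range(len(m))]
    omission = [1 - m[i][i] / sum(row[i] for row in m) for i in range(len(m))]
    return overall, commission, omission

matrix = [[45, 3, 2],
          [4, 38, 5],
          [1, 2, 40]]
overall, commission, omission = error_matrix_summary(matrix)
```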

  12. Medial Patellofemoral Ligament Reconstruction Femoral Tunnel Accuracy

    PubMed Central

    Hiemstra, Laurie A.; Kerslake, Sarah; Lafave, Mark

    2017-01-01

    Background: Medial patellofemoral ligament (MPFL) reconstruction is a procedure aimed to reestablish the checkrein to lateral patellar translation in patients with symptomatic patellofemoral instability. Correct femoral tunnel position is thought to be crucial to successful MPFL reconstruction, but the accuracy of this statement in terms of patient outcomes has not been tested. Purpose: To assess the accuracy of femoral tunnel placement in an MPFL reconstruction cohort and to determine the correlation between tunnel accuracy and a validated disease-specific, patient-reported quality-of-life outcome measure. Study Design: Case series; Level of evidence, 4. Methods: Between June 2008 and February 2014, a total of 206 subjects underwent an MPFL reconstruction. Lateral radiographs were measured to determine the accuracy of the femoral tunnel by measuring the distance from the center of the femoral tunnel to the Schöttle point. Banff Patella Instability Instrument (BPII) scores were collected a mean 24 months postoperatively. Results: A total of 155 (79.5%) subjects had adequate postoperative lateral radiographs and complete BPII scores. The mean duration of follow-up (±SD) was 24.4 ± 8.2 months (range, 12-74 months). Measurement from the center of the femoral tunnel to the Schöttle point resulted in 143 (92.3%) tunnels being categorized as “good” or “ideal.” There were 8 failures in the cohort, none of which occurred in malpositioned tunnels. The mean distance from the center of the MPFL tunnel to the center of the Schöttle point was 5.9 ± 4.2 mm (range, 0.5-25.9 mm). The mean postoperative BPII score was 65.2 ± 22.5 (range, 9.2-100). Pearson r correlation demonstrated no statistically significant relationship between accuracy of femoral tunnel position and BPII score (r = –0.08; 95% CI, –0.24 to 0.08). Conclusion: There was no evidence of a correlation between the accuracy of MPFL reconstruction femoral tunnel in relation to the Schöttle point and

  13. Achieving Efficiencies in Army Installations.

    DTIC Science & Technology

    2007-11-02

USAWC Strategy Research Project: Achieving Efficiencies in Army Installations, by Richard Fliss; Col. Richard M. Meinhart, Project ... government agency. DISTRIBUTION STATEMENT A: Approved for public release. Distribution is unlimited. USAWC Class of 1998, U.S. Army War College, Carlisle Barracks, PA 17013-5050

  14. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  15. Efficient Video Similarity Measurement and Search

    SciTech Connect

    Cheung, Sen-ching S.

    2002-12-19

    The amount of information on the world wide web has grown enormously since its creation in 1990. Duplication of content is inevitable because there is no central management on the web. Studies have shown that many similar versions of the same text documents can be found throughout the web. This redundancy problem is more severe for multimedia content such as web video sequences, as they are often stored in multiple locations and different formats to facilitate downloading and streaming. Similar versions of the same video can also be found, unknown to content creators, when web users modify and republish original content using video editing tools. Identifying similar content can benefit many web applications and content owners. For example, it will reduce the number of similar answers to a web search and identify inappropriate use of copyright content. In this dissertation, they present a system architecture and corresponding algorithms to efficiently measure, search, and organize similar video sequences found on any large database such as the web.

  16. Geometric similarity between protein-RNA interfaces.

    PubMed

    Zhou, Peng; Zou, Jianwei; Tian, Feifei; Shang, Zhicai

    2009-12-01

A new method is described to measure the geometric similarity between protein-RNA interfaces quantitatively. The method is based on a procedure that dissects the interface geometry in terms of the spatial relationships between individual amino acid nucleotide pairs. Using this technique, we performed an all-on-all comparison of 586 protein-RNA interfaces deposited in the current Protein Data Bank; as a result, an interface-interface similarity score matrix was obtained. Based upon this matrix, hierarchical clustering was carried out which yielded a complete clustering tree for the 586 protein-RNA interfaces. By investigating the organizing behavior of the clustering tree and the SCOP classification of protein partners in complexes, a geometrically nonredundant, diverse data set (representative data set) consisting of 45 distinct protein-RNA interfaces was extracted for the purpose of studying protein-RNA interactions, RNA regulations, and drug design. We classified protein-RNA interfaces into three types. In type I, the families and interface structural classes of the protein partners, as well as the interface geometries, are all similar. In type II, the interface geometries and the interface structural classes are similar, whereas the protein families are different. In type III, only the interface geometries are similar but the protein families and the interface structural classes are distinct. Furthermore, we also show two new RNA recognition themes derived from the representative data set.

  17. Accuracy of genotype imputation in sheep breeds.

    PubMed

    Hayes, B J; Bowman, P J; Daetwyler, H D; Kijas, J W; van der Werf, J H J

    2012-02-01

    Although genomic selection offers the prospect of improving the rate of genetic gain in meat, wool and dairy sheep breeding programs, the key constraint is likely to be the cost of genotyping. Potentially, this constraint can be overcome by genotyping selection candidates for a low density (low cost) panel of SNPs with sparse genotype coverage, imputing a much higher density of SNP genotypes using a densely genotyped reference population. These imputed genotypes would then be used with a prediction equation to produce genomic estimated breeding values. In the future, it may also be desirable to impute very dense marker genotypes or even whole genome re-sequence data from moderate density SNP panels. Such a strategy could lead to an accurate prediction of genomic estimated breeding values across breeds, for example. We used genotypes from 48 640 (50K) SNPs genotyped in four sheep breeds to investigate both the accuracy of imputation of the 50K SNPs from low density SNP panels, as well as prospects for imputing very dense or whole genome re-sequence data from the 50K SNPs (by leaving out a small number of the 50K SNPs at random). Accuracy of imputation was low if the sparse panel had less than 5000 (5K) markers. Across breeds, it was clear that the accuracy of imputing from sparse marker panels to 50K was higher if the genetic diversity within a breed was lower, such that relationships among animals in that breed were higher. The accuracy of imputation from sparse genotypes to 50K genotypes was higher when the imputation was performed within breed rather than when pooling all the data, despite the fact that the pooled reference set was much larger. For Border Leicesters, Poll Dorsets and White Suffolks, 5K sparse genotypes were sufficient to impute 50K with 80% accuracy. For Merinos, the accuracy of imputing 50K from 5K was lower at 71%, despite a large number of animals with full genotypes (2215) being used as a reference. For all breeds, the relationship of
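Imputation accuracy, as in the masking experiment described above, is commonly scored as the agreement between true and imputed genotypes at SNPs that were hidden from the imputation. A sketch using the Pearson correlation of allele dosages (the paper's exact metric may differ; concordance rate is another common choice, and the dosages here are invented):

```python
import math

def imputation_accuracy(true_d, imp_d):
    """Pearson correlation between true and imputed allele dosages (0/1/2)."""
    n = len(true_d)
    mt, mi = sum(true_d) / n, sum(imp_d) / n
    cov = sum((t - mt) * (p - mi) for t, p in zip(true_d, imp_d))
    var_t = sum((t - mt) ** 2 for t in true_d)
    var_i = sum((p - mi) ** 2 for p in imp_d)
    return cov / math.sqrt(var_t * var_i)

acc = imputation_accuracy([0, 1, 2, 1, 0, 2], [0, 1, 2, 2, 0, 1])
```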

  18. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
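The benchmark comparison described above reduces to measuring displacement errors between the registration-recovered field and the finite-element ground truth. A minimal sketch of the mean-error computation (function names and sample points are illustrative, not from the paper's code):

```python
import math

def mean_displacement_error(dir_field, fem_field):
    """Mean Euclidean distance between paired 3D displacement vectors from a
    deformable registration and a finite-element benchmark."""
    errors = [math.dist(d, f) for d, f in zip(dir_field, fem_field)]
    return sum(errors) / len(errors)

err = mean_displacement_error([(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
                              [(0.0, 0.0, 1.0), (1.0, 2.0, 0.0)])
```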

  19. Humans process dog and human facial affect in similar ways.

    PubMed

    Schirmer, Annett; Seow, Cui Shan; Penney, Trevor B.

    2013-01-01

    Humans share aspects of their facial affect with other species such as dogs. Here we asked whether untrained human observers with and without dog experience are sensitive to these aspects and recognize dog affect with better-than-chance accuracy. Additionally, we explored similarities in the way observers process dog and human expressions. The stimulus material comprised naturalistic facial expressions of pet dogs and human infants obtained through positive (i.e., play) and negative (i.e., social isolation) provocation. Affect recognition was assessed explicitly in a rating task using full face images and images cropped to reveal the eye region only. Additionally, affect recognition was assessed implicitly in a lexical decision task using full faces as primes and emotional words and pseudowords as targets. We found that untrained human observers rated full face dog expressions from the positive and negative condition more accurately than would be expected by chance. Although dog experience was unnecessary for this effect, it significantly facilitated performance. Additionally, we observed a range of similarities between human and dog face processing. First, the facial expressions of both species facilitated lexical decisions to affectively congruous target words suggesting that their processing was equally automatic. Second, both dog and human negative expressions were recognized from both full and cropped faces. Third, female observers were more sensitive to affective information than were male observers and this difference was comparable for dog and human expressions. Together, these results extend existing work on cross-species similarities in facial emotions and provide evidence that these similarities are naturally exploited when humans interact with dogs.

  20. Humans Process Dog and Human Facial Affect in Similar Ways

    PubMed Central

    Schirmer, Annett; Seow, Cui Shan; Penney, Trevor B.

    2013-01-01

    Humans share aspects of their facial affect with other species such as dogs. Here we asked whether untrained human observers with and without dog experience are sensitive to these aspects and recognize dog affect with better-than-chance accuracy. Additionally, we explored similarities in the way observers process dog and human expressions. The stimulus material comprised naturalistic facial expressions of pet dogs and human infants obtained through positive (i.e., play) and negative (i.e., social isolation) provocation. Affect recognition was assessed explicitly in a rating task using full face images and images cropped to reveal the eye region only. Additionally, affect recognition was assessed implicitly in a lexical decision task using full faces as primes and emotional words and pseudowords as targets. We found that untrained human observers rated full face dog expressions from the positive and negative condition more accurately than would be expected by chance. Although dog experience was unnecessary for this effect, it significantly facilitated performance. Additionally, we observed a range of similarities between human and dog face processing. First, the facial expressions of both species facilitated lexical decisions to affectively congruous target words suggesting that their processing was equally automatic. Second, both dog and human negative expressions were recognized from both full and cropped faces. Third, female observers were more sensitive to affective information than were male observers and this difference was comparable for dog and human expressions. Together, these results extend existing work on cross-species similarities in facial emotions and provide evidence that these similarities are naturally exploited when humans interact with dogs. PMID:24023954

  1. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
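
    The RMSE figure of merit used in this study is straightforward to reproduce. A minimal sketch, with hypothetical respiratory traces (the 0.25 Hz sinusoid and the 400 ms latency below are illustrative assumptions, not the study's data):

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error between real and predicted respiratory positions."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Hypothetical traces: a 0.25 Hz breathing sinusoid sampled at 30 Hz, with a
# "prediction" that simply lags the real trace by an uncorrected 400 ms latency.
t = np.arange(0.0, 10.0, 1.0 / 30.0)
real = np.sin(2 * np.pi * 0.25 * t)
predicted = np.sin(2 * np.pi * 0.25 * (t - 0.4))

print(rmse(real, predicted))
```

    A lower RMSE for biofeedback-guided traces than for unguided ones, computed in this way, is what the 26% and 29% reductions above quantify.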

  2. Dreaming and waking: similarities and differences revisited.

    PubMed

    Kahan, Tracey L; LaBerge, Stephen P

    2011-09-01

    Dreaming is often characterized as lacking high-order cognitive (HOC) skills. In two studies, we test the alternative hypothesis that the dreaming mind is highly similar to the waking mind. Multiple experience samples were obtained from late-night REM sleep and waking, following a systematic protocol described in Kahan (2001). Results indicated that reported dreaming and waking experiences are surprisingly similar in their cognitive and sensory qualities. Concurrently, ratings of dreaming and waking experiences were markedly different on questions of general reality orientation and logical organization (e.g., the bizarreness or typicality of the events, actions, and locations). Consistent with other recent studies (e.g., Bulkeley & Kahan, 2008; Kozmová & Wolman, 2006), experiences sampled from dreaming and waking were more similar with respect to their process features than with respect to their structural features.

  3. Image quality and localization accuracy in C-arm tomosynthesis-guided head and neck surgery

    SciTech Connect

    Bachar, G.; Siewerdsen, J. H.; Daly, M. J.; Jaffray, D. A.; Irish, J. C.

    2007-12-15

    The image quality and localization accuracy for C-arm tomosynthesis and cone-beam computed tomography (CBCT) guidance of head and neck surgery were investigated. A continuum in image acquisition was explored, ranging from a single exposure (radiograph) to multiple projections acquired over a limited arc (tomosynthesis) to a full semicircular trajectory (CBCT). Experiments were performed using a prototype mobile C-arm modified to perform 3D image acquisition (a modified Siemens PowerMobil). The tradeoffs in image quality associated with the extent of the source-detector arc (θ_tot), the number of projection views, and the total imaging dose were evaluated in phantom and cadaver studies. Surgical localization performance was evaluated using three cadaver heads imaged as a function of θ_tot. Six localization tasks were considered, ranging from high-contrast feature identification (e.g., tip of a K-wire pointer) to more challenging soft-tissue delineation (e.g., junction of the hard and soft palate). Five head and neck surgeons and one radiologist participated as observers. For each localization task, the 3D coordinates of landmarks pinpointed by each observer were analyzed as a function of θ_tot. For all tomosynthesis angles, image quality was highest in the coronal plane, whereas sagittal and axial planes exhibited a substantial decrease in spatial resolution associated with out-of-plane blur and distortion. Tasks involving complex, lower-contrast features demonstrated steeper degradation with smaller tomosynthetic arc. Localization accuracy in the coronal plane was correspondingly high, maintained to <3 mm down to θ_tot ≈ 30°, whereas sagittal and axial localization degraded rapidly below θ_tot ≈ 60°. Similarly, localization precision was better than ≈1 mm within the coronal plane, compared to ≈2-3 mm out-of-plane for tomosynthesis angles below θ_tot ≈ 45°.

  4. Improving the SNO calibration accuracy for the reflective solar bands of AVHRR and MODIS

    NASA Astrophysics Data System (ADS)

    Cao, Changyong; Wu, Xiangqian; Wu, Aisheng; Xiong, Xiaoxiong

    2007-09-01

    Analyses of a 4.5 year SNO (Simultaneous Nadir Overpass) time series between AVHRR on NOAA-16 and -17 suggest that the AVHRR observations based on operational vicarious calibration have become very consistent since mid-2004. This study also suggests that the SNO method has reached a high level of relative accuracy (~1.5%, 1 sigma) for both the 0.63 and 0.84 μm bands, which outperforms many other vicarious methods for satellite radiometer calibration. Meanwhile, for AVHRR and MODIS, a 3.5 year SNO time series suggests that the SNO method has achieved a 0.9% relative accuracy (1 sigma) for the 0.63 μm band, while the relative accuracy for the 0.84 μm band is on the order of +/- 5% and significantly affected by the spectral response differences between AVHRR and MODIS. Although the AVHRR observations from NOAA-16 and -17 agree well, they significantly disagree with MODIS observations according to the SNO time series. A 9% difference was found for the 0.63 μm band (estimated uncertainty of 0.9%, 1 sigma), and the difference is even larger if the spectral response differences are taken into account. A similar bias for the 0.84 μm band is also found, with a larger uncertainty due to major differences in the spectral response functions between MODIS and AVHRR. It is expected that further studies with Hyperion observations at the SNOs will help estimate the biases and uncertainties due to spectral differences between AVHRR and MODIS, and that in the near future the calibration of AVHRR-type instruments can be made consistent through rigorous cross-calibration using the SNO method. These efforts will contribute to the generation of fundamental climate data records (FCDRs) from the nearly 30 years of AVHRR data for a variety of geophysical products including aerosol, vegetation, and surface albedo, in support of global climate change detection studies.
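
    The inter-sensor differences reported above reduce to a mean relative bias over matched nadir observations. A minimal sketch; the radiance values are hypothetical, with the test sensor reading 9% high as in the reported 0.63 μm band difference:

```python
import numpy as np

def relative_bias(ref_radiance, test_radiance):
    """Mean relative difference of a test sensor against a reference sensor,
    computed from matched (simultaneous nadir) observation pairs."""
    ref = np.asarray(ref_radiance, dtype=float)
    test = np.asarray(test_radiance, dtype=float)
    return float(np.mean((test - ref) / ref))

# Hypothetical matched nadir radiances; the test sensor reads ~9% high.
ref = np.array([100.0, 120.0, 95.0, 110.0])
test = ref * 1.09

print(relative_bias(ref, test))  # ≈ 0.09
```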

  5. Improving pairwise sequence alignment accuracy using near-optimal protein sequence alignments

    PubMed Central

    2010-01-01

    Background While the pairwise alignments produced by sequence similarity searches are a powerful tool for identifying homologous proteins (proteins that share a common ancestor and a similar structure), pairwise sequence alignments often fail to represent accurately the structural alignments inferred from three-dimensional coordinates. Since sequence alignment algorithms produce optimal alignments, the best structural alignments must reflect suboptimal sequence alignment scores. Thus, we have examined a range of suboptimal sequence alignments and a range of scoring parameters to understand better which sequence alignments are likely to be more structurally accurate. Results We compared near-optimal protein sequence alignments produced by the Zuker algorithm and a set of probabilistic alignments produced by the probA program with structural alignments produced by four different structure alignment algorithms. There is significant overlap between the solution spaces of structural alignments and both the near-optimal sequence alignments produced by commonly used scoring parameters for sequences that share significant sequence similarity (E-values < 10^-5) and the ensemble of probA alignments. We constructed a logistic regression model incorporating three input variables derived from sets of near-optimal alignments: robustness, edge frequency, and maximum bits-per-position. A ROC analysis shows that this model more accurately classifies amino acid pairs (edges in the alignment path graph) according to the likelihood of appearance in structural alignments than the robustness score alone. We investigated various trimming protocols for removing incorrect edges from the optimal sequence alignment; the most effective protocol is to remove matches from the semi-global optimal alignment that are outside the boundaries of the local alignment, although trimming according to the model-generated probabilities achieves a similar level of improvement. 
The model can also be used to
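
    The logistic regression described above maps the three per-edge features to a probability through the standard sigmoid. A sketch with made-up coefficients; the paper's fitted weights are not reproduced here:

```python
import math

def edge_probability(robustness, edge_frequency, max_bits_per_position,
                     weights=(-3.0, 2.0, 1.5, 0.8)):
    """Logistic-regression score that an aligned residue pair (an edge in the
    alignment path graph) appears in the structural alignment.
    The default weights are illustrative placeholders, not fitted values."""
    b0, b1, b2, b3 = weights
    z = b0 + b1 * robustness + b2 * edge_frequency + b3 * max_bits_per_position
    return 1.0 / (1.0 + math.exp(-z))

print(edge_probability(1.0, 0.9, 0.5))
```

    Edges can then be ranked or trimmed by thresholding this probability, analogous to the model-based trimming protocol described above.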

  6. Quantifying the similarity of seismic polarizations

    NASA Astrophysics Data System (ADS)

    Jones, Joshua P.; Eaton, David W.; Caffagni, Enrico

    2016-02-01

    Assessing the similarities of seismic attributes can help identify tremor, low signal-to-noise (S/N) signals and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values or known seismic sources. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in S/N ratio. Measuring polarization similarity allows easy identification of site noise and sensor misalignment and can help identify coherent noise and emergent or low S/N phase arrivals. Dissimilar azimuths during phase arrivals indicate misaligned horizontal components, dissimilar incidence angles during phase arrivals indicate misaligned vertical components and dissimilar linear polarization may indicate a secondary noise source. Using records of the Mw = 8.3 Sea of Okhotsk earthquake, from Canadian National Seismic Network broad-band sensors in British Columbia and Yukon Territory, Canada, and a vertical borehole array at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Time-frequency polarization similarities of borehole data suggest that a coherent noise source may have persisted above 8 Hz several months after peak resource extraction from a 'flowback' type hydraulic fracture.
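
    The windowed histogram comparison at the core of such a method can be illustrated with histogram intersection, a standard similarity measure from the image-processing literature the authors draw on. The azimuth samples and the 90° misalignment below are synthetic assumptions:

```python
import numpy as np

def histogram_intersection(h_a, h_b):
    """Similarity of two histograms after normalization:
    1.0 for identical distributions, 0.0 for disjoint ones."""
    a = np.asarray(h_a, dtype=float)
    b = np.asarray(h_b, dtype=float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.minimum(a, b).sum())

# Synthetic azimuth estimates (degrees) from two sensors in one time window;
# the second sensor's horizontal components are misaligned by 90 degrees.
rng = np.random.default_rng(1)
azimuths_a = rng.normal(45.0, 5.0, 500) % 360
azimuths_b = (azimuths_a + 90.0) % 360
h_a, _ = np.histogram(azimuths_a, bins=12, range=(0, 360))
h_b, _ = np.histogram(azimuths_b, bins=12, range=(0, 360))

print(histogram_intersection(h_a, h_a), histogram_intersection(h_a, h_b))
```

    A self-comparison scores ~1.0 while the misaligned sensor scores near 0, which is how dissimilar azimuth histograms flag misaligned horizontal components.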

  7. Racial and gender science achievement gaps in secondary education.

    PubMed

    Bacharach, Verne R; Baumeister, Alfred A; Furr, R Michael

    2003-03-01

    A substantial disparity exists for academic achievement in science between Black and White primary-school children. A similar gap exists between boys and girls. The extent to which secondary education influences these achievement gaps has not been established. The authors report analyses showing how these science achievement gaps change as a function of secondary education. Analyses of data from a large, nationally representative longitudinal study of academic achievement showed that racial disparities and disparities associated with gender continue to increase throughout high school.

  8. Molecular fingerprint similarity search in virtual screening.

    PubMed

    Cereto-Massagué, Adrià; Ojeda, María José; Valls, Cristina; Mulero, Miquel; Garcia-Vallvé, Santiago; Pujadas, Gerard

    2015-01-01

    Molecular fingerprints have been used for a long time now in drug discovery and virtual screening. Their ease of use (requiring little to no configuration) and the speed at which substructure and similarity searches can be performed with them - paired with a virtual screening performance similar to other more complex methods - is the reason for their popularity. However, there are many types of fingerprints, each representing a different aspect of the molecule, which can greatly affect search performance. This review focuses on commonly used fingerprint algorithms, their usage in virtual screening, and the software packages and online tools that provide these algorithms.
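
    The most common similarity measure for binary fingerprints in virtual screening is the Tanimoto coefficient. A minimal sketch over fingerprints represented as sets of on-bit indices; the bit positions below are arbitrary examples:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints, each given as the
    set of indices of its set bits."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    shared = len(a & b)
    return shared / (len(a) + len(b) - shared)

# Two hypothetical molecules whose fingerprints share two of six distinct bits.
print(tanimoto({3, 17, 42, 99}, {3, 17, 256, 300}))
```

    In a screen, the query fingerprint is compared this way against every library fingerprint and the hits are ranked by coefficient.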

  9. Some more similarities between Peirce and Skinner

    PubMed Central

    Moxley, Roy A.

    2002-01-01

    C. S. Peirce is noted for pioneering a variety of views, and the case is made here for the similarities and parallels between his views and B. F. Skinner's radical behaviorism. In addition to parallels previously noted, these similarities include an advancement of experimental science, a behavioral psychology, a shift from nominalism to realism, an opposition to positivism, a selectionist account for strengthening behavior, the importance of a community of selves, a recursive approach to method, and the probabilistic nature of truth. Questions are raised as to the extent to which Skinner's radical behaviorism, as distinguished from his S-R positivism, may be seen as an extension of Peirce's pragmatism. PMID:22478387

  10. The collagenous gastroenteritides: similarities and differences.

    PubMed

    Gopal, Purva; McKenna, Barbara J

    2010-10-01

    Collagenous gastritis, collagenous sprue, and collagenous colitis share striking histologic similarities and occur together in some patients. They also share some drug and disease associations. Pediatric cases of collagenous gastritis, however, lack most of these associations. The etiologies of the collagenous gastroenteritides are not known, so it is not clear whether they are similar because they share pathogeneses, or because they indicate a common histologic response to varying injuries. The features, disease and drug associations, and the inquiries into the pathogenesis of these disorders are reviewed.

  11. Similarity Based Semantic Web Service Match

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Niu, Wenjia; Huang, Ronghuai

    Semantic web service discovery aims at returning the most closely matching advertised services to the service requester by comparing the semantics of the requested service with those of an advertised service. The semantics of a web service are described in terms of inputs, outputs, preconditions and results in the Ontology Web Language for Services (OWL-S), which is formalized by the W3C. In this paper we propose an algorithm to calculate the semantic similarity of two services by taking a weighted average of their input and output similarities. A case study and applications show the effectiveness of our algorithm in service matching.
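
    The weighted-averaging step can be sketched as follows; the per-parameter similarity scores and the equal input/output weights are illustrative assumptions, not values prescribed by OWL-S:

```python
def service_similarity(input_sims, output_sims, w_in=0.5, w_out=0.5):
    """Overall match score between a request and an advertised service:
    a weighted average of the mean input similarity and mean output similarity.
    Each element of input_sims/output_sims is a per-parameter score in [0, 1]."""
    avg = lambda scores: sum(scores) / len(scores) if scores else 0.0
    return w_in * avg(input_sims) + w_out * avg(output_sims)

# Hypothetical scores: two input concepts match at 0.8 and 0.6,
# the single output concept matches exactly.
print(service_similarity([0.8, 0.6], [1.0]))  # ≈ 0.85
```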

  12. The Role of Visual Mental Imagery in the Speed-Accuracy Tradeoff: A Preliminary Investigation.

    ERIC Educational Resources Information Center

    Hodes, Carol L.

    This study investigates the relationship between speed of recognition and accuracy of the responses when visual mental imagery is controlled through imagery instructions. The procedure was to compare the achievement of learners where the independent variable was imagery instructions. The subjects were two 20-person groups of undergraduates from a…

  13. Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services

    ERIC Educational Resources Information Center

    Wang, Guoquan

    2013-01-01

    High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…

  14. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher measurement accuracy for routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing the system cost or circuit complexity, which is valuable for application in electronic instruments.
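
    The resolution gain from oversampling comes from averaging: the noise on the mean of N independent samples shrinks by a factor of sqrt(N). A simulation sketch; the reading and noise level are hypothetical, not the paper's circuit values:

```python
import numpy as np

rng = np.random.default_rng(42)
true_resistance = 1.234   # hypothetical normalized resistance reading
noise_sd = 0.05           # hypothetical per-sample ADC noise
n_trials = 2000

def measurement_sd(oversampling_factor):
    """Standard deviation of the averaged reading when each measurement is the
    mean of `oversampling_factor` noisy samples."""
    samples = true_resistance + rng.normal(
        0.0, noise_sd, size=(n_trials, oversampling_factor))
    return samples.mean(axis=1).std()

ratio = measurement_sd(1) / measurement_sd(16)
print(ratio)  # ≈ 4: 16x oversampling cuts the noise by sqrt(16)
```

    Each factor-of-4 increase in oversampling therefore buys roughly one extra bit of effective resolution.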

  15. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
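
    The contrast between exponential accuracy for analytic solutions and degraded point-wise accuracy for discontinuous ones shows up directly in the Fourier coefficients. A sketch under illustrative assumptions (the two test functions are not from the paper):

```python
import numpy as np

N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
analytic = np.exp(np.sin(x))   # smooth, periodic: coefficients decay exponentially
step = np.sign(np.sin(x))      # discontinuous square wave: coefficients decay ~1/n

def high_mode_energy(f):
    """Energy contained in the highest-frequency half of the Fourier modes."""
    c = np.fft.fft(f) / len(f)
    return float(np.sum(np.abs(c[N // 4: 3 * N // 4]) ** 2))

print(high_mode_energy(analytic), high_mode_energy(step))
```

    The analytic function leaves essentially no energy in the high modes, while the square wave's slowly decaying tail is what produces Gibbs oscillations and motivates post-processing such as Gegenbauer reconstruction.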

  16. Camera Sensor Arrangement for Crop/Weed Detection Accuracy in Agronomic Images

    PubMed Central

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-01-01

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects. PMID:23549361

  17. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  18. Nuclear markers reveal that inter-lake cichlids' similar morphologies do not reflect similar genealogy.

    PubMed

    Kassam, Daud; Seki, Shingo; Hori, Michio; Yamaoka, Kosaku

    2006-08-01

    The apparent inter-lake morphological similarity among cichlid species/genera of the East African Great Lakes has left evolutionary biologists asking whether such similarity is due to a shared common ancestor or mere convergent evolution. To answer this question, we first used Geometric Morphometrics (GM) to quantify morphological similarity and then used Amplified Fragment Length Polymorphism (AFLP) to determine whether similar morphologies imply shared ancestry or convergent evolution. GM revealed that not all presumed morphologically similar pairs were indeed similar, and the dendrogram generated from the AFLP data indicated distinct clusters corresponding to each lake rather than to inter-lake morphologically similar pairs. These results imply that the morphological similarity is due to convergent evolution and not shared ancestry. The congruence of the GM- and AFLP-generated dendrograms implies that GM is capable of picking up phylogenetic signal, and thus GM can be a useful tool in phylogenetic systematics.

  19. Second International Diagnostic Accuracy Study for the Serological Detection of West Nile Virus Infection

    PubMed Central

    Papa, Anna; Sambri, Vittorio; Teichmann, Anette; Niedrig, Matthias

    2013-01-01

    Background In recent decades, sporadic cases and outbreaks of West Nile virus (WNV) infection in humans have increased. Serological diagnosis of WNV infection can be performed by enzyme-linked immunosorbent assay (ELISA), immunofluorescence assay (IFA), neutralization test (NT) and hemagglutination-inhibition assay. The aim of this study is to collect updated information regarding the performance accuracy of WNV serological diagnostics. Methodology/Principal findings In 2011, the European Network for the Diagnostics of Imported Viral Diseases-Collaborative Laboratory Response Network (ENIVD-CLRN) organized the second external quality assurance (EQA) study for the serological diagnosis of WNV infection. A serum panel of 13 samples (including sera reactive against WNV, plus specificity and negative controls) was sent to 48 laboratories involved in WNV diagnostics. Forty-seven of the 48 laboratories, from 30 countries, participated in the study. Eight laboratories achieved 100% concordant and correct results. The main obstacle to similar performance in the other laboratories was the cross-reactivity of antibodies amongst heterologous flaviviruses. No differences were observed in the performance of the in-house and commercial tests used by the laboratories. IFA was significantly more specific than ELISA in detecting IgG antibodies. The overall analytical sensitivity and specificity of diagnostic tests for IgM detection were 50% and 95%, respectively. In comparison, the overall sensitivity and specificity of diagnostic tests for IgG detection were 86% and 69%, respectively. Conclusions/Significance This EQA study demonstrates that there is still a need to improve serological tests for WNV diagnosis. The low sensitivity of IgM detection suggests a risk of overlooking acute WNV infections, whereas the low specificity of IgG detection demonstrates a high level of cross-reactivity with heterologous flaviviruses. PMID:23638205
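
    The sensitivity and specificity figures above follow the standard definitions. A minimal sketch; the tallies below are hypothetical counts chosen only to reproduce the reported IgM values, not the study's raw data:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of truly positive samples that were reported positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of truly negative samples that were reported negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical tallies: 24 of 48 results on a WNV IgM-positive serum were
# positive (50% sensitivity), and 19 of 20 negative-control results were
# correctly negative (95% specificity).
print(sensitivity(24, 24), specificity(19, 1))
```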

  20. Great Apes' Capacities to Recognize Relational Similarity

    ERIC Educational Resources Information Center

    Haun, Daniel B. M.; Call, Josep

    2009-01-01

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various object to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding…

  1. Mental Institutions and Similar Phenomena Called Schools

    ERIC Educational Resources Information Center

    Fischer, Ronald W.

    1971-01-01

    Mental institutions and public schools appear to have many similarities, and they often operate in ways that would seem contradictory to their philosophy. This article explores certain "atrocities to the self" that result from programs that are intended to be beneficial but, in reality, often result in dehumanization. (Author)

  2. The Case of the Similar Trees.

    ERIC Educational Resources Information Center

    Meyer, Rochelle Wilson

    1982-01-01

    A possible logical flaw based on similar triangles is discussed in connection with the Sherlock Holmes mystery "The Musgrave Ritual." The possible flaw has to do with the need for two trees to have equal growth rates over a 250-year period in order for the solution presented to work. (MP)

  3. Recognizing Similarities between Fraction Word Problems.

    ERIC Educational Resources Information Center

    Hardiman, Pamela Thibodeau

    Deciding how to approach a word problem for solution is a critical stage of problem solving, and is the stage which frequently presents considerable difficulty for novices. Do novices use the same information that experts do in deciding that two problems would be solved similarly? This set of four studies indicates that novices rely more on…

  4. Similarities in Aegyptopithecus and Afropithecus facial morphology.

    PubMed

    Leakey, M G; Leakey, R E; Richtsmeier, J T; Simons, E L; Walker, A C

    1991-01-01

    Recently discovered cranial fossils from the Oligocene deposits of the Fayum depression in Egypt provide many details of the facial morphology of Aegyptopithecus zeuxis. Similar features are found in the Miocene hominoid Afropithecus turkanensis. Their presence is the first good evidence of a strong phenetic link between the Oligocene and Miocene hominoids of Africa. A comparison of trait lists emphasizes the similarities of the two fossil species, and leads us to conclude that the two fossil genera share many primitive facial features. In addition, we studied facial morphology using finite-element scaling analysis and found that the two genera show similarities in morphological integration, or the way in which biological landmarks relate to one another in three dimensions to define the form of the organism. Size differences between the two genera are much greater than the relatively minor shape differences. Analysis of variability in landmark location among the four Aegyptopithecus specimens indicates that variability within the sample is not different from that found within two samples of modern macaques. We propose that the shape differences found among the four Aegyptopithecus specimens simply reflect individual variation in facial characteristics, and that the similarities in facial morphology between Aegyptopithecus and Afropithecus probably represent a complex of primitive facial features retained over millions of years.

  5. Predicting spatial similarity of freshwater fish biodiversity

    PubMed Central

    Azaele, Sandro; Muneepeerakul, Rachata; Maritan, Amos; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2009-01-01

    A major issue in modern ecology is to understand how ecological complexity at broad scales is regulated by mechanisms operating at the organismic level. What specific underlying processes are essential for a macroecological pattern to emerge? Here, we analyze the analytical predictions of a general model suitable for describing the spatial biodiversity similarity in river ecosystems, and benchmark them against the empirical occurrence data of freshwater fish species collected in the Mississippi–Missouri river system. Encapsulating immigration, emigration, and stochastic noise, and without resorting to species abundance data, the model is able to reproduce the observed probability distribution of the Jaccard similarity index at any given distance. In addition to providing an excellent agreement with the empirical data, this approach accounts for heterogeneities of different subbasins, suggesting a strong dependence of biodiversity similarity on their respective climates. Strikingly, the model can also predict the actual probability distribution of the Jaccard similarity index for any distance when considering just a relatively small sample. The proposed framework supports the notion that simplified macroecological models are capable of predicting fundamental patterns—a theme at the heart of modern community ecology. PMID:19359481
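    The Jaccard similarity index used in this analysis compares species presence/absence between two sites. A minimal sketch (the site species lists below are hypothetical, for illustration only):

    ```python
    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity: |intersection| / |union| of two species sets."""
        if not a and not b:
            return 1.0  # two empty communities are trivially identical
        return len(a & b) / len(a | b)

    # Hypothetical presence/absence lists for two subbasin sites
    site1 = {"catfish", "bass", "shad", "gar"}
    site2 = {"catfish", "bass", "carp"}
    print(jaccard(site1, site2))  # 2 shared species / 5 total = 0.4
    ```

    Because the index uses only presence/absence, no species abundance data are required, consistent with the model described above.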

  6. Similarity of Science Textbooks: A Content Analysis

    ERIC Educational Resources Information Center

    Yost, Michael

    1973-01-01

    Studied the similarity of the astronomy portion in five science textbooks at the fourth through sixth grade levels by comparing students' responses to text authors' requirements. Concluded that the texts had more in common across grade levels than within grade levels. (CC)

  7. Cognitive Similarity in Normal and Schizogenic Families.

    ERIC Educational Resources Information Center

    Leibowitz, Gerald

    The basic purpose of this study was to measure cognitive similarity, and to test the hypothesis that the cognitive organization of a child (normal or schizophrenic) is more like that of his own parents than it is like that of randomly chosen, unrelated adults. Thirty-six matched family triads, half with sons hospitalized for a schizophrenic…

  8. [Combination similarity algorithm on chromatographic fingerprints].

    PubMed

    Zhan, Xueyan; Shi, Xinyuan; Duan, Tianxuan; Li, Lei; Qiao, Yanjiang

    2010-11-01

    The similarity of chromatographic fingerprints is an effective approach for evaluating the quality stability of Chinese medicine, and the cosine of the angle between fingerprint vectors plays an important role in such similarity measures. However, the cosine approach is insensitive to data differences when the distribution range of the data sets is wide. This study confirms that, when the data proportions of the reference sample and the test sample differ greatly, the cosine's sensitivity to differences in peaks shared by both samples differs from its sensitivity to peaks present in only one of them. The proposed method therefore considers the peaks owned by only one sample in addition to the peaks shared by both, and assigns each group an appropriate weight, chosen to maximize the consistency of peak proportions across all Smilax glabra Roxb. samples. The method sensitively reflects differences in chemical composition area ratios between the reference and test samples, and was used to measure the similarity among nine Smilax glabra Roxb. samples, providing a new similarity algorithm for evaluating the quality stability of herbal medicines.
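    The cosine measure referred to above treats each chromatographic fingerprint as a vector of peak areas. A minimal sketch with hypothetical peak-area vectors:

    ```python
    import math

    def cosine_similarity(x, y):
        """Cosine of the angle between two peak-area vectors."""
        dot = sum(a * b for a, b in zip(x, y))
        norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
        return dot / norm

    # Hypothetical peak areas: reference fingerprint vs. a test sample
    reference = [10.0, 5.0, 2.0, 0.5]
    test      = [ 9.5, 5.5, 1.8, 0.4]
    print(cosine_similarity(reference, test))
    ```

    Note how the cosine stays close to 1 even when the small peaks differ by a large relative amount, which illustrates the insensitivity that motivates the weighted approach in the abstract.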

  9. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    PubMed Central

    Chao, Lidia S.

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should contain members that are not only accurate but also different from one another. In fact, accuracy and diversity are mutually constraining factors; an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed such that the final score is the harmonic mean of accuracy and diversity, with two weight parameters used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms (a genetic algorithm and a forward hill-climbing algorithm) in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases. PMID:24672402
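    A weighted harmonic mean of accuracy and diversity, as described above, can be sketched as follows. This is one plausible realization in the F-measure family; the paper's exact weighting scheme may differ:

    ```python
    def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
        """Weighted harmonic mean of ensemble accuracy and diversity.

        One plausible form of the WAD idea (an F-measure-style weighted
        harmonic mean with weights summing to 1); the published measure
        may parameterize the balance differently.
        """
        assert abs(w_acc + w_div - 1.0) < 1e-9
        if accuracy == 0 or diversity == 0:
            return 0.0  # harmonic mean collapses if either factor is zero
        return 1.0 / (w_acc / accuracy + w_div / diversity)

    print(wad_score(0.9, 0.6))            # balanced weights -> 0.72
    print(wad_score(0.9, 0.6, 0.7, 0.3))  # weights favoring accuracy
    ```

    The harmonic mean penalizes an imbalance between the two factors, which is exactly the "mutual restraint" trade-off the abstract describes.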

  10. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimate of the accuracy of source reconstruction and have found optimum conditions for reconstructing source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual sources and the reconstructed sources was taken as a quantitative measure of accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of the correlation coefficient were obtained. All the TSMs considered here showed similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h.
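    Spearman's rank-order correlation, used above as the accuracy measure, is the Pearson correlation of the ranks of the two fields. A minimal self-contained sketch (the source-strength arrays below are hypothetical):

    ```python
    def rank(values):
        """Average ranks (1-based), with ties sharing the mean rank."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # mean of tied positions, 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman correlation = Pearson correlation of the ranks."""
        rx, ry = rank(x), rank(y)
        n = len(rx)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Hypothetical known vs. reconstructed source strengths on a small grid
    true_src  = [0.0, 1.0, 3.0, 2.0, 5.0]
    recon_src = [0.2, 0.9, 2.5, 2.7, 4.8]
    print(round(spearman(true_src, recon_src), 3))  # one swapped pair -> 0.9
    ```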

  11. Parametric Characterization of SGP4 Theory and TLE Positional Accuracy

    NASA Astrophysics Data System (ADS)

    Oltrogge, D.; Ramrath, J.

    2014-09-01

    Two-Line Elements, or TLEs, contain mean element state vectors compatible with General Perturbations (GP) singly-averaged semi-analytic orbit theory. This theory, embodied in the SGP4 orbit propagator, provides sufficient accuracy for some (but perhaps not all) orbit operations and SSA tasks. For more demanding tasks, higher accuracy orbit and force model approaches (i.e. Special Perturbations numerical integration or SP) may be required. In recent times, the suitability of TLEs or GP theory for any SSA analysis has been increasingly questioned. Meanwhile, SP is touted as being of high quality and well-suited for most, if not all, SSA applications. Yet the lack of truth or well-known reference orbits that haven't already been adopted for radar and optical sensor network calibration has typically prevented a truly unbiased assessment of such assertions. To gain better insight into the practical limits of applicability for TLEs, SGP4 and the underlying GP theory, the native SGP4 accuracy is parametrically examined for the statistically-significant range of RSO orbit inclinations experienced as a function of all orbit altitudes from LEO through GEO disposal altitude. For each orbit altitude, reference or truth orbits were generated using full force modeling, time-varying space weather, and AGI's HPOP numerical integration orbit propagator. Then, TLEs were optimally fit to these truth orbits. The resulting TLEs were then propagated and positionally differenced with the truth orbits to determine how well the GP theory was able to fit the truth orbits. Resultant statistics characterizing these empirically-derived accuracies are provided. This TLE fit process of truth orbits was intentionally designed to be similar to the JSpOC process operationally used to generate Enhanced GP TLEs for debris objects. This allows us to draw additional conclusions of the expected accuracies of EGP TLEs. In the real world, Orbit Determination (OD) programs aren't provided with dense optical

  12. Quantifying Visual Similarity in Clinical Iconic Graphics

    PubMed Central

    Payne, Philip R.O.; Starren, Justin B.

    2005-01-01

    Objective: The use of icons and other graphical components in user interfaces has become nearly ubiquitous. The interpretation of such icons is based on the assumption that different users perceive the shapes similarly. At the most basic level, different users must agree on which shapes are similar and which are different. If this similarity can be measured, it may be usable as the basis to design better icons. Design: The purpose of this study was to evaluate a novel method for categorizing the visual similarity of graphical primitives, called Presentation Discovery, in the domain of mammography. Six domain experts were given 50 common textual mammography findings and asked to draw how they would represent those findings graphically. Nondomain experts sorted the resulting graphics into groups based on their visual characteristics. The resulting groups were then analyzed using traditional statistics and hypothesis discovery tools. Strength of agreement was evaluated using computational simulations of sorting behavior. Measurements: Sorter agreement was measured at both the individual graphical and concept-group levels using a novel simulation-based method. “Consensus clusters” of graphics were derived using a hierarchical clustering algorithm. Results: The multiple sorters were able to reliably group graphics into similar groups that strongly correlated with underlying domain concepts. Visual inspection of the resulting consensus clusters indicated that graphical primitives that could be informative in the design of icons were present. Conclusion: The method described provides a rigorous alternative to intuitive design processes frequently employed in the design of icons and other graphical interface components. PMID:15684136

  13. Perceptual tests of rhythmic similarity: II. Syllable rhythm.

    PubMed

    Kim, Jeesun; Davis, Chris; Cutler, Anne

    2008-01-01

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable- and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.

  14. THE EFFECTS OF TEACHER-STUDENT SIMILARITY IN AN EDUCATIONAL SKILLS COURSE.

    ERIC Educational Resources Information Center

    FRETZ, BRUCE B.; SCHMIDT, LYLE D.

    THE INTERACTION OF TEACHER-STUDENT CHARACTERISTICS AND NEEDS AS RELATED TO STUDENTS' ACHIEVEMENT, IMPROVEMENT, AND SATISFACTION WAS EXPLORED DURING THE REPORTED RESEARCH. THE FOLLOWING HYPOTHESES WERE TESTED--(1) THOSE STUDENTS WHOSE MEASURED CHARACTERISTICS ARE MOST SIMILAR TO THOSE OF THE INSTRUCTOR OBTAIN THE HIGHEST ACHIEVEMENT RATIOS, (2)…

  15. Using Transponders on the Moon to Increase Accuracy of GPS

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Chui, Talso

    2008-01-01

    It has been proposed to place laser or radio transponders at suitably chosen locations on the Moon to increase the accuracy achievable using the Global Positioning System (GPS) or other satellite-based positioning system. The accuracy of GPS position measurements depends on the accuracy of determination of the ephemerides of the GPS satellites. These ephemerides are determined by means of ranging to and from Earth-based stations and consistency checks among the satellites. Unfortunately, ranging to and from Earth is subject to errors caused by atmospheric effects, notably including unpredictable variations in refraction. The proposal is based on exploitation of the fact that ranging between a GPS satellite and another object outside the atmosphere is not subject to error-inducing atmospheric effects. The Moon is such an object and is a convenient place for a ranging station. The ephemeris of the Moon is well known and, unlike a GPS satellite, the Moon is massive enough that its orbit is not measurably affected by the solar wind and solar radiation. According to the proposal, each GPS satellite would repeatedly send a short laser or radio pulse toward the Moon and the transponder(s) would respond by sending back a pulse and delay information. The GPS satellite could then compute its distance from the known position(s) of the transponder(s) on the Moon. Because the same hemisphere of the Moon faces the Earth continuously, any transponders placed there would remain continuously or nearly continuously accessible to GPS satellites, and so only a relatively small number of transponders would be needed to provide continuous coverage. Assuming that the transponders would depend on solar power, it would be desirable to use at least two transponders, placed at diametrically opposite points on the edges of the Moon disk as seen from Earth, so that all or most of the time, at least one of them would be in sunlight.
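    The ranging step described above reduces to timing a pulse's round trip at the speed of light, after removing the transponder's reported processing delay. A minimal sketch (the timing values are hypothetical):

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def one_way_range(round_trip_s, transponder_delay_s):
        """Satellite-to-transponder distance from a timed round trip.

        The transponder reports its internal processing delay, which is
        subtracted before halving the light-travel time.
        """
        return C * (round_trip_s - transponder_delay_s) / 2.0

    # Hypothetical numbers: ~2.56 s round trip for a GPS-satellite-to-Moon
    # leg, with 1 microsecond of transponder processing delay
    print(one_way_range(2.56, 1e-6) / 1000.0, "km")
    ```

    Because both endpoints are outside the atmosphere, no refraction correction appears in this computation, which is the central point of the proposal.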

  16. Spatial augmented reality based high accuracy human face projection

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Jinghui; Li, Yufeng; Weng, Dongdong; Liu, Yue

    2015-08-01

    This paper discusses the imaging principles and technical difficulties of spatial augmented reality based human face projection. A novel geometry correction method is proposed to realize fast, high-accuracy face model projection. Using a depth camera to reconstruct the projected object, the relative position from the rendered model to the projector can be obtained and the initial projection image generated. The projected image is then distorted using Bezier interpolation to guarantee that the projected texture matches the object surface. The proposed method follows a simple process flow and achieves high-perception registration of virtual and real objects. In addition, the method performs well even when the reconstructed model is not exactly the same as the rendered virtual model, which extends its application area in spatial augmented reality based human face projection.

  17. Researching the technology of high-accuracy camshaft measurement

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Chen, Yong-Le; Wang, Hong; Liao, Hai-Yang

    1996-10-01

    This paper describes in detail the cam data processing algorithm of a high-accuracy camshaft measurement system. It contains: 1) using the minimum error of curve symmetry to find the centre position of the key slot; 2) calculating the minimum error against the cam's theoretical curve to locate the top area; 3) according to the cam's tolerance function E(i) and the minimum angle error at the cam top, finding the best position of the cam top and obtaining the best angle value and error curve. The algorithm is suitable for measuring all kinds of symmetric or asymmetric cams, with flat or spherical push-rod followers, for example bus, car and motor camshafts. Using this algorithm, high-accuracy measurement can be achieved.

  18. On the Accuracy Potential in Underwater/Multimedia Photogrammetry.

    PubMed

    Maas, Hans-Gerd

    2015-07-24

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell's Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor two could be determined under relatively favorable conditions.
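    Snell's Law, which governs the refracted ray path through the air-glass-water interfaces described above, is easy to state in code. A minimal sketch using standard textbook refractive indices:

    ```python
    import math

    def refract_angle(theta_incident_rad, n1, n2):
        """Refraction angle from Snell's law: n1*sin(t1) = n2*sin(t2).

        Raises ValueError on total internal reflection (|sin(t2)| > 1).
        """
        s = n1 * math.sin(theta_incident_rad) / n2
        if abs(s) > 1.0:
            raise ValueError("total internal reflection")
        return math.asin(s)

    # Ray passing from air (n = 1.0) into water (n = 1.33) at 30 deg incidence
    theta2 = refract_angle(math.radians(30.0), 1.0, 1.33)
    print(round(math.degrees(theta2), 2))
    ```

    A full multimedia bundle adjustment applies this relation at every interface along each ray, which is what the modular geometric model in the paper encapsulates.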

  19. Evaluation of DEM generation accuracy from UAS imagery

    NASA Astrophysics Data System (ADS)

    Santise, M.; Fornari, M.; Forlani, G.; Roncella, R.

    2014-06-01

    The growing use of UAS platforms for aerial photogrammetry comes with a new family of highly automated Computer Vision processing software expressly built to manage the peculiar characteristics of these image blocks. It is of interest to photogrammetrists and professionals, therefore, to find out whether the image orientation and DSM generation methods implemented in such software are reliable and whether the DSMs and orthophotos are accurate. On a more general basis, it is interesting to figure out whether it is still worth applying the standard rules of aerial photogrammetry to the case of drones, achieving the same inner strength and the same accuracies as well. With these goals in mind, a test area was set up at the University Campus in Parma. A large number of ground points was measured on natural as well as signalized points, to provide a comprehensive test field for checking the accuracy performance of different UAS systems. In the test area, both ground-level points and features on building roofs were measured, in order to obtain well-distributed altimetric support as well. Control points were set on different types of surfaces (buildings, asphalt, targets, grass fields and bumps); break lines were also employed. The paper presents the results of a comparison between two different surveys for DEM (Digital Elevation Model) generation, performed at 70 m and 140 m flying height using a Falcon 8 UAS.

  20. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.
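    The centimetre-level accuracy figures quoted above are conventionally reported as root-mean-square errors over independent check points (ICPs). A minimal sketch (the elevation values below are hypothetical):

    ```python
    def rmse(observed, reference):
        """Root-mean-square error between model and surveyed elevations."""
        n = len(observed)
        return (sum((o - r) ** 2 for o, r in zip(observed, reference)) / n) ** 0.5

    # Hypothetical DEM heights vs. surveyed heights (metres) at five ICPs
    dem    = [101.04, 98.77, 102.51, 99.90, 100.62]
    survey = [101.00, 98.80, 102.45, 99.95, 100.60]
    print(round(rmse(dem, survey), 3), "m")
    ```

    The same statistic computed over stable terrain between two epochs gives the co-registration error, and hence the minimum real terrain change the time series can detect.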

  1. Prospective memory mediated by interoceptive accuracy: a psychophysiological approach

    PubMed Central

    Tochizawa, Saiko; Shibata, Midori; Terasawa, Yuri

    2016-01-01

    Previous studies on prospective memory (PM), defined as memory for future intentions, suggest that psychological stress enhances successful PM retrieval. However, the mechanisms underlying this notion remain poorly understood. We hypothesized that PM retrieval is achieved through interaction with autonomic nervous activity, which is mediated by the individual accuracy of interoceptive awareness, as measured by the heartbeat detection task. In this study, the relationship between cardiac reactivity and retrieval of delayed intentions was evaluated using the event-based PM task. Participants were required to detect PM target letters while engaged in an ongoing 2-back working memory task. The results demonstrated that individuals with higher PM task performance had a greater increase in heart rate on PM target presentation. Also, higher interoceptive perceivers showed better PM task performance. This pattern was not observed for working memory task performance. These findings suggest that cardiac afferent signals enhance PM retrieval, which is mediated by individual levels of interoceptive accuracy. This article is part of the themed issue ‘Interoception beyond homeostasis: affect, cognition and mental health’. PMID:28080964
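    Interoceptive accuracy scores of the kind referred to above are often computed from heartbeat tasks. The sketch below uses the common heartbeat-*counting* score (comparing counted to recorded beats); note that this study used a heartbeat *detection* task, so this formula is shown only as a closely related, widely used accuracy index, and the trial values are hypothetical:

    ```python
    def interoceptive_accuracy(actual_beats, counted_beats):
        """Mean heartbeat-counting accuracy across trials.

        Schandry-style score: 1 - |actual - counted| / actual, averaged
        over trials. Shown as an illustrative index; the study itself
        used a heartbeat detection task.
        """
        scores = [1.0 - abs(a - c) / a for a, c in zip(actual_beats, counted_beats)]
        return sum(scores) / len(scores)

    # Hypothetical trials: recorded heartbeats vs. participant's count
    print(round(interoceptive_accuracy([40, 55, 70], [36, 50, 69]), 3))
    ```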

  2. Generalized and Heuristic-Free Feature Construction for Improved Accuracy

    PubMed Central

    Fan, Wei; Zhong, Erheng; Peng, Jing; Verscheure, Olivier; Zhang, Kun; Ren, Jiangtao; Yan, Rong; Yang, Qiang

    2010-01-01

    State-of-the-art learning algorithms accept data in feature vector format as input. Examples belonging to different classes may not always be easy to separate in the original feature space. One may ask: can transformation of existing features into a new space reveal significant discriminative information not obvious in the original space? First, since there can be an infinite number of ways to extend features, it is impractical to enumerate them and then perform feature selection. Second, evaluation of discriminative power on the complete dataset is not always optimal, because features highly discriminative on a subset of examples may not be significant when evaluated on the entire dataset. Third, feature construction ought to be automated and general, such that it requires no domain knowledge and its accuracy improvement holds over a large number of classification algorithms. In this paper, we propose a framework that addresses these problems through the following steps: (1) divide-and-conquer to avoid exhaustive enumeration; (2) local feature construction and evaluation within subspaces of examples where local error is still high and the features constructed so far still do not predict well; (3) weighting-rule-based search that is free of domain knowledge and has a provable performance guarantee. Empirical studies indicate that significant improvement (as much as 9% in accuracy and 28% in AUC) is achieved using the newly constructed features over a variety of inductive learners evaluated against a number of balanced, skewed and high-dimensional datasets. Software and datasets are available from the authors. PMID:21544257

  3. Climate Change Observation Accuracy: Requirements and Economic Value

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce; Cooke, Roger; Golub, Alexander; Baize, Rosemary; Mlynczak, Martin; Lukashin, Constantin; Thome, Kurt; Shea, Yolanda; Kopp, Greg; Pilewskie, Peter; Revercomb, Henry; Best, Fred

    2016-01-01

    This presentation will summarize a new quantitative approach to determining the required accuracy for climate change observations. Using this metric, most current global satellite observations struggle to meet the required accuracy level. CLARREO (Climate Absolute Radiance and Refractivity Observatory) is a new satellite mission designed to resolve this challenge by achieving advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra. The CLARREO spectrometers can serve as SI-traceable benchmarks for the Global Space-based Inter-Calibration System (GSICS) and greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, etc.). A CLARREO Pathfinder mission for flight on the International Space Station is included in the U.S. President's fiscal year 2016 budget, with launch in 2019 or 2020. Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A new study has been carried out to quantify the economic benefits of such an advance and concludes that the economic value is 9 trillion U.S. dollars. The new value includes the cost of carbon emission reductions.

  4. Applications for high-accuracy digital ionosonde data

    SciTech Connect

    Paul, A.K.

    1990-05-03

    The new technology used in modern digital ionosondes permits the measurement of traditional (virtual heights and echo amplitudes) and new (echo radio phase) ionospheric data with very high precision. Consequently, higher accuracy for standard ionospheric parameters can be achieved and new types of parameters can be obtained using new processing methods. Details of such data analysis programs may depend on the type of digital ionosonde used; however, the basic physical principles involved are the same. For example, there is no doubt that the change of the radio phase with time is proportional to the Doppler frequency of the echo. In recent years, much effort has gone into modeling of the ionosphere. Unfortunately, the spatial and temporal resolution of the most basic parameters of the data base for testing such models is inadequate. For example, it appears that in some areas (e.g., Europe) the spatial resolution of the F-layer maximum electron density may be sufficient, but this is not true for the height of the maximum and the half-thickness of the F-layer, since very few stations compute electron density profiles from the recorded ionograms. In the following we outline a new procedure for computing F-layer profile parameters. The process is simple, and its routine application could significantly improve the data base. The accuracy limits of the resulting parameters will be discussed, together with some other important ionospheric quantities observable with digital ionosondes.

  5. On the Accuracy Potential in Underwater/Multimedia Photogrammetry

    PubMed Central

    Maas, Hans-Gerd

    2015-01-01

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell’s Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor two could be determined under relatively favorable conditions. PMID:26213942

  6. Does DFT-SAPT method provide spectroscopic accuracy?

    SciTech Connect

    Shirkov, Leonid; Makarewicz, Jan

    2015-02-14

    Ground state potential energy curves for homonuclear and heteronuclear dimers consisting of noble gas atoms from He to Kr were calculated within symmetry-adapted perturbation theory based on density functional theory (DFT-SAPT). These potentials, together with spectroscopic data derived from them, were compared to previous high-precision coupled-cluster calculations with singles, doubles, and connected triples (or better, if available) as well as to experimental data used as the benchmark. The impact of midbond functions on the DFT-SAPT results was tested to study the convergence of the interaction energies. It was shown that, for most of the complexes, the DFT-SAPT potential calculated at the complete basis set (CBS) limit is lower than the corresponding benchmark potential in the region near its minimum, and hence spectroscopic accuracy cannot be achieved. The influence of the residual term δ(HF) on the interaction energy was also studied. As a result, we have found that this term improves the agreement with the benchmark in the repulsive region for the dimers considered, but leads to an even larger overestimation of the potential depth D_e. Although the standard hybrid exchange-correlation (xc) functionals with asymptotic correction within second-order DFT-SAPT do not provide spectroscopic accuracy at the CBS limit, it is possible to empirically adjust basis sets to yield highly accurate results.

  7. Optimal diving maneuver strategy considering guidance accuracy for hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Zhu, Jianwen; Liu, Luhua; Tang, Guojian; Bao, Weimin

    2014-11-01

    An optimal maneuver strategy that accounts for terminal guidance accuracy is investigated for a hypersonic vehicle in the dive phase. First, the complete three-dimensional nonlinear coupled equations of motion are derived directly from the diving relative-motion relationships without approximation, and converted by feedback linearization into linear decoupled state-space equations with the same relative degree. Second, the diving guidance law is designed on the basis of the decoupled equations to meet the terminal impact-point and falling-angle constraints. To further improve interception capability, a maneuver control model is constructed by adding a maneuver control term to the guidance law. An integrated performance index, combining maximum line-of-sight angle rate with minimum energy consumption, is then designed, and optimal control is employed to obtain the optimal maneuver strategy for the cases where the encounter time is determined and undetermined, respectively. Furthermore, the performance index and a suboptimal strategy are reconstructed to deal with the control-capability constraint and the serious influence of maneuvering flight on terminal guidance accuracy. Finally, the approach is tested using the Common Aero Vehicle-H model. Simulation results demonstrate that the proposed strategy achieves high-precision guidance and effective maneuvering at the same time, with the indices also optimized.

  8. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  9. Response time accuracy in Apple Macintosh computers.

    PubMed

    Neath, Ian; Earle, Avery; Hallett, Darcy; Surprenant, Aimée M

    2011-06-01

    The accuracy and variability of response times (RTs) collected on stock Apple Macintosh computers using USB keyboards were assessed. A photodiode detected a change in the screen's luminosity and triggered a solenoid that pressed a key on the keyboard. The RTs collected in this way were reliable, but could be as much as 100 ms too long. The standard deviation of the measured RTs varied between 2.5 and 10 ms, and the distributions approximated a normal distribution. Surprisingly, two recent Apple-branded USB keyboards differed in their accuracy by as much as 20 ms. The most accurate RTs were collected when an external CRT was used to display the stimuli and Psychtoolbox was able to synchronize presentation with the screen refresh. We conclude that RTs collected on stock iMacs can detect a difference as small as 5-10 ms under realistic conditions, and this dictates which types of research should or should not use these systems.

  10. Accuracy of forecasts in strategic intelligence.

    PubMed

    Mandel, David R; Barnes, Alan

    2014-07-29

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4-0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement.
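
    As a sketch of how calibration of probability forecasts is typically assessed (the binning scheme and data below are illustrative, not taken from the study): forecasts are grouped into probability bins, and the mean forecast in each bin is compared with the observed frequency of the event. Underconfidence shows up as observed frequencies more extreme than the forecasts.

    ```python
    from collections import defaultdict

    def calibration_table(forecasts, outcomes, n_bins=10):
        """Group probability forecasts into n_bins equal-width bins and
        report, per bin: (mean forecast, observed hit rate, count)."""
        bins = defaultdict(list)
        for p, y in zip(forecasts, outcomes):
            idx = min(int(p * n_bins), n_bins - 1)
            bins[idx].append((p, y))
        table = {}
        for idx, pairs in sorted(bins.items()):
            mean_p = sum(p for p, _ in pairs) / len(pairs)
            hit_rate = sum(y for _, y in pairs) / len(pairs)
            table[idx] = (mean_p, hit_rate, len(pairs))
        return table

    # Hypothetical example: events forecast at 0.8 that always occur, and
    # events forecast at 0.2 that never occur, are underconfident forecasts.
    table = calibration_table([0.8] * 10 + [0.2] * 10, [1] * 10 + [0] * 10)
    ```
    
    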

  11. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range of flow rates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high-accuracy requirement and what the further development needs were for each. A dual-turbine volumetric flowmeter with a densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular-momentum type with a motor-driven, spring-restrained turbine and a viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex-precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  12. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map, and geographic information program run by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users around the world, Google Earth® has become a primary source of spatial data and information for private and public decision-support systems, as well as for many forms of social interaction. Many users, particularly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment of the positional accuracy of Google Earth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the Google Earth imagery is 2.18 m for the horizontal coordinates and 1.51 m for the heights.
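
    RMSE figures of this kind are computed from coordinate differences between the imagery and surveyed check points in the usual way; a minimal sketch (the per-point differences below are hypothetical, not the study's data):

    ```python
    import math

    def rmse(errors):
        """Root-mean-square error of a list of coordinate differences (metres)."""
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    def horizontal_rmse(de, dn):
        """Horizontal RMSE from per-point easting and northing differences,
        combining both components before averaging."""
        return math.sqrt(sum(e * e + n * n for e, n in zip(de, dn)) / len(de))

    # Hypothetical check-point residuals (metres):
    dh = [1.2, -0.8, 2.1, -1.5]            # height differences
    de = [1.1, -0.9, 1.8, -0.7]            # easting differences
    dn = [0.6, 1.4, -1.2, 0.9]             # northing differences
    height_rmse = rmse(dh)
    horiz_rmse = horizontal_rmse(de, dn)
    ```
    
    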

  13. Childhood vaccination: achievements and challenges.

    PubMed

    Ndumbe, P

    1996-09-01

    As the goal of eradicating smallpox was being met, the World Health Organization created its Expanded Programme on Immunisation (EPI) in 1974 and reached its initial goal of achieving full vaccination of 80% of the world's children by 1990. This effort was aided by the creation of "cold chain" delivery systems and resulted in the annual saving of 3.5 million children in less-developed countries. Current EPI vaccination goals include 1) eradication of poliomyelitis by the year 2000, 2) elimination of neonatal tetanus by the year 1995, 3) control of measles and hepatitis B, and 4) immunization of 90% of the world's children 1 year or younger by the year 2000. Goals of the Children's Vaccine Initiative (formed in 1991) include 1) provision of an adequate supply of affordable, safe, and effective vaccines; 2) production of improved and new vaccines; and 3) simplification of the logistics of vaccine delivery. Future challenges are to sustain high vaccination coverage, reach the unreached, achieve proper storage of vaccines and reduce waste, integrate new vaccines into national programs, and achieve vaccine self-sufficiency. The fact that these challenges will be difficult to achieve is illustrated by the situation in Africa where the high immunization levels achieved in 1990 have dropped dramatically. Those who must act to implement immunization programs are health personnel, families, governments, and development partners. In order to achieve equity in health, every child must be reached, governments must be made accountable for programs, health workers must convince families of the importance of vaccination, delivery systems must be in place to take advantage of the new vaccines being delivered, and a multisectoral approach must be taken to assure sustainability.

  14. Accuracy estimation for supervised learning algorithms

    SciTech Connect

    Glover, C.W.; Oblow, E.M.; Rao, N.S.V.

    1997-04-01

    This paper illustrates the relative merits of three methods for estimating the accuracy of a supervised learning algorithm: k-fold cross-validation, error bounds, and the incremental halting test. For each method we point out the problem it addresses and some of the important assumptions it is based on, and illustrate it through an example. Finally, we discuss the relative advantages and disadvantages of each method.
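
    A minimal sketch of the first of these methods, k-fold cross-validation, assuming a generic train/predict interface (the majority-class "classifier" and toy data are illustrative, not from the paper):

    ```python
    import random

    def k_fold_accuracy(data, train_fn, predict_fn, k=10, seed=0):
        """Estimate a classifier's accuracy by k-fold cross-validation:
        shuffle the data, split it into k folds, train on k-1 folds,
        test on the held-out fold, and average the k fold accuracies."""
        data = list(data)
        random.Random(seed).shuffle(data)
        folds = [data[i::k] for i in range(k)]
        accs = []
        for i in range(k):
            test = folds[i]
            train = [ex for j, f in enumerate(folds) if j != i for ex in f]
            model = train_fn(train)
            correct = sum(predict_fn(model, x) == y for x, y in test)
            accs.append(correct / len(test))
        return sum(accs) / len(accs)

    # Toy example: a majority-class "classifier" on dummy labeled data.
    def train_majority(train):
        labels = [y for _, y in train]
        return max(set(labels), key=labels.count)

    def predict_majority(model, x):
        return model  # always predicts the majority label

    data = [(i, 1) for i in range(20)] + [(i, 0) for i in range(5)]
    acc = k_fold_accuracy(data, train_majority, predict_majority, k=5)
    ```
    
    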

  15. Boresighting Issues for High Accuracy TSPI Sensors

    DTIC Science & Technology

    2015-04-29

    412 RANS/ENRT, Edwards Air Force Base, CA. For a navigation sensor to accurately report the heading, roll, and pitch of an aircraft, the angular offset between the sensor's inertial coordinate system and the aircraft's coordinate system, the "boresight," must be measured. Errors in the boresight measurements directly affect the accuracy of the navigation solution.

  16. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limits of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power-grid signal. Different scenarios for these factors are evaluated according to the normal operating conditions of power-grid measurement. Based on the evaluation and simulation, the phase-angle and frequency errors caused by each factor are calculated and discussed.
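
    One of these error sources, time-synchronization error, maps to phase-angle error in a directly computable way; a brief sketch (the 60 Hz system frequency and 1 µs timing error are example values, not figures from the paper):

    ```python
    def angle_error_deg(f_hz, t_err_s):
        """Phase-angle error (degrees) produced by a time-synchronization
        error: the phasor rotates 360 * f degrees per second, so a timing
        offset of t seconds shifts the reported angle by 360 * f * t."""
        return 360.0 * f_hz * t_err_s

    # A 1 microsecond GPS timing error at 60 Hz:
    # 360 * 60 * 1e-6 = 0.0216 degrees of phase-angle error.
    err = angle_error_deg(60.0, 1e-6)
    ```
    
    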

  17. Secure Fingerprint Identification of High Accuracy

    DTIC Science & Technology

    2014-01-01

    Building on prior work on secure face recognition ([12], [29], and others), DNA matching ([35], [6], and others), and iris-code comparisons ([9], [7]), this work treats the problem of privacy-preserving matching of two fingerprints, which can be used for secure fingerprint authentication and identification.

  18. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent (±6 percent) for evergreen woodland to 81 percent (±7 percent) for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
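
    The per-class commission error used in such assessments can be computed from a confusion matrix whose rows are mapped classes and whose columns are reference (ground) classes; a minimal sketch with a hypothetical 2x2 matrix (not the AVRI data):

    ```python
    def commission_errors(matrix, labels):
        """Per-class commission error from a confusion matrix: the fraction
        of samples mapped to a class that do not belong to it according to
        the reference data (off-diagonal row entries / row total)."""
        errs = {}
        for i, lab in enumerate(labels):
            row_total = sum(matrix[i])
            errs[lab] = (row_total - matrix[i][i]) / row_total if row_total else 0.0
        return errs

    # Hypothetical counts: rows = mapped class, columns = reference class.
    matrix = [[9, 1],   # mapped "woodland": 9 correct, 1 actually cropland
              [2, 8]]   # mapped "cropland": 2 actually woodland, 8 correct
    errs = commission_errors(matrix, ["woodland", "cropland"])
    ```
    
    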

  19. Similarity indices based on link weight assignment for link prediction of unweighted complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Shuxin; Ji, Xinsheng; Liu, Caixia; Bai, Yi

    2017-01-01

    Many link prediction methods have been proposed for predicting the likelihood that a link exists between two nodes in complex networks. Among these methods, similarity indices are receiving close attention. Most similarity-based methods assume that the contribution of links with different topological structures is the same in the similarity calculations. This paper proposes a local weighted method, which weights the strength of connection between each pair of nodes. Based on the local weighted method, six local weighted similarity indices extended from unweighted similarity indices (including Common Neighbor (CN), Adamic-Adar (AA), Resource Allocation (RA), Salton, Jaccard and Local Path (LP) index) are proposed. Empirical study has shown that the local weighted method can significantly improve the prediction accuracy of these unweighted similarity indices and that in sparse and weakly clustered networks, the indices perform even better.
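
    For reference, four of the unweighted baseline indices the paper extends (CN, AA, RA, and Jaccard; Salton and LP omitted for brevity) can be sketched as follows, using a toy adjacency structure (illustrative, not from the paper):

    ```python
    import math

    def similarity(adj, x, y, index="CN"):
        """Unweighted neighborhood-similarity score between nodes x and y.
        adj maps each node to the set of its neighbors."""
        common = adj[x] & adj[y]
        if index == "CN":        # Common Neighbors: count shared neighbors
            return len(common)
        if index == "Jaccard":   # shared neighbors / all neighbors
            union = adj[x] | adj[y]
            return len(common) / len(union) if union else 0.0
        if index == "AA":        # Adamic-Adar: down-weight high-degree hubs
            return sum(1.0 / math.log(len(adj[z]))
                       for z in common if len(adj[z]) > 1)
        if index == "RA":        # Resource Allocation: stronger hub penalty
            return sum(1.0 / len(adj[z]) for z in common if adj[z])
        raise ValueError("unknown index: " + index)

    # Toy undirected graph: 1-2, 1-3, 2-3, 3-4.
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    cn_12 = similarity(adj, 1, 2, "CN")
    ```
    
    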

  20. Speed versus accuracy in collective decision making.

    PubMed Central

    Franks, Nigel R; Dornhaus, Anna; Fitzsimmons, Jon P; Stevens, Martin

    2003-01-01

    We demonstrate a speed versus accuracy trade-off in collective decision making. House-hunting ant colonies choose a new nest more quickly in harsh conditions than in benign ones and are less discriminating. The errors that occur in a harsh environment are errors of judgement not errors of omission because the colonies have discovered all of the alternative nests before they initiate an emigration. Leptothorax albipennis ants use quorum sensing in their house hunting. They only accept a nest, and begin rapidly recruiting members of their colony, when they find within it a sufficient number of their nest-mates. Here we show that these ants can lower their quorum thresholds between benign and harsh conditions to adjust their speed-accuracy trade-off. Indeed, in harsh conditions these ants rely much more on individual decision making than collective decision making. Our findings show that these ants actively choose to take their time over judgements and employ collective decision making in benign conditions when accuracy is more important than speed. PMID:14667335