Science.gov

Sample records for achieve similar accuracies

  1. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  2. Achieving seventh-order amplitude accuracy in leapfrog integrations

    NASA Astrophysics Data System (ADS)

    Williams, Paul

    2015-04-01

The leapfrog time-stepping scheme is commonly used in general circulation models of weather and climate. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1,-4,6,-4,1) filter instead of a (1,-2,1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models. Reference: Williams, P. D. (2013). Achieving seventh-order amplitude accuracy in leapfrog integrations. Monthly Weather Review, 141(9), 3037-3051. DOI: 10.1175/MWR-D-12-00303.1
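The filtered leapfrog family described in this abstract is compact enough to sketch. The Python code below is an illustrative reconstruction, not the author's implementation: it integrates the linear oscillation dx/dt = i*omega*x by leapfrog with the Robert-Asselin/RAW filter, where alpha = 1 recovers the classical Robert-Asselin filter and alpha of about 0.53 is the RAW value proposed by Williams; the step size, filter strength, and step count are illustrative choices.

```python
import numpy as np

def leapfrog_amplitude(omega=1.0, dt=0.2, nsteps=500, nu=0.2, alpha=1.0):
    """Integrate dx/dt = i*omega*x by leapfrog with the RAW filter and
    return the final amplitude (the exact value is 1 at all times).
    alpha = 1.0 gives the classical Robert-Asselin filter (first-order
    amplitude errors); alpha ~ 0.53 gives the RAW filter (third-order)."""
    f = lambda x: 1j * omega * x
    x_prev = 1.0 + 0.0j                    # exact value at t = 0
    x_curr = np.exp(1j * omega * dt)       # exact value at t = dt
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * f(x_curr)            # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)   # filter displacement
        x_prev = x_curr + alpha * d            # Robert-Asselin part
        x_curr = x_next + (alpha - 1.0) * d    # extra RAW correction
    return abs(x_curr)

ra_err  = abs(leapfrog_amplitude(alpha=1.0)  - 1.0)  # Robert-Asselin filter
raw_err = abs(leapfrog_amplitude(alpha=0.53) - 1.0)  # RAW filter
```

With these parameters the RAW run retains much more of the true unit amplitude than the Robert-Asselin run, consistent with the higher-order amplitude accuracy discussed above.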

  3. Different clinical electrodes achieve similar electrical nerve conduction block

    NASA Astrophysics Data System (ADS)

    Boger, Adam; Bhadra, Narendra; Gustafson, Kenneth J.

    2013-10-01

    Objective. We aim to evaluate the suitability of four electrodes previously used in clinical experiments for peripheral nerve electrical block applications. Approach. We evaluated peripheral nerve electrical block using three such clinical nerve cuff electrodes (the Huntington helix, the Case self-sizing Spiral and the flat interface nerve electrode) and one clinical intramuscular electrode (the Memberg electrode) in five cats. Amplitude thresholds for the block using 12 or 25 kHz voltage-controlled stimulation, onset response, and stimulation thresholds before and after block testing were determined. Main results. Complete nerve block was achieved reliably and the onset response to blocking stimulation was similar for all electrodes. Amplitude thresholds for the block were lowest for the Case Spiral electrode (4 ± 1 Vpp) and lower for the nerve cuff electrodes (7 ± 3 Vpp) than for the intramuscular electrode (26 ± 10 Vpp). A minor elevation in stimulation threshold and reduction in stimulus-evoked urethral pressure was observed during testing, but the effect was temporary and did not vary between electrodes. Significance. Multiple clinical electrodes appear suitable for neuroprostheses using peripheral nerve electrical block. The freedom to choose electrodes based on secondary criteria such as ease of implantation or cost should ease translation of electrical nerve block to clinical practice.

  4. Accuracy required and achievable in radiotherapy dosimetry: have modern technology and techniques changed our views?

    NASA Astrophysics Data System (ADS)

    Thwaites, David

    2013-06-01

    In this review of the accuracy required and achievable in radiotherapy dosimetry, older approaches and evidence-based estimates for 3DCRT have been reprised, summarising and drawing together the author's earlier evaluations where still relevant. Available evidence for IMRT uncertainties has been reviewed, selecting information from tolerances, QA, verification measurements, in vivo dosimetry and dose delivery audits, to consider whether achievable uncertainties increase or decrease for current advanced treatments and practice. Overall there is some evidence that they tend to increase, but that similar levels should be achievable. Thus it is concluded that those earlier estimates of achievable dosimetric accuracy are still applicable, despite the changes and advances in technology and techniques. The one exception is where there is significant lung involvement, where it is likely that uncertainties have now improved due to widespread use of more accurate heterogeneity models. Geometric uncertainties have improved with the wide availability of IGRT.

  5. Creating Birds of Similar Feathers: Leveraging Similarity to Improve Teacher-Student Relationships and Academic Achievement

    ERIC Educational Resources Information Center

    Gehlbach, Hunter; Brinkworth, Maureen E.; King, Aaron M.; Hsu, Laura M.; McIntyre, Joseph; Rogers, Todd

    2016-01-01

    When people perceive themselves as similar to others, greater liking and closer relationships typically result. In the first randomized field experiment that leverages actual similarities to improve real-world relationships, we examined the affiliations between 315 9th grade students and their 25 teachers. Students in the treatment condition…

  6. Achieving Seventh-Order Amplitude Accuracy in Leapfrog Integrations

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2014-12-01

    The leapfrog time-stepping scheme is commonly used in general circulation models of the atmosphere and ocean. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1, -4, 6, -4, 1) filter instead of a (1, -2, 1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models.

  7. Is accuracy of serum free light chain measurement achievable?

    PubMed

    Jacobs, Joannes F M; Tate, Jillian R; Merlini, Giampaolo

    2016-06-01

    The serum free light chain (FLC) assay has proven to be an important complementary test in the management of patients with monoclonal gammopathies. The serum FLC assay has value for patients with plasma cell disorders in the context of screening and diagnosis, prognostic stratification, and quantitative monitoring. Nonetheless, serum FLC measurements have analytical limitations which give rise to differences in FLC reporting depending on which FLC assay and analytical platform is used. As the FLC measurements are incorporated in the International Myeloma Working Group guidelines for the evaluation and management of plasma cell dyscrasias, this may directly affect clinical decisions. As new certified methods for serum FLC assays emerge, the need to harmonise patient FLC results becomes increasingly important. In this opinion paper we provide an overview of the current lack of accuracy and harmonisation in serum FLC measurements. The clinical consequence of non-harmonized FLC measurements is that an individual patient may or may not meet certain diagnostic, prognostic, or response criteria, depending on which FLC assay and platform is used. We further discuss whether standardisation of serum FLC measurements is feasible and provide an overview of the steps needed to be taken towards harmonisation of FLC measurements. PMID:26641970

  8. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described first, followed by pseudocode.
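The adaptive-restart idea can be sketched compactly. The NumPy code below is a toy reconstruction, not the authors' FORTRAN 90 algorithm: it runs plain GMRES(k) cycles (Arnoldi with modified Gram-Schmidt) and doubles the restart value k whenever a cycle fails to halve the residual; the stagnation threshold and the doubling rule are illustrative stand-ins for the convergence-rate criteria of the paper.

```python
import numpy as np

def gmres_cycle(A, b, x0, k):
    """One GMRES(k) cycle: Arnoldi with modified Gram-Schmidt, then the
    small least-squares problem solved densely for clarity."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0, 0.0
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / beta
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:             # happy breakdown
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    x = x0 + Q[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k0=5, k_max=50, tol=1e-8, max_cycles=200):
    """Restarted GMRES that doubles k whenever a cycle reduces the
    residual by less than a factor of 2 (illustrative adaptation rule)."""
    x = np.zeros_like(b)
    k = k0
    res = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, new_res = gmres_cycle(A, b, x, k)
        if new_res <= tol * np.linalg.norm(b):
            return x, k
        if new_res > 0.5 * res:             # stagnation: enlarge subspace
            k = min(2 * k, k_max)
        res = new_res
    return x, k

# illustrative test problem: a well-conditioned nonsymmetric system
rng = np.random.default_rng(0)
n = 100
A = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x, k_final = adaptive_gmres(A, b)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```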

  9. Gender and Achievement--Understanding Gender Differences and Similarities in Mathematics Assessment.

    ERIC Educational Resources Information Center

    Zhang, Liru; Manon, Jon

    The primary objective of this study was to investigate overall patterns of gender differences and similarities of test performance in mathematics. To achieve that objective, observed test scores on the Delaware standards-based assessment were analyzed to examine: (1) gender differences and similarities across grades 3, 5, 8 and 10 over 2 years;…

  10. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    SciTech Connect

    Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.

    2010-07-15

Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
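Normalized cross correlation (NCC), one of the best-performing metrics in this study, is easy to illustrate. The Python sketch below is a toy stand-in for rigid CT/CBCT alignment, not the study's registration code: it scores candidate integer translations of a "moving" image against a "fixed" image and returns the shift that maximizes NCC.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equal-shape images:
    1.0 for a perfect match up to brightness/contrast changes."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def best_shift(fixed, moving, max_shift=5):
    """Exhaustive search over integer (dy, dx) translations of `moving`
    (circular shifts, for simplicity) that maximize NCC against `fixed`."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(moving, (dy, dx), axis=(0, 1))
            score = ncc(fixed, candidate)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# recover a known displacement of a random test image
rng = np.random.default_rng(1)
fixed = rng.standard_normal((32, 32))
moving = np.roll(fixed, (2, -3), axis=(0, 1))   # displaced copy
recovered = best_shift(fixed, moving)
```

In a real registration the search would be over continuous rigid transforms with interpolation, and gradient-based metrics such as GC would score image gradients rather than raw intensities, but the optimize-a-similarity-metric structure is the same.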

  11. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal-to-noise ratio (SNR), the noise being determined by the LO power in the shot-noise-limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high-SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a

  12. The Influence of Overt Practice, Achievement Level, and Explanatory Style on Calibration Accuracy and Performance

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; O'Shea, Patrick; Allen, Dwight

    2005-01-01

    The authors measured the influence of overt calibration practice, achievement level, and explanatory style on calibration accuracy and exam performance. Students (N = 356) were randomly assigned to either an overt practice or no-practice condition. Students in the overt practice condition made predictions and postdictions about their performance…

  13. Similarity

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1990-01-01

In this 'Project Mathematics!' series, sponsored by the California Institute of Technology (Caltech), the mathematical concept of similarity is presented. The history of similarity and its real-life applications are discussed using actual film footage and computer animation. Terms used and various concepts of size, shape, ratio, area, and volume are demonstrated. The similarity of polygons, solids, congruent triangles, internal ratios, perimeters, and line segments is shown using the previously mentioned concepts.

  14. Similarity of Students' Experiences and Accuracy of Faculty and Staff Perceptions: Issues for Student Retention.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.; And Others

    Research on attrition of university students has recently examined "dropping out" as the culmination of a complex interactive process. In order to examine differences between successful students (persisters) and students who officially withdrew from a major university, and to examine the accuracy of faculty and staff perceptions of students'…

  15. Gender Differences in Academic Achievement: Is Writing an Exception to the Gender Similarities Hypothesis?

    PubMed

    Reynolds, Matthew R; Scheiber, Caroline; Hajovsky, Daniel B; Schwartz, Bryanna; Kaufman, Alan S

    2015-01-01

The gender similarities hypothesis by J. S. Hyde (2005), based on large-scale reviews of studies, concludes that boys and girls are more alike than different on most psychological variables, including academic skills such as reading and math (J. S. Hyde, 2005). Writing is an academic skill that may be an exception. The authors investigated gender differences in academic achievement using a large, nationally stratified sample of children and adolescents ranging from ages 7-19 years (N = 2,027). Achievement data were from the conormed sample for the Kaufman intelligence and achievement tests. Multiple-indicator, multiple-cause, and multigroup mean and covariance structure models were used to test for mean differences. Girls had higher latent reading ability and higher scores on a test of math computation, but the effect sizes were consistent with the gender similarities hypothesis. Conversely, girls scored higher on spelling and written expression, with effect sizes inconsistent with the gender similarities hypothesis. The findings remained the same after controlling for cognitive ability. Girls outperform boys on tasks of writing. PMID:26135387

  16. Similarity from multi-dimensional scaling: solving the accuracy and diversity dilemma in information filtering.

    PubMed

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
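The embedding step at the heart of this approach is compact enough to sketch. Below is classical (Torgerson) MDS in NumPy, one standard MDS formulation and not necessarily the exact variant used by the authors: given a matrix of pairwise distances, it returns low-dimensional coordinates whose Euclidean distances approximate them. In the recommender setting, item-item or user-user distances derived from the bipartite network would be fed in as D.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) multidimensional scaling: given an n x n
    matrix of pairwise distances D, return n points in `dim` dimensions
    whose Euclidean distances best reproduce D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:dim]      # largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[top], 0.0, None))
    return eigvecs[:, top] * scale

def pairwise_distances(X):
    """Euclidean distance matrix of the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# round-trip check: MDS on truly Euclidean 2-D distances recovers them
rng = np.random.default_rng(2)
points = rng.standard_normal((6, 2))
D = pairwise_distances(points)
D_embedded = pairwise_distances(classical_mds(D, dim=2))
```

Similarities for item-based collaborative filtering can then be taken as a decreasing function of the embedded distances, for example 1/(1 + d); the smoothing the paper describes comes from keeping only the leading dimensions.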

  17. Similarity from Multi-Dimensional Scaling: Solving the Accuracy and Diversity Dilemma in Information Filtering

    PubMed Central

    Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2014-01-01

Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243

  18. Accuracy of Self-Reported College GPA: Gender-Moderated Differences by Achievement Level and Academic Self-Efficacy

    ERIC Educational Resources Information Center

    Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.

    2014-01-01

    Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…

  19. Ligand similarity guided receptor selection enhances docking accuracy and recall for non-nucleoside HIV reverse transcriptase inhibitors.

    PubMed

    Stanton, Richard A; Nettles, James H; Schinazi, Raymond F

    2015-11-01

Non-nucleoside reverse transcriptase inhibitors (NNRTI) are allosteric inhibitors of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase (RT), a viral polymerase essential to infection. Despite the availability of >150 NNRTI-bound RT crystal structures, rational design of new NNRTI remains challenging because of the variability of their induced-fit, hydrophobic binding patterns. Docking NNRTI yields inconsistent results that vary markedly depending on the receptor structure used, as only 27% of the >20k cross-docking calculations we performed using known NNRTI were accurate. In order to determine if a hospitable receptor for docking could be selected a priori, we evaluated more than 40 chemical descriptors for their ability to pre-select a best receptor for NNRTI cross-docking. The receptor selection was based on similarity scores between the bound- and target-ligands generated by each descriptor. The top descriptors were able to double the probability of cross-docking accuracy over random receptor selection. Additionally, recall of known NNRTI from a large library of similar decoys was increased using the same approach. The results demonstrate the utility of pre-selecting receptors when docking into difficult targets. Graphical Abstract: Cross-docking accuracy increases when using chemical descriptors to determine the NNRTI with maximum similarity to the new compound and then docking into its respective receptor. PMID:26450349
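The receptor pre-selection step can be illustrated with one of the simplest chemical similarity measures, the Tanimoto coefficient on fingerprint bit sets. The sketch below is hypothetical: the receptor names and fingerprints are invented for illustration, and the study evaluated more than 40 descriptors rather than this single one.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints represented as sets
    of 'on' bit positions: |intersection| / |union|."""
    fp_a, fp_b = set(fp_a), set(fp_b)
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 1.0

def select_receptor(target_fp, bound_ligand_fps):
    """Choose the receptor structure whose co-crystallized ligand is most
    similar to the target ligand (hypothetical data layout: a dict mapping
    receptor id to the fingerprint of its bound ligand)."""
    return max(bound_ligand_fps,
               key=lambda rec: tanimoto(target_fp, bound_ligand_fps[rec]))

# hypothetical example: pick the docking receptor for a new compound
library = {
    "receptor_A": {1, 4, 7, 9},   # fingerprint of the ligand bound in A
    "receptor_B": {1, 4, 20},
    "receptor_C": {5, 6, 8},
}
chosen = select_receptor({1, 4, 7}, library)
```

The new compound would then be docked into the chosen receptor, on the reasoning that a structure induced by a chemically similar ligand is more likely to accommodate it.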

  20. Mice and rats achieve similar levels of performance in an adaptive decision-making task

    PubMed Central

    Jaramillo, Santiago; Zador, Anthony M.

    2014-01-01

    Two opposing constraints exist when choosing a model organism for studying the neural basis of adaptive decision-making: (1) experimental access and (2) behavioral complexity. Available molecular and genetic approaches for studying neural circuits in the mouse fulfill the first requirement. In contrast, it is still under debate if mice can perform cognitive tasks of sufficient complexity. Here we compare learning and performance of mice and rats, the preferred behavioral rodent model, during an acoustic flexible categorization two-alternative choice task. The task required animals to switch between two categorization definitions several times within a behavioral session. We found that both species achieved similarly high performance levels. On average, rats learned the task faster than mice, although some mice were as fast as the average rat. No major differences in subjective categorization boundaries or the speed of adaptation between the two species were found. Our results demonstrate that mice are an appropriate model for the study of the neural mechanisms underlying adaptive decision-making, and suggest they might be suitable for other cognitive tasks as well. PMID:25278849

  1. Mismatched partners that achieve postpairing behavioral similarity improve their reproductive success

    PubMed Central

    Laubu, Chloé; Dechaume-Moncharmont, François-Xavier; Motreuil, Sébastien; Schweitzer, Cécile

    2016-01-01

Behavioral similarity between partners is likely to promote within-pair compatibility and to result in better reproductive success. Therefore, individuals are expected to choose a partner that is alike in behavioral type. However, mate searching is very costly and does not guarantee finding a matching partner. If mismatched individuals pair, they may benefit from increasing their similarity after pairing. We show in a monogamous fish species—the convict cichlid—that the behavioral similarity between mismatched partners can increase after pairing. This increase resulted from asymmetrical adjustment because only the reactive individual became more like its proactive partner, whereas the latter did not change its behavior. The mismatched pairs that increased their similarity not only improved their reproductive success but also raised it to the level of matched pairs. While most studies assume that assortative mating results from mate choice, our study suggests that postpairing adjustment could be an alternative explanation for the high behavioral similarity between partners observed in the field. It also explains why interindividual behavioral differences can be maintained within a given population. PMID:26973869

  2. Male Learners' Vocabulary Achievement through Concept Mapping and Mind Mapping: Differences and Similarities

    ERIC Educational Resources Information Center

    Tarkashvand, Zahra

    2015-01-01

While learning English plays an essential role in today's life, vocabulary achievement helps learners overcome the difficulties of commanding the language. Drawing on data from three months of experimental work, this article explores how two mapping strategies affect vocabulary learning in male EFL learners. While females were studied before,…

  3. Finland and Singapore in PISA 2009: Similarities and Differences in Achievements and School Management

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2014-01-01

In PISA 2009, Finland and Singapore were both ranked high among the participating nations and have caught much attention internationally. However, a secondary analysis of the means for Reading achievement shows that the differences are rather small and are attributable to spurious precision. Hence, the two nations should be considered as being on…

  4. Socially Oriented Motivational Goals and Academic Achievement: Similarities between Native and Anglo Americans

    ERIC Educational Resources Information Center

    Ali, Jinnat; McInerney, Dennis M.; Craven, Rhonda G.; Yeung, Alexander Seeshing; King, Ronnel B.

    2014-01-01

    The authors examined the relations between two socially oriented dimensions of student motivation and academic achievement of Native (Navajo) American and Anglo American students. Using confirmatory factor analysis, a multidimensional and hierarchical model was found to explain the relations between performance and social goals. Four first-order…

  5. Physical Attractiveness, Perceived Attitude Similarity, and Academic Achievement as Contributors to Interpersonal Attraction among Adolescents

    ERIC Educational Resources Information Center

    Cavior, Norman N.; Dokecki, Paul R.

    1973-01-01

Fifth- and eleventh-grade males and females who knew each other ("knowers") judged classmates' photographs on physical attractiveness, perceived attitude similarity, and interpersonal attraction. "Nonknowers" (male and female classmates in different schools in the same grades) judged the same photographs on physical attractiveness. (Editor)

  6. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  7. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    PubMed

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H

    2016-01-01

Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
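The winning pipeline, PCA for dimensionality reduction and regularization followed by LDA, is straightforward to reproduce in outline. The scikit-learn sketch below substitutes well-separated synthetic data for fMRI volumes; the sample counts, feature dimension, and component number are illustrative choices, not the study's.

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# synthetic stand-in for fMRI volumes: 100-dimensional samples, 4 classes
X, y = make_blobs(n_samples=200, centers=4, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# PCA regularizes LDA by discarding low-variance directions before the
# otherwise ill-conditioned within-class covariance is inverted
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The same structure (fit on training volumes, score held-out volumes) underlies the per-volume classification accuracies reported in the abstract.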

  8. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
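
    The PCA-regularized LDA pipeline described above can be sketched in a few lines of NumPy. This is a toy reconstruction on synthetic data, not the authors' code: the data dimensions, class structure, and the small ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI volumes: 150 samples (volumes) x 500 features
# (voxels), 50 classes (stimulus time points), 3 samples per class.
n_classes, per_class, n_vox = 50, 3, 500
means = rng.normal(0.0, 1.0, (n_classes, n_vox))
X = np.repeat(means, per_class, axis=0) + rng.normal(0.0, 1.0, (n_classes * per_class, n_vox))
y = np.repeat(np.arange(n_classes), per_class)

def pca_lda_fit(X, y, n_comp):
    """Project to n_comp principal components, then fit LDA with a pooled covariance."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)  # PCA via SVD
    W = Vt[:n_comp].T                      # projection to principal components
    Z = (X - mu) @ W
    classes = np.unique(y)
    cmeans = np.array([Z[y == c].mean(axis=0) for c in classes])
    # Pooled within-class covariance; the PCA truncation is the regularizer
    resid = Z - cmeans[y]
    cov = resid.T @ resid / (len(y) - len(classes))
    cov += 1e-6 * np.eye(n_comp)           # tiny ridge for numerical stability
    return mu, W, cmeans, np.linalg.inv(cov), classes

def pca_lda_predict(model, X):
    mu, W, cmeans, icov, classes = model
    Z = (X - mu) @ W
    # Mahalanobis distance to each class mean (shared covariance -> linear rule)
    d = np.array([np.einsum('ij,jk,ik->i', Z - m, icov, Z - m) for m in cmeans])
    return classes[d.argmin(axis=0)]

model = pca_lda_fit(X, y, n_comp=40)
acc = (pca_lda_predict(model, X) == y).mean()
```

    Truncating to the leading components is what makes the pooled covariance estimable when voxels far outnumber training volumes, which is the regularization role PCA plays in the study.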

  9. An investigation of urban boundary layer towards achieving similarity criteria in a short wind tunnel

    NASA Astrophysics Data System (ADS)

    Yazid, A. W. M.; Che Sidik, N. A.; Mansor, S.; Rahman, A. B. Abdul; Ishak, I. S.; Dahalan, N.

    2015-12-01

    An experimental investigation was conducted to artificially thicken the boundary layer in a short wind tunnel so that it exhibits the characteristics of an urban boundary layer. A turbulence grid was developed and then systematically tested to quantitatively investigate the formation of a thick, uniform boundary layer over the location of interest. The parameters investigated were the mean velocity and turbulence intensity, used to assess the similarity of the simulated boundary layer. The measurement analysis shows that the current design provides the desired boundary layer profile under fully rough conditions at just 1.5 m downstream of the test section inlet, which allows the building models to be placed over the turntable. However, the roughness length value is high, which has implications for the choice of model scale, while the turbulence intensity levels are low, which is not a desirable characteristic of an urban boundary layer. The results presented here may facilitate future improvements in the design of the turbulence grid and measurement techniques.

  10. Accuracy of Teachers' Judgments of Students' Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Südkamp, Anna; Kaiser, Johanna; Möller, Jens

    2012-01-01

    This meta-analysis summarizes empirical results on the correspondence between teachers' judgments of students' academic achievement and students' actual academic achievement. The article further investigates theoretically and methodologically relevant moderators of the correlation between the two measures. Overall, 75 studies reporting…

  11. Design considerations for achieving high accuracy with the SHOALS bathymetric lidar system

    NASA Astrophysics Data System (ADS)

    Guenther, Gary C.; Thomas, Robert W. L.; LaRocque, Paul E.

    1996-11-01

    The ultimate accuracy of depths from an airborne laser hydrography system depends both on careful hardware design aimed at producing the best possible accuracy and precision of recorded data, along with insensitivity to environmental effects, and on post-flight data processing software which corrects for a number of unavoidable biases and provides for flexible operator interaction to handle special cases. The generic procedure for obtaining a depth from an airborne lidar pulse involves measurement of the time between the surface return and the bottom return. In practice, because both of these return times are biased due to a number of environmental and hardware effects, it is necessary to apply various correctors in order to obtain depth estimates which are sufficiently accurate to meet International Hydrographic Office standards. Potential false targets, also of both environmental and hardware origin, must be discriminated, and wave heights must be removed. It is important to have a depth confidence value matched to accuracy and to have warnings about or automatic deletion of pulses with questionable characteristics. Techniques, procedures, and algorithms developed for the SHOALS systems are detailed here.
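
    The generic depth computation described above (the time between the surface and bottom returns, plus post-flight bias correctors) reduces to a short formula. The constants and example timing below are illustrative assumptions, not SHOALS parameters, and a nadir-pointing beam is assumed.

```python
# Depth from the surface-to-bottom return time difference of a bathymetric
# lidar pulse. The light makes a two-way trip through water, where it travels
# at c/n (n ~ 1.33 for seawater). Numbers are illustrative, not SHOALS values.
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # nominal refractive index of seawater

def depth_from_dt(dt_seconds, bias_correction_m=0.0):
    """Raw depth estimate plus an additive corrector (e.g. for wave height
    or surface-detection bias, applied in post-processing)."""
    one_way = 0.5 * dt_seconds * C / N_WATER
    return one_way + bias_correction_m

# A 100 ns return separation corresponds to roughly 11.3 m of water depth.
d = depth_from_dt(100e-9)
```

    In practice the bias corrector bundles several environmental and hardware effects, which is why the abstract stresses post-flight processing rather than the raw timing alone.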

  12. Novel FBG interrogation technique for achieving < 100 nɛ accuracies at remote distances > 70 km

    NASA Astrophysics Data System (ADS)

    Farrell, Tom; O'Connor, Peter; Levins, John; McDonald, David

    2005-06-01

    Due to the development of Fibre Bragg Grating sensors for the measurement of temperature, strain and pressure many markets can benefit from optical technology. These markets are the oil and gas industry, structural and civil engineering, rail and aerospace to name a few. The advantages of using optical sensing technology are that high accuracy measurements can be performed with a passive optical system. By running one fibre along the structure or down the well, multiple points along the fibre can be tested to measure strain, temperature and pressure. Of importance with these systems is the reach that can be obtained while maintaining accuracy. A major problem with long reach system is the back reflection due to SBS and Rayleigh scattering processes which reflect part of the laser light back into the receiver which affect the sensitivity of system. This paper shows a technique to enable a reach of >70km by using a tunable laser and receiver. Techniques for the suppression of receiver noise from SBS and Raleigh scattering are implemented. In addition polarisation dependence of the FBG is considered and results of techniques to limit the effect of polarisation at long and short reaches are shown.

  13. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    PubMed

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically). PMID:25133729

  14. On achieving sufficient dual station range accuracy for deep space navigation at zero declination

    NASA Technical Reports Server (NTRS)

    Siegel, H. L.; Christensen, C. S.; Green, D. W.; Winn, F. B.

    1977-01-01

    Since the Voyager Mission will encounter Saturn at a time when the planet will be nearly in the earth's equatorial plane, earth-based orbit determination will be more difficult than usual because of the so-called zero-declination singularity associated with conventional radiometric observations. Simulation studies show that in order to meet the required delivery accuracy at Saturn, a relative range measurement between the Goldstone and Canberra Deep Space Stations must be accurate to 4.5 times the square root of two meters. Topics discussed include the nature of error sources, the methodology and technology required for calibration, the verification process concerning the nearly simultaneous range capability, a description of the ranging system, and tracking strategy.

  15. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, D.P.; Storey, J.C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands: 29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1σ) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1σ) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications. © 2002 Elsevier Science Inc. All rights reserved.
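
    The bias-removal step described above amounts to estimating a systematic offset from control-point residuals and subtracting it. A minimal sketch with simulated residuals (the offset, noise level, and point count are made up, not MODIS values):

```python
import numpy as np

# Mean offset between sensor-derived and surveyed control-point positions,
# used as the bias estimate (a simplification of the model-parameter updates
# described above; illustrative data only).
rng = np.random.default_rng(1)
true_bias = np.array([120.0, -60.0])                 # metres (along/cross track)
noise = rng.normal(0.0, 50.0, size=(200, 2))         # per-point matching error
residuals = true_bias + noise                        # observed minus surveyed

bias_est = residuals.mean(axis=0)
corrected = residuals - bias_est
rmse_before = np.sqrt((residuals**2).sum(axis=1).mean())
rmse_after = np.sqrt((corrected**2).sum(axis=1).mean())
```

    Averaging over many control points shrinks the bias estimate's own error, which is why a global network of points, rather than a handful, is needed to push geolocation from 150 m toward 50 m.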

  16. A Method to Achieve Large Volume, High Accuracy Photogrammetric Measurements Through the Use of an Actively Deformable Sensor Mounting Platform

    NASA Astrophysics Data System (ADS)

    Sargeant, B.; Robson, S.; Szigeti, E.; Richardson, P.; El-Nounu, A.; Rafla, M.

    2016-06-01

    When using any optical measurement system, one important factor to consider is the placement of the sensors in relation to the workpiece being measured. Decisions on sensor placement involve compromises between the shape and size of the object of interest and the desired resolution and accuracy. One such compromise is the distance at which the sensors are placed from the measurement surface: a smaller distance gives higher spatial resolution and local accuracy, whereas a greater distance reduces the number of measurements needed to cover a large area, limiting the build-up of errors between measurements and increasing global accuracy. This paper proposes a photogrammetric approach whereby a number of sensors on a continuously flexible mobile platform obtain local measurements while the position of the sensors is determined by a 6DoF tracking solution, and the results are combined into a single set of measurement data within a continuous global coordinate system. The ability of this approach to achieve both high-accuracy measurement and coverage of a large volume is then tested, and areas of weakness to be improved upon are identified.
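
    Combining local measurements into a global frame, as described above, means mapping each local 3-D point through the tracked 6DoF pose (rotation plus translation) of its sensor. A sketch below; the Euler-angle convention (R = Rz·Ry·Rx) and the example pose are assumptions for illustration.

```python
import numpy as np

def pose_matrix(rx, ry, rz, t):
    """Homogeneous transform from Euler angles (radians, applied as Rz @ Ry @ Rx)
    and a translation vector."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    R = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
         np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
         np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_global(T, local_pts):
    """Map an (n, 3) array of sensor-frame points into the global frame."""
    pts = np.c_[local_pts, np.ones(len(local_pts))]
    return (T @ pts.T).T[:, :3]

# Example: sensor rotated 90 degrees about z and offset by (1, 2, 0.5) m.
T = pose_matrix(0.0, 0.0, np.pi / 2, t=[1.0, 2.0, 0.5])
g = to_global(T, np.array([[1.0, 0.0, 0.0]]))
```

    The accuracy of the merged data set is then bounded by the 6DoF tracker: any pose error maps directly onto every point measured from that pose, which is the global-versus-local trade-off the abstract describes.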

  17. The Confidence-Accuracy Relationship in Eyewitness Identification: Effects of Lineup Instructions, Foil Similarity, and Target-Absent Base Rates

    ERIC Educational Resources Information Center

    Brewer, Neil; Wells, Gary L.

    2006-01-01

    Discriminating accurate from mistaken eyewitness identifications is a major issue facing criminal justice systems. This study examined whether eyewitness confidence assists such decisions under a variety of conditions using a confidence-accuracy (CA) calibration approach. Participants (N = 1,200) viewed a simulated crime and attempted 2 separate…

  18. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  19. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
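
    The minimal-residual side of such a pair is easy to reproduce: a bare-bones GMRES that records the residual norm at every Arnoldi step. The test matrix below (identity plus a random strictly upper-triangular part) is an arbitrary nonnormal example, not one from the paper. Plateaus show up as long stretches where the norms barely decrease, while the minimum-residual property guarantees the norms never increase.

```python
import numpy as np

def gmres_resnorms(A, b, m):
    """Plain full-memory GMRES (Arnoldi + small least-squares problem);
    returns residual norms ||b - A x_k|| for k = 0..m. Illustrative only."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    Q[:, 0] = b / beta
    H = np.zeros((m + 1, m))
    norms = [beta]
    for k in range(m):
        v = A @ Q[:, k]
        for j in range(k + 1):                 # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        norms.append(np.linalg.norm(H[:k + 2, :k + 1] @ y - e1))
        if H[k + 1, k] <= 1e-14:               # happy breakdown: exact solve
            break
    return np.array(norms)

rng = np.random.default_rng(2)
n = 60
A = np.eye(n) + 0.5 * np.triu(rng.normal(size=(n, n)), 1)  # nonnormal example
b = rng.normal(size=n)
norms = gmres_resnorms(A, b, 40)
```

    The corresponding Galerkin (FOM) iterate exists only when the residual drops strictly, which is exactly where the paper's peak/plateau correspondence comes from.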

  20. From Genus to Phylum: Large-Subunit and Internal Transcribed Spacer rRNA Operon Regions Show Similar Classification Accuracies Influenced by Database Composition

    PubMed Central

    Liu, Kuan-Liang; Kuske, Cheryl R.

    2014-01-01

    We compared the classification accuracy of two sections of the fungal internal transcribed spacer (ITS) region, individually and combined, and the 5′ section (about 600 bp) of the large-subunit rRNA (LSU), using a naive Bayesian classifier and BLASTN. A hand-curated ITS-LSU training set of 1,091 sequences and a larger training set of 8,967 ITS region sequences were used. Of the factors evaluated, database composition and quality had the largest effect on classification accuracy, followed by fragment size and use of a bootstrap cutoff to improve classification confidence. The naive Bayesian classifier and BLASTN gave similar results at higher taxonomic levels, but the classifier was faster and more accurate at the genus level when a bootstrap cutoff was used. All of the ITS and LSU sections performed well (>97.7% accuracy) at higher taxonomic ranks from kingdom to family, and differences between them were small at the genus level (within 0.66 to 1.23%). When full-length sequence sections were used, the LSU outperformed the ITS1 and ITS2 fragments at the genus level, but the ITS1 and ITS2 showed higher accuracy when smaller fragment sizes of the same length and a 50% bootstrap cutoff were used. In a comparison using the larger ITS training set, ITS1 and ITS2 had very similar classification accuracy for fragments between 100 and 200 bp. Collectively, the results show that any of the ITS or LSU sections we tested provided comparable classification accuracy to the genus level and underscore the need for larger and more diverse classification training sets. PMID:24242255
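
    A naive Bayesian sequence classifier of the kind discussed above works on k-mer counts with per-taxon log-probabilities. The sketch below is a toy version: the two "reference sequences", the labels, and k=2 are made-up stand-ins (real training sets use thousands of sequences and longer k-mers), and the bootstrap-confidence step is omitted.

```python
import numpy as np
from collections import Counter
from itertools import product

K = 2  # toy k-mer length; real classifiers use longer words
KMERS = {''.join(p): i for i, p in enumerate(product('ACGT', repeat=K))}

def kmer_counts(seq):
    """Count vector of overlapping k-mers in a sequence."""
    c = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    v = np.zeros(len(KMERS))
    for kmer, n in c.items():
        if kmer in KMERS:
            v[KMERS[kmer]] = n
    return v

def fit(train):
    """train: list of (taxon_label, sequence). Returns smoothed log-probs."""
    labels = sorted({lab for lab, _ in train})
    mat = np.ones((len(labels), len(KMERS)))          # add-one smoothing
    for lab, seq in train:
        mat[labels.index(lab)] += kmer_counts(seq)
    logp = np.log(mat / mat.sum(axis=1, keepdims=True))
    return labels, logp

def classify(model, seq):
    labels, logp = model
    return labels[int(np.argmax(logp @ kmer_counts(seq)))]

# Hypothetical two-genus training set, for illustration only
train = [("GenusA", "ACGTACGTACGTAAAA"), ("GenusB", "GGCCGGCCGGCCTTTT")]
model = fit(train)
pred = classify(model, "ACGTACGTAA")
```

    The bootstrap cutoff the paper evaluates would repeat this classification on resampled k-mer subsets and report a genus call only when most resamples agree.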

  1. Cognitive Processing Profiles of School-Age Children Who Meet Low-Achievement, IQ-Discrepancy, or Dual Criteria for Underachievement in Oral Reading Accuracy

    ERIC Educational Resources Information Center

    Van Santen, Frank W.

    2012-01-01

    The purpose of this study was to compare the cognitive processing profiles of school-age children (ages 7 to 17) who met criteria for underachievement in oral reading accuracy based on three different methods: 1) use of a regression-based IQ-achievement discrepancy only (REGonly), 2) use of a low-achievement cutoff only (LAonly), and 3) use of a…

  2. Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

    ERIC Educational Resources Information Center

    Nikelshpur, Dmitry O.

    2014-01-01

    Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…

  3. Strategies for Achieving High Sequencing Accuracy for Low Diversity Samples and Avoiding Sample Bleeding Using Illumina Platform

    PubMed Central

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802

  4. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  5. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    The measurement of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) is being adopted increasingly in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to the extensive offline data manipulation required for outlier detection, derivation and application of algorithms to correct for between-sample memory, correction of linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and maintenance of long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that together enable new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223
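
    The VSMOW-SLAP scale normalization mentioned above is a two-point linear mapping: measured delta values of two reference waters of known isotopic composition define the slope and offset applied to all samples. The SLAP2 δ(18)O value of -55.5 ‰ (vs. VSMOW2 at 0.0 ‰) is the defined scale value; the raw instrument readings below are made up for illustration.

```python
import numpy as np

def normalize(measured, ref1_meas, ref1_true, ref2_meas, ref2_true):
    """Two-point scale normalization: map raw delta values onto the
    reference scale defined by two standards."""
    slope = (ref2_true - ref1_true) / (ref2_meas - ref1_meas)
    return ref1_true + slope * (np.asarray(measured) - ref1_meas)

# Assumed raw readings for VSMOW2 (defined 0.0 permil) and SLAP2
# (defined -55.5 permil for delta-18O); sample readings are hypothetical.
d18o = normalize([-10.2, -20.4],
                 ref1_meas=0.6, ref1_true=0.0,
                 ref2_meas=-54.1, ref2_true=-55.5)
```

    Memory and drift corrections are applied to the raw readings before this step; normalization then removes any remaining scale compression or offset common to the whole run.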

  6. A Meta-Analytic Review of Achievement Goal Measures: Different Labels for the Same Constructs or Different Constructs with Similar Labels?

    ERIC Educational Resources Information Center

    Hulleman, Chris S.; Schrager, Sheree M.; Bodmann, Shawn M.; Harackiewicz, Judith M.

    2010-01-01

    This meta-analysis addresses whether achievement goal researchers are using different labels for the same constructs or putting the same labels on different constructs. We systematically examined whether conceptual and methodological differences in the measurement of achievement goals moderated achievement goal intercorrelations and relationships…

  7. Identifying learning disabilities through a cognitive deficit framework: can verbal memory deficits explain similarities between learning disabled and low achieving students?

    PubMed

    Callinan, Sarah; Theiler, Stephen; Cunningham, Everarda

    2015-01-01

    Traditionally, students with learning disabilities (LD) have been identified using an aptitude-achievement discrepancy or response to intervention approach. As profiles of the cognitive deficits of discrepancy-defined students with LD have already been developed using these approaches, these deficits can in turn be used to identify LD using the discrepancy approach as a benchmark for convergent validity. Australian Grade 3 (N = 172) students were administered cognitive processing tests to ascertain whether scores in these tests could accurately allocate students into discrepancy-defined groups using discriminant function analysis. Results showed that 77% to 82% of students could be correctly allocated into LD, low achievement, and regular achievement groups using only measures of phonological processing, rapid naming, and verbal memory. Furthermore, verbal memory deficits were found, along with phonological processing and rapid naming deficits, in students that would be designated as low achieving by the discrepancy method. Because a significant discrepancy or lack of response to intervention is a result of cognitive deficits rather than the other way around, it is argued that LD should be identified via cognitive deficits. PMID:23886581

  8. The Effects of Contingent Praise Upon the Achievement of a Deficit Junior High School Student in Oral Reading Accuracy in Probes Above Her Functional Grade Level.

    ERIC Educational Resources Information Center

    Proe, Susan; Wade, David

    Evaluated was the effectiveness of three training procedures (imitation training, imitation training with praise, and imitation training with points for an art supply contingency) in improving the oral reading accuracy and reading comprehension of a 13-year-old girl whose functional reading was at the second grade level. The procedures were…

  9. Accuracy and consistency of modern elastomeric pumps.

    PubMed

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al, we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  10. The coarse pointing assembly for SILEX program or how to achieve outstanding pointing accuracy with simple hardware associated with consistent control laws

    NASA Astrophysics Data System (ADS)

    Buvat, Daniel; Muller, Gerard; Peyrot, Patrick

    1991-06-01

    Attention is given to the coarse pointing assembly (CPA) for the SILEX program, designed on the basis of 10 years of MATRA experience with very accurate drive mechanisms, demonstrated by the SPOT 1 and 2 flights as well as EURECA IOC. The key design feature of the mechanism is a 1200-step stepper motor driven in microstepping with harmonic-defect compensation. This allows very low torque noise together with high accuracy (0.01 deg). The direct-drive principle avoids backlash and permits linear control of the output shaft of each drive. The only parts susceptible to wear are the ball bearings, which have a design margin of greater than 1000 for 10 yr of service life. In order to meet the dynamic performance required by SILEX, a closed-loop active damping system is added to each drive unit. Two accelerometers used in a differential way sense the hinge microvibrations, and an active damping loop reduces their Q factor down to a few dB. All CPA electrical parts (including motor, optical encoder, and accelerometer) are redundant to avoid a single point of failure.

  11. Similar names for similar biologics.

    PubMed

    Casadevall, Nicole; Felix, Thomas; Strober, Bruce E; Warnock, David G

    2014-10-01

    Approval of the first biosimilar in the USA may occur by the end of 2014, yet a naming approach for biosimilars has not been determined. Biosimilars are highly similar to their biologic reference product but are not identical to it, because of their structural complexity and variations in manufacturing processes among companies. There is a need for a naming approach that can distinguish a biosimilar from its reference product and other biosimilars and ensure accurate tracing of adverse events (AEs) to the administered product. In contrast, generic small-molecule drugs are identical to their reference product and, therefore, share the same nonproprietary name. Clinical trials required to demonstrate biosimilarity for approval may not detect rare AEs or those occurring after prolonged use, and the incidence of such events may differ between a biosimilar and its reference product. The need for precise biologic identification is further underscored by the possibility of biosimilar interchangeability, a US designation that will allow substitution without prescriber intervention. For several biologics, the US Food and Drug Administration (FDA) has used a naming approach that adds a prefix to a common root nonproprietary name, enabling healthcare providers to distinguish between products, avoid medication errors, and facilitate pharmacovigilance. We recommend that the FDA implement a biosimilars naming policy that likewise would add a distinguishable prefix or suffix to the root nonproprietary name of the reference product. This approach would ensure that a biosimilar could be distinguished from its reference product and other biosimilars in patient records and pharmacovigilance databases/reports, facilitating accurate attribution of AEs. PMID:25001080

  12. Nonrigid image registration using an entropic similarity.

    PubMed

    Khader, Mohammed; Ben Hamza, A

    2011-09-01

    In this paper, we propose a nonrigid image registration technique by optimizing a generalized information-theoretic similarity measure using the quasi-Newton method as an optimization scheme and cubic B-splines for modeling the nonrigid deformation field between the fixed and moving 3-D image pairs. To achieve a compromise between the nonrigid registration accuracy and the associated computational cost, we implement a three-level hierarchical multiresolution approach such that the image resolution is increased in a coarse to fine fashion. Experimental results are provided to demonstrate the registration accuracy of our approach. The feasibility of the proposed method is demonstrated on a 3-D magnetic resonance data volume and also on clinically acquired 4-D CT image datasets. PMID:21690017
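
    The information-theoretic similarity at the heart of such registration methods can be illustrated with the standard Shannon mutual information between intensity histograms (the paper uses a generalized entropic variant). The toy 1-D "images" below are synthetic stand-ins; real use pairs 3-D volumes inside the optimization loop.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based Shannon mutual information (in nats) between two
    intensity arrays; higher values indicate better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(3)
fixed = rng.normal(size=10_000)
aligned = fixed + 0.1 * rng.normal(size=10_000)     # well-registered pair
misaligned = rng.permutation(fixed)                 # unrelated intensities

mi_aligned = mutual_information(fixed, aligned)
mi_misaligned = mutual_information(fixed, misaligned)
```

    During registration the optimizer adjusts the B-spline deformation parameters to maximize this similarity, with the multiresolution hierarchy keeping the early, coarse evaluations cheap.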

  13. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438
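
    The rates reported above can be turned into a signal-detection index directly. Note that this pooled d' need not equal the paper's mean accuracy d of roughly .40, which is a study-level average rather than a statistic of the pooled percentages; the calculation below only illustrates the mechanics.

```python
from statistics import NormalDist

# Signal-detection arithmetic on the aggregate rates above: 47% of lies
# judged deceptive (hit rate) and 61% of truths judged nondeceptive,
# i.e. a 39% false-alarm rate on truths.
z = NormalDist().inv_cdf
hit_rate = 0.47          # lies correctly classified as deceptive
false_alarm = 1 - 0.61   # truths incorrectly classified as deceptive

d_prime = z(hit_rate) - z(false_alarm)   # small but positive discrimination
pct_correct = (0.47 + 0.61) / 2          # the 54% overall accuracy
```

    The asymmetry between the two rates (truths classified correctly more often than lies) is the truth bias the meta-analysis documents: judges tend to regard their partners as honest.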

  14. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and input these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha.
This finding has shown
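
    A "95% reliably within X" figure like the one above can be derived from check-point residuals as sketched below. The residual values are simulated, not the survey's data.

```python
import math
import random

def error_at_confidence(errors, confidence=0.95):
    """Smallest bound that contains `confidence` of the absolute errors."""
    ranked = sorted(abs(e) for e in errors)
    k = math.ceil(confidence * len(ranked)) - 1
    return ranked[k]

# Hypothetical horizontal residuals (metres) at 45 check points:
random.seed(1)
residuals = [random.gauss(0.0, 0.02) for _ in range(45)]
bound = error_at_confidence(residuals, 0.95)
```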

  15. Privacy-preserving matching of similar patients.

    PubMed

    Vatsalan, Dinusha; Christen, Peter

    2016-02-01

    The identification of similar entities represented by records in different databases has drawn considerable attention in many application areas, including in the health domain. One important type of entity matching application that is vital for quality healthcare analytics is the identification of similar patients, known as similar patient matching. A key component of identifying similar records is the calculation of similarity of the values in attributes (fields) between these records. Due to increasing privacy and confidentiality concerns, using the actual attribute values of patient records to identify similar records across different organizations is becoming non-trivial because the attributes in such records often contain highly sensitive information such as personal and medical details of patients. Therefore, the matching needs to be based on masked (encoded) values while being effective and efficient to allow matching of large databases. Bloom filter encoding has widely been used as an efficient masking technique for privacy-preserving matching of string and categorical values. However, no work on Bloom filter-based masking of numerical data, such as integer (e.g. age), floating point (e.g. body mass index), and modulus (numbers wrap around upon reaching a certain value, e.g. date and time), which are commonly required in the health domain, has been presented in the literature. We propose a framework with novel methods for masking numerical data using Bloom filters, thereby facilitating the calculation of similarities between records. We conduct an empirical study on publicly available real-world datasets which shows that our framework provides efficient masking and achieves similar matching accuracy compared to the matching of actual unencoded patient records. PMID:26707453
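
    One way to realize Bloom-filter masking of a numeric value, sketched under illustrative assumptions (the interval radius, filter size, and hash count below are not the paper's parameters): hash a neighbourhood of each value into the filter, so that the Dice similarity of two filters grows with numeric closeness.

```python
import hashlib

def bloom_encode(value, step=1, radius=5, m=256, k=3):
    """Encode a numeric value by hashing every integer within `radius`
    of it into an m-bit Bloom filter using k hash functions."""
    bits = set()
    v = int(round(value))
    for neighbour in range(v - radius, v + radius + 1, step):
        for seed in range(k):
            h = hashlib.sha1(f"{seed}:{neighbour}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(a, b):
    """Dice coefficient between two bit sets."""
    return 2 * len(a & b) / (len(a) + len(b))

# Closer ages share more hashed neighbours, hence more filter bits:
s_near = dice(bloom_encode(42), bloom_encode(44))
s_far = dice(bloom_encode(42), bloom_encode(70))
```

    Matching then compares only the filters, never the underlying values.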

  16. Reconstructing propagation networks with temporal similarity

    PubMed Central

    Liao, Hao; Zeng, An

    2015-01-01

    Node similarity significantly contributes to the growth of real networks. In this paper, based on the observed epidemic spreading results we apply the node similarity metrics to reconstruct the underlying networks hosting the propagation. We find that the reconstruction accuracy of the similarity metrics is strongly influenced by the infection rate of the spreading process. Moreover, there is a range of infection rate in which the reconstruction accuracy of some similarity metrics drops nearly to zero. To improve the similarity-based reconstruction method, we propose a temporal similarity metric which takes into account the time information of the spreading. The reconstruction results are remarkably improved with the new method. PMID:26086198

  18. The accuracy of automatic tracking

    NASA Technical Reports Server (NTRS)

    Kastrov, V. V.

    1974-01-01

    It has been generally assumed that tracking accuracy changes similarly to the rate of change of the curve of the measurement conversion. The problem that internal noise increases along with the signals processed by the tracking device, causing tracking accuracy to drop, was considered. The main prerequisite for a solution is consideration of the dependence of the output signal of the tracking device sensor not only on the measured parameter but also on the signal itself.

  19. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  20. Coupled attribute similarity learning on categorical data.

    PubMed

    Wang, Can; Dong, Xiangjun; Zhou, Fei; Cao, Longbing; Chi, Chi-Hung

    2015-04-01

    Attribute independence has been taken as a major assumption in the limited research that has been conducted on similarity analysis for categorical data, especially unsupervised learning. However, in real-world data sources, attributes are more or less associated with each other in terms of certain coupling relationships. Accordingly, recent works on attribute dependency aggregation have introduced the co-occurrence of attribute values to explore attribute coupling, but they only present a local picture in analyzing categorical data similarity. This is inadequate for deep analysis, and the computational complexity grows exponentially when the data scale increases. This paper proposes an efficient data-driven similarity learning approach that generates a coupled attribute similarity measure for nominal objects with attribute couplings to capture a global picture of attribute similarity. It involves the frequency-based intra-coupled similarity within an attribute and the inter-coupled similarity upon value co-occurrences between attributes, as well as their integration on the object level. In particular, four measures are designed for the inter-coupled similarity to calculate the similarity between two categorical values by considering their relationships with other attributes in terms of power set, universal set, joint set, and intersection set. The theoretical analysis reveals the equivalent accuracy and superior efficiency of the measure based on the intersection set, particularly for large-scale data sets. Intensive experiments of data structure and clustering algorithms incorporating the coupled dissimilarity metric achieve a significant performance improvement on state-of-the-art measures and algorithms on 13 UCI data sets, which is confirmed by the statistical analysis. The experiment results show that the proposed coupled attribute similarity is generic, and can effectively and efficiently capture the intrinsic and global interactions within and between
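
    The frequency-based intra-coupled similarity mentioned above can be sketched as follows. The formula is a commonly cited formulation of this measure and the example data are invented; treat both as illustrative assumptions.

```python
from collections import Counter

def intra_coupled_similarity(column, x, y):
    """Frequency-based intra-coupled similarity between two values of one
    categorical attribute: pairs of values that both occur often are
    deemed more similar (one common formulation, shown for illustration)."""
    freq = Counter(column)
    fx, fy = freq[x], freq[y]
    return (fx * fy) / (fx + fy + fx * fy)

colour = ["red", "red", "red", "blue", "blue", "green"]
s_common = intra_coupled_similarity(colour, "red", "blue")   # frequent pair
s_rare = intra_coupled_similarity(colour, "blue", "green")   # rarer pair
```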

  1. Multivariate Time Series Similarity Searching

    PubMed Central

    Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng

    2014-01-01

    Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search for similar sequences in MTS data. First, the similarity of each single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing the single-dimension similarities with a weighted BORDA voting method. The dimension-combination method can use existing similarity searching methods. Several experiments, which used classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some respects. Our experiments also demonstrate that no single measure fits all datasets, and that the proposed measure is a viable choice for similarity searches. PMID:24895665
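
    The weighted BORDA combination step can be sketched as below; the dimension names, candidate ids, and weights are invented for illustration.

```python
def weighted_borda(rankings, weights):
    """Combine per-dimension rankings of candidate sequences into one
    overall ranking. `rankings` maps dimension -> candidate ids, best
    first; `weights` maps dimension -> importance weight."""
    scores = {}
    for dim, ranked in rankings.items():
        n = len(ranked)
        for pos, cand in enumerate(ranked):
            # Borda points: n-1 for first place down to 0 for last,
            # scaled by the dimension's weight.
            scores[cand] = scores.get(cand, 0.0) + weights[dim] * (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Two dimensions of a hydrological MTS, flow weighted more heavily:
rankings = {"flow": ["s2", "s1", "s3"], "rainfall": ["s1", "s2", "s3"]}
overall = weighted_borda(rankings, {"flow": 2.0, "rainfall": 1.0})
```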

  3. Walking on a user similarity network towards personalized recommendations.

    PubMed

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, leading to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network derived from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance. PMID:25489942
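
    A minimal random-walk-with-restart iteration on a toy user similarity network; the restart probability and matrix values are illustrative, not the paper's settings.

```python
def random_walk_with_restart(adj, seed, restart=0.15, iters=100):
    """Random walk with restart on a similarity network given as a
    row-indexed matrix of lists; returns the stationary visit scores."""
    n = len(adj)
    # column-normalize so each node distributes its score among neighbours
    col_sums = [sum(adj[i][j] for i in range(n)) or 1.0 for j in range(n)]
    p = [1.0 if i == seed else 0.0 for i in range(n)]
    restart_vec = p[:]
    for _ in range(iters):
        p = [
            (1 - restart) * sum(adj[i][j] * p[j] / col_sums[j] for j in range(n))
            + restart * restart_vec[i]
            for i in range(n)
        ]
    return p

# Tiny network: users 0 and 1 are strongly similar, user 2 weakly so.
adj = [[0, 0.9, 0.1], [0.9, 0, 0.1], [0.1, 0.1, 0]]
scores = random_walk_with_restart(adj, seed=0)
```

    Candidate objects would then be scored by the visit probabilities of the users who rated them.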

  5. Accurate Prediction of Docked Protein Structure Similarity.

    PubMed

    Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit

    2015-09-01

    One of the major challenges for protein-protein docking methods is to accurately discriminate native-like structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g. Van der Waals, electrostatic, desolvation forces, etc.) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes with a 0.4 Å error margin. PMID:26335807

  6. Improving Volunteered Geographic Data Quality Using Semantic Similarity Measurements

    NASA Astrophysics Data System (ADS)

    Vandecasteele, A.; Devillers, R.

    2013-05-01

    Studies have analysed the quality of volunteered geographic information (VGI) datasets, assessing the positional accuracy of features and the completeness of specific attributes. While it has been shown that VGI can, in some contexts, reach a high positional accuracy, these works have also highlighted a large spatial heterogeneity in positional accuracy and completeness, as well as in the semantics of the objects. Such high semantic heterogeneity of VGI datasets becomes a significant obstacle to a number of possible uses that could be made of the data. This paper proposes an approach for both improving the semantic quality and reducing the semantic heterogeneity of VGI datasets. The improvement of the semantic quality is achieved by automatically suggesting attributes to contributors during the editing process. The reduction of semantic heterogeneity is achieved by automatically notifying contributors when two attributes are too similar or too dissimilar. The approach was implemented as a plugin for OpenStreetMap, and different examples illustrate how this plugin can be used to improve the quality of VGI data.

  7. Vertex similarity in networks

    NASA Astrophysics Data System (ADS)

    Leicht, E. A.; Holme, Petter; Newman, M. E. J.

    2006-02-01

    We consider methods for quantifying the similarity of vertices in networks. We propose a measure of similarity based on the concept that two vertices are similar if their immediate neighbors in the network are themselves similar. This leads to a self-consistent matrix formulation of similarity that can be evaluated iteratively using only a knowledge of the adjacency matrix of the network. We test our similarity measure on computer-generated networks for which the expected results are known, and on a number of real-world networks.
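
    The self-consistent idea above can be sketched with a stripped-down iteration S ← αAS + I: two vertices become similar if their neighbours are similar. Note that the published measure also normalizes by vertex degree and expected path counts, which this sketch omits; α and the toy graph are illustrative.

```python
def self_consistent_similarity(adj, alpha=0.2, iters=50):
    """Iterate S <- alpha * A.S + I over an adjacency matrix given as
    nested lists (a simplified, unnormalized version of the idea)."""
    n = len(adj)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AS = [[sum(adj[i][k] * S[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        S = [[alpha * AS[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    return S

# Path graph 0-1-2-3: vertices 0 and 2 share neighbour 1, so they
# should score higher than the more distant pair 0 and 3.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
S = self_consistent_similarity(adj)
```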

  8. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
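
    The precision/recall view of a query result's relative accuracy can be computed directly; the row identifiers below are hypothetical.

```python
def relative_accuracy(query_result, reference_result):
    """Precision and recall of a query answer against the reference
    (ground-truth) answer, as a simple relative-accuracy metric."""
    got, want = set(query_result), set(reference_result)
    tp = len(got & want)
    precision = tp / len(got) if got else 1.0
    recall = tp / len(want) if want else 1.0
    return precision, recall

# The dirty database returns one wrong row and misses one correct row:
p, r = relative_accuracy({"r1", "r2", "r9"}, {"r1", "r2", "r3"})
```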

  10. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrade ("resonant regimes"). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these "flat regions"). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the "landscape", and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  11. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  12. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement, based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHSTs) and a fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included, as are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  13. Biosimilar insulins: how similar is similar?

    PubMed

    Heinemann, Lutz; Hompesch, Marcus

    2011-05-01

    Biosimilar insulins (BIs) are viewed as commercially attractive products by a number of companies. In order to obtain approval in the European Union or the United States, where there is not a single BI currently on the market, a manufacturer needs to demonstrate that a given BI has a safety and efficacy profile that is similar to that of the "original" insulin formulation that is already on the market. As trivial as this may appear at first glance, it is not trivial at all for a good number of reasons that will be discussed in this commentary. As with protein manufacturing, modifications in the structure of the insulin molecule can take place (which can have serious consequences for the biological effects induced), so a rigid and careful assessment is absolutely necessary. The example of Marvel's failed application with the European Medicines Agency provides insights into the regulatory and clinical challenges surrounding the matter of BI. Although a challenging BI approval process might be regarded as a hurdle to keep companies out of certain markets, it is fair to say that the potential safety and efficacy issues surrounding BI are substantial and relevant and do warrant a careful and evidence-driven approval process. PMID:21722590

  14. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
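
    RMSE and a Theil inequality statistic against a random-walk benchmark can be computed as sketched below. The series are invented, and the U2 variant shown is one common form of the Theil statistic, not necessarily the one used in the study.

```python
import math

def rmse(forecast, actual):
    """Root-mean-square error of a forecast series."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual))
                     / len(actual))

def theil_u2(forecast, actual, previous):
    """Theil's U2: forecast RMSE relative to a no-change (random walk)
    benchmark that predicts the previous observation. U2 < 1 means the
    forecast beats the benchmark."""
    return rmse(forecast, actual) / rmse(previous, actual)

actual = [100.0, 104.0, 103.0, 108.0]
previous = [98.0, 100.0, 104.0, 103.0]   # naive random-walk forecast
forecast = [101.0, 103.0, 104.0, 107.0]  # model forecast
u = theil_u2(forecast, actual, previous)
```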

  15. Gender similarities and differences.

    PubMed

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research. PMID:23808917

  16. Similarity by compression.

    PubMed

    Melville, James L; Riley, Jenna F; Hirst, Jonathan D

    2007-01-01

    We present a simple and effective method for similarity searching in virtual high-throughput screening, requiring only a string-based representation of the molecules (e.g., SMILES) and standard compression software, available on all modern desktop computers. This method utilizes the normalized compression distance, an approximation of the normalized information distance, based on the concept of Kolmogorov complexity. On representative data sets, we demonstrate that compression-based similarity searching can outperform standard similarity searching protocols, exemplified by the Tanimoto coefficient combined with a binary fingerprint representation and data fusion. Software to carry out compression-based similarity is available from our Web site at http://comp.chem.nottingham.ac.uk/download/zippity. PMID:17238245
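
    The normalized compression distance underlying this method can be sketched with the standard library's zlib; the SMILES strings below are examples, and the paper's protocol and choice of compressor may differ.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings,
    approximated with zlib's DEFLATE compressor: similar strings
    compress better together than dissimilar ones."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two structurally related molecules and one unrelated one:
aspirin = b"CC(=O)OC1=CC=CC=C1C(=O)O"
salicylic = b"OC(=O)C1=CC=CC=C1O"
caffeine = b"CN1C=NC2=C1C(=O)N(C)C(=O)N2C"
d_similar = ncd(aspirin, salicylic)
d_dissimilar = ncd(aspirin, caffeine)
```

    For short strings the compressor's header overhead makes the distance noisy, which is why the authors evaluate on full data sets rather than single pairs.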

  17. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  18. Multivariate Hypergeometric Similarity Measure

    PubMed Central

    Kaddi, Chanchala D.; Parry, R. Mitchell; Wang, May D.

    2016-01-01

    We propose a similarity measure based on the multivariate hypergeometric distribution for the pairwise comparison of images and data vectors. The formulation and performance of the proposed measure are compared with other similarity measures using synthetic data. A method of piecewise approximation is also implemented to facilitate application of the proposed measure to large samples. Example applications of the proposed similarity measure are presented using mass spectrometry imaging data and gene expression microarray data. Results from synthetic and biological data indicate that the proposed measure is capable of providing meaningful discrimination between samples, and that it can be a useful tool for identifying potentially related samples in large-scale biological data sets. PMID:24407308
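
    A multivariate hypergeometric log-probability used as a pairwise similarity can be sketched as follows. The pooling scheme below (scoring one count vector as a without-replacement sample from the pooled pair) is an illustrative assumption, not necessarily the paper's formulation.

```python
import math

def mvhg_log_pmf(totals, draws):
    """Log-probability of drawing `draws[i]` items of category i without
    replacement from an urn holding `totals[i]` items of each category."""
    n = sum(draws)
    N = sum(totals)
    log_p = -math.log(math.comb(N, n))
    for K, k in zip(totals, draws):
        log_p += math.log(math.comb(K, k))
    return log_p

def similarity(x, y):
    """Pairwise similarity of two count vectors: how likely y is as a
    without-replacement sample from the pooled counts x + y."""
    totals = [a + b for a, b in zip(x, y)]
    return mvhg_log_pmf(totals, y)

s_close = similarity([10, 5, 1], [9, 6, 1])
s_far = similarity([10, 5, 1], [1, 6, 9])
```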

  19. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  20. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  1. Similarity of molecular shape.

    PubMed

    Meyer, A Y; Richards, W G

    1991-10-01

    The similarity of one molecule to another has usually been defined in terms of electron densities or electrostatic potentials or fields. Here it is expressed as a function of the molecular shape. Formulations of similarity (S) reduce to very simple forms, thus rendering the computerised calculation straightforward and fast. 'Elements of similarity' are identified, in the same spirit as 'elements of chirality', except that the former are understood to be variable rather than present-or-absent. Methods are presented which bypass the time-consuming mathematical optimisation of the relative orientation of the molecules. Numerical results are presented and examined, with emphasis on the similarity of isomers. At the extreme, enantiomeric pairs are considered, where it is the dissimilarity (D = 1 - S) that is of consequence. We argue that chiral molecules can be graded by dissimilarity, and show that D is the shape-analog of the 'chirality coefficient', with the simple form of the former opening up numerical access to the latter. PMID:1770379

  2. The Qualitative Similarity Hypothesis

    ERIC Educational Resources Information Center

    Paul, Peter V.; Lee, Chongmin

    2010-01-01

    Evidence is presented for the qualitative similarity hypothesis (QSH) with respect to children and adolescents who are d/Deaf or hard of hearing. The primary focus is on the development of English language and literacy skills, and some information is provided on the acquisition of English as a second language. The QSH is briefly discussed within…

  3. Wear Independent Similarity.

    PubMed

    Steele, Adam; Davis, Alexander; Kim, Joohyung; Loth, Eric; Bayer, Ilker S

    2015-06-17

    This study presents a new factor that can be used to design materials where desired surface properties must be retained under in-system wear and abrasion. To demonstrate this factor, a synthetic nonwetting coating is presented that retains chemical and geometric performance as material is removed under multiple wear conditions: a coarse vitrified abradant (similar to sanding), a smooth abradant (similar to rubbing), and a mild abradant (a blend of sanding and rubbing). With this approach, such a nonwetting material displays unprecedented mechanical durability while maintaining desired performance under a range of demanding conditions. This performance, herein termed wear independent similarity performance (WISP), is critical because multiple mechanisms and/or modes of wear can be expected to occur in many typical applications, e.g., combinations of abrasion, rubbing, contact fatigue, weathering, particle impact, etc. Furthermore, these multiple wear mechanisms tend to quickly degrade a novel surface's unique performance, and thus many promising surfaces and materials never scale out of research laboratories. Dynamic goniometry and scanning electron microscopy results presented herein provide insight into these underlying mechanisms, which may also be applied to other coatings and materials. PMID:26018058

  4. Indexing Similar DNA Sequences

    NASA Astrophysics Data System (ADS)

    Huang, Songbo; Lam, T. W.; Sung, W. K.; Tam, S. L.; Yiu, S. M.

To study the genetic variations of a species, one basic operation is to search for occurrences of patterns in a large number of very similar genomic sequences. Building an indexing data structure on the concatenation of all sequences may require a lot of memory. In this paper, we propose a new scheme to index highly similar sequences by taking advantage of the similarity among the sequences. To store r sequences with k common segments, our index requires only O(n + N log N) bits of memory, where n is the total length of the common segments and N is the total length of the distinct regions in all texts. The total length of all sequences is rn + N, and any scheme to store these sequences requires Ω(n + N) bits. Searching for a pattern P of length m takes O(m + m log N + m log(rk) psc(P) + occ log n) time, where psc(P) is the number of prefixes of P that appear as a suffix of some common segment and occ is the number of occurrences of P in all sequences. In practice, rk ≤ N, and psc(P) is usually a small constant. We have implemented our solution and evaluated it using real DNA sequences. The experiments show that the memory requirement of our solution is much less than that required by a BWT built on the concatenation of all sequences. When compared to the other existing solution (RLCSA), we use less memory with faster searching time.

  5. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention to internal somatic cues has been labeled interoception. The present study examined the role of physiological arousal and subjective anxiety in interoceptive accuracy. Infrequent panickers and nonanxious participants completed an initial baseline to assess overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety information or no safety information. Using a mental heartbeat-tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded against polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462

  6. Information filtering by similarity-preferential diffusion processes

    NASA Astrophysics Data System (ADS)

    Zeng, An; Vidmer, Alexandre; Medo, Matúš; Zhang, Yi-Cheng

    2014-03-01

    Recommender systems provide a promising way to address the information overload problem which is common in online systems. Based on past user preferences, a recommender system can find items that are likely to be relevant to a given user. Two classical physical processes, mass diffusion and heat conduction, have been used to design recommendation algorithms and a hybrid process based on them has been shown to provide accurate and diverse recommendation results. We modify both processes as well as their hybrid by introducing a parameter which can be used to enhance or suppress the weight of users who are most similar to the target user for whom the recommendation is done. Evaluation on two benchmark data sets demonstrates that both recommendation accuracy and diversity are improved for a wide range of parameter values. Threefold validation indicates that the achieved results are robust and the new recommendation methods are thus applicable in practice.
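The underlying mass-diffusion process (before the similarity-preferential weighting the paper introduces) can be sketched as follows. The toy user-item sets are hypothetical, and the paper's tunable parameter for emphasizing users similar to the target is omitted:

```python
def mass_diffusion_scores(ratings, target):
    """Plain mass-diffusion (ProbS) recommendation scores.

    ratings: dict mapping user -> set of collected items (binary preferences).
    Returns item -> score for items the target user has not yet collected.
    """
    # Step 1: unit resource on each item the target user collected.
    resource = {item: 1.0 for item in ratings[target]}
    # Step 2: each item's resource spreads equally to the users who collected it.
    user_resource = {}
    for item, r in resource.items():
        holders = [u for u, items in ratings.items() if item in items]
        for u in holders:
            user_resource[u] = user_resource.get(u, 0.0) + r / len(holders)
    # Step 3: each user's resource spreads equally back over that user's items.
    item_scores = {}
    for u, r in user_resource.items():
        for item in ratings[u]:
            item_scores[item] = item_scores.get(item, 0.0) + r / len(ratings[u])
    # Rank only items the target has not collected.
    return {i: s for i, s in item_scores.items() if i not in ratings[target]}

# Hypothetical bipartite user-item data.
ratings = {
    "alice": {"a", "b"},
    "bob":   {"b", "c"},
    "carol": {"a", "c", "d"},
}
scores = mass_diffusion_scores(ratings, "alice")
print(scores)
```

Heat conduction follows the same two-step propagation but averages rather than conserves resource; the hybrid of the paper interpolates between the two.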

  7. The qualitative similarity hypothesis.

    PubMed

    Paul, Peter V; Lee, Chongmin

    2010-01-01

    Evidence is presented for the qualitative similarity hypothesis (QSH) with respect to children and adolescents who are d/Deaf or hard of hearing. The primary focus is on the development of English language and literacy skills, and some information is provided on the acquisition of English as a second language. The QSH is briefly discussed within the purview of two groups of cognitive models: those that emphasize the cognitive development of individuals and those that pertain to disciplinary or knowledge structures. It is argued that the QSH has scientific merit with implications for classroom instruction. Future research should examine the validity of the QSH in other disciplines such as mathematics and science and should include perspectives from social as well as cognitive models. PMID:20415280

  8. Self Similar Optical Fiber

    NASA Astrophysics Data System (ADS)

    Lai, Zheng-Xuan

This research proposes the Self Similar optical fiber (SSF) as a new type of optical fiber. It has a special core consisting of a self-similar structure, obtained by following the formula for generating iterated function systems (IFS) from fractal theory. The resulting SSF can be viewed as a true fractal object among optical fibers. In addition, the method of fabricating SSF makes it possible to generate desired structures in exponentially growing numbers, while also allowing lower-scale units in the structure to be reduced in size exponentially. The invention of SSF is expected to greatly ease the production of optical fiber when a large number of small hollow structures is needed in the core. This dissertation analyzes the core structure of SSF based on fractal theory. Possible properties arising from the structural characteristics, and the corresponding applications, are explained. Four SSF samples were obtained through fabrication in a laboratory environment. Unlike traditional conductive-heating fabrication systems, I used an in-house-designed furnace that incorporated a radiation heating method and was equipped with an automated temperature control system. The obtained samples were examined through spectrum tests. Results from the tests showed that SSF does have the optical property of delivering light in a certain wavelength range. However, SSF as a new type of optical fiber requires systematic research to establish the theory that explains its structure and the associated optical properties. The fabrication and quality of SSF also need to be improved for product deployment. As a start of this extensive research, this dissertation opens the door to a very promising new area in optical fiber research.

  9. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. Optical methods, on the other hand, gather high-density data over the whole object in a short time, but with accuracy at least one order of magnitude lower than contact measurements. Thus the drawback of contact methods is low data density, while for non-contact methods it is low accuracy. In this paper a method is presented for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation that maps the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
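The core step, estimating a transformation from pairs of corresponding marker points and applying it to the whole cloud, can be illustrated with a least-squares rigid fit. This is a simplified 2D sketch (the paper works with 3D point clouds), and the marker coordinates are hypothetical:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst.

    src, dst: equal-length lists of (x, y) corresponding marker pairs,
    here standing in for low-accuracy optical points and their
    high-accuracy contact references. Returns (theta, tx, ty).
    """
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Closed-form optimal rotation about the centroids (2D Kabsch).
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - sx, ay - sy, bx - dx, by - dy
        num += ax * by - ay * bx   # cross terms
        den += ax * bx + ay * by   # dot terms
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty

def apply_rigid_2d(points, theta, tx, ty):
    """Apply the fitted transform to an entire point cloud."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Hypothetical markers: the "optical" points are the contact references
# rotated by 30 degrees and shifted; the fit should undo that displacement.
contact = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 1.0)]
th = math.radians(30); c, s = math.cos(th), math.sin(th)
optical = [(c * x - s * y + 0.3, s * x + c * y - 0.2) for x, y in contact]
theta, tx, ty = fit_rigid_2d(optical, contact)
corrected = apply_rigid_2d(optical, theta, tx, ty)
```

In the paper's setting the same idea applies in 3D (where the rotation is usually recovered via an SVD), with the fitted transform applied to the full HDLA cloud rather than only to the markers.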

  10. Optimal design of robot accuracy compensators

    SciTech Connect

Zhuang, H.; Roth, Z. S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that the actual kinematic parameters of a robot be identified beforehand. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing the end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
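A minimal sketch of the damped least-squares correction mentioned in the abstract, for a planar two-link arm with a hypothetical damping factor; the paper's weight matrix and its recursive LQR variant are omitted:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Jacobian of the end-effector position of a planar 2-link arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def dls_correction(J, dx, damping=0.1):
    """Damped least-squares joint correction dq = J^T (J J^T + lambda^2 I)^-1 dx.

    The damping term keeps the correction bounded near singular
    configurations, where an exact inverse-kinematics solve would blow up.
    J is 2x2 here, so (J J^T + lambda^2 I) is inverted in closed form.
    """
    l2 = damping ** 2
    a = J[0][0] ** 2 + J[0][1] ** 2 + l2   # (J J^T)[0][0] + lambda^2
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + l2   # (J J^T)[1][1] + lambda^2
    det = a * d - b * b
    y = ((d * dx[0] - b * dx[1]) / det, (-b * dx[0] + a * dx[1]) / det)
    # dq = J^T y
    return (J[0][0] * y[0] + J[1][0] * y[1], J[0][1] * y[0] + J[1][1] * y[1])

# Hypothetical pose and end-effector error (the measured position error
# that the compensator must cancel with a joint-command correction).
J = jacobian_2link(0.3, 0.9)
dq = dls_correction(J, (0.01, -0.02))
```

With small damping the correction nearly cancels the pose error; at a singular configuration (e.g. the arm fully stretched) it stays finite instead of diverging.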

  11. Accuracy analysis of automatic distortion correction

    NASA Astrophysics Data System (ADS)

    Kolecki, Jakub; Rzonca, Antoni

    2015-06-01

The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? To obtain a reliable answer, two kinds of tests were carried out for three lens models. First, multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points was performed. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion removal mode. Extensive conclusions regarding the application of each calibration approach in practice are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.

  12. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One important application of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags, obtained from, e.g., received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as distorted RSSI readings caused by channel impairments, including susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, of the estimated reader localization. These issues include variations in radiation characteristics among similar tags, effects of tag orientation, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights into the issues and solutions toward achieving high-accuracy passive RFID localization.

  13. Selection of USSR foreign similarity regions

    NASA Technical Reports Server (NTRS)

    Disler, J. M. (Principal Investigator)

    1982-01-01

The similarity regions in the United States and Canada were selected to parallel the conditions that affect labeling and classification accuracies in the U.S.S.R. indicator regions. In addition to climate, a significant condition that affects labeling and classification accuracies in the U.S.S.R. is the proportion of barley and wheat grown in a given region (based on sown areas). The following regions in the United States and Canada were determined to be similar to the U.S.S.R. indicator regions: (1) Montana agrophysical unit (APU) 104 corresponds to the Belorussia high-barley region; (2) North Dakota and Minnesota APU 20 and the secondary region of southern Manitoba and Saskatchewan correspond to the Ural RSFSR barley and spring wheat region; (3) Montana APU 23 corresponds to the North Caucasus barley and winter wheat region. Selection criteria included climate, crop type, crop distribution, growth cycles, field sizes, and field shapes.

  14. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insight into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds of the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.

  15. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  16. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  17. Modified principal component analysis: an integration of multiple similarity subspace models.

    PubMed

    Fan, Zizhu; Xu, Yong; Zuo, Wangmeng; Yang, Jian; Tang, Jinhui; Lai, Zhihui; Zhang, David

    2014-08-01

We modify the conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of the similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel feature selection approach. This approach constructs a vector set, termed a weak machine cell (WMC), which contains an appropriate number of the eigenvectors spanning the similarity subspaces. Combining the wrapper method and the forward selection scheme, MPCA selects one WMC at a time with a powerful discriminative capability for classifying samples. MPCA is very suitable for application scenarios in which the number of training samples is smaller than the data dimensionality. MPCA outperforms other state-of-the-art PCA-based methods in terms of both classification accuracy and clustering results. In addition, MPCA can be applied to face image reconstruction, and it can use other types of similarity measurements. Extensive experiments on many popular real-world data sets, such as face databases, show that MPCA achieves desirable classification results as well as a powerful capability to represent data. PMID:25050950
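Of the three similarity measurements, the Gaussian-kernel similarity matrix and its eigenvectors can be sketched in a few lines. This illustrates only that one ingredient (with power iteration for the leading eigenvector), not MPCA's WMC selection; the sample data and the kernel width are hypothetical:

```python
import math

def gaussian_similarity_matrix(X, sigma=1.0):
    """S[i][j] = exp(-||x_i - x_j||^2 / (2 sigma^2)) for sample list X."""
    n = len(X)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            S[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return S

def leading_eigenvector(S, iters=200):
    """Power iteration: dominant eigenvector of a symmetric matrix."""
    n = len(S)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical samples: two tight clusters in 2D.
X = [[0.0, 0.0], [0.1, -0.1], [3.0, 3.0], [3.1, 2.9]]
S = gaussian_similarity_matrix(X, sigma=2.0)
v = leading_eigenvector(S)
```

In MPCA the eigenvectors of such similarity matrices span the "similarity subspaces" from which the WMC is then selected; a full implementation would keep several eigenvectors per matrix rather than only the dominant one.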

  18. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such an accuracy has been recently achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1% which supersedes average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, {370}, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. {135}, 034113 (2011).

  19. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. 
The boundary conditions can now include a potentially infinite number

  20. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  1. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693
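The reported deviation rates can be reproduced from raw portion weights with a few lines of arithmetic; the sample weights below are hypothetical, not the study's measurements:

```python
def deviation_rates(weights_mg, nominal_mg=25.0, thresholds=(0.10, 0.20)):
    """Fraction of split-tablet portions deviating from the ideal
    half-tablet weight by more than each threshold.

    weights_mg: measured weights of individual split portions. (The study
    weighed 1752 portions of 25-mg hydrochlorothiazide tablets; the ideal
    half-tablet weight is nominal_mg / 2.)
    """
    ideal = nominal_mg / 2.0
    rates = {}
    for t in thresholds:
        n = sum(1 for w in weights_mg if abs(w - ideal) / ideal > t)
        rates[t] = n / len(weights_mg)
    return rates

# Hypothetical sample of measured half-tablet weights (mg).
sample = [12.5, 13.1, 11.0, 14.6, 12.4, 9.8, 15.3, 12.6]
rates = deviation_rates(sample)
print(rates)
```

For this hypothetical sample, half the portions deviate by more than 10% and a quarter by more than 20%, mirroring the kind of summary (41.3% and 12.4%) the study reports for its real data.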

  2. Affecting speed and accuracy in perception.

    PubMed

    Bocanegra, Bruno R

    2014-12-01

An account of affective modulations in perceptual speed and accuracy (ASAP: Affecting Speed and Accuracy in Perception) is proposed and tested. This account assumes an emotion-induced inhibitory interaction between parallel channels in the visual system that modulates the onset latencies and response durations of visual signals. By trading off speed and accuracy between channels, this mechanism achieves (a) fast visuo-motor responding to coarse-grained information, and (b) accurate visuo-attentional selection of fine-grained information. ASAP gives a functional account of previously counterintuitive findings, and may be useful for explaining affective influences in both featural-level single-stimulus tasks and object-level multistimulus tasks. PMID:24853268

  3. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources. PMID:27329206

  4. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
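The translation-, scale-, and rotation-invariance of similar-triangle features can be illustrated by describing each triangle by its sorted side ratios. This is a sketch of the general idea, not TRIDEC's exact coarse-coded 127 x 127 encoding:

```python
import math
from itertools import combinations

def triangle_class(p1, p2, p3, ndigits=3):
    """Canonical descriptor of a triangle's similarity class.

    Sorted side lengths normalized by the longest side are invariant to
    translation, rotation, and uniform scaling; rounding buckets nearly
    similar triangles together (a crude stand-in for coarse coding).
    """
    sides = sorted(math.dist(a, b) for a, b in ((p1, p2), (p2, p3), (p1, p3)))
    longest = sides[-1]
    return tuple(round(s / longest, ndigits) for s in sides)

def feature_set(points):
    """Similarity classes over every combination of three points."""
    return {triangle_class(*t) for t in combinations(points, 3)}

# A hypothetical point pattern, and the same pattern translated,
# rotated 45 degrees, and scaled by 2: identical feature sets.
shape = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0), (3.0, 2.0)]
th = math.radians(45); c, s = math.cos(th), math.sin(th)
transformed = [(2 * (c * x - s * y) + 5.0, 2 * (s * x + c * y) - 1.0)
               for x, y in shape]
print(feature_set(shape) == feature_set(transformed))
```

A downstream classifier (a decision tree, in TRIDEC's case) then only ever sees these invariant descriptors, which is what makes the recognition invariant to the object's pose in the input field.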

  5. A configurable-hardware document-similarity classifier to detect web attacks.

    SciTech Connect

    Ulmer, Craig D.; Gokhale, Maya

    2010-04-01

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires two orders of magnitude less memory than the original algorithm. At 166 MB/s (80× the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
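
    The TFIDF similarity metric underlying the classifier can be sketched in a few lines of software. The toy request tokens and the benign/attack split below are invented for illustration; the paper's contribution is streaming this computation in hardware, not the metric itself:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weight vector (term -> weight) for each tokenized document."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term->weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy tokenized HTTP requests: two benign, one carrying SQL-injection terms.
requests = [
    ["GET", "/index.html", "HTTP/1.1", "Mozilla"],
    ["GET", "/style.css", "HTTP/1.1", "Mozilla"],
    ["GET", "/login.php", "union", "select", "--", "HTTP/1.1", "sqlmap"],
]
vecs = tfidf_vectors(requests)
# The benign pair scores higher than the benign/attack pair.
assert cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2])
```

    Terms common to every request (here "GET", "HTTP/1.1") receive zero IDF weight, which is what lets the hardware model be compacted: only discriminative terms need to be stored.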

  6. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  7. The Mechanics of Human Achievement

    PubMed Central

    Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.

    2015-01-01

    Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers. PMID:26236393
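
    The paper's Newtonian analogy (skill = talent × effort, achievement = skill × effort, mirroring distance = ½at²) can be made numerically concrete. The unit talent and constant effort rates below are illustrative assumptions, not values from the paper:

```python
def skill(talent, effort_rate, t):
    """Skill ~ talent x cumulative effort (the analogue of speed = acceleration x time)."""
    return talent * effort_rate * t

def achievement(talent, effort_rate, t):
    """Achievement ~ skill applied with effort over time (distance = 1/2 a t^2)."""
    return 0.5 * talent * effort_rate ** 2 * t ** 2

# Consistency with the multiplicative definition: achievement = 1/2 * skill * cumulative effort.
assert achievement(1.0, 2.0, 3.0) == 0.5 * skill(1.0, 2.0, 3.0) * (2.0 * 3.0)

# Two equally talented people, one exerting twice the effort:
gap_short = achievement(1.0, 2.0, 1.0) - achievement(1.0, 1.0, 1.0)
gap_long = achievement(1.0, 2.0, 10.0) - achievement(1.0, 1.0, 10.0)
assert gap_long == 100 * gap_short   # the effort gap compounds quadratically with time
```

    This is exactly the model's novel prediction quoted above: effort differences influence achievement more substantially over longer time intervals, because effort enters the product twice.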

  8. Accuracies and conservation errors of various ghost fluid methods for multi-medium Riemann problem

    NASA Astrophysics Data System (ADS)

    Xu, Liang; Liu, Tiegang

    2011-06-01

    Since the (original) ghost fluid method (OGFM) was proposed by Fedkiw et al. in 1999 [5], a series of other GFM-based methods such as the gas-water version GFM (GWGFM), the modified GFM (MGFM) and the real GFM (RGFM) have been developed subsequently. Systematic analysis, however, has yet to be carried out for the various GFMs on their accuracies and conservation errors. In this paper, we develop a technique to rigorously analyze the accuracies and conservation errors of these different GFMs when applied to the multi-medium Riemann problem with a general equation of state (EOS). By analyzing and comparing the interfacial state provided by each GFM to the exact one of the original multi-medium Riemann problem, we show that the interfacial treatment can achieve "third-order accuracy" relative to the exact solution of the original multi-medium Riemann problem for the MGFM and the RGFM, while it is of at most "first-order accuracy" for the OGFM and the GWGFM when the flow near the interface is nearly in balance. Similar conclusions are also obtained in association with the local conservation errors. A special test method is exploited to validate these theoretical conclusions from the numerical viewpoint.

  9. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    SciTech Connect

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real-time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37 MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166 MB/s (80× the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.

  10. Predicting missing links via structural similarity

    NASA Astrophysics Data System (ADS)

    Lyu, Guo-Dong; Fan, Chang-Jun; Yu, Lian-Fei; Xiu, Bao-Xin; Zhang, Wei-Ming

    2015-04-01

    Predicting missing links in networks plays a significant role in modern science. On the basis of structural similarity, our paper proposes a new node-similarity-based measure called biased resource allocation (BRA), which is motivated by the resource allocation (RA) measure. Comparisons between BRA and nine well-known node-similarity-based measures on five real networks indicate that BRA performs no worse than RA, which was the best node-similarity-based index in previous research. Afterwards, based on the local path (LP) and Katz measures, we propose another two improved measures, named Im-LP and Im-Katz respectively. Numerical results show that the prediction accuracy of both the Im-LP and Im-Katz measures improves compared with the original LP and Katz measures. Finally, a new path-similarity-based measure and its improved variant, called the LYU and Im-LYU measures, are proposed; in particular, the Im-LYU measure is shown to perform markedly better than the other measures mentioned.
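
    The baseline resource allocation (RA) index that BRA builds on has a simple closed form: RA(x, y) = Σ_{z ∈ Γ(x) ∩ Γ(y)} 1/|Γ(z)|, i.e. the sum of inverse degrees over common neighbours. A minimal sketch on an invented toy graph (the paper's BRA biasing is not reproduced here):

```python
from collections import defaultdict

def resource_allocation(edges, x, y):
    """Resource-allocation (RA) link-prediction score between nodes x and y:
    each common neighbour z contributes 1/degree(z)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])

# Toy graph: A and B share neighbours C (degree 2) and D (degree 3).
edges = [("A", "C"), ("B", "C"), ("A", "D"), ("B", "D"), ("D", "E")]
score = resource_allocation(edges, "A", "B")
assert abs(score - (1 / 2 + 1 / 3)) < 1e-12
```

    Low-degree common neighbours (C) contribute more than high-degree ones (D), which is the intuition behind RA's strong performance among node-similarity-based indices.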

  11. Bilateral Trade Flows and Income Distribution Similarity

    PubMed Central

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462

  12. Bilateral Trade Flows and Income Distribution Similarity.

    PubMed

    Martínez-Zarzoso, Inmaculada; Vollmer, Sebastian

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (Dollar volume is used as dependent variable) using a sample of 104 exporters and 108 importers for 1980-2003 to achieve two main goals. We define and calculate new measures of income distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income-distributions trade more with each other. The hypothesis is also confirmed at disaggregated level for differentiated product categories. PMID:27137462
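
    The gravity model referenced in both records predicts that bilateral trade rises with the two countries' incomes and falls with distance, T_ij = G · Y_i^α · Y_j^β / D_ij^γ. A minimal sketch with illustrative unit elasticities (the paper's distribution-based income-similarity measures are not reproduced here):

```python
def gravity_trade(gdp_i, gdp_j, dist, g=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Gravity model of bilateral trade: T_ij = G * Y_i^alpha * Y_j^beta / D_ij^gamma."""
    return g * gdp_i ** alpha * gdp_j ** beta / dist ** gamma

# With unit elasticities, doubling distance halves predicted trade...
assert gravity_trade(2.0, 3.0, 2.0) == gravity_trade(2.0, 3.0, 1.0) / 2
# ...and doubling either country's income doubles it.
assert gravity_trade(4.0, 3.0, 1.0) == 2 * gravity_trade(2.0, 3.0, 1.0)
```

    In practice the model is estimated in log-linear form, which is where the paper adds income-distribution similarity as an extra regressor proxying demand similarity.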

  13. Graded Achievement, Tested Achievement, and Validity

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  14. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  15. Exemplar Similarity and Rule Application

    ERIC Educational Resources Information Center

    Hahn, Ulrike; Prat-Sala, Merce; Pothos, Emmanuel M.; Brumby, Duncan P.

    2010-01-01

    We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on…

  16. Acoustic Similarity and Dichotic Listening.

    ERIC Educational Resources Information Center

    Benson, Peter

    1978-01-01

    An experiment tests conjectures that right ear advantage (REA) has an auditory origin in competition or interference between acoustically similar stimuli and that feature-sharing effect (FSE) has its origin in assignment of features of phonetically similar stimuli. No effect on the REA for acoustic similarity, and a clear effect of acoustic…

  17. Functional Similarity and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    Neimeyer, Greg J.; Neimeyer, Robert A.

    1981-01-01

    Students participated in dyadic disclosure exercises over a five-week period. Results indicated members of high functional similarity dyads evidenced greater attraction to one another than did members of low functional similarity dyads. "Friendship" pairs of male undergraduates displayed greater functional similarity than did "nominal" pairs from…

  18. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn is determined by the rotating accuracy of the lathe and the accuracy of the optical-centering judgment. When a rotating accuracy of 0.2 µm is achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, namely the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool and the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  19. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  20. Children with Autism Detect Targets at Very Rapid Presentation Rates with Similar Accuracy as Adults

    ERIC Educational Resources Information Center

    Hagmann, Carl Erick; Wyble, Bradley; Shea, Nicole; LeBlanc, Megan; Kates, Wendy R.; Russo, Natalie

    2016-01-01

    Enhanced perception may allow for visual search superiority by individuals with Autism Spectrum Disorder (ASD), but does it occur over time? We tested high-functioning children with ASD, typically developing (TD) children, and TD adults in two tasks at three presentation rates (50, 83.3, and 116.7 ms/item) using rapid serial visual presentation.…

  1. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.

  2. High accuracy in short ISS missions

    NASA Astrophysics Data System (ADS)

    Rüeger, J. M.

    1986-06-01

    Traditionally, inertial surveying systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emerging from an increased need for survey control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle-mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5 minute ZUPT intervals, mean offset distances 9 metres), accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.

  3. Finding Protein and Nucleotide Similarities with FASTA.

    PubMed

    Pearson, William R

    2016-01-01

    The FASTA programs provide a comprehensive set of rapid similarity searching tools (fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local, and global similarity searches (ssearch36, ggsearch36), and for searching with short peptides and oligonucleotides (fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity. The FASTA programs can produce "BLAST-like" alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases. The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. © 2016 by John Wiley & Sons, Inc. PMID:27010337

  4. A framework for profile similarity: integrating similarity, normativeness, and distinctiveness.

    PubMed

    Furr, R Michael

    2008-10-01

    Many questions in personality psychology lend themselves to the analysis of profile similarity. A profile approach to issues such as personality judgment, personality similarity, behavioral consistency, developmental stability, and person-environment fit is intuitively appealing. However, it entails conceptual and statistical challenges arising from the overlap among profile similarity and normativeness, which presents potential confounds and potential opportunities. This article describes the normativeness problem, articulating the need to evaluate profile similarity alongside normativeness and distinctiveness. It presents conceptual and psychometric foundations of a framework differentiating these elements for pairs of profiles. It derives two models from this framework, and it discusses the application of their components to a variety of research domains. Finally, it presents recommendations and implications regarding the use of these components and profile similarity more generally. This approach can reveal and manage potential confounds, and it can provide theoretical insights that might otherwise be overlooked. PMID:18705644
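
    Furr's normativeness problem can be illustrated numerically: overall profile similarity mixes a normative component (what everyone in the sample shares) with a distinctive one. The sketch below, with invented trait ratings, computes similarity before and after removing the sample-mean (normative) profile:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented trait ratings for four people over four items.
profiles = [
    [5, 4, 2, 1],
    [4, 5, 1, 2],
    [5, 3, 3, 1],
    [4, 4, 2, 2],
]
# Normative profile: the item-wise sample mean.
normative = [sum(p[i] for p in profiles) / len(profiles) for i in range(4)]
# Distinctive profile: each person's deviations from the normative profile.
distinctive = [[x - m for x, m in zip(p, normative)] for p in profiles]

overall_r = pearson(profiles[0], profiles[1])           # inflated by normativeness
distinctive_r = pearson(distinctive[0], distinctive[1])
assert abs(overall_r - 0.8) < 1e-9
assert distinctive_r < 0   # the two people's *distinctive* patterns actually diverge
```

    Here two profiles that look strongly similar overall (r = 0.8) turn out to have negatively related distinctive patterns once the normative component is removed, which is the confound the framework is designed to reveal.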

  5. Dynamic similarity in erosional processes

    USGS Publications Warehouse

    Scheidegger, A.E.

    1963-01-01

    A study is made of the dynamic similarity conditions obtaining in a variety of erosional processes. The pertinent equations for each type of process are written in dimensionless form; the similarity conditions can then easily be deduced. The processes treated are: raindrop action, slope evolution and river erosion. © 1963 Istituto Geofisico Italiano.

  6. The Effect of Moderate and High-Intensity Fatigue on Groundstroke Accuracy in Expert and Non-Expert Tennis Players

    PubMed Central

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender, interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on performance in…

  7. Thermodynamic similarity of physical systems

    NASA Astrophysics Data System (ADS)

    Ciccariello, Salvino

    2016-02-01

    Two different physical systems A and B are said to be thermodynamically similar if one of the thermodynamic potentials of system A is proportional to the corresponding potential of B after expressing the state variables of system A in terms of those of B by a transformation reversible throughout the state variables' domain. The thermodynamic similarity has a transitive nature so that physical systems divide into classes of thermodynamically similar systems that have similar phase diagrams. Considering the simplest physical systems, one finds that a class of thermodynamically similar systems is formed by the ideal classical gas, the Fermi and the Bose ideal quantum gases, whatever the dimensions of the confining spaces, and the one dimensional hard rod gas. Another class is formed by the physical systems characterized by interactions that coincide by a scaling of the distance and the coupling constant.

  8. Does Language about Similarity Play a Role in Fostering Similarity Comparison in Children?

    ERIC Educational Resources Information Center

    Ozcaliskan, Seyda; Goldin-Meadow, Susan; Gentner, Dedre; Mylander, Carolyn

    2009-01-01

    Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of…

  9. Studies on dynamic motion compensation and positioning accuracy on star tracker.

    PubMed

    Jun, Zhang; Yuncai, Hao; Li, Wang; Da, Liu

    2015-10-01

    Error from motion is the dominant restriction on the improvement of dynamic performance of a star tracker. As a remarkable motion error, the degree of nonuniformity of the star image velocity field on the detector is studied, and a general model for the moving star spot is built. To minimize velocity nonuniformity, a novel general method is proposed to derive the proper motion compensation and location accuracy in cases of both uniform velocity and acceleration. Using this method, a theoretical analysis of the accuracy of time-delay integration and similar techniques, which are regarded as state-of-the-art approaches to reducing error from motion, is conducted. The simulations and experimental results validate the proposed method, which shows a steadier performance than the dynamic binning algorithm. The positional error can be neglected when the smear length is far less than 3.464 times the scale of the star spot, which suggests accuracy can be maintained by changing the frame-integration time in inverse proportion to the velocity on the focal plane. It also shows that the acceleration effect must be compensated to achieve accuracy close to the Cramér-Rao lower bound. PMID:26479618

  10. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used for calculating the radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., the configuration factors), or from several sources to a target (e.g., the absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is later applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered in the discussion, similar results can be derived for the absolute error.
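
    For ray-tracing estimates of a configuration factor, each ray either hits the target or misses it, so the estimate is binomial and its standard error shrinks like 1/√N. A sketch with a stand-in hit probability (the spacecraft geometry is abstracted away; the value of F below is invented):

```python
import math
import random

def mc_hit_fraction(p_true, n, rng):
    """Binomial Monte Carlo estimate of a hit probability: the model for an
    N-ray configuration-factor calculation, where each ray hits or misses."""
    hits = sum(rng.random() < p_true for _ in range(n))
    return hits / n

rng = random.Random(42)
F = 0.3  # stand-in "true" configuration factor (illustrative value)
for n in (100, 10_000):
    se = math.sqrt(F * (1 - F) / n)   # standard error, shrinking like 1/sqrt(N)
    est = mc_hit_fraction(F, n, rng)
    assert abs(est - F) < 6 * se      # holds with overwhelming probability
```

    This is the kind of accuracy law the abstract refers to: to halve the relative error of a computed configuration factor, roughly four times as many rays must be traced.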

  11. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  12. Semantic Similarity in Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Faria, Daniel; Falcão, André O.; Lord, Phillip; Couto, Francisco M.

    2009-01-01

    In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research. PMID:19649320
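
    One node-based measure of the kind surveyed here is Resnik similarity: the information content (IC) of the most informative common ancestor of two terms. A toy sketch with an invented mini-ontology and made-up IC values (real implementations derive IC from annotation corpora):

```python
# Toy ontology as a child -> parents map (invented for illustration).
parents = {
    "apoptosis": {"cell_death"},
    "necrosis": {"cell_death"},
    "cell_death": {"biological_process"},
    "biological_process": set(),
}
# Hypothetical information-content values (rarer terms are more informative).
ic = {"biological_process": 0.0, "cell_death": 2.0, "apoptosis": 4.0, "necrosis": 3.5}

def ancestors(term):
    """A term's ancestors in the ontology, including the term itself."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, ()))
    return seen

def resnik(t1, t2):
    """Node-based Resnik similarity: IC of the most informative common ancestor."""
    return max(ic[t] for t in ancestors(t1) & ancestors(t2))

assert resnik("apoptosis", "necrosis") == 2.0   # they join at "cell_death"
assert resnik("apoptosis", "apoptosis") == 4.0  # a term is its own ancestor here
```

    Edge-based measures would instead count path lengths between the terms, and groupwise measures compare whole annotation sets rather than term pairs, which is the classification axis the review proposes.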

  13. Renewing the respect for similarity

    PubMed Central

    Edelman, Shimon; Shahbazi, Reza

    2012-01-01

    In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction—critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience. In support of this stance, the present paper (1) offers a discussion of conceptual, mathematical, computational, and empirical aspects of similarity, as applied to the problems of visual object and scene representation, recognition, and interpretation, (2) mentions some key computational problems arising in attempts to put similarity to use, along with their possible solutions, (3) briefly states a previously developed similarity-based framework for visual object representation, the Chorus of Prototypes, along with the empirical support it enjoys, (4) presents new mathematical insights into the effectiveness of this framework, derived from its relationship to locality-sensitive hashing (LSH) and to concomitant statistics, (5) introduces a new model, the Chorus of Relational Descriptors (ChoRD), that extends this framework to scene representation and interpretation, (6) describes its implementation and testing, and finally (7) suggests possible directions in which the present research program can be…
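
    The locality-sensitive hashing (LSH) mentioned in point (4) can be sketched with random hyperplanes: each hyperplane contributes one sign bit, and the expected Hamming distance between two hashes grows with the angle between the vectors. The vectors, dimension, and bit count below are illustrative:

```python
import random

def simhash(vec, planes):
    """One sign bit per random hyperplane: similar directions tend to collide."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) > 0) for plane in planes)

def hamming(h1, h2):
    """Number of differing hash bits."""
    return sum(x != y for x, y in zip(h1, h2))

rng = random.Random(0)
dim, nbits = 8, 32
planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(nbits)]

a = [1.0] * dim
b = [1.0] * (dim - 1) + [0.9]            # nearly the same direction as a
c = [(-1.0) ** i for i in range(dim)]    # orthogonal to a

# Hashing depends only on direction, so positive scaling is irrelevant...
assert simhash(a, planes) == simhash([2.0 * x for x in a], planes)
# ...and near-duplicates land far closer in Hamming space than unrelated vectors.
assert hamming(simhash(a, planes), simhash(b, planes)) < hamming(simhash(a, planes), simhash(c, planes))
```

    This is what makes similarity-preserving associative lookup fast: near neighbours can be retrieved by comparing short binary codes instead of full high-dimensional representations.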

  14. Nonlocal similarity based DEM super resolution

    NASA Astrophysics Data System (ADS)

    Xu, Zekai; Wang, Xuewen; Chen, Zixuan; Xiong, Dongping; Ding, Mingyue; Hou, Wenguang

    2015-12-01

    This paper discusses a new topic, DEM super resolution, to improve the resolution of an original DEM based on its partial new measurements obtained with high resolution. A nonlocal algorithm is introduced to perform this task. The original DEM was first divided into overlapping patches, which were classified either as "test" or "learning" data depending on whether or not they are related to high resolution measurements. For each test patch, the similar patches in the learning dataset were identified via template matching. Finally, the high resolution DEM of the test patch was restored by the weighted sum of similar patches under the condition that the reconstruction weights were the same in different resolution cases. A key assumption of this strategy is that there are some repeated or similar modes in the original DEM, which is quite common. Experiments were done to demonstrate that we can restore a DEM by preserving the details without introducing artifacts. Statistical analysis was also conducted to show that this method can obtain higher accuracy than traditional interpolation methods.
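
    The weighted-sum reconstruction step described above can be sketched in a few lines. This is a minimal 1-D illustration under stated assumptions, not the authors' implementation: the function names, the exponential weighting kernel, and the bandwidth h are all assumptions introduced here.

```python
import math

def patch_distance(a, b):
    """Euclidean distance between two equal-length patches."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nonlocal_restore(test_lr, learning, h=1.0):
    """Restore a high-res patch as a weighted sum of the high-res
    counterparts of similar low-res learning patches, reusing the
    same weights across resolutions.

    test_lr  -- low-res test patch (list of floats)
    learning -- list of (low_res_patch, high_res_patch) pairs
    h        -- bandwidth of the (assumed) exponential kernel
    """
    weights = [math.exp(-patch_distance(test_lr, lr) ** 2 / h ** 2)
               for lr, _ in learning]
    total = sum(weights)
    n = len(learning[0][1])
    return [sum(w * hr[i] for w, (_, hr) in zip(weights, learning)) / total
            for i in range(n)]
```

    A test patch nearly identical to one learning patch is reconstructed almost entirely from that patch's high-resolution counterpart.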

  15. Hierarchical similarity transformations between Gaussian mixtures.

    PubMed

    Rigas, George; Nikou, Christophoros; Goletsis, Yorgos; Fotiadis, Dimitrios I

    2013-11-01

    In this paper, we propose a method to estimate the density of a data space represented by a geometric transformation of an initial Gaussian mixture model. The geometric transformation is hierarchical, and it is decomposed into two steps. At first, the initial model is assumed to undergo a global similarity transformation modeled by translation, rotation, and scaling of the model components. Then, to increase the degrees of freedom of the model and allow it to capture fine data structures, each individual mixture component may be transformed by another, local similarity transformation, whose parameters are distinct for each component of the mixture. In addition, to constrain the order of magnitude of the local transformation (LT) with respect to the global transformation (GT), zero-mean Gaussian priors are imposed onto the local parameters. The estimation of both GT and LT parameters is obtained through the expectation maximization framework. Experiments on artificial data are conducted to evaluate the proposed model, with varying data dimensionality, number of model components, and transformation parameters. In addition, the method is evaluated using real data from a speech recognition task. The obtained results show a high model accuracy and demonstrate the potential application of the proposed method to similar classification problems. PMID:24808615
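
    As an illustration of the transformation model, a 2-D similarity transformation of a single Gaussian component can be written out directly: the mean maps to s·R·mean + t and the covariance to s²·R·Σ·Rᵀ. This is a sketch under assumptions (the function name and parameterization are invented here, and the EM estimation of the parameters is omitted entirely).

```python
import math

def transform_component(mean, cov, s, theta, t):
    """Apply a 2-D similarity transformation (scale s, rotation
    angle theta, translation t) to one Gaussian component."""
    c, si = math.cos(theta), math.sin(theta)
    R = [[c, -si], [si, c]]
    # mean' = s * R * mean + t
    new_mean = [s * (R[i][0] * mean[0] + R[i][1] * mean[1]) + t[i]
                for i in range(2)]
    # cov' = s^2 * R * cov * R^T
    RC = [[sum(R[i][k] * cov[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    new_cov = [[s * s * sum(RC[i][k] * R[j][k] for k in range(2))
                for j in range(2)] for i in range(2)]
    return new_mean, new_cov
```

    With theta = 0 this reduces to pure scaling and translation, which makes the mapping easy to verify by hand.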

  16. Vibrational Spectroscopy of HD{sup +} with 2-ppb Accuracy

    SciTech Connect

    Koelemeij, J. C. J.; Roth, B.; Wicht, A.; Ernsting, I.; Schiller, S.

    2007-04-27

    By measurement of the frequency of a vibrational overtone transition in the molecular hydrogen ion HD{sup +}, we demonstrate the first optical spectroscopy of trapped molecular ions with submegahertz accuracy. We use a diode laser, locked to a stable frequency comb, to perform resonance-enhanced multiphoton dissociation spectroscopy on sympathetically cooled HD{sup +} ions at 50 mK. The achieved 2-ppb relative accuracy is a factor of 150 higher than previous results for HD{sup +}, and the measured transition frequency agrees well with recent high-accuracy ab initio calculations, which include high-order quantum electrodynamic effects. We also show that our method bears potential for achieving considerably higher accuracy and may, if combined with slightly improved theoretical calculations, lead to a new and improved determination of the electron-proton mass ratio.

  17. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the experience gained. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
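
    The accuracy measures named above are all simple functions of the confusion matrix. A minimal sketch (the matrix orientation, rows as map classes and columns as reference classes, is an assumption made for this illustration):

```python
def accuracy_measures(cm):
    """Overall, user's, and producer's accuracy plus kappa from a
    square confusion matrix cm[i][j] = number of samples mapped to
    class i whose reference class is j."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    diag = sum(cm[i][i] for i in range(k))
    overall = diag / n
    rows = [sum(cm[i]) for i in range(k)]                      # map totals
    cols = [sum(cm[i][j] for i in range(k)) for j in range(k)]  # reference totals
    users = [cm[i][i] / rows[i] for i in range(k)]
    producers = [cm[j][j] / cols[j] for j in range(k)]
    chance = sum(rows[i] * cols[i] for i in range(k)) / n ** 2
    kappa = (overall - chance) / (1 - chance)
    return overall, users, producers, kappa
```

    For a balanced two-class matrix with 90 of 100 samples on the diagonal, overall accuracy is 0.9 and kappa is 0.8.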

  18. Self-similar aftershock rates.

    PubMed

    Davidsen, Jörn; Baiesi, Marco

    2016-08-01

    In many important systems exhibiting crackling noise-an intermittent avalanchelike relaxation response with power-law and, thus, self-similar distributed event sizes-the "laws" for the rate of activity after large events are not consistent with the overall self-similar behavior expected on theoretical grounds. This is particularly true for the case of seismicity, and a satisfying solution to this paradox has remained outstanding. Here, we propose a generalized description of the aftershock rates which is both self-similar and consistent with all other known self-similar features. Comparing our theoretical predictions with high-resolution earthquake data from Southern California we find excellent agreement, providing particularly clear evidence for a unified description of aftershocks and foreshocks. This may offer an improved framework for time-dependent seismic hazard assessment and earthquake forecasting. PMID:27627324

  19. Earthquake detection through computationally efficient similarity search

    PubMed Central

    Yoon, Clara E.; O’Reilly, Ossian; Bergen, Karianne J.; Beroza, Gregory C.

    2015-01-01

    Seismology is experiencing rapid growth in the quantity of data, which has outpaced the development of processing algorithms. Earthquake detection—identification of seismic events in continuous data—is a fundamental operation for observational seismology. We developed an efficient method to detect earthquakes using waveform similarity that overcomes the disadvantages of existing detection methods. Our method, called Fingerprint And Similarity Thresholding (FAST), can analyze a week of continuous seismic waveform data in less than 2 hours, or 140 times faster than autocorrelation. FAST adapts a data mining algorithm, originally designed to identify similar audio clips within large databases; it first creates compact “fingerprints” of waveforms by extracting key discriminative features, then groups similar fingerprints together within a database to facilitate fast, scalable search for similar fingerprint pairs, and finally generates a list of earthquake detections. FAST detected most (21 of 24) cataloged earthquakes and 68 uncataloged earthquakes in 1 week of continuous data from a station located near the Calaveras Fault in central California, achieving detection performance comparable to that of autocorrelation, with some additional false detections. FAST is expected to realize its full potential when applied to extremely long duration data sets over a distributed network of seismic stations. The widespread application of FAST has the potential to aid in the discovery of unexpected seismic signals, improve seismic monitoring, and promote a greater understanding of a variety of earthquake processes. PMID:26665176
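
    The grouping step of FAST-style search can be illustrated with a generic locality-sensitive hashing sketch: fingerprints that collide in any hash table become candidate similar pairs, so only those pairs need a full comparison. This is not the published FAST code; the bit-sampling hash and all parameter values are assumptions made for illustration.

```python
import itertools
import random

def lsh_candidate_pairs(fingerprints, n_tables=4, bits_per_hash=3, seed=0):
    """Group binary fingerprints into hash buckets and return the
    candidate pairs (index tuples) that collide in at least one table.

    fingerprints -- list of equal-length 0/1 tuples
    """
    rng = random.Random(seed)
    width = len(fingerprints[0])
    pairs = set()
    for _ in range(n_tables):
        # Each table hashes a random subset of bit positions.
        positions = rng.sample(range(width), bits_per_hash)
        buckets = {}
        for idx, fp in enumerate(fingerprints):
            key = tuple(fp[p] for p in positions)
            buckets.setdefault(key, []).append(idx)
        for members in buckets.values():
            pairs.update(itertools.combinations(members, 2))
    return pairs
```

    Identical fingerprints collide in every table, while fingerprints that disagree everywhere never share a bucket, so the candidate list stays small.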

  1. Representation is representation of similarities.

    PubMed

    Edelman, S

    1998-08-01

    Advanced perceptual systems are faced with the problem of securing a principled (ideally, veridical) relationship between the world and its internal representation. I propose a unified approach to visual representation, addressing the need for superordinate and basic-level categorization and for the identification of specific instances of familiar categories. According to the proposed theory, a shape is represented internally by the responses of a small number of tuned modules, each broadly selective for some reference shape, whose similarity to the stimulus it measures. This amounts to embedding the stimulus in a low-dimensional proximal shape space spanned by the outputs of the active modules. This shape space supports representations of distal shape similarities that are veridical as Shepard's (1968) second-order isomorphisms (i.e., correspondence between distal and proximal similarities among shapes, rather than between distal shapes and their proximal representations). Representation in terms of similarities to reference shapes supports processing (e.g., discrimination) of shapes that are radically different from the reference ones, without the need for the computationally problematic decomposition into parts required by other theories. Furthermore, a general expression for similarity between two stimuli, based on comparisons to reference shapes, can be used to derive models of perceived similarity ranging from continuous, symmetric, and hierarchical ones, as in multidimensional scaling (Shepard 1980), to discrete and nonhierarchical ones, as in the general contrast models (Shepard & Arabie 1979; Tversky 1977). PMID:10097019

  2. Noise limitations on monopulse accuracy in a multibeam antenna

    NASA Astrophysics Data System (ADS)

    Loraine, J.; Wallington, J. R.

    A multibeam system allowing target tracking using monopulse processing switched from beamset to beamset is considered. Attention is given to the accuracy of target angular position estimation. An analytical method is used to establish performance limits under low SNR conditions for a multibeam system. It is shown that, in order to achieve accuracies comparable to those of conventional monopulse systems, much higher SNRs are needed.

  3. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
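
    The stated accuracy definition, absolute mean error plus a sigma multiple of the residuals, is straightforward to compute. A minimal sketch with the function name assumed:

```python
import statistics

def magnetometer_accuracy(errors, n_sigma=3):
    """Accuracy as defined above for quiet conditions: absolute mean
    of the residuals plus n_sigma sample standard deviations.

    errors -- measured-minus-truth residuals in nT
    """
    return abs(statistics.mean(errors)) + n_sigma * statistics.stdev(errors)
```

    For storm conditions the same function would be called with n_sigma=2, matching the two definitions quoted above.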

  4. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  5. Accuracy Limitations in Long-Trace Profilometry

    SciTech Connect

    Takacs, Peter Z.; Qian Shinan

    2004-05-12

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface 'macroroughness' and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface 'macroroughness' and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  6. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  7. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.
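
    Step (3) above, classification accuracies with confidence limits, can be illustrated with a standard binomial interval. The Wilson score interval shown here is one common choice for this purpose, not necessarily the method used in the study:

```python
import math

def wilson_interval(correct, total, z=1.96):
    """Wilson score confidence interval for a classification accuracy
    estimated from `correct` successes in `total` verification samples.
    z=1.96 gives an approximate 95% interval."""
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total
                                   + z ** 2 / (4 * total ** 2))
    return centre - half, centre + half
```

    For 90 correct out of 100 verification samples, the 95% interval is roughly (0.83, 0.94), noticeably asymmetric around the point estimate of 0.9.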

  8. Limits on the Accuracy of Linking. Research Report. ETS RR-10-22

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2010-01-01

    Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…

  9. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is of the order of 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.

  10. Research on similarity measurement for texture image retrieval.

    PubMed

    Zhu, Zhengli; Zhao, Chunxia; Hou, Yingkun

    2012-01-01

    A complete texture image retrieval system includes two techniques: texture feature extraction and similarity measurement. Specifically, similarity measurement is a key problem for texture image retrieval study. In this paper, we present an effective similarity measurement formula. The MIT vision texture database, the Brodatz texture database, and the Outex texture database were used to verify the retrieval performance of the proposed similarity measurement method. Dual-tree complex wavelet transform and nonsubsampled contourlet transform were used to extract texture features. Experimental results show that the proposed similarity measurement method achieves better retrieval performance than some existing similarity measurement methods. PMID:23049785

  11. Quantifying Similarity in Seismic Polarizations

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.; Jones, J. P.; Caffagni, E.

    2015-12-01

    Measuring similarity in seismic attributes can help identify tremor, low S/N signals, and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in signal-to-noise (S/N) ratio. Using records of the Mw=8.3 Sea of Okhotsk earthquake from CNSN broadband sensors in British Columbia and Yukon Territory, Canada, and vertical borehole array data from a monitoring experiment at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Because histogram distance metrics are bounded by [0, 1], clustering allows empirical time-frequency separation of seismic phase arrivals on single-station three-component records. Array processing for automatic seismic phase classification may be possible using subspace clustering of polarization similarity, but efficient algorithms are required to reduce the dimensionality.
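
    A histogram distance bounded by [0, 1], of the kind the clustering step relies on, can be illustrated with the histogram-intersection distance. This is one standard bounded metric from the image-processing literature, not necessarily the weighted variant the authors used:

```python
def histogram_distance(p, q):
    """Bounded [0, 1] distance between two histograms, based on
    histogram intersection after normalization: 0 for identical
    shapes, 1 for non-overlapping ones."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    return 1.0 - sum(min(a, b) for a, b in zip(p, q))
```

    Because the result always lies in [0, 1], distances from different window pairs are directly comparable, which is what makes clustering on them meaningful.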

  12. Self-similar mitochondrial DNA.

    PubMed

    Oiwa, Nestor N; Glazier, James A

    2004-01-01

    We show that repeated sequences, like palindromes (local repetitions) and homologies between two different nucleotide sequences (motifs along the genome), compose a self-similar (fractal) pattern in mitochondrial DNA. This self-similarity comes from the looplike structures distributed along the genome. The looplike structures generate scaling laws in a pseudorandom DNA walk constructed from the sequence, called a Lévy flight. We measure the scaling laws from the generalized fractal dimension and singularity spectrum for mitochondrial DNA walks for 35 different species. In particular, we report characteristic loop distributions for mammal mitochondrial genomes. PMID:15371639
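
    The DNA-walk construction underlying this kind of analysis can be sketched with the standard purine/pyrimidine rule; the exact mapping used in the paper may differ, so treat the step assignment as an assumption:

```python
def dna_walk(sequence):
    """Cumulative DNA walk over a nucleotide sequence: step +1 for
    purines (A, G) and -1 for pyrimidines (C, T)."""
    steps = {'A': 1, 'G': 1, 'C': -1, 'T': -1}
    walk, pos = [], 0
    for base in sequence.upper():
        pos += steps[base]
        walk.append(pos)
    return walk
```

    Scaling properties (such as those of a Lévy flight) are then measured on this one-dimensional trajectory rather than on the raw symbol sequence.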

  13. Gait Recognition Using Image Self-Similarity

    NASA Astrophysics Data System (ADS)

    BenAbdelkader, Chiraz; Cutler, Ross G.; Davis, Larry S.

    2004-12-01

    Gait is one of the few biometrics that can be measured at a distance, and is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features which at once have sufficient discrimination power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, whereby a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
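
    The central object of the method, an image self-similarity plot, can be sketched directly. The similarity used here (negative mean absolute pixel difference, so identical frames score highest at 0) is an assumption made for illustration, not the paper's exact measure:

```python
def self_similarity(frames):
    """Pairwise self-similarity matrix of a frame sequence.

    frames -- list of equal-length flat pixel lists; entry [i][j] is
    the negative mean absolute difference between frames i and j.
    """
    n = len(frames)
    return [[-sum(abs(a - b) for a, b in zip(frames[i], frames[j]))
             / len(frames[i])
             for j in range(n)] for i in range(n)]
```

    For a periodic walking sequence, near-equal frames one gait cycle apart produce bright off-diagonal bands, which is the structure the recognition stage exploits.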

  14. Accuracy of pointing a binaural listening array.

    PubMed

    Letowski, T R; Ricard, G L; Kalb, J T; Mermagen, T J; Amrein, K M

    1997-12-01

    We measured the accuracy with which sounds heard over a binaural, end-fire array could be located when the angular separation of the array's two arms was varied. Each individual arm contained nine cardioid electret microphones, the responses of which were combined to produce a unidirectional, band-limited pattern of sensitivity. We assessed the desirable angular separation of these arms by measuring the accuracy with which listeners could point to the source of a target sound presented against high-level background noise. We employed array separations of 30 degrees, 45 degrees, and 60 degrees, and signal-to-noise ratios of +5, -5, and -15 dB. Pointing accuracy was best for a separation of 60 degrees; this performance was indistinguishable from pointing during unaided listening conditions. In addition, the processing of the array was modeled to depict the information that was available for localization. The model indicates that highly directional binaural arrays can be expected to support accurate localization of sources of sound only near the axis of the array. Wider enhanced listening angles may be possible if the forward coverage of the sensor system is made less directional and more similar to that of human listeners. PMID:9473975

  15. Accuracy test procedure for image evaluation techniques.

    PubMed

    Jones, R A

    1968-01-01

    A procedure has been developed to determine the accuracy of image evaluation techniques. In the procedure, a target having orthogonal test arrays is photographed with a high quality optical system. During the exposure, the target is subjected to horizontal linear image motion. The modulation transfer functions of the images in the horizontal and vertical directions are obtained using the evaluation technique. Since all other degradations are symmetrical, the quotient of the two modulation transfer functions represents the modulation transfer function of the experimentally induced linear image motion. In an accurate experiment, any discrepancy between the experimental determination and the true value is due to inaccuracy in the image evaluation technique. The procedure was used to test the Perkin-Elmer automated edge gradient analysis technique over the spatial frequency range of 0-200 c/m. This experiment demonstrated that the edge gradient technique is accurate over this region and that the testing procedure can be controlled with the desired accuracy. Similarly, the test procedure can be used to determine the accuracy of other image evaluation techniques. PMID:20062421

  16. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
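
    The quantity being analyzed, the correlation between ridge-regression predictions and true breeding values, can be illustrated with a toy two-marker example. The closed-form two-by-two normal-equation solve, the function name, and the tiny data are all assumptions made for this sketch:

```python
def ridge_accuracy(X_train, y_train, X_test, y_test, lam=1e-6):
    """Estimate two marker effects by ridge regression, then return
    the genomic-selection accuracy: the correlation between predicted
    and true breeding values on the test individuals."""
    # Normal equations (X'X + lam*I) beta = X'y, solved in closed form.
    a = sum(x[0] * x[0] for x in X_train) + lam
    b = sum(x[0] * x[1] for x in X_train)
    d = sum(x[1] * x[1] for x in X_train) + lam
    r0 = sum(x[0] * y for x, y in zip(X_train, y_train))
    r1 = sum(x[1] * y for x, y in zip(X_train, y_train))
    det = a * d - b * b
    beta = ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)
    y_hat = [x[0] * beta[0] + x[1] * beta[1] for x in X_test]
    # Pearson correlation between predicted and true values.
    n = len(y_test)
    mh, mt = sum(y_hat) / n, sum(y_test) / n
    cov = sum((h - mh) * (t - mt) for h, t in zip(y_hat, y_test))
    vh = sum((h - mh) ** 2 for h in y_hat) ** 0.5
    vt = sum((t - mt) ** 2 for t in y_test) ** 0.5
    return cov / (vh * vt)
```

    With noise-free phenotypes generated from known marker effects, the accuracy approaches 1; real data with imperfect LD between QTLs and markers gives lower values, which is what the theoretical expressions in the paper quantify.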

  17. Accuracy of 25-hydroxyvitamin D assays: confronting the issues.

    PubMed

    Carter, Graham D

    2011-01-01

    Measurement of 25-hydroxyvitamin D (25-OHD) is widely used for assessing vitamin D status. There has been a dramatic increase in 25-OHD requests over recent years prompting many laboratories to consider the use of automated immunoassays. To achieve higher throughput, these methods have abandoned the traditional solvent extraction of samples and are therefore more prone to non-specific interference. The Vitamin D External Quality Assessment Scheme (DEQAS) has revealed method-related differences in 25-OHD results, raising concerns about the comparability and accuracy of different assays. This paper highlights some of the pre-analytical, analytical and post-analytical issues which may influence the accuracy of 25-OHD assays and interpretation of results. Recent attention has focused on reconciling the relatively high results given by liquid chromatography-tandem mass spectrometry (LC-MS/MS) to those of the DiaSorin radioimmunoassay (RIA) on which clinical decision points have previously been based. Data is presented on 20 DEQAS samples which were analysed by an LC-MS/MS assay developed as a candidate reference measurement procedure by the US National Institute of Standards and Technology (NIST). The NIST results were on average 11.2% lower than those given by routine LC-MS/MS methods. If confirmed, these results suggest that most routine LC-MS/MS assays are perhaps overestimating 25-OHD by failing to resolve a molecule having the same mass as 25-OHD(3) and a similar fragmentation pattern. All 25-OHD assays should be monitored by a proficiency testing scheme and the results made available to clinicians and editors of scientific journals. PMID:20795940

  18. Investigations of dipole localization accuracy in MEG using the bootstrap.

    PubMed

    Darvas, F; Rautiainen, M; Pantazis, D; Baillet, S; Benali, H; Mosher, J C; Garnero, L; Leahy, R M

    2005-04-01

    We describe the use of the nonparametric bootstrap to investigate the accuracy of current dipole localization from magnetoencephalography (MEG) studies of event-related neural activity. The bootstrap is well suited to the analysis of event-related MEG data since the experiments are repeated tens or even hundreds of times and averaged to achieve acceptable signal-to-noise ratios (SNRs). The set of repetitions or epochs can be viewed as a set of independent realizations of the brain's response to the experiment. Bootstrap resamples can be generated by sampling with replacement from these epochs and averaging. In this study, we applied the bootstrap resampling technique to somatotopic experimental and simulated MEG data. Four fingers of the right and left hand of a healthy subject were electrically stimulated, and about 400 trials per stimulation were recorded and averaged in order to measure the somatotopic mapping of the fingers in the S1 area of the brain. Based on single-trial recordings for each finger we performed 5000 bootstrap resamples. We reconstructed dipoles from these resampled averages using the Recursively Applied and Projected (RAP)-MUSIC source localization algorithm. We also performed a simulation for two dipolar sources with overlapping time courses embedded in realistic background brain activity generated using the prestimulus segments of the somatotopic data. To find correspondences between multiple sources in each bootstrap sample, dipoles with similar time series and forward fields were assumed to represent the same source. These dipoles were then clustered by a Gaussian Mixture Model (GMM) clustering algorithm using their combined normalized time series and topographies as feature vectors. The mean and standard deviation of the dipole position and the dipole time series in each cluster were computed to provide estimates of the accuracy of the reconstructed source locations and time series. PMID:15784414
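
    The resampling scheme described above (draw epochs with replacement, then average each draw) can be sketched as follows; this is a generic single-channel toy, with names and parameters assumed, not the study's pipeline:

```python
import random

def bootstrap_averages(epochs, n_resamples=1000, seed=0):
    """Bootstrap resampling of event-related epochs: draw epochs with
    replacement and average them, giving one resampled evoked
    response per resample.

    epochs -- list of per-trial time series (equal-length lists)
    """
    rng = random.Random(seed)
    n, t = len(epochs), len(epochs[0])
    resamples = []
    for _ in range(n_resamples):
        picks = rng.choices(epochs, k=n)
        resamples.append([sum(e[i] for e in picks) / n for i in range(t)])
    return resamples
```

    Source localization is then run on each resampled average, and the spread of the resulting dipole parameters estimates localization accuracy.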

  19. What Difference Reveals about Similarity

    ERIC Educational Resources Information Center

    Sagi, Eyal; Gentner, Dedre; Lovett, Andrew

    2012-01-01

    Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe "how" two images differ--that is, to state a difference between the two images. Following structure-mapping theory, we propose that this dissociation arises…

  20. Similarity Measures for Comparing Biclusterings.

    PubMed

    Horta, Danilo; Campello, Ricardo J G B

    2014-01-01

    The comparison of ordinary partitions of a set of objects is well established in the clustering literature, which includes several studies on the properties of similarity measures for comparing partitions. However, similarity measures for clusterings are not readily applicable to biclusterings, since each bicluster is a tuple of two sets (of rows and columns), whereas a cluster is only a single set (of rows). Some biclustering similarity measures have been defined as minor contributions in papers that primarily report proposals and evaluations of biclustering algorithms or comparative analyses of biclustering algorithms. As a consequence, some desirable properties of such measures have been overlooked in the literature. We review 14 biclustering similarity measures. We define eight desirable properties of a biclustering measure, discuss their importance, and prove which properties each of the reviewed measures has. We show examples drawn from, and inspired by, important studies in which several biclustering measures convey misleading evaluations due to the absence of one or more of the discussed properties. We also advocate the use of a more general comparison approach that is based on the idea of transforming the original problem of comparing biclusterings into an equivalent problem of comparing clustering partitions with overlapping clusters. PMID:26356865
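
    To make the row/column structure concrete: because a bicluster selects both rows and columns, a natural comparison expands each bicluster into its set of (row, column) cells. The sketch below shows a simple cell-based Jaccard measure, a common building block for such comparisons (it is illustrative and not necessarily one of the 14 measures reviewed):

```python
def bicluster_cells(bicluster):
    """Expand a bicluster (row set, column set) into its set of (row, col) cells."""
    rows, cols = bicluster
    return {(r, c) for r in rows for c in cols}

def jaccard(b1, b2):
    """Jaccard similarity between two biclusters, computed on their cell sets."""
    s1, s2 = bicluster_cells(b1), bicluster_cells(b2)
    union = len(s1 | s2)
    return len(s1 & s2) / union if union else 1.0

b1 = ({0, 1}, {0, 1})   # rows {0,1} x cols {0,1} -> 4 cells
b2 = ({1, 2}, {0, 1})   # shares 2 of those cells
print(round(jaccard(b1, b2), 3))  # 0.333 (2 shared cells / 6 total cells)
```

Extending a cell-level measure like this to whole biclusterings (sets of biclusters) is exactly where the properties discussed in the paper start to matter.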

  1. Mean Similarity Analysis Version 6

    EPA Science Inventory

    MEANSIM6 contains software for Mean Similarity Analysis, a method of assessing the strength of a classification of many objects (sites) into a relatively small number of groups. Classification strength is measured by the extent to which sites within the same groups are more simil...

  2. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Xu, Lu; Yang, Cheng-Hua; Wang, Qiang; Liu, Yue-Hao; Zhao, Yuan

    2015-12-01

    The range accuracy of three-dimensional (3D) ghost imaging is derived. Based on the derived range accuracy equation, the relationship between the slicing number and the range accuracy is analyzed and an optimum slicing number (OSN) is determined. According to the OSN, an improved 3D ghost imaging algorithm is proposed to increase the range accuracy. Experimental results indicate that the slicing number can affect the range accuracy significantly and the highest range accuracy can be achieved if the 3D ghost imaging system works with OSN. Project supported by the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 61108072).

  3. Comparing Science Achievement Constructs: Targeted and Achieved

    ERIC Educational Resources Information Center

    Ferrara, Steve; Duncan, Teresa

    2011-01-01

    This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…

  4. Current Concept of Geometrical Accuracy

    NASA Astrophysics Data System (ADS)

    Görög, Augustín; Görögová, Ingrid

    2014-06-01

    Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts using contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing and to provide an overview of the basic concepts and definitions, thereby preventing the use of outdated and invalidated terms. The knowledge presented in the contribution provides a new perspective on measurements evaluated according to the current standards.

  5. Varieties of Achievement Motivation.

    ERIC Educational Resources Information Center

    Kukla, Andre; Scher, Hal

    1986-01-01

    A recent article by Nicholls on achievement motivation is criticized on three points: (1) definitions of achievement motives are ambiguous; (2) behavioral consequences predicted do not follow from explicit theoretical assumptions; and (3) Nicholls's account of the relation between his theory and other achievement theories is factually incorrect.…

  6. Motivation and School Achievement.

    ERIC Educational Resources Information Center

    Maehr, Martin L.; Archer, Jennifer

    Addressing the question, "What can be done to promote school achievement?", this paper summarizes the literature on motivation relating to classroom achievement and school effectiveness. Particular attention is given to how values, ideology, and various cultural patterns impinge on classroom performance and serve to enhance motivation to achieve.…

  7. Mobility and Reading Achievement.

    ERIC Educational Resources Information Center

    Waters, Theresa Z.

    A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…

  8. PASS and Reading Achievement.

    ERIC Educational Resources Information Center

    Kirby, John R.

    Two studies examined the effectiveness of the PASS (Planning, Attention, Simultaneous, and Successive cognitive processes) theory of intelligence in predicting reading achievement scores of normally achieving children and distinguishing children with reading disabilities from normally achieving children. The first study dealt with predicting…

  9. Does language about similarity play a role in fostering similarity comparison in children?

    PubMed Central

    Özçalışkan, Şeyda; Goldin-Meadow, Susan; Gentner, Dedre; Mylander, Carolyn

    2009-01-01

    Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input—deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (POINT TO CAT+POINT TO TIGER) resembling those conveyed by 40 hearing children in early gesture+speech combinations (cat+POINT TO TIGER). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations. PMID:19524220

  10. Prediction accuracy of a sample-size estimation method for ROC studies

    PubMed Central

    Chakraborty, Dev P.

    2010-01-01

    Rationale and Objectives Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method. Materials and Methods Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in 2 modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases) and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the range 75% to 90% was used to assess the HB method. Results For random-case generalization the HB-method prediction-accuracy was reasonable, ~ 50% for 5 readers in the pilot study. Prediction-accuracy was generally higher under low reader variability conditions (LH) than under high reader variability conditions (HL). Under ideal conditions (many readers in the pilot study) the DBM-MRMC based HB method overestimated the number of cases. The overestimates could be explained by the observed large variability of the DBM-MRMC modality-reader variance estimates, particularly when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy > 50% under all generalization conditions, but the benefit was lesser for HL where prediction accuracy was ~ 36% for 15
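
    The prediction-accuracy index defined in this abstract is straightforward to compute once the realized ("true") power corresponding to each sample-size prediction is known from simulation; a minimal sketch (the function name and sample values are illustrative, not from the study):

```python
import numpy as np

def prediction_accuracy_index(true_powers, lo=0.75, hi=0.90):
    """Fraction of predictions whose realized power lies in [lo, hi].

    This mirrors the index used to assess the Hillis-Berbaum method:
    a prediction 'succeeds' if following it yields power in the band.
    """
    p = np.asarray(true_powers)
    return float(np.mean((p >= lo) & (p <= hi)))

# Illustrative realized powers from five simulated pilot-study predictions
powers = [0.78, 0.85, 0.92, 0.74, 0.80]
print(prediction_accuracy_index(powers))  # 0.6 (3 of 5 fall in [0.75, 0.90])
```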

  11. Modulating the phonological similarity effect: the contribution of interlist similarity and lexicality.

    PubMed

    Karlsen, Paul Johan; Lian, Arild

    2005-04-01

    The classical phonological similarity effect (PSE) was studied with words and nonwords in two immediate serial recall (ISR) tasks. The relative contributions of intralist and interlist interference were compared, and differential effects on item and order memory were observed. PSE occurred with words and was reversed with nonwords. In addition, PSE was modulated by interlist similarity, which enhanced recall of rhyme items and impaired recall of distinct items. Finally, interlist similarity reduced item recall of words, whereas it improved serial recall of nonwords. The latter finding rules out the hypothesis that the reverse PSE for nonwords is due to interlist interference. It is concluded that two opposing effects of phonological intralist similarity cause the interaction between PSE and lexicality in ISR. With words, the positive effect on item recall is usually masked by a much more disruptive effect on position accuracy. With nonwords, however, the positive effect often masks the negative one. These findings are discussed in relation to current models of verbal short-term memory. PMID:16156188

  12. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients
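
    The dose-verification comparison described above amounts to computing, per detector, the percentage deviation of measured from predicted dose and flagging deviations beyond a tolerance; a minimal sketch with made-up numbers (not the study's data):

```python
def percent_errors(measured, predicted):
    """Percentage deviation of each measured dose from its predicted dose."""
    return [100.0 * (m - p) / p for m, p in zip(measured, predicted)]

def count_exceeding(errors, tol=3.0):
    """How many detectors deviate by more than `tol` percent (either sign)."""
    return sum(1 for e in errors if abs(e) > tol)

measured = [83.0, 90.0, 100.0]   # cGy, illustrative values only
predicted = [85.0, 88.0, 96.0]
errs = percent_errors(measured, predicted)
print(count_exceeding(errs))     # 1 (only the +4.2% deviation exceeds 3%)
```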

  14. ACCURACY AND TRACE ORGANIC ANALYSES

    EPA Science Inventory

    Accuracy in trace organic analysis presents a formidable problem to the residue chemist. He is confronted with the analysis of a large number and variety of compounds present in a multiplicity of substrates at levels as low as parts-per-trillion. At these levels, collection, isol...

  15. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052

  16. Psychology Textbooks: Examining Their Accuracy

    ERIC Educational Resources Information Center

    Steuer, Faye B.; Ham, K. Whitfield, II

    2008-01-01

    Sales figures and recollections of psychologists indicate textbooks play a central role in psychology students' education, yet instructors typically must select texts under time pressure and with incomplete information. Although selection aids are available, none adequately address the accuracy of texts. We describe a technique for sampling…

  17. A Survey of Reading Achievement in a Secondary School Population.

    ERIC Educational Resources Information Center

    Moodie, Allan G.

    Reading achievement of students from open-plan and traditional elementary classes was compared in three areas: speed and accuracy, vocabulary, and comprehension. The first evaluation (Grade 7) indicated that the mean score on the speed and accuracy scale was significantly lower for "open-area" students than for traditional pupils. Score…

  18. Teachers' Judgements of Students' Foreign-Language Achievement

    ERIC Educational Resources Information Center

    Zhu, Mingjing; Urhahne, Detlef

    2015-01-01

    Numerous studies have been conducted on the accuracy of teacher judgement in different educational areas such as mathematics, language arts and reading. Teacher judgement of students' foreign-language achievement, however, has been rarely investigated. The study aimed to examine the accuracy of teacher judgement of students' foreign-language…

  19. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
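
    The row-to-row charge shifting described above must be clocked so the charge keeps pace with the satellite's image. A hedged back-of-the-envelope sketch (the numeric values are hypothetical, not from the paper): the required shift rate in rows per second is the satellite's apparent angular rate divided by the plate scale per pixel.

```python
def charge_shift_rate(angular_rate_arcsec_per_s, plate_scale_arcsec_per_px):
    """Rows per second at which to clock charge down the CCD columns
    so the accumulating charge tracks the satellite's optical image."""
    return angular_rate_arcsec_per_s / plate_scale_arcsec_per_px

# Hypothetical numbers: image drifts 15 arcsec/s, 1.5 arcsec per pixel
print(charge_shift_rate(15.0, 1.5))  # 10.0 rows/s
```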

  20. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  1. Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.

    PubMed

    Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D

    2016-07-01

    .22 (Bayes Cπ) to 0.25 (Bayes B). When preselecting SNPs based on GWAS results, the highest correlation (0.27) between WBSF and the genomic breeding value was achieved using the Bayesian LASSO model with 15,030 (3%) markers. Although this study used relatively few animals, the design of the segregating population ensured wide genetic variability for meat tenderness, which was important to achieve acceptable accuracy of genomic prediction. Although all models showed similar levels of prediction accuracy, some small advantages were observed with the Bayes B approach when higher numbers of markers were preselected based on their p-values resulting from a GWAS analysis. PMID:27482662

  2. Improving the Accuracy of Self-Corrected Mathematics Homework.

    ERIC Educational Resources Information Center

    Miller, Tracy L.; And Others

    1993-01-01

    A study investigated the effect of reward on the accuracy of sixth graders' self-corrected mathematics homework and monitored the effect of improved self-correction on mathematics homework achievement. There was a low baseline rate of student self-correction accuracy. Offering a reward for improved accuracy caused the rate of inaccuracy to decrease significantly.…

  3. Improving Homework Accuracy: Interdependent Group Contingencies and Randomized Components

    ERIC Educational Resources Information Center

    Reinhardt, Danielle; Theodore, Lea A.; Bray, Melissa A.; Kehle, Thomas J.

    2009-01-01

    Homework is an often employed teaching strategy that has strong positive effects on academic achievement across grade levels, content areas, and student ability levels. To maximize academic learning, accuracy of homework should be addressed. The present investigation employed a multiple-baseline design across academic behaviors to examine the…

  4. Mechanisms for similarity based cooperation

    NASA Astrophysics Data System (ADS)

    Traulsen, A.

    2008-06-01

    Cooperation based on similarity has been discussed since Richard Dawkins introduced the term “green beard” effect. In these models, individuals cooperate based on an arbitrary signal (or tag) such as the famous green beard. Here, two different models for such tag-based cooperation are analysed. As neutral drift is important in both models, a finite population framework is applied. The first model, which we term “cooperative tags”, considers a situation in which groups of cooperators are formed by some joint signal. Defectors adopting the signal and exploiting the group can lead to a breakdown of cooperation. In this case, conditions are derived under which the average abundance of the more cooperative strategy exceeds 50%. The second model considers a situation in which individuals start defecting towards others that are not similar to them. This situation is termed “defective tags”. It is shown that in this case, individuals using tags to cooperate exclusively with their own kind dominate over unconditional cooperators.
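
    The exploitation mechanism in the "cooperative tags" model can be illustrated with a one-round payoff calculation. This is a toy sketch under simple donation-game assumptions (cost c to give benefit b, agents defined here for illustration), not the paper's finite-population analysis:

```python
from itertools import combinations

def payoffs(agents, b=3.0, c=1.0):
    """Total payoff per agent when every pair interacts once.

    A 'tag-cooperator' pays cost c to give benefit b to any partner
    sharing its tag; a 'defector' never helps but can still receive.
    """
    out = [0.0] * len(agents)
    for i, j in combinations(range(len(agents)), 2):
        for actor, partner in ((i, j), (j, i)):
            a = agents[actor]
            if a['strategy'] == 'tag-cooperator' and a['tag'] == agents[partner]['tag']:
                out[actor] -= c        # actor pays the cost of helping
                out[partner] += b      # partner receives the benefit
    return out

agents = [{'tag': 'green', 'strategy': 'tag-cooperator'},  # genuine green beard
          {'tag': 'green', 'strategy': 'defector'},        # mimic wearing the tag
          {'tag': 'blue',  'strategy': 'tag-cooperator'}]
print(payoffs(agents))  # [-1.0, 3.0, 0.0]
```

The mimic bearing the cooperative signal earns the highest payoff, which is exactly the exploitation route by which defectors adopting the tag can break down cooperation.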

  5. Semantically enabled image similarity search

    NASA Astrophysics Data System (ADS)

    Casterline, May V.; Emerick, Timothy; Sadeghi, Kolia; Gosse, C. A.; Bartlett, Brent; Casey, Jason

    2015-05-01

    Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector or "embedding" space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally smoothed but information-limited content, while overhead imagery provides an information-rich but temporally limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
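
    Once heterogeneous inputs live in one embedding space, similarity search reduces to nearest-neighbor retrieval on vectors regardless of source modality. A minimal sketch using cosine similarity (the embeddings here are made-up two-dimensional stand-ins, not the paper's representations):

```python
import numpy as np

def cosine_search(query, embeddings, k=2):
    """Return indices of the k embeddings most similar to `query`
    by cosine similarity; modality-agnostic once everything shares
    the same embedding space."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q                      # cosine similarity to each row
    return np.argsort(-sims)[:k]      # indices of top-k matches

# Hypothetical fused embeddings: rows could come from imagery or GIS vectors
emb = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
print(cosine_search(np.array([1.0, 0.1]), emb))  # [0 1]
```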

  6. Diversity and similarity of comets

    NASA Technical Reports Server (NTRS)

    Delsemme, A. H.

    1987-01-01

    The evolution of comets from an early stage where they were all similar, to the later diversity is reviewed. The elemental abundances of all pristine comets are likely to be primitive, that is in solar abundance ratios for all elements including C, N, O, S, but with the exception of H (and assumedly He and Ne) that are severely depleted. The solid phase was originally in very fine grains, typically 0.1 micron or less, eventually sintered into larger grain clusters. The volatile phase contains H, C, N, O, and S molecules frozen in the pores of the grain clusters; cosmic ray plus solar irradiation changes the volatile to refractory ratio of the crust. Differences in dust tails, in plasma tails, in photometry between young and old comets, and in the variable carbon depletion of the gas phase seem to be induced by the decay processes.

  7. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head, and the operation of the sensor electronics. Tests on a development model show that the main aim of 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor's sensitivity to illumination-level variations, the shielding of the detector, and the test and calibration equipment.

  8. Parenting and adolescents' accuracy in perceiving parental values.

    PubMed

    Knafo, Ariel; Schwartz, Shalom H

    2003-01-01

    What determines adolescents' accuracy in perceiving parental values? The current study examined potential predictors including parental value communication, family value agreement, and parenting styles. In the study, 547 Israeli adolescents (aged 16 to 18) of diverse socioeconomic backgrounds participated with their parents. Adolescents reported the values they perceive their parents want them to hold. Parents reported their socialization values. Accuracy in perceiving parents' overall value system correlated positively with parents' actual and perceived value agreement and perceived parental warmth and responsiveness, but negatively with perceived value conflict, indifferent parenting, and autocratic parenting in all gender compositions of parent-child dyads. Other associations varied by dyad type. Findings were similar for predicting accuracy in perceiving two specific values: tradition and hedonism. The article discusses implications for the processes that underlie accurate perception, gender differences, and other potential influences on accuracy in value perception. PMID:12705575

  9. Heritability of Creative Achievement

    ERIC Educational Resources Information Center

    Piffer, Davide; Hur, Yoon-Mi

    2014-01-01

    Although creative achievement is a subject of much attention to lay people, the origins of individual differences in creative accomplishments remain poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…

  10. Confronting the Achievement Gap

    ERIC Educational Resources Information Center

    Gardner, David

    2007-01-01

    This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…

  11. States Address Achievement Gaps.

    ERIC Educational Resources Information Center

    Christie, Kathy

    2002-01-01

    Summarizes 2 state initiatives to address the achievement gap: North Carolina's report by the Advisory Commission on Raising Achievement and Closing Gaps, containing an 11-point strategy, and Kentucky's legislation putting in place 10 specific processes. The North Carolina report is available at www.dpi.state.nc.us.closingthegap; Kentucky's…

  12. Wechsler Individual Achievement Test.

    ERIC Educational Resources Information Center

    Taylor, Ronald L.

    1999-01-01

    This article describes the Wechsler Individual Achievement Test, a comprehensive measure of achievement for individuals in grades K-12. Eight subtests assess mathematics reasoning, spelling, reading comprehension, numerical operations, listening comprehension, oral expression, and written expression. Its administration, standardization,…

  13. Inverting the Achievement Pyramid

    ERIC Educational Resources Information Center

    White-Hood, Marian; Shindel, Melissa

    2006-01-01

    Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…

  14. Achievement Test Program.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.

    The Ohio Trade and Industrial Education Achievement Test battery is comprised of seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…

  15. General Achievement Trends: Maryland

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  16. General Achievement Trends: Arkansas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  17. General Achievement Trends: Idaho

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  18. General Achievement Trends: Nebraska

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  19. General Achievement Trends: Colorado

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  20. General Achievement Trends: Iowa

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  1. General Achievement Trends: Hawaii

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  2. General Achievement Trends: Kentucky

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  3. General Achievement Trends: Florida

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  4. General Achievement Trends: Texas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  5. General Achievement Trends: Oregon

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  6. General Achievement Trends: Virginia

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  7. Honoring Student Achievement

    ERIC Educational Resources Information Center

    Education Digest: Essential Readings Condensed for Quick Review, 2004

    2004-01-01

    Is the concept of "honor roll" obsolete? The honor roll has always been a way for schools to recognize the academic achievement of their students. But does it motivate students? In this article, several elementary school principals share their views about honoring student achievement. Among others, Virginia principal Nancy Moga said that students…

  8. Aiming at Achievement.

    ERIC Educational Resources Information Center

    Martinez, Paul

    The Raising Quality and Achievement Program is a 3-year initiative to support further education (FE) colleges in the United Kingdom in their drive to improve students' achievement and the quality of provision. The program offers the following: (1) quality information and advice; (2) onsite support for individual colleges; (3) help with…

  9. Achieving Perspective Transformation.

    ERIC Educational Resources Information Center

    Nowak, Jens

    Perspective transformation is a consciously achieved state in which the individual's perspective on life is transformed. The new perspective serves as a vantage point for life's actions and interactions, affecting the way life is lived. Three conditions are basic to achieving perspective transformation: (1) "feeling" experience, i.e., getting in…

  10. Achieving Public Schools

    ERIC Educational Resources Information Center

    Abowitz, Kathleen Knight

    2011-01-01

    Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…

  11. General Achievement Trends: Tennessee

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  12. Achievement-Based Resourcing.

    ERIC Educational Resources Information Center

    Fletcher, Mike; And Others

    1992-01-01

    This collection of seven articles examines achievement-based resourcing (ABR), the concept that the funding of educational institutions should be linked to their success in promoting student achievement, with a focus on the application of ABR to postsecondary education in the United Kingdom. The articles include: (1) "Introduction" (Mick…

  13. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost, easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) carrying a digital camera. Using the Photoscan software and a photogrammetric workflow (Structure From Motion algorithm), a DSM and an orthomosaic were produced. DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution can reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Finally, in this particular environment, the presence of very small water surfaces on the sand bank prevented any accuracy improvement when a finer image resolution was used.
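
    The vertical-accuracy figure quoted above is typically obtained by differencing the DSM against independent GNSS check points and reporting the bias and RMSE. A minimal sketch in Python, with made-up elevations (the function name and values are illustrative, not from the paper):

```python
import math

def vertical_accuracy(dsm_heights, gnss_heights):
    """Mean bias and RMSE (same units as input) of DSM elevations
    against GNSS check-point elevations."""
    diffs = [d - g for d, g in zip(dsm_heights, gnss_heights)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(e * e for e in diffs) / len(diffs))
    return bias, rmse

dsm  = [2.31, 2.27, 1.98, 2.45, 2.10]   # metres, hypothetical
gnss = [2.28, 2.30, 2.02, 2.40, 2.08]
bias, rmse = vertical_accuracy(dsm, gnss)
print(f"bias = {bias:+.3f} m, RMSE = {rmse:.3f} m")
```

    In a real campaign the check points must be independent of the GCPs used for georeferencing, otherwise the reported accuracy is optimistic.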

  14. Mathematical Thinking of Kindergarten Boys and Girls: Similar Achievement, Different Contributing Processes

    ERIC Educational Resources Information Center

    Klein, Pnina S.; Adi-Japha, Esther; Hakak-Benizri, Simcha

    2010-01-01

    The objective of this study was to examine gender differences in the relations between verbal, spatial, mathematics, and teacher-child mathematics interaction variables. Kindergarten children (N = 80) were videotaped playing games that require mathematical reasoning in the presence of their teachers. The children's mathematics, spatial, and verbal…

  15. The application of similar image retrieval in electronic commerce.

    PubMed

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To address these problems, we investigate the application of similar-image retrieval in electronic commerce. Aiming to improve the customer experience and to provide merchants with accurate advertising, we design a practical and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. The system provides a seamless connection between an information platform and an OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can provide accurate internet marketing for enterprises. Experiments show the effectiveness of the system. PMID:24883411

  16. The Application of Similar Image Retrieval in Electronic Commerce

    PubMed Central

    Hu, YuPing; Yin, Hua; Han, Dezhi; Yu, Fei

    2014-01-01

    Traditional online shopping platforms (OSPs), which search product information by keywords, face three problems: an indirect search mode, a large search space, and inaccurate search results. To address these problems, we investigate the application of similar-image retrieval in electronic commerce. Aiming to improve the customer experience and to provide merchants with accurate advertising, we design a practical and extensible electronic commerce application system comprising three subsystems: an image search display subsystem, an image search subsystem, and a product information collecting subsystem. The system provides a seamless connection between an information platform and an OSP, on which consumers can automatically and directly search for similar images based on pictures from the information platform. At the same time, it can provide accurate internet marketing for enterprises. Experiments show the effectiveness of the system. PMID:24883411

  17. COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy

    NASA Astrophysics Data System (ADS)

    Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun

    2014-09-01

    In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which includes several geostationary (GEO) satellites, time synchronization can be made highly accurate via microwave links from ground stations to the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to the other satellites in the system, such as the inclined geosynchronous (IGSO) and medium-Earth-orbit (MEO) satellites. It is shown that, because of this accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of the GPS. In addition, the COMPASS system can provide its full positioning, navigation, and time-dissemination services even without the ground link, making it much more robust and secure. We further show that time dissemination from the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy: 100 ps in time dissemination, corresponding to 3 cm in positioning accuracy. We also analyze two feasible synchronization plans, and give all special and general relativistic effects related to COMPASS clock frequency and time shifts. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.
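
    The quoted 100 ps / 3 cm pair follows directly from the speed of light: a clock-synchronization error maps to a pseudorange error as Δρ = c·Δt. A one-line check in Python:

```python
# Pseudorange error produced by a clock-synchronization error.
C = 299_792_458.0   # speed of light, m/s

def range_error(clock_error_s):
    """Ranging error (m) caused by a clock error (s)."""
    return C * clock_error_s

# 100 ps of time-dissemination error, as quoted for the COMPASS-GEO links:
print(f"{range_error(100e-12) * 100:.1f} cm")   # about 3 cm
```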

  18. Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.

    PubMed

    Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan

    2015-11-01

    Wireless sensor networks are engaged in various data-gathering applications. The major bottleneck in wireless data-gathering systems is the finite energy of the sensor nodes; by conserving on-board energy, the life span of a wireless sensor network can be substantially extended. Since data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction is an effective way to conserve nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce data communications: forming clusters of nodes with similar data exploits spatial correlation among neighboring sensors, while sending only a subset of the data and estimating the rest from that subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. Intra-cluster communication is reduced using an adaptive normalized-least-mean-squares-based dual prediction framework, and the cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both intra-cluster and inter-cluster communications while preserving the accuracy of the collected data. PMID:26343165
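
    The dual-prediction idea mentioned above can be sketched as follows: the sensor and the cluster head run identical predictors, and a reading is transmitted only when the prediction error exceeds a tolerance; the normalized-LMS weight update is applied on transmitted samples, which both ends know, so the two predictors stay in lockstep. This Python sketch is illustrative only; the filter order, step size, and tolerance are assumptions, not values from the paper:

```python
def nlms_dual_prediction(samples, order=3, mu=1.0, tol=0.5, eps=1e-6):
    """Return how many of `samples` the sender would actually transmit."""
    w = [0.0] * order          # filter weights, identical at both ends
    history = [0.0] * order    # most recent reconstructed samples, newest first
    sent = 0
    for x in samples:
        x_hat = sum(wi * hi for wi, hi in zip(w, history))
        err = x - x_hat
        if abs(err) > tol:
            # Transmit the true sample; both ends now know it, so both
            # can apply the same normalized-LMS weight update.
            sent += 1
            norm = eps + sum(hi * hi for hi in history)
            w = [wi + mu * err * hi / norm for wi, hi in zip(w, history)]
            value = x
        else:
            value = x_hat      # receiver keeps its own prediction
        history = [value] + history[:-1]
    return sent

readings = [20.0, 20.1, 20.2, 20.2, 20.3, 25.0, 25.1, 25.1]
print(nlms_dual_prediction(readings), "of", len(readings), "samples transmitted")
```

    Smooth sensor readings let the predictor absorb most samples; abrupt changes (like the jump to 25.0 above) force transmissions.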

  19. [Achievement of therapeutic objectives].

    PubMed

    Mantilla, Teresa

    2014-07-01

    Therapeutic objectives for patients with atherogenic dyslipidemia are achieved by improving patient compliance and adherence. Clinical practice guidelines address the importance of treatment compliance for achieving objectives. The combination of a fixed dose of pravastatin and fenofibrate increases adherence by simplifying the drug regimen and reducing the number of daily doses. Good tolerance, the cost of the combination, and the possibility of adjusting administration to the patient's lifestyle help achieve the objectives for these patients at high cardiovascular risk. PMID:25043543

  20. SSVEP extraction based on the similarity of background EEG.

    PubMed

    Wu, Zhenghua

    2014-01-01

    Steady-state Visual Evoked Potential (SSVEP) outperforms the other types of ERPs for Brain-Computer Interfaces (BCIs), and thus it is widely employed. To apply SSVEP-based BCI in real-life situations, it is important to improve the accuracy and transfer rate of the system, and many SSVEP extraction methods have been proposed toward this goal. All of these methods are based directly on properties of the SSVEP itself, such as power and phase. In this study, we first filter the target frequencies out of the original EEG to obtain a new signal and then compute the similarity between the original EEG and this new signal. Based on this similarity, the SSVEP in the original EEG can be identified. This method is referred to as SOB (Similarity of Background). The SOB method is used to detect SSVEP in 1 s and 3 s EEG segments, and its detection accuracy is compared with that of the widely used Power Spectrum (PS) method and the Canonical Coefficient (CC) method. The comparison shows that the SOB method achieves higher accuracy than the PS and CC methods when detecting SSVEP in short signal segments. PMID:24709951

  1. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  2. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained considerable interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called the Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware. PMID:24524158
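
    The core idea of referential compression, storing the input as matches against a reference plus literals for the differences, can be sketched in a few lines of Python. This toy encoder is only an illustration; FRESCO's actual entry format and match-finding index are more sophisticated:

```python
def ref_compress(reference, target, min_match=4):
    """Encode `target` as (offset, length) matches against `reference`,
    falling back to single-character literals."""
    entries, i = [], 0
    while i < len(target):
        best_len, best_off = 0, -1
        # Naive longest-match search (a real tool uses an index structure).
        for off in range(len(reference)):
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, off
        if best_len >= min_match:
            entries.append(("match", best_off, best_len))
            i += best_len
        else:
            entries.append(("literal", target[i]))
            i += 1
    return entries

def ref_decompress(reference, entries):
    out = []
    for e in entries:
        out.append(reference[e[1]:e[1] + e[2]] if e[0] == "match" else e[1])
    return "".join(out)

ref = "ACGTACGTTTGACCA"
tgt = "ACGTACGATTGACCA"   # one substitution relative to the reference
enc = ref_compress(ref, tgt)
assert ref_decompress(ref, enc) == tgt
print(len(enc), "entries instead of", len(tgt), "characters")
```

    For highly similar sequences the entry list stays tiny regardless of sequence length, which is what makes ratios like 4,000:1 possible.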

  3. Predicting Achievement and Motivation.

    ERIC Educational Resources Information Center

    Uguroglu, Margaret; Walberg, Herbert J.

    1986-01-01

    Motivation and nine other factors were measured for 970 students in grades five through eight in a study of factors predicting achievement and predicting motivation. Results are discussed. (Author/MT)

  4. Attractiveness and School Achievement

    ERIC Educational Resources Information Center

    Salvia, John; And Others

    1977-01-01

    The purpose of this study was to ascertain the relationship between rated attractiveness and two measures of school performance. Attractive children received significantly higher report cards and, to some degree, higher achievement test scores than their unattractive peers. (Author)

  5. Student Achievement and Motivation

    ERIC Educational Resources Information Center

    Flammer, Gordon H.; Mecham, Robert C.

    1974-01-01

    Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)

  6. Adapting Document Similarity Measures for Ligand-Based Virtual Screening.

    PubMed

    Himmat, Mubarak; Salim, Naomie; Al-Dabbagh, Mohammed Mumtaz; Saeed, Faisal; Ahmed, Ali

    2016-01-01

    Quantifying the similarity of molecules is considered one of the major tasks in virtual screening. Many similarity measures have been proposed for this purpose, some of which were derived from document and text retrieval, as similarity methods that perform well in document retrieval can often achieve good results in virtual screening as well. In this work, we propose a similarity measure for ligand-based virtual screening that is derived from a text-processing similarity measure and adapted to the virtual screening setting; we call this measure the Adapted Similarity Measure of Text Processing (ASMTP). To evaluate the proposed ASMTP we conducted several experiments on two benchmark datasets: the Maximum Unbiased Validation (MUV) dataset and the MDL Drug Data Report (MDDR). In each experiment, 10 reference structures were chosen randomly from each class as queries and evaluated by recall at cut-offs of 1% and 5%. The overall results are compared with several similarity methods, including the Tanimoto coefficient, which is considered the conventional standard similarity coefficient for fingerprint-based similarity calculations. The results show that ASMTP improves the performance of ligand-based virtual screening, outperforming the Tanimoto coefficient and the other methods. PMID:27089312
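
    The Tanimoto coefficient used as the baseline above is simple to state: for two binary fingerprints viewed as sets of on-bits, it is the size of the intersection over the size of the union. A sketch with hypothetical fingerprints:

```python
def tanimoto(fp_a, fp_b):
    """|A ∩ B| / |A ∪ B| for two fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

query  = {1, 4, 9, 12, 20}    # hypothetical query fingerprint
cand_1 = {1, 4, 9, 12, 21}    # shares 4 of 6 distinct bits with the query
cand_2 = {2, 5, 7}            # no bits in common

print(tanimoto(query, cand_1))   # 4/6, a close analogue
print(tanimoto(query, cand_2))   # 0.0, no similarity
```

    Ranking a library by this score against a query structure and inspecting the recall among the top 1% or 5% is exactly the evaluation protocol the abstract describes.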

  7. Tracking accuracy for Leosat-Geosat laser links

    NASA Astrophysics Data System (ADS)

    Seshamani, Ramani; Rao, D. V. B.; Alex, T. K.; Jain, Y. K.

    1989-06-01

    A tracking accuracy of 1 microrad is required to achieve Leosat-Geosat laser communication links, entailing exceptionally accurate alignment between transmitter and receiver as well as point-ahead capability. The pointing and acquisition procedure requires the two optical system/telescope units to be pointed toward each other with an attitude accuracy smaller than the position uncertainty; a spatial-scan operation by the Leosat's narrow beam, and subsequently by the Geosat's, would have to be conducted before acquisition is completed, allowing switching from acquisition to tracking mode.

  8. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-01-01

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes was in relatively good condition. All steel tapes except one were accurate to ±0.01 ft per 100 ft over their entire length; the one exception, which had obvious damage in the first hundred feet, was marginally outside the ±0.01 ft per 100 ft accuracy, by 0.001 ft. The sample of in-service groundwater-level electric tapes was in a range of conditions, from like new, to cosmetically damaged, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft; except for the nonfunctional tape, they were accurate to about ±0.03 ft per 100 ft. A comparison of new and in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as-is and achieve USGS accuracy recommendations for groundwater-level measurements; the in-service electric tapes require tape corrections to achieve those recommendations.
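
    Applying such a tape correction is a linear rescaling of the raw reading. A sketch in Python, where the per-100-ft error value is illustrative rather than taken from the report:

```python
def corrected_depth(reading_ft, error_ft_per_100ft):
    """Remove a linear length error from a raw tape reading.

    error_ft_per_100ft > 0 means the tape reads long by that amount
    per 100 ft of tape payed out.
    """
    return reading_ft - error_ft_per_100ft * (reading_ft / 100.0)

# A hypothetical electric tape that reads 0.03 ft long per 100 ft:
raw = 87.42
print(f"{corrected_depth(raw, 0.03):.2f} ft")
```

    The correction factor for each tape comes from calibrating it against a reference, as with the device described above.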

  9. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich 34Ne , the fission barrier of 240Pu , and isotopic chains around Z =50 . Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made the achievable accuracy is well below the model uncertainties of mean-field methods.

  10. Accuracy Advances in Measuring Earth Emission Spectra for Weather and Climate

    NASA Astrophysics Data System (ADS)

    Revercomb, H. E.; Best, F. A.; Tobin, D. C.; Knuteson, R. O.; Taylor, J. K.; Gero, P.; Adler, D. P.; Pettersen, C.; Mulligan, M.

    2011-12-01

    Launch of the first component of the Joint Polar Satellite System (JPSS) in late October is expected to initiate a new series of US afternoon satellites to complement the EUMETSAT MetOp EPS morning observations. A key component is the Cross-track Infrared Sounder (CrIS), designed for advanced temperature and water vapor profiling for weather and climate applications. We have worked toward getting this operational capability into space ever since conducting a Phase A instrument design in 1990, and will report on its expected highly accurate radiometric and spectral performance after launch. The expectation from thermal/vacuum testing is that the brightness-temperature accuracy will be better than 0.2 K (k=3) at scene temperature for all three bands in the region from 3.5 to 15 microns. CrIS is expected to offer further confirmation of techniques that have provided significant accuracy improvements for the new family of advanced sounding instruments, including AIRS on the NASA Aqua platform and IASI on MetOp-A, and that are needed for the new IR Decadal Survey measurements. CrIS and these other advanced sounders help set the stage for a new era of spectrally resolved IR climate benchmark measurements from space. Here we report on achieving even higher accuracy with instruments designed specifically for climate missions similar to the Decadal Survey Climate Absolute Radiance and Refractivity Observatory (CLARREO). Results will be presented from our NASA Instrument Incubator Program (IIP) effort, for which a new concept for on-orbit verification and test has been developed. This system is capable of performing fundamental radiometric calibration, spectral characterization and calibration, and other key performance tests that are normally performed only before launch in thermal/vacuum testing. By verifying accuracy directly on orbit, this capability should provide the ultra-high confidence in data sets needed for societal decision making.

  11. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL) designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.
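
    The arithmetic at the heart of two-way time transfer is worth spelling out: each station measures the arrival of the other's signal against its own clock, and because the two signals traverse nearly the same path in opposite directions, the path delay cancels while the clock offset remains. A simplified sketch (Sagnac and equipment delays, which the real system corrects for, are omitted; the numbers are hypothetical):

```python
def two_way_offset(interval_at_a, interval_at_b):
    """Clock offset of station A relative to station B, in seconds.

    Each argument is the interval a station measures between its own
    clock pulse and the reception of the other station's signal.
    Assumes a symmetric signal path.
    """
    return (interval_at_a - interval_at_b) / 2.0

delay = 0.25       # one-way path delay via a geostationary satellite, s
offset = 40e-9     # A's clock runs 40 ns ahead of B's (hypothetical)

t_a = delay + offset   # A's interval measurement of B's signal
t_b = delay - offset   # B's interval measurement of A's signal
print(two_way_offset(t_a, t_b))   # recovers the 40 ns offset; delay cancels
```

    This cancellation of the (large, poorly known) path delay is why two-way links reach nanosecond-level synchronization where one-way methods cannot.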

  12. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, KODAMA is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure that maximizes cross-validated predictive accuracy. This integrated validation ensures the robustness of the obtained solution, as demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency, not at its beginning. PMID:24706821
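
    The core idea, accepting candidate solutions only when a cross-validated accuracy score does not decrease, can be sketched with a toy Monte Carlo label-flipping loop. This is a simplified illustration, not the published algorithm; the leave-one-out 1-nearest-neighbour score and the two-cluster data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_1nn_accuracy(X, labels):
    """Leave-one-out 1-NN accuracy: fraction of points whose nearest
    *other* point carries the same label (the cross-validated score)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)
    return float((labels[nearest] == labels).mean())

# Toy data: two well-separated clusters, labels initialised at random.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels = rng.integers(0, 2, size=40)

acc_start = loo_1nn_accuracy(X, labels)
acc = acc_start
for _ in range(2000):                      # Monte Carlo label flips
    i = rng.integers(0, len(labels))
    trial = labels.copy()
    trial[i] = 1 - trial[i]
    trial_acc = loo_1nn_accuracy(X, trial)
    if trial_acc >= acc:                   # accept only non-decreasing moves
        labels, acc = trial, trial_acc
```

    Because moves are accepted only when the cross-validated score does not drop, the labelling drifts toward a partition the classifier can predict, which is the sense in which "accuracy maximization" drives the discovery.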

  13. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, and roads, and therefore to wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but its low spatial resolution (compared to multispectral data) restricts its usage for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusion of very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant combination of multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. 
The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  14. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  15. Accuracy of numerically produced compensators.

    PubMed

    Thompson, H; Evans, M D; Fallone, B G

    1999-01-01

    A feasibility study is performed to assess the utility of a computer numerically controlled (CNC) mill to produce compensating filters for conventional clinical use and for the delivery of intensity-modulated beams. Computer-aided machining (CAM) software is used to assist in the design and construction of such filters. Geometric measurements of stepped and wedged surfaces are made to examine the accuracy of surface milling. Molds are milled and filled with molten alloy to produce filters, and both the molds and filters are examined for consistency and accuracy. Results show that the deviation of the filter surfaces from design does not exceed 1.5%. The effective attenuation coefficient is measured for CadFree, a cadmium-free alloy, in a 6 MV photon beam. The effective attenuation coefficients at the depth of maximum dose (1.5 cm) and at 10 cm in a solid water phantom are found to be 0.546 cm-1 and 0.522 cm-1, respectively. Further attenuation measurements are made with Cerrobend to assess the variation of the effective attenuation coefficient with field size and source-surface distance. The ability of the CNC mill to accurately produce surfaces is verified with dose profile measurements in a 6 MV photon beam. The test phantom is composed of a 10 degree polystyrene wedge and a 30 degree polystyrene wedge, presenting both a sharp discontinuity and sloped surfaces. Dose profiles, measured at the depth of compensation (10 cm) beneath the test phantom and beneath a flat phantom, are compared to those produced by a commercial treatment planning system. Agreement between measured and predicted profiles is within 2%, indicating the viability of the system for filter production. PMID:10100166
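
    The reported effective attenuation coefficients imply exponential transmission T = exp(-mu*t), so a desired dose reduction fixes the filter thickness t = -ln(T)/mu. A minimal sketch using the CadFree value from the abstract; the 30% reduction target is an arbitrary example, not from the study:

```python
import math

MU_CADFREE_DMAX = 0.546  # cm^-1, effective attenuation at d_max (1.5 cm), from the abstract
MU_CADFREE_10CM = 0.522  # cm^-1, effective attenuation at 10 cm depth

def transmission(mu_cm, thickness_cm):
    """Fraction of the beam transmitted through a filter of given thickness."""
    return math.exp(-mu_cm * thickness_cm)

def thickness_for_transmission(mu_cm, target):
    """Filter thickness [cm] needed to attenuate the beam to `target` transmission."""
    return -math.log(target) / mu_cm

t70 = thickness_for_transmission(MU_CADFREE_DMAX, 0.70)  # thickness for 30% reduction
check = transmission(MU_CADFREE_DMAX, t70)               # should recover 0.70
```

    The depth dependence of mu_eff (0.546 vs. 0.522 cm-1) is why a single coefficient is only an effective value: beam hardening and scatter make the attenuation slightly depth- and geometry-dependent.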

  16. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  17. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc  sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively. PMID:21033925
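
    A sine drive makes wavelength nearly linear in motor position, so the heart of such a calibration can be sketched as a least-squares fit of known lamp lines against stepper counts. The wavelengths below are standard Hg/Ar lamp lines, but the counts are made-up values, and the real method fits all spectrometer parameters, not just a linear term:

```python
import numpy as np

# Known calibration lines [Angstrom] vs. hypothetical stepper-count readings.
known_lines = np.array([4046.56, 4358.33, 5460.74, 5769.60, 6965.43])  # Hg/Ar lines
counts = np.array([8093.2, 8716.6, 10921.5, 11539.3, 13930.8])         # made-up readings

# Least-squares fit of lambda = lam0 + slope * counts
A = np.vstack([np.ones_like(counts), counts]).T
(lam0, slope), *_ = np.linalg.lstsq(A, known_lines, rcond=None)

predicted = lam0 + slope * counts
residual_rms = float(np.sqrt(np.mean((known_lines - predicted) ** 2)))  # fit quality [A]
```

    The residual RMS over many lines and grating positions is what quantifies the achieved wavelength accuracy; systematic residual patterns would indicate unmodelled spectrometer parameters.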

  18. The predictive accuracy of intertemporal-choice models.

    PubMed

    Arfer, Kodi B; Luhmann, Christian C

    2015-05-01

    How do people choose between a smaller reward available sooner and a larger reward available later? Past research has evaluated models of intertemporal choice by measuring goodness of fit or identifying which decision-making anomalies they can accommodate. An alternative criterion for model quality, which is partly antithetical to these standard criteria, is predictive accuracy. We used cross-validation to examine how well 10 models of intertemporal choice could predict behaviour in a 100-trial binary-decision task. Many models achieved the apparent ceiling of 85% accuracy, even with smaller training sets. When noise was added to the training set, however, a simple logistic-regression model we call the difference model performed particularly well. In many situations, between-model differences in predictive accuracy may be small, contrary to long-standing controversy over the modelling question in research on intertemporal choice, but the simplicity and robustness of the difference model recommend it for future use. PMID:25773127
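
    The difference model described above is a logistic regression on simple attribute differences between the two options. A self-contained sketch on synthetic choices; the feature definitions, parameter values, and fitting loop are assumptions for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic intertemporal choices: larger-later amount x_l at delay t_l
# versus smaller-sooner amount x_s available now.
n = 500
x_s = rng.uniform(10, 50, n)
x_l = x_s + rng.uniform(1, 50, n)
t_l = rng.uniform(1, 100, n)
features = np.column_stack([x_l - x_s, -t_l])   # reward difference, delay penalty
true_w = np.array([0.15, 0.05])
choose_later = (rng.random(n) < sigmoid(features @ true_w)).astype(float)

# Fit by plain gradient ascent on the average log-likelihood
w = np.zeros(2)
for _ in range(5000):
    p = sigmoid(features @ w)
    w += 0.001 * features.T @ (choose_later - p) / n

accuracy = float(((sigmoid(features @ w) > 0.5) == (choose_later > 0.5)).mean())
```

    Because the model has only a couple of difference-based coefficients, it has little capacity to overfit, which is consistent with its reported robustness when noise is added to the training set.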

  19. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal that reaches the sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content, etc.), plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status, and atmospheric effects, and from this we want to find, say, the characteristics of vegetation. If the sensor on board the satellite makes measurements in n bands (a vector n of dimension n×1) and the number of classes in an image is c (a vector f of dimension c×1), then under linear mixture modeling the pixel classification problem can be written as n = M·f + ε, where M is the transformation matrix of dimension n×c and ε represents the error vector (noise). The problem is to estimate f by inverting the above equation, and many solutions to such a problem are possible. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most often used in this situation, but it suffers from the handicaps of an assumed Gaussian distribution and an assumed random nature of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes, where the homogeneity among pixels is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for different classes can be assigned to the ML classifier in more realistic terms, and (ii) the purity of training sets for different thematic classes can be better ascertained. 
To what extent this could improve the accuracy of classification in ML classifier
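
    The linear mixture model n = M·f + ε above can be illustrated with a small unconstrained least-squares inversion. The endmember matrix, fractions, and noise level below are hypothetical; a real unmixing would add non-negativity and sum-to-one constraints directly in the solver:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical endmember signatures: c = 3 classes measured in n = 6 bands.
M = rng.uniform(0.05, 0.9, size=(6, 3))

true_f = np.array([0.6, 0.3, 0.1])            # true class fractions in the pixel
pixel = M @ true_f + rng.normal(0, 0.002, 6)  # observed signal: n = M f + eps

# Unconstrained least-squares inversion -- the ill-posed step: with noise,
# many fraction vectors fit the observation almost equally well.
f_hat, *_ = np.linalg.lstsq(M, pixel, rcond=None)
f_hat = np.clip(f_hat, 0, None)
f_hat /= f_hat.sum()                          # project back onto the simplex
```

    As the text notes, the inversion amplifies noise when M is poorly conditioned, which is exactly where GIS-derived priors and purer training sets help constrain the solution.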

  20. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the selections of large-degree users are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to depress the influence of mainstream preferences, we present a directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity in CF. Numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an enhancement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision, and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the direction of user similarity is an important factor in improving personalized recommendation performance.
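
    The degree asymmetry the abstract describes can be made concrete with a toy directed similarity. This sketch uses a simple overlap-over-source-degree definition, which is an assumption for illustration, not the paper's exact measure:

```python
import numpy as np

# Toy user-item selection matrix (rows: users, cols: items).
# User 0 has selected 2 items (small degree); user 2 has selected all 5.
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 1, 1, 1]], dtype=float)

degree = A.sum(axis=1)
overlap = A @ A.T                      # common selections between user pairs

# Directed similarity: s[i, j] is the similarity *from* i *to* j,
# normalised by the degree of the source user i.
s = overlap / degree[:, None]

s_small_to_large = s[0, 2]   # 2 common items / degree 2 = 1.0
s_large_to_small = s[2, 0]   # 2 common items / degree 5 = 0.4
```

    The asymmetry (1.0 versus 0.4 here) is the mechanism the abstract points to: undirected similarities let large-degree users dominate recommendations, while a directed definition lets the algorithm depress that mainstream influence.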

  1. Awareness of Peers' Judgments of Oneself: Accuracy and Process of Metaperception

    ERIC Educational Resources Information Center

    Malloy, Thomas E.; Albright, Linda; Scarpati, Stan

    2007-01-01

    This research focused on children's awareness of peers' social judgments of them, age differences in accuracy attained, and the process by which accuracy is achieved. Children were accurately aware of peers' perceptions of them on behavioral, social status, and ability dimensions in Grades 1 through 6. Older children were more accurate than…

  2. Explorations in achievement motivation

    NASA Technical Reports Server (NTRS)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  3. Achieving health care affordability.

    PubMed

    Payson, Norman C

    2002-10-01

    Not all plans are jumping headlong into the consumer-centric arena. In this article, the CEO of Oxford Health Plans discusses how advanced managed care can achieve what other consumer-centric programs seek to do--provide affordable, quality health care. PMID:12391815

  4. Issues in Achievement Testing.

    ERIC Educational Resources Information Center

    Baker, Eva L.

    This booklet is intended to help school personnel, parents, students, and members of the community understand concepts and research relating to achievement testing in public schools. The paper's sections include: (1) test use with direct effects on students (test of certification, selection, and placement); (2) test use with indirect effects on…

  5. Achieving Peace through Education.

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…

  6. Achieving All Our Ambitions

    ERIC Educational Resources Information Center

    Hartley, Tricia

    2009-01-01

    National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…

  7. Intelligence and Educational Achievement

    ERIC Educational Resources Information Center

    Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres

    2007-01-01

    This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g"from CAT2E) and a latent trait of educational…

  8. SALT and Spelling Achievement.

    ERIC Educational Resources Information Center

    Nelson, Joan

    A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…

  9. NCLB: Achievement Robin Hood?

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    2008-01-01

    In his "Wall Street Journal" op-ed on the 25th anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the potential "Robin Hood…

  10. INTELLIGENCE, PERSONALITY AND ACHIEVEMENT.

    ERIC Educational Resources Information Center

    MUIR, R.C.; AND OTHERS

    A LONGITUDINAL DEVELOPMENTAL STUDY OF A GROUP OF MIDDLE CLASS CHILDREN IS DESCRIBED, WITH EMPHASIS ON A SEGMENT OF THE RESEARCH INVESTIGATING THE RELATIONSHIP OF ACHIEVEMENT, INTELLIGENCE, AND EMOTIONAL DISTURBANCE. THE SUBJECTS WERE 105 CHILDREN AGED FIVE TO 6.3 ATTENDING TWO SCHOOLS IN MONTREAL. EACH CHILD WAS ASSESSED IN THE AREAS OF…

  11. School Students' Science Achievement

    ERIC Educational Resources Information Center

    Shymansky, James; Wang, Tzu-Ling; Annetta, Leonard; Everett, Susan; Yore, Larry D.

    2013-01-01

    This paper is a report of the impact of an externally funded, multiyear systemic reform project on students' science achievement on a modified version of the Third International Mathematics and Science Study (TIMSS) test in 33 small, rural school districts in two Midwest states. The systemic reform effort utilized a cascading leadership strategy…

  12. Advancing Student Achievement

    ERIC Educational Resources Information Center

    Walberg, Herbert J.

    2010-01-01

    For the last half century, higher spending and many modern reforms have failed to raise the achievement of students in the United States to the levels of other economically advanced countries. A possible explanation, says Herbert Walberg, is that much current education theory is ill informed about scientific psychology, often drawing on fads and…

  13. Essays on Educational Achievement

    ERIC Educational Resources Information Center

    Ampaabeng, Samuel Kofi

    2013-01-01

    This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…

  14. Increasing Male Academic Achievement

    ERIC Educational Resources Information Center

    Jackson, Barbara Talbert

    2008-01-01

    The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the Unites States,…

  15. Setting and Achieving Objectives.

    ERIC Educational Resources Information Center

    Knoop, Robert

    1986-01-01

    Provides basic guidelines which school officials and school boards may find helpful in negotiating, establishing, and managing objectives. Discusses characteristics of good objectives, specific and directional objectives, multiple objectives, participation in setting objectives, feedback on goal process and achievement, and managing a school…

  16. Schools Achieving Gender Equity.

    ERIC Educational Resources Information Center

    Revis, Emma

    This guide is designed to assist teachers presenting the Schools Achieving Gender Equity (SAGE) curriculum for vocational education students, which was developed to align gender equity concepts with the Kentucky Education Reform Act (KERA). Included in the guide are lesson plans for classes on the following topics: legal issues of gender equity,…

  17. Iowa Women of Achievement.

    ERIC Educational Resources Information Center

    Ohrn, Deborah Gore, Ed.

    1993-01-01

    This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…

  18. Achievements or Disasters?

    ERIC Educational Resources Information Center

    Goodwin, MacArthur

    2000-01-01

    Focuses on policy issues that have affected arts education in the twentieth century, such as: interest in discipline-based arts education, influence of national arts associations, and national standards and coordinated assessment. States that whether the policy decisions are viewed as achievements or disasters are for future determination. (CMK)

  19. Minority Achievement Report.

    ERIC Educational Resources Information Center

    Prince George's Community Coll., Largo, MD. Office of Institutional Research and Analysis.

    This report summarizes the achievements of Prince George's Community College (PGCC) with regard to minority outcomes. Table 1 summarizes the undergraduate enrollment trends for African Americans as well as total minorities from fall 1994 through fall 1998. Both the headcount number of African American students and the proportion of African…

  20. Appraising Reading Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…

  1. Empathic accuracy for happiness in the daily lives of older couples: Fluid cognitive performance predicts pattern accuracy among men.

    PubMed

    Hülür, Gizem; Hoppmann, Christiane A; Rauers, Antje; Schade, Hannah; Ram, Nilam; Gerstorf, Denis

    2016-08-01

    Correctly identifying others' emotional states is a central cognitive component of empathy. We examined the role of fluid cognitive performance in empathic accuracy for happiness in the daily lives of 86 older couples (mean relationship length = 45 years; mean age = 75 years) on up to 42 occasions over 7 consecutive days. Men performing better on the Digit Symbol test were more accurate in identifying the ups and downs of their partner's happiness. A similar association was not found for women. We discuss the potential role of fluid cognitive performance and other individual, partner, and situation characteristics for empathic accuracy. PMID:27362351

  2. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492

  3. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information.

    PubMed

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
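
    One step of the pipeline described above, trilateration from range estimates, can be sketched as a linearised least-squares solve. The anchor layout and ideal (noise-free) ranges are hypothetical; the actual method additionally derives ranges from a radio propagation model and smooths estimates with extended Kalman filtering:

```python
import numpy as np

# Hypothetical anchors (e.g. IMES transmitters) at known 2-D positions [m]
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # ideal range estimates

# Linearised least-squares trilateration: subtracting the first anchor's
# range equation from the others eliminates the quadratic unknown |p|^2.
x0 = anchors[0]
A = 2.0 * (anchors[1:] - x0)
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)           # recovers true_pos
```

    With noisy ranges the same solve yields a least-squares position whose error grows with range error and anchor geometry, which is why the paper pairs trilateration with a calibrated propagation model and Kalman filtering.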

  4. Dependency Similarity, Attraction and Perceived Happiness.

    ERIC Educational Resources Information Center

    Pandey, Janak

    1978-01-01

    Subjects were asked to evaluate either a similar personality or a dissimilar personality. Subjects rated similar others more positively than dissimilar others and, additionally, perceived similar others as more helpful and sympathetic than dissimilar others. (Author)

  5. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  6. On Achieving Experimental Accuracy from Molecular Dynamics Simulations of Flexible Molecules: Aqueous Glycerol

    PubMed Central

    Yongye, Austin B.; Foley, B. Lachele; Woods, Robert J.

    2014-01-01

    The rotational isomeric states (RIS) of glycerol at infinite dilution have been characterized in the aqueous phase via a 1 μs conventional molecular dynamics (MD) simulation, a 40 ns enhanced sampling replica exchange molecular dynamics (REMD) simulation, and a reevaluation of the experimental NMR data. The MD and REMD simulations employed the GLYCAM06/AMBER force field with explicit treatment of solvation. The shorter time scale of the REMD sampling method gave rise to RIS and theoretical scalar 3JHH coupling constants that were comparable to those from the much longer traditional MD simulation. The 3JHH coupling constants computed from the MD methods were in excellent agreement with those observed experimentally. Despite the agreement between the computed and the experimental J-values, there were variations between the rotamer populations computed directly from the MD data and those derived from the experimental NMR data. The experimentally derived populations were determined utilizing limiting J-values from an analysis of NMR data from substituted ethane molecules and may not be completely appropriate for application in more complex molecules, such as glycerol. Here, new limiting J-values have been derived via a combined MD and quantum mechanical approach and were used to decompose the experimental 3JHH coupling constants into population distributions for the glycerol RIS. PMID:18311953
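
    The link between rotamer populations and observed couplings is the population-weighted average over limiting J-values, with J(theta) given by a Karplus-type relation. A minimal forward sketch; the Karplus coefficients and populations below are generic illustrative values, not the glycerol-specific limiting J-values derived in the paper:

```python
import numpy as np

def karplus(theta_deg, a=9.5, b=-1.6, c=1.8):
    """Generic three-bond H-H Karplus curve J = A cos^2(t) + B cos(t) + C.
    Coefficients are illustrative, not system-specific."""
    th = np.radians(theta_deg)
    return a * np.cos(th) ** 2 + b * np.cos(th) + c

# Limiting couplings for the three staggered rotamers (gauche+, trans, gauche-)
# seen by one proton pair: H-H dihedrals of 60, 180 and -60 degrees.
J_limit = karplus(np.array([60.0, 180.0, -60.0]))

# The observed ensemble-averaged coupling is the population-weighted mean:
populations = np.array([0.55, 0.30, 0.15])   # must sum to 1
J_obs = float(J_limit @ populations)
```

    The inverse problem the abstract describes runs the other way: given measured couplings for enough proton pairs plus the normalisation constraint sum(p) = 1, the populations follow from a linear system, and its solution is only as good as the limiting J-values, which is why the authors re-derive them.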

  7. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  8. Achieving Higher Accuracy in the Gamma-Ray Spectrocopic Assay of Holdup

    SciTech Connect

    Russo, P.A.; Wenz, T.R.; Smith, S.E.; Harris, J.F.

    2000-09-01

    Gamma-ray spectroscopy is an important technique for the measurement of quantities of nuclear material holdup in processing equipment. Because the equipment in large facilities dedicated to uranium isotopic enrichment, uranium/plutonium scrap recovery or various stages of fuel fabrication is extensive, the total holdup may be large by its distribution alone, even if deposit thicknesses are small. Good accountability practices require unbiased measurements with uncertainties that are as small as possible. This paper describes new procedures for use with traditional holdup analysis methods based on gamma-ray spectroscopy. The procedures address the two sources of bias inherent in traditional gamma-ray measurements of holdup. Holdup measurements are performed with collimated, shielded gamma-ray detectors. The measurement distance is chosen to simplify the deposit geometry to that of a point, line or area. The quantitative holdup result is based on the net count rate of a representative gamma ray. This rate is corrected for contributions from room background and for attenuation by the process equipment. Traditional holdup measurements assume that the width of the point or line deposit is very small compared to the measurement distance, and that the self-attenuation effects can be neglected. Because each point or line deposit has a finite width and because self-attenuation affects all measurements, bias is incurred in both assumptions. In both cases the bias is negative, explaining the systematically low results of gamma-ray holdup measurements. The new procedures correct for bias that arises from both the finite-source effects and the gamma-ray self-attenuation. The procedures used to correct for both of these effects apply to the generalized geometries. One common empirical parameter is used for both corrections. It self-consistently limits the total error incurred (from uncertain knowledge of this parameter) in the combined correction process, so that it is compelling to use these procedures. The algorithms and the procedures are simple, general, and easily automated for use plant-wide. This paper shows the derivation of the new, generalized correction algorithms for finite-source and self-attenuation effects. It also presents an analysis of the sensitivity of the holdup result to the uncertainty in the empirical parameter when one or both corrections are made. The paper uses specific examples of the magnitudes of finite-source and self-attenuation corrections to measurements that were made in the field. It discusses the automated implementation of the correction procedure.
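
    The correction chain the abstract describes (net rate, corrected for equipment attenuation and deposit self-attenuation) can be illustrated with a small sketch. This is not the paper's actual algorithm: the slab geometry, the first-order correction form, and all numerical values here are illustrative assumptions only.

    ```python
    import math

    def equipment_attenuation_cf(mu_rho, areal_density):
        """Correction for attenuation by the equipment wall: the net count
        rate is divided by the wall transmission exp(-mu/rho * rho*x),
        i.e. multiplied by its reciprocal. mu_rho in cm^2/g, rho*x in g/cm^2."""
        return math.exp(mu_rho * areal_density)

    def self_attenuation_cf(mu_rho, areal_density):
        """First-order self-attenuation correction for a uniform slab
        deposit: the mean transmission of the slab is (1 - exp(-u))/u with
        u = mu/rho * rho*x, so the correction factor is its reciprocal."""
        u = mu_rho * areal_density
        return 1.0 if u == 0.0 else u / (1.0 - math.exp(-u))

    # Illustrative values: a net rate of 100 counts/s behind a steel wall
    # of 2 g/cm^2, with a thin deposit of 0.1 g/cm^2.
    corrected_rate = (100.0
                      * equipment_attenuation_cf(0.146, 2.0)
                      * self_attenuation_cf(1.5, 0.1))
    ```

    Both factors are greater than or equal to one, consistent with the abstract's point that the uncorrected bias is negative (uncorrected results are systematically low).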

  9. Design of a low-density SNP chip for the main Australian sheep breeds and its effect on imputation and genomic prediction accuracy.

    PubMed

    Bolormaa, S; Gore, K; van der Werf, J H J; Hayes, B J; Daetwyler, H D

    2015-10-01

    Genotyping sheep for genome-wide SNPs at lower density and imputing to a higher density would enable cost-effective implementation of genomic selection, provided imputation was accurate enough. Here, we describe the design of a low-density (12k) SNP chip and evaluate the accuracy of imputation from the 12k SNP genotypes to 50k SNP genotypes in the major Australian sheep breeds. In addition, the impact of imperfect imputation on genomic predictions was evaluated by comparing the accuracy of genomic predictions for 15 novel meat traits including carcass and meat quality and omega fatty acid traits in sheep, from 12k SNP genotypes, imputed 50k SNP genotypes and real 50k SNP genotypes. The 12k chip design included 12 223 SNPs with a high minor allele frequency that were selected with intermarker spacing of 50-475 kb. SNPs for parentage and horned or polled tests also were represented. Chromosome ends were enriched with SNPs to reduce edge effects on imputation. The imputation performance of the 12k SNP chip was evaluated using 50k SNP genotypes of 4642 animals from six breeds in three different scenarios: (1) within breed, (2) single breed from multibreed reference and (3) multibreed from a single-breed reference. The highest imputation accuracies were found with scenario 2, whereas scenario 3 was the worst, as expected. Using scenario 2, the average imputation accuracy in Border Leicester, Polled Dorset, Merino, White Suffolk and crosses was 0.95, 0.95, 0.92, 0.91 and 0.93 respectively. Imputation scenario 2 was used to impute 50k genotypes for 10 396 animals with novel meat trait phenotypes to compare genomic prediction accuracy using genomic best linear unbiased prediction (GBLUP) with real and imputed 50k genotypes. The weighted mean imputation accuracy achieved was 0.92. The average accuracy of genomic estimated breeding values (GEBVs) based on only 12k data was 0.08 across traits and breeds, but accuracies varied widely. The mean GBLUP accuracies with imputed
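
    Imputation accuracy of the kind reported above is commonly measured as the correlation between true and imputed allele dosages. The sketch below shows that measure on made-up dosage vectors; the genotypes are hypothetical and not data from the study.

    ```python
    import math

    def imputation_accuracy(true_dosages, imputed_dosages):
        """Pearson correlation between true and imputed allele dosages
        (0/1/2), a standard per-SNP imputation accuracy measure."""
        n = len(true_dosages)
        mt = sum(true_dosages) / n
        mi = sum(imputed_dosages) / n
        cov = sum((t - mt) * (i - mi)
                  for t, i in zip(true_dosages, imputed_dosages))
        vt = sum((t - mt) ** 2 for t in true_dosages)
        vi = sum((i - mi) ** 2 for i in imputed_dosages)
        return cov / math.sqrt(vt * vi)

    # Hypothetical dosages at one SNP for eight animals; one imputation error.
    true_g    = [0, 1, 2, 1, 0, 2, 1, 2]
    imputed_g = [0, 1, 2, 1, 1, 2, 1, 2]
    ```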

  10. A hierarchical algorithm for molecular similarity (H-FORMS).

    PubMed

    Ramirez-Manzanares, Alonso; Peña, Joaquin; Azpiroz, Jon M; Merino, Gabriel

    2015-07-15

    A new hierarchical method to determine molecular similarity is introduced. The goal of this method is to detect whether a pair of molecules has the same structure by estimating a rigid transformation that aligns the molecules and a correspondence function that matches their atoms. The algorithm first detects similarity based on the global spatial structure. If this analysis is not sufficient, the algorithm computes novel local structural rotation-invariant descriptors for the atom neighborhood and uses this information to match atoms. Two strategies (deterministic and stochastic) for the matching-based alignment computation are tested. As a result, the atom matching based on local similarity indexes decreases the number of testing trials and significantly reduces the dimensionality of the Hungarian assignment problem. The experiments on well-known datasets show that our proposal outperforms state-of-the-art methods in terms of the required computational time and accuracy. PMID:26037060
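
    The assignment step mentioned above can be illustrated on a toy cost matrix of descriptor distances. The brute-force search below is a stand-in for the actual Hungarian algorithm (which solves the same problem in O(n^3) rather than O(n!)), and the distances are invented:

    ```python
    from itertools import permutations

    def best_atom_matching(cost):
        """Exhaustive solution of the assignment problem for tiny inputs:
        find the one-to-one atom matching with minimum total distance."""
        n = len(cost)
        best = None
        for perm in permutations(range(n)):
            total = sum(cost[i][perm[i]] for i in range(n))
            if best is None or total < best[0]:
                best = (total, perm)
        return best

    # Rows: atoms of molecule A; columns: atoms of molecule B.
    cost = [[0.1, 2.0, 3.0],
            [2.5, 0.2, 1.9],
            [3.1, 2.2, 0.3]]
    total, match = best_atom_matching(cost)
    ```

    Reducing the problem's dimensionality with local similarity indexes, as the paper does, shrinks this cost matrix before the assignment is solved.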

  11. Similarity constraints in testing of cooled engine parts

    NASA Technical Reports Server (NTRS)

    Colladay, R. S.; Stepka, F. S.

    1974-01-01

    A study is made of the effect of testing cooled parts of current and advanced gas turbine engines at the reduced temperature and pressure conditions which maintain similarity with the engine environment. Some of the problems facing the experimentalist in evaluating heat transfer and aerodynamic performance when hardware is tested at conditions other than the actual engine environment are considered. Low-temperature and low-pressure test environments can simulate the performance of actual-size prototype engine hardware within the tolerance of experimental accuracy if appropriate similarity conditions are satisfied. Failure to adhere to these similarity constraints, because of test facility limitations or other reasons, can result in a number of serious errors in projecting the performance of test hardware to engine conditions.

  12. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  13. Propagation, structural similarity, and image quality

    NASA Astrophysics Data System (ADS)

    Pérez, Jorge; Mas, David; Espinosa, Julián; Vázquez, Carmen; Illueca, Carlos

    2012-06-01

    Retinal image quality is usually analysed through different parameters typical of instrumental optics, i.e., PSF, MTF and wavefront aberrations. Although these parameters are important, they are hard to translate into visual quality parameters, since human vision exhibits some tolerance to certain aberrations. This is particularly important in post-surgery eyes, where non-common aberrations are induced and their effects on the final image quality are not clear. Natural images usually show a strong dependency between one point and its neighbourhood. This fact aids image interpretation and should be considered when determining the final image quality. The aim of this work is to propose an objective index which allows comparing natural images on the retina and, from them, obtaining relevant information about the visual quality of a particular subject. To this end, we propose an individual eye model. The morphological data of the subject's eye are considered and the light propagation through the ocular media is calculated by means of a Fourier-transform-based method. The retinal PSF so obtained is convolved with the natural scene under consideration, and the resulting image is compared with the ideal one by using the structural similarity index. The technique is applied to two eyes with a multifocal corneal profile (PresbyLasik) and can be used to determine the real extension of the achieved pseudoaccommodation.
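
    The structural similarity comparison used above can be sketched in a single-window form. The full SSIM of Wang et al. (2004) uses a sliding window and the constants shown are its conventional defaults; the test images here are synthetic, not retinal data:

    ```python
    import numpy as np

    def global_ssim(x, y, L=255.0):
        """Single-window structural similarity index between two images
        with dynamic range L, computed over the whole image at once."""
        c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    # Synthetic "ideal" image and a noise-degraded version of it.
    ideal = np.linspace(0, 255, 64).reshape(8, 8)
    degraded = ideal + np.random.default_rng(0).normal(0, 5, ideal.shape)
    ```

    An identical pair scores 1.0; any degradation pushes the index below 1, which is what makes it usable as a retinal image quality score.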

  14. DDE Transposases: Structural Similarity and Diversity

    PubMed Central

    Nesmelova, Irina V.; Hackett, Perry B.

    2010-01-01

    DNA transposons are mobile DNA elements that can move from one DNA molecule to another and thereby deliver genetic information into human chromosomes in order to confer a new function or replace a defective gene. This process requires a transposase enzyme. During transposition DD[E/D]-transposases undergo a series of conformational changes. We summarize the structural features of DD[E/D]-transposases for which three-dimensional structures are available and that relate to transposases, which are being developed for use in mammalian cells. Similar to other members of the polynucleotidyl transferase family, the catalytic domains of DD[E/D]-transposases share a common feature: an RNase H-like fold that draws three catalytically active residues, the DDE motif, into close proximity. Beyond this fold, the structures of catalytic domains vary considerably, and the DD[E/D]-transposases display marked structural diversity within their DNA-binding domains. Yet despite such structural variability, essentially the same end result is achieved. PMID:20615441

  15. Range accuracy analysis of streak tube imaging lidar systems

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Chen, Zhaodong; Yuan, Wei; Chen, Deying; He, Ping

    2016-02-01

    Streak tube imaging lidar (STIL) is an active imaging system that has a high range accuracy and a wide range gate, using a pulsed laser transmitter and a streak tube receiver to produce 3D range images. This work investigates the range accuracy performance of STIL systems based on a peak detection algorithm, taking into account the effects of blurring of the image. A theoretical model of the time-resolved signal distribution, including the static blurring width in addition to the laser pulse width, is presented, resulting in a modified range accuracy analysis. The model indicates that the static blurring width has a significant effect on the range accuracy, which is validated by both the simulation and experimental results. By using the optimal static blurring width, the range accuracies are enhanced in both indoor and outdoor experiments, with stand-off distances of 10 m and 1700 m respectively; corresponding best range errors of 0.06 m and 0.25 m were achieved in a daylight environment.
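
    Two relations underlying the model above can be sketched directly: range from two-way time of flight, and the combined temporal width when the laser pulse width and the static blurring width add in quadrature. The quadrature sum is a common Gaussian approximation, assumed here; the paper's exact signal model may differ.

    ```python
    import math

    C = 2.998e8  # speed of light, m/s

    def range_from_delay(tof_seconds):
        """Two-way time of flight to range: R = c * t / 2."""
        return C * tof_seconds / 2.0

    def effective_width(laser_pulse_width, static_blur_width):
        """Combined width of the time-resolved signal, assuming the laser
        pulse and the static blurring add in quadrature."""
        return math.sqrt(laser_pulse_width ** 2 + static_blur_width ** 2)
    ```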

  16. Accuracy potential of large-format still-video cameras

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd; Niederoest, Markus

    1997-07-01

    High resolution digital stillvideo cameras have found wide interest in digital close range photogrammetry in the last five years. They can be considered fully autonomous digital image acquisition systems without the requirement of permanent connection to an external power supply and a host computer for camera control and data storage, thus allowing for convenient data acquisition in many applications of digital photogrammetry. The accuracy potential of stillvideo cameras has been extensively discussed. While large format CCD sensors themselves can be considered very accurate measurement devices, lenses, camera bodies and sensor mounts of stillvideo cameras are not compression techniques in image storage, which may also affect the accuracy potential. This presentation shows recent experiences from accuracy tests with a number of large format stillvideo cameras, including a modified Kodak DCS200, a Kodak DCS460, a Nikon E2 and a Polaroid PDC-2000. The tests of the cameras include absolute and relative measurements and were performed using strong photogrammetric networks and good external reference. The results of the tests indicate that very high accuracies can be achieved with large blocks of stillvideo imagery especially in deformation measurements. In absolute measurements, however, the accuracy potential of the large format CCD sensors is partly ruined by a lack of stability of the cameras.

  17. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
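
    The "local surface smoothness on planar surfaces" metric can be sketched as the RMS distance of points to their best-fit (total least squares) plane. The plane fit via SVD is a standard approach assumed here, and the point cloud is synthetic, not the paper's data:

    ```python
    import numpy as np

    def plane_rms_residual(points):
        """Local smoothness metric: RMS perpendicular distance of an
        (n, 3) point array to its total-least-squares plane."""
        centered = points - points.mean(axis=0)
        normal = np.linalg.svd(centered)[2][-1]  # smallest singular direction
        return float(np.sqrt(np.mean((centered @ normal) ** 2)))

    # Synthetic scan of the plane z = 0.1x + 0.2y, with and without noise.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0.0, 1.0, (100, 2))
    true_z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1]
    flat = np.column_stack([xy, true_z])
    pts = np.column_stack([xy, true_z + rng.normal(0, 0.01, 100)])
    ```

    On a perfectly planar patch the residual is numerically zero; navigation and calibration errors in the real system show up as an inflated residual.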

  18. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stages of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources: a sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a greater effect on the end-effector accuracy. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined, yielding the tolerance ranges of ten kinds of geometric error sources. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  19. Project ACHIEVE final report

    SciTech Connect

    1997-06-13

    Project ACHIEVE was a math/science academic enhancement program aimed at first year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.

  20. Achieving Goal Blood Pressure.

    PubMed

    Laurent, Stéphane

    2015-07-01

    Both monotherapy and combination therapy options are appropriate for antihypertensive therapy according to the 2013 European Society of Hypertension (ESH)/European Society of Cardiology (ESC) guidelines. Most patients require more than one agent to achieve blood pressure (BP) control, and adding a second agent is more effective than doubling the dose of existing therapy. The addition of a third agent may be required to achieve adequate BP reductions in some patients. Single-pill fixed-dose combinations (FDCs) allow multiple-drug regimens to be delivered without any negative impact on patient compliance or persistence with therapy. FDCs also have documented beneficial clinical effects and use of FDCs containing two or three agents is recommended by the 2013 ESH/ESC guidelines. PMID:26002423

  1. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  2. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
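
    Cohen's unweighted kappa, one of the agreement statistics reported above, can be computed from two raters' category labels with the standard library. The exposure-category vectors below are invented for illustration, not data from the study:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Unweighted Cohen's kappa: observed agreement between two
        raters, corrected for agreement expected by chance."""
        n = len(rater_a)
        po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        pe = sum(ca[k] * cb[k] for k in ca) / n ** 2
        return (po - pe) / (1 - pe)

    # Hypothetical exposure-category judgments (0-3) vs. the true category.
    judged = [0, 1, 1, 2, 2, 2, 3, 3, 1, 0]
    truth  = [0, 1, 2, 2, 2, 1, 3, 3, 1, 0]
    ```

    Values near 0 indicate chance-level agreement and values near 1 near-perfect agreement, which is why the 0.73-0.89 figures above read as good to excellent.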

  3. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the global BDS constellation to be established in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with a fine structure, including the conical shadow factors of the Earth and Moon, was established. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model gives higher accuracy for POD and orbit forecasts of GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions: the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arc lengths are 0.4 m, 2.0 m and 10.0 m respectively, compared with 0.9 m, 5.5 m and 30 m with the Bern empirical model. We then applied this approach to BDS, derived an SRP model for the Beidou satellites, and tested the model with one month of Beidou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.

  4. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand-controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand-controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  5. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  6. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy or Spectropolarimetry combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 μm waveband and offers better overall accuracy compared to the previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix is provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  7. Melody Alignment and Similarity Metric for Content-Based Music Retrieval

    NASA Astrophysics Data System (ADS)

    Zhu, Yongwei; Kankanhalli, Mohan S.

    2003-01-01

    Music query-by-humming has attracted much research interest recently. It is a challenging problem since the hummed query inevitably contains much variation and inaccuracy. Furthermore, the similarity computation between the query tune and the reference melody is not easy due to the difficulty in ensuring proper alignment. This is because the query tune can be rendered at an unknown speed and it is usually an arbitrary subsequence of the target reference melody. Many of the previous methods, which adopt note segmentation and string matching, suffer drastically from the errors in the note segmentation, which affects retrieval accuracy and efficiency. Some methods solve the alignment issue by controlling the speed of the articulation of queries, which is inconvenient because it forces users to hum along with a metronome. Some other techniques introduce arbitrary rescaling in time, but this is computationally very inefficient. In this paper, we introduce a melody alignment technique which addresses the robustness and efficiency issues. We also present a new melody similarity metric, which is performed directly on melody contours of the query data. This approach cleanly separates the alignment and similarity measurement in the search process. We show how to robustly and efficiently align the query melody with the reference melodies and how to measure the similarity subsequently. We have carried out extensive experiments. Our melody alignment method can reduce the matching candidates to 1.7% with a 95% correct alignment rate. The overall retrieval system achieved 80% recall in the top 10 rank list. The results demonstrate the robustness and effectiveness of the proposed methods.
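
    The align-then-compare idea can be sketched as a transposition-invariant sliding comparison of pitch contours. This is a deliberate simplification of the paper's method (which also handles tempo variation), and the MIDI note numbers are invented:

    ```python
    def contour_distance(query, reference):
        """Slide a short query pitch contour over a longer reference and
        return the minimum mean absolute pitch difference, after removing
        each segment's mean so the match is transposition-invariant."""
        def centered(seq):
            m = sum(seq) / len(seq)
            return [x - m for x in seq]
        q = centered(query)
        best = float("inf")
        for off in range(len(reference) - len(query) + 1):
            seg = centered(reference[off:off + len(query)])
            d = sum(abs(a - b) for a, b in zip(q, seg)) / len(q)
            best = min(best, d)
        return best

    # Hypothetical MIDI pitches: the query is reference[3:7] transposed up 5.
    reference = [60, 62, 64, 65, 67, 65, 64, 62, 60]
    query = [70, 72, 70, 69]
    ```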

  8. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy-assessment procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  9. Investigation of psychophysical similarity measures for selection of similar images in the diagnosis of clustered microcalcifications on mammograms

    SciTech Connect

    Muramatsu, Chisako; Li Qiang; Schmidt, Robert; Shiraishi, Junji; Doi, Kunio

    2008-12-15

    The presentation of images with lesions of known pathology that are similar to an unknown lesion may be helpful to radiologists in the diagnosis of challenging cases for improving the diagnostic accuracy and also for reducing variation among different radiologists. The authors have been developing a computerized scheme for automatically selecting similar images with clustered microcalcifications on mammograms from a large database. For similar images to be useful, they must be similar from the point of view of the diagnosing radiologists. In order to select such images, subjective similarity ratings were obtained for a number of pairs of clustered microcalcifications by breast radiologists for establishment of a ''gold standard'' of image similarity, and the gold standard was employed for determination and evaluation of the selection of similar images. The images used in this study were obtained from the Digital Database for Screening Mammography developed by the University of South Florida. The subjective similarity ratings for 300 pairs of images with clustered microcalcifications were determined by ten breast radiologists. The authors determined a number of image features which represent the characteristics of clustered microcalcifications that radiologists would use in their diagnosis. For determination of objective similarity measures, an artificial neural network (ANN) was employed. The ANN was trained with the average subjective similarity ratings as teacher and selected image features as input data. The ANN was trained to learn the relationship between the image features and the radiologists' similarity ratings; therefore, once the training was completed, the ANN was able to determine the similarity, called a psychophysical similarity measure, which was expected to be close to radiologists' impressions, for an unknown pair of clustered microcalcifications. By use of a leave-one-out test method, the best combination of features was selected. 
The correlation
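
    The feature-to-rating mapping described in this record can be sketched as a small feed-forward network trained by gradient descent. Everything below (feature vectors, ratings, network size) is synthetic stand-in data, not the study's features or architecture; it only illustrates regressing mean radiologist ratings onto image features.

```python
import numpy as np

rng = np.random.default_rng(0)

# 300 hypothetical image pairs, 6 features each; mean ratings in [0, 1]
X = rng.uniform(size=(300, 6))
true_w = rng.uniform(size=6)
y = (X @ true_w) / true_w.sum()        # synthetic "average subjective rating"

# One hidden tanh layer, trained by plain gradient descent on squared error
W1 = rng.normal(scale=0.5, size=(6, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

loss_before = np.mean((forward(X)[0] - y) ** 2)
lr = 0.05
for _ in range(2000):
    pred, h = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss_after = np.mean((forward(X)[0] - y) ** 2)
```

    Once fitted, the network scores any unseen feature pair, which is the "psychophysical similarity measure" role described above.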

  10. High accuracy wall thickness loss monitoring

    NASA Astrophysics Data System (ADS)

    Gajdacsi, Attila; Cegla, Frederic

    2014-02-01

    Ultrasonic inspection of wall thickness in pipes is a standard technique applied widely in the petrochemical industry. However, the potential precision of repeat measurements with permanently installed ultrasonic sensors significantly surpasses that of handheld sensors, since uncertainties associated with coupling fluids and positional offsets are eliminated. With permanently installed sensors the precise evaluation of very small wall loss rates becomes feasible in a matter of hours. The improved accuracy and speed of wall loss rate measurements can be used to evaluate and develop more effective mitigation strategies. This paper presents an overview of the factors causing variability in ultrasonic measurements; these factors are then systematically addressed, and an experimental setup with the best achievable stability based on these considerations is presented. In the experimental setup galvanic corrosion is used to induce predictable and very small wall thickness loss. Furthermore, it is shown that the experimental measurements can be used to assess the effect of reduced wall loss that is produced by the injection of corrosion inhibitor. The measurements show an estimated standard deviation of about 20 nm, which in turn allows us to evaluate the effect and behaviour of corrosion inhibitors within less than an hour.
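
    The core measurement idea, recovering a very small wall-loss rate from repeated pulse-echo time-of-flight readings, can be sketched as follows. The sound speed, sampling schedule, loss rate, and 20 nm noise level are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

v = 5900.0                    # nominal longitudinal sound speed in steel, m/s
t_hours = np.arange(60) / 60.0                 # one hour of 1-min samples
true_rate = 100e-9            # assumed wall loss: 100 nm per hour
thickness = 10e-3 - true_rate * t_hours        # metres

# Pulse-echo time of flight = 2 * thickness / v, with noise equivalent to
# roughly 20 nm of thickness per reading
tof = 2 * thickness / v + rng.normal(scale=2 * 20e-9 / v, size=t_hours.size)

# Convert back to thickness and fit a straight line to recover the rate
meas = tof * v / 2
slope, intercept = np.polyfit(t_hours, meas, 1)
est_rate = -slope             # wall loss in metres per hour
```

    With ~20 nm repeatability, a sub-micrometre-per-hour loss rate becomes resolvable from an hour of data, which is the point the abstract makes.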

  11. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  12. An Experimental Study on the Iso-Content-Based Angle Similarity Measure.

    ERIC Educational Resources Information Center

    Zhang, Jin; Rasmussen, Edie M.

    2002-01-01

    Retrieval performance of the iso-content-based angle similarity measure within the angle, distance, conjunction, disjunction, and ellipse retrieval models is compared with retrieval performance of the distance similarity measure and the angle similarity measure. Results show the iso-content-based angle similarity measure achieves satisfactory…

  13. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit, using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit; later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  14. Accuracy of cloud liquid water path from ground-based microwave radiometry 2. Sensor accuracy and synergy

    NASA Astrophysics Data System (ADS)

    Crewell, Susanne; Löhnert, Ulrich

    2003-06-01

    The influence of microwave radiometer accuracy on retrieved cloud liquid water path (LWP) was investigated. Sensor accuracy was assumed to be the sum of the relative (i.e., Gaussian noise) and the absolute accuracies of brightness temperatures. When statistical algorithms are developed, the assumed noise should be as close as possible to the real measurements in order to avoid artifacts in the retrieved LWP distribution. Typical offset errors of 1 K in brightness temperatures can produce mean LWP errors of more than 30 g m-2 for a two-channel radiometer retrieval, although positively correlated brightness temperature offsets in both channels reduce this error to 16 g m-2. Large improvements in LWP retrieval accuracy of about 50% can be achieved by adding a 90-GHz channel to the two-channel retrieval. The inclusion of additional measurements, like cloud base height from a lidar ceilometer and cloud base temperature from an infrared radiometer, is invaluable in detecting cloud-free scenes, allowing an indirect evaluation of LWP accuracy in clear-sky cases. This method was used to evaluate LWP retrieval algorithms based on different gas absorption models. Using two months of measurements, the Liebe 93 model provided the best results when the 90-GHz channel was incorporated into the standard two-channel retrievals.
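
    The offset effect described above can be illustrated with a toy linear two-channel retrieval. The forward-model coefficients below are invented; only the qualitative behaviour (a 1 K offset in one channel mapping into tens of g m-2 of mean LWP error) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

lwp = rng.uniform(0, 500, size=5000)      # "true" LWP, g/m^2
# Hypothetical linear forward model: Tb = a * LWP + b + noise (K)
tb1 = 0.030 * lwp + 20.0 + rng.normal(scale=0.3, size=lwp.size)
tb2 = 0.015 * lwp + 15.0 + rng.normal(scale=0.3, size=lwp.size)

# Train a linear two-channel retrieval on offset-free data
A = np.column_stack([tb1, tb2, np.ones_like(lwp)])
coef, *_ = np.linalg.lstsq(A, lwp, rcond=None)

# Apply it to measurements carrying a +1 K offset on channel 1 only
A_off = np.column_stack([tb1 + 1.0, tb2, np.ones_like(lwp)])
bias = np.mean(A_off @ coef - lwp)        # mean LWP error, g/m^2
```

    The bias equals the retrieval's weight on the offset channel, so a fixed 1 K calibration error propagates into a fixed LWP error of tens of g m-2, as the record reports.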

  15. Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures

    ERIC Educational Resources Information Center

    Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John

    2013-01-01

    The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…

  16. Total solar irradiance record accuracy and recent improvements

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    /TIMs are intended to achieve levels of absolute accuracy that should reduce the TSI record's reliance on measurement continuity. I will discuss the climate-derived requirements for the levels of absolute accuracy and instrument stability needed for TSI measurements and describe current work that is underway to achieve these measurement requirements.

  17. SSL: Signal Similarity-Based Localization for Ocean Sensor Networks

    PubMed Central

    Chen, Pengpeng; Ma, Honglu; Gao, Shouwan; Huang, Yan

    2015-01-01

    Nowadays, wireless sensor networks are often deployed on the sea surface for ocean scientific monitoring. One of the important challenges is to localize the nodes’ positions. Existing localization schemes can be roughly divided into two types: range-based and range-free. The range-based localization approaches depend heavily on extra hardware capabilities, while range-free ones often suffer from poor accuracy and low scalability, far from what practical ocean monitoring applications require. In response to the above limitations, this paper proposes a novel signal similarity-based localization (SSL) technique, which localizes the nodes’ positions by fully utilizing the similarity of received signal strength and the open-air characteristics of the sea surface. In the localization process, we first estimate the relative distance between neighboring nodes by comparing the similarity of received signal strength and then calculate the relative distance for non-neighboring nodes with the shortest-path algorithm. After that, the nodes’ relative relation map of the whole network can be obtained. Given at least three anchors, the physical locations of nodes can be finally determined using multi-dimensional scaling (MDS). The design is evaluated by two types of ocean experiments: a zonal network and a non-regular network using 28 nodes. Results show that the proposed design improves the localization accuracy compared to typical connectivity-based approaches and also confirm its effectiveness for large-scale ocean sensor networks. PMID:26610520
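
    The relative-map step of this pipeline can be sketched with classical multi-dimensional scaling. For simplicity the pairwise distances below are exact; in SSL they would come from signal-similarity estimates between neighbours completed by shortest paths, and the relative map would then be anchored to physical coordinates using at least three anchors.

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(10, 2))        # true node positions (unknown)

# Pairwise distance matrix; in SSL this is assembled from signal-similarity
# estimates plus shortest-path completion for non-neighbouring nodes
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Classical MDS: double-centre the squared distances, keep top eigenvectors
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)                 # ascending eigenvalues
coords = vecs[:, -2:] * np.sqrt(vals[-2:])     # relative 2-D map

# The relative map preserves all pairwise distances (up to rotation/flip)
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

    The recovered map is unique only up to rotation, translation, and reflection, which is exactly why anchors are needed for physical coordinates.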

  18. SSL: Signal Similarity-Based Localization for Ocean Sensor Networks.

    PubMed

    Chen, Pengpeng; Ma, Honglu; Gao, Shouwan; Huang, Yan

    2015-01-01

    Nowadays, wireless sensor networks are often deployed on the sea surface for ocean scientific monitoring. One of the important challenges is to localize the nodes' positions. Existing localization schemes can be roughly divided into two types: range-based and range-free. The range-based localization approaches depend heavily on extra hardware capabilities, while range-free ones often suffer from poor accuracy and low scalability, far from what practical ocean monitoring applications require. In response to the above limitations, this paper proposes a novel signal similarity-based localization (SSL) technique, which localizes the nodes' positions by fully utilizing the similarity of received signal strength and the open-air characteristics of the sea surface. In the localization process, we first estimate the relative distance between neighboring nodes by comparing the similarity of received signal strength and then calculate the relative distance for non-neighboring nodes with the shortest-path algorithm. After that, the nodes' relative relation map of the whole network can be obtained. Given at least three anchors, the physical locations of nodes can be finally determined using multi-dimensional scaling (MDS). The design is evaluated by two types of ocean experiments: a zonal network and a non-regular network using 28 nodes. Results show that the proposed design improves the localization accuracy compared to typical connectivity-based approaches and also confirm its effectiveness for large-scale ocean sensor networks. PMID:26610520

  19. High accuracy localization of long term evolution based on a new multiple carrier noise model.

    PubMed

    Lee, Wah Ching; Hung, Faan Hei; Tsang, Kim Fung; Wu, Chung Kit; Chi, Hao Ran; Chui, Kwok Tai; Lau, Wing Hong

    2014-01-01

    A high accuracy localization technique using Long Term Evolution (LTE), based on a new and accurate multiple carrier noise model, has been developed. In the noise consideration, the LTE multiple-carrier phase noise has been incorporated so that a new and accurate noise model is achieved. An experiment was performed to characterize the phase noise of carriers at 2 GHz. The developed noise model was incorporated into LTE localization analysis in a high-traffic area in Hong Kong to evaluate the accuracy of localization. The evaluation and analysis reveal that the new localization method achieves an accuracy improvement of about 10% compared to existing widely adopted schemes. PMID:25436658

  20. High Accuracy Localization of Long Term Evolution Based on a New Multiple Carrier Noise Model

    PubMed Central

    Lee, Wah Ching; Hung, Faan Hei; Tsang, Kim Fung; Wu, Chung Kit; Chi, Hao Ran; Chui, Kwok Tai; Lau, Wing Hong

    2014-01-01

    A high accuracy localization technique using Long Term Evolution (LTE), based on a new and accurate multiple carrier noise model, has been developed. In the noise consideration, the LTE multiple-carrier phase noise has been incorporated so that a new and accurate noise model is achieved. An experiment was performed to characterize the phase noise of carriers at 2 GHz. The developed noise model was incorporated into LTE localization analysis in a high-traffic area in Hong Kong to evaluate the accuracy of localization. The evaluation and analysis reveal that the new localization method achieves an accuracy improvement of about 10% compared to existing widely adopted schemes. PMID:25436658

  1. A real-time microprocessor QRS detector system with a 1-ms timing accuracy for the measurement of ambulatory HRV.

    PubMed

    Ruha, A; Sallinen, S; Nissilä, S

    1997-03-01

    The design, test methods and results of an ambulatory QRS detector are presented. The device is intended for the accurate measurement of heart rate variability (HRV) and reliable QRS detection in both ambulatory and clinical use. The aim of the design work was to achieve high QRS detection performance in terms of timing accuracy and reliability, without compromising the size and power consumption of the device. The complete monitor system consists of a host computer and the detector unit. The detector device is constructed of a commonly available digital signal processing (DSP) microprocessor and other components. The QRS detection algorithm uses optimized prefiltering in conjunction with a matched filter and dual-edge threshold detection. The purpose of the prefiltering is to attenuate various noise components in order to achieve improved detection reliability. The matched filter further improves the signal-to-noise ratio (SNR) and symmetrizes the QRS complex for the threshold detection, which is essential in order to achieve the desired performance. The decision for detection is made in real time and no search-back method is employed. The host computer is used to configure the detector unit, which includes the setting of the matched filter impulse response, and in the retrieval and postprocessing of the measurement results. The QRS detection timing accuracy and detection reliability of the detector system were tested with an artificially generated electrocardiogram (ECG) signal corrupted with various noise types; a timing standard deviation of less than 1 ms was achieved with most noise types and levels similar to those encountered in real measurements. A QRS detection error rate (ER) of 0.1 and 2.2% was achieved with records 103 and 105 from the MIT-BIH Arrhythmia database, respectively. PMID:9216129
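
    A minimal sketch of matched filtering followed by dual-edge threshold detection, on a synthetic ECG-like signal. The template shape, threshold level, and beat timings are illustrative assumptions rather than the paper's filter design.

```python
import numpy as np

fs = 1000                                   # 1 kHz sampling: 1 ms resolution
template = np.exp(-0.5 * (np.arange(-50, 51) / 10.0) ** 2)   # QRS-like pulse

# Three "beats" at known sample positions plus additive noise
beats = [400, 1400, 2400]
ecg = np.zeros(3 * fs)
for b in beats:
    ecg[b - 50:b + 51] += template
ecg += np.random.default_rng(4).normal(scale=0.05, size=ecg.size)

mf = np.correlate(ecg, template, mode="same")   # matched-filter output

# Dual-edge thresholding: find rising and falling crossings, then place the
# detection at the midpoint of each above-threshold interval
thr = 0.5 * mf.max()
edges = np.flatnonzero(np.diff((mf > thr).astype(int)))
rises, falls = edges[::2], edges[1::2]
detections = (rises + falls) // 2
```

    Using both edges of the filtered pulse, rather than a single crossing, makes the fiducial point insensitive to amplitude changes, which helps the millisecond-level timing the paper targets.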

  2. Effects of coating material on the fabrication accuracy of focused ion beam machining of insulators

    NASA Astrophysics Data System (ADS)

    Joe, Hang-Eun; Park, Jae-Hyeong; Kim, Seong Hyeon; Kim, Gyuho; Jun, Martin B. G.; Min, Byung-Kwon

    2015-09-01

    Focused ion beam (FIB) machining of insulators is a crucial process in the rapid prototyping of nanodevices for optical applications. A conductive material is generally coated on the insulator prior to FIB machining to achieve high fabrication accuracy. In this paper, we report on the effects on machining accuracy of four coating materials: Pt, Ni, Ag, and Co. The dimensional accuracy at channel sidewalls was improved by selecting a coating material that induces charge-carrier generation in a small range. The geometric and electrical characteristics of the FIB-machined surfaces were evaluated to elucidate the association between the fabrication accuracy and the range of charge-carrier distribution.

  3. Achieving Magnet status.

    PubMed

    Ellis, Beckie; Gates, Judy

    2005-01-01

    Magnet has become the gold standard for nursing excellence. It is the symbol of effective and safe patient care. It evaluates components that inspire safe care, including employee satisfaction and retention, professional education, and effective interdisciplinary collaboration. In an organization whose mission focuses on excellent patient care, Banner Thunderbird Medical Center found that pursuing Magnet status was clearly the next step. In this article, we will discuss committee selection, education, team building, planning, and the discovery process that define the Magnet journey. The road to obtaining Magnet status has permitted many opportunities to celebrate our achievements. PMID:16056158

  4. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure (EE) and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity each day of the study. On day four, participants came to the lab to undergo basal metabolic rate (BMR) measurement and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to the differences in inclination. The vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity. PMID:27266422

  5. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    PubMed

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods only use 2D chemical structures and protein sequences to measure the similarity of drugs and targets respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solutions to handle possibly missing interactions which affect the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/∼liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html. PMID:25957673
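
    The underlying similarity-based prediction idea (though not the paper's enhanced measures or super-target clustering) can be sketched as a similarity-weighted vote over known interaction profiles. The interaction matrix and similarity values below are toy data.

```python
import numpy as np

# Known interactions: 4 drugs x 3 targets (1 = known interaction)
Y = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

# Toy similarity of a new drug to the four known drugs
sim = np.array([0.9, 0.7, 0.1, 0.05])

# Weighted-profile prediction: each target's score is a similarity-weighted
# average of the known drugs' interaction indicators
scores = sim @ Y / sim.sum()
ranked = np.argsort(scores)[::-1]              # targets, most likely first
```

    The paper's contributions address the two weaknesses of this baseline: richer similarity inputs than 2D structure alone, and grouping targets into "super-targets" so that missing entries in Y bias the vote less.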

  6. An Iterative Image Registration Algorithm by Optimizing Similarity Measurement

    PubMed Central

    Chu, Wei; Ma, Li; Song, John; Vorburger, Theodore

    2010-01-01

    A new registration algorithm based on Newton-Raphson iteration is proposed to align images with rigid body transformation. A set of transformation parameters consisting of translation in x and y and rotation angle around z is calculated by optimizing a specified similarity metric using the Newton-Raphson method. This algorithm has been tested by registering and correlating pairs of topography measurements of nominally identical NIST Standard Reference Material (SRM 2461) standard cartridge cases, and very good registration accuracy has been obtained. PMID:27134776
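
    The optimization step can be illustrated in one dimension: Newton-Raphson iteration on a similarity metric (here sum of squared differences) over a single translation parameter, the 1-D analogue of the (x, y, angle) rigid-body search described above. The signals and shift value are synthetic.

```python
import numpy as np

x = np.linspace(-5, 5, 2001)                 # 0.005 sample spacing
ref = np.exp(-x ** 2)                        # reference profile
true_shift = 0.37
mov = np.exp(-(x - true_shift) ** 2)         # shifted copy to be registered

def ssd(s):
    # Similarity metric: SSD between the reference and the moving signal
    # translated back by s (linear interpolation onto the reference grid)
    return np.sum((np.interp(x, x - s, mov) - ref) ** 2)

# Newton-Raphson on dSSD/ds = 0, derivatives by central finite differences
s, h = 0.0, 1e-4
for _ in range(20):
    g = (ssd(s + h) - ssd(s - h)) / (2 * h)
    H = (ssd(s + h) - 2 * ssd(s) + ssd(s - h)) / h ** 2
    s -= g / H
```

    With a reasonable starting guess the iteration converges in a handful of steps; the full method does the same update with a 3-vector (tx, ty, angle), a gradient, and a 3x3 Hessian.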

  7. Visual similarity effects in categorical search.

    PubMed

    Alexander, Robert G; Zelinsky, Gregory J

    2011-01-01

    We asked how visual similarity relationships affect search guidance to categorically defined targets (no visual preview). Experiment 1 used a web-based task to collect visual similarity rankings between two target categories, teddy bears and butterflies, and random-category objects, from which we created search displays in Experiment 2 having either high-similarity distractors, low-similarity distractors, or "mixed" displays with high-, medium-, and low-similarity distractors. Analysis of target-absent trials revealed faster manual responses and fewer fixated distractors on low-similarity displays compared to high-similarity displays. On mixed displays, first fixations were more frequent on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (bear = 9%; butterfly = 12%). Experiment 3 used the same high/low/mixed conditions, but now these conditions were created using similarity estimates from a computer vision model that ranked objects in terms of color, texture, and shape similarity. The same patterns were found, suggesting that categorical search can indeed be guided by purely visual similarity. Experiment 4 compared cases where the model and human rankings differed and when they agreed. We found that similarity effects were best predicted by cases where the two sets of rankings agreed, suggesting that both human visual similarity rankings and the computer vision model captured features important for guiding search to categorical targets. PMID:21757505

  8. Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu

    2015-04-01

    Rapid development of unmanned aerial vehicles (UAV) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified image, point cloud and 3D model). Two different autonomous fixed wing UAV systems were used to collect the aerial photographs. One is a mass-produced commercial UAV system, the other is a similar state-of-the-art UAV system. Three different study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and free-ware software. The performance of each of these was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. Also the achieved results, when the only georeferencing data originates from the UAV system's on-board GNSS and inertial measurement unit, are investigated. Problems regarding the alignment of certain types of aerial photos (e.g. captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with control measurements made with GNSS-measurements on the ground, as well as high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time-consumption while still ensuring the quality and accuracy of the UAV data products. Also the

  9. The Impact of Personal Digital Assistants on Academic Achievement

    ERIC Educational Resources Information Center

    Bick, Alexander

    2005-01-01

    A positive correlation has been found between laptops and student achievement. Laptops are similar to Personal Digital Assistants (PDAs) in many respects. This study seeks to determine the effect of PDA usage on high school student academic achievement. It was hypothesized that a positive correlation between PDA usage and academic achievement in…

  10. Does aging impair first impression accuracy? Differentiating emotion recognition from complex social inferences.

    PubMed

    Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini

    2014-09-01

    Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging. PMID:25244469

  11. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  12. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  13. Recognizing outstanding achievements

    NASA Astrophysics Data System (ADS)

    Speiss, Fred

    One function of any professional society is to provide an objective, informed means for recognizing outstanding achievements in its field. In AGU's Ocean Sciences section we have a variety of means for carrying out this duty. They include recognition of outstanding student presentations at our meetings, dedication of special sessions, nomination of individuals to be fellows of the Union, invitations to present Sverdrup lectures, and recommendations for Macelwane Medals, the Ocean Sciences Award, and the Ewing Medal.Since the decision to bestow these awards requires initiative and judgement by members of our section in addition to a deserving individual, it seems appropriate to review the selection process for each and to urge you to identify those deserving of recognition.

  14. Achieving closure at Fernald

    SciTech Connect

    Bradburne, John; Patton, Tisha C.

    2001-02-25

    When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and "business as usual" has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.

  15. Asthma and COPD: Differences and Similarities

    MedlinePlus

    Asthma and COPD: Differences and Similarities. This article has been reviewed ... you could have asthma, or you could have Chronic Obstructive Pulmonary Disease (COPD), such as emphysema or chronic bronchitis. Because ...

  16. The accuracy of portable peak flow meters.

    PubMed Central

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow generated by a computer-driven, servo-controlled pump and their response was compared with that of a pneumotachograph. RESULTS: For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between-instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid-flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. CONCLUSION: Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results, the error profiles of the full-range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement. PMID:1465746
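
    The pump-based testing described above naturally suggests a calibration correction: fit a smooth mapping from meter reading to reference flow and apply it. The error profile below (a mid-range overread peaking near 400 l/min) is an invented approximation of the reported Wright-type behaviour, not the measured data.

```python
import numpy as np

true_flow = np.linspace(100, 700, 61)          # l/min, pump reference
# Invented Wright-type error: overread peaking at ~60 l/min near 400 l/min
overread = 60.0 * np.exp(-((true_flow - 400.0) / 120.0) ** 2)
meter = true_flow + overread                   # simulated meter readings

# Fit a smooth (quintic) mapping meter reading -> reference flow; normalise
# the abscissa first to keep the polynomial fit well conditioned
m = (meter - meter.mean()) / meter.std()
coef = np.polyfit(m, true_flow, 5)
corrected = np.polyval(coef, m)

max_err_before = np.max(np.abs(meter - true_flow))
max_err_after = np.max(np.abs(corrected - true_flow))
```

    Because the devices are highly repeatable, a per-model correction curve of this kind removes most of the systematic overread even though it cannot help an unrepeatable meter.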

  17. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…

  18. Reconstructing of a Sequence Using Similar Sequences

    1995-11-28

    SIMSEQ reconstructs sequences from oligos. Similar known sequences are used as a reference. At present, simulated data are being used to develop the algorithm. SIMSEQ generates an initial random sequence, then generates a second sequence that is 60 to 90 percent similar to the first. Next, the second sequence is chopped into its appropriate oligos. All possible sequences are reconstructed to determine the most similar. Those with the highest similarity are printed as output.
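The pipeline this record describes can be sketched in a few lines. The sketch below is a toy illustration, not the SIMSEQ code: `mutate`, `chop`, and `oligo_similarity` are hypothetical helpers, and candidate scoring is reduced to the fraction of the target's oligos that a candidate sequence contains.

```python
import random

def mutate(seq, similarity, alphabet="ACGT"):
    """Return a copy of seq in which each base is replaced with probability
    (1 - similarity), mimicking SIMSEQ's 60-90% similar second sequence."""
    out = []
    for base in seq:
        if random.random() < similarity:
            out.append(base)
        else:
            out.append(random.choice([b for b in alphabet if b != base]))
    return "".join(out)

def chop(seq, k):
    """Chop a sequence into its overlapping length-k oligos."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def oligo_similarity(candidate, oligos, k):
    """Fraction of the target's oligos that also occur in the candidate."""
    cand = set(chop(candidate, k))
    return sum(1 for o in oligos if o in cand) / len(oligos)

random.seed(1)
reference = "".join(random.choice("ACGT") for _ in range(60))
target = mutate(reference, similarity=0.8)   # 60-90% similar, as in SIMSEQ
oligos = chop(target, k=8)                   # the "observed" oligo set

# The similar reference scores against the target's oligos, while an
# unrelated random sequence does not -- the signal used to rank candidates.
unrelated = "".join(random.choice("ACGT") for _ in range(60))
print(oligo_similarity(reference, oligos, 8))
print(oligo_similarity(unrelated, oligos, 8))
```

In the full problem, all candidate reconstructions would be scored this way and the highest-similarity candidates printed as output.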

  19. Thematic Relations Affect Similarity via Commonalities

    ERIC Educational Resources Information Center

    Golonka, Sabrina; Estes, Zachary

    2009-01-01

    Thematic relations are an important source of perceived similarity. For instance, the "rowing" theme of boats and oars increases their perceived similarity. The mechanism of this effect, however, has not been specified previously. The authors investigated whether thematic relations affect similarity by increasing commonalities or by decreasing…

  20. Achievement Goals and Achievement Emotions: A Meta-Analysis

    ERIC Educational Resources Information Center

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  1. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane. PMID:26513777

  2. Differential signal scatterometry overlay metrology: an accuracy investigation

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Adel, Mike; Dinu, Berta; Golovanevsky, Boris; Izikson, Pavel; Levinski, Vladimir; Vakshtein, Irina; Leray, Philippe; Vasconi, Mauro; Salski, Bartlomiej

    2007-06-01

    The overlay control budget for the 32nm technology node will be 5.7nm according to the ITRS. The overlay metrology budget is typically 1/10 of the overlay control budget resulting in overlay metrology total measurement uncertainty (TMU) requirements of 0.57nm for the most challenging use cases of the 32nm node. The current state of the art imaging overlay metrology technology does not meet this strict requirement, and further technology development is required to bring it to this level. In this work we present results of a study of an alternative technology for overlay metrology - Differential signal scatterometry overlay (SCOL). Theoretical considerations show that overlay technology based on differential signal scatterometry has inherent advantages, which will allow it to achieve the 32nm technology node requirements and go beyond it. We present results of simulations of the expected accuracy associated with a variety of scatterometry overlay target designs. We also present our first experimental results of scatterometry overlay measurements, comparing this technology with the standard imaging overlay metrology technology. In particular, we present performance results (precision and tool induced shift) and address the issue of accuracy of scatterometry overlay. We show that with the appropriate target design and algorithms scatterometry overlay achieves the accuracy required for future technology nodes.

  3. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    SciTech Connect

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-03-15

This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2% respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting has adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
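The accuracy measures quoted above are simple ratios over reading outcomes. As a minimal sketch (with made-up counts, not the study's data), per-reader and pooled sensitivity and specificity can be computed like this:

```python
def sensitivity(tp, fn):
    """True-positive rate: detected cancers / all cancers."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly cleared normals / all normals."""
    return tn / (tn + fp)

# Hypothetical per-reader counts on a 500-image test set (illustrative only).
readers = [
    {"tp": 46, "fn": 4, "tn": 430, "fp": 20},
    {"tp": 41, "fn": 9, "tn": 440, "fp": 10},
]

for r in readers:
    print(round(sensitivity(r["tp"], r["fn"]), 3),
          round(specificity(r["tn"], r["fp"]), 3))

# Pooled estimates combine the counts across readers before forming ratios.
tp = sum(r["tp"] for r in readers); fn = sum(r["fn"] for r in readers)
tn = sum(r["tn"] for r in readers); fp = sum(r["fp"] for r in readers)
print(round(sensitivity(tp, fn), 3), round(specificity(tn, fp), 3))
```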

  4. Accuracy of discrimination, rate of responding, and resistance to change.

    PubMed Central

    Nevin, John A; Milo, Jessica; Odum, Amy L; Shahan, Timothy A

    2003-01-01

    Pigeons were trained on multiple schedules in which responding on a center key produced matching-to-sample trials according to the same variable-interval 30-s schedules in both components. Matching trials consisted of a vertical or tilted line sample on the center key followed by vertical and tilted comparisons on the side keys. Correct responses to comparison stimuli were reinforced with probability .80 in the rich component and .20 in the lean component. Baseline response rates and matching accuracies generally were higher in the rich component, consistent with previous research. When performance was disrupted by prefeeding, response-independent food during intercomponent intervals, intrusion of a delay between sample and comparison stimuli, or extinction, both response rates and matching accuracies generally decreased. Proportions of baseline response rate were greater in the rich component for all disrupters except delay, which had relatively small and inconsistent effects on response rate. By contrast, delay had large and consistent effects on matching accuracy, and proportions of baseline matching accuracy were greater in the rich component for all four disrupters. The dissociation of response rate and accuracy with delay reflects the localized impact of delay on matching performance. The similarity of the data for response rate and accuracy with prefeeding, response-independent food, and extinction shows that matching performance, like response rate, is more resistant to change in a rich than in a lean component. This result extends resistance to change analyses from the frequency of response emission to the degree of stimulus control, and suggests that the strength of discriminating, like the strength of responding, is positively related to rate of reinforcement. PMID:12908760

  5. Halo abundance matching: accuracy and conditions for numerical convergence

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan

    2015-03-01

Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies - the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that two-body scattering plays a minor role for the accuracy of N-body simulations thanks to the relatively small number of crossing-times of dark matter in haloes, and the limited force resolution of cosmological simulations.

  6. Objective sampling with EAGLE to improve acoustic prediction accuracy

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2003-10-01

Some Navy operations require extensive acoustic calculations. The standard computational approach is to calculate on a regular grid of points and radials. In complex environmental areas, this implies a dense grid and many radials (i.e., long run times) to achieve acceptable accuracy and detail. However, Navy tactical decision aid calculations must be timely and exhibit adequate accuracy or the results may be too old or too imprecise to be valuable. This dilemma led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces irregular acoustic grids [Rike and DelBalzo, Proc. IEEE Oceans (2002)]. Its premise is that physical environmental complexity controls the need for dense sampling in space and azimuth, and that transmission loss already computed for nearby coordinates on previous iterations can be used to predict that complexity. Recent work in this area to further increase accuracy and efficiency by using better metrics and interpolation routines has led to the Efficient Acoustic Gridder for Littoral Environments (EAGLE). On each iteration, EAGLE produces an acoustic field for the entire area of interest with ever-increasing resolution and accuracy. An example is presented where approximately an order of magnitude efficiency improvement (over regular grids) is demonstrated. [Work sponsored by ONR.]

  7. Random forest-based similarity measures for multi-modal classification of Alzheimer's disease.

    PubMed

    Gray, Katherine R; Aljabar, Paul; Heckemann, Rolf A; Hammers, Alexander; Rueckert, Daniel

    2013-01-15

Neurodegenerative disorders, such as Alzheimer's disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multi-modality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities out-performs the classification based on any individual modality for comparisons between Alzheimer's disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer's disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differs by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the future.
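The proximity measure behind this framework - the fraction of trees in which two samples land in the same leaf - can be illustrated with a toy ensemble. The sketch below substitutes random decision stumps for fully grown trees (an assumption made for brevity); the similarity computation itself is the standard random-forest proximity.

```python
import random

def build_stump_forest(X, n_trees=200, seed=0):
    """A toy stand-in for a random forest: each 'tree' is a random
    decision stump (feature index, threshold drawn from the data)."""
    rng = random.Random(seed)
    n_features = len(X[0])
    forest = []
    for _ in range(n_trees):
        f = rng.randrange(n_features)
        t = rng.choice(X)[f]
        forest.append((f, t))
    return forest

def leaf(stump, x):
    """Which side of the stump's split the sample falls on."""
    f, t = stump
    return x[f] <= t

def proximity(forest, a, b):
    """Random-forest proximity: fraction of trees in which samples
    a and b fall into the same leaf."""
    same = sum(1 for s in forest if leaf(s, a) == leaf(s, b))
    return same / len(forest)

# Two nearby points and one distant point in a 3-feature space.
X = [[0.1, 0.2, 0.3], [0.12, 0.19, 0.31], [5.0, 4.0, 6.0]]
forest = build_stump_forest(X)
print(proximity(forest, X[0], X[1]))  # close pair: high similarity
print(proximity(forest, X[0], X[2]))  # distant pair: lower similarity
```

Because the leaf test only compares a sample against learned thresholds, the same machinery works whether a modality contributes ten features or ten thousand, which is what makes forest-derived similarities convenient for combining modalities.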

  8. Random forest-based similarity measures for multi-modal classification of Alzheimer’s disease

    PubMed Central

    Gray, Katherine R.; Aljabar, Paul; Heckemann, Rolf A.; Hammers, Alexander; Rueckert, Daniel

    2012-01-01

Neurodegenerative disorders, such as Alzheimer’s disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multimodality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities out-performs classification based on any individual modality for comparisons between Alzheimer’s disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer’s disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differs by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the

  9. Genetic similarity and hatching success in birds.

    PubMed Central

    Spottiswoode, Claire; Møller, Anders Pape

    2004-01-01

    The ecological correlates of fitness costs of genetic similarity in free-living, large populations of organisms are poorly understood. Using a dataset of genetic similarity as reflected by band-sharing coefficients of minisatellites, we show that bird species with higher genetic similarity experience elevated hatching failure of eggs, increasing by a factor of six across 99 species. Island distributions and cooperative breeding systems in particular were associated with elevated genetic similarity. These findings provide comparative evidence of detrimental fitness consequences of high genetic similarity across a wide range of species, and help to identify ecological factors potentially associated with increased risk of extinction. PMID:15058437

  10. Using Reaction Mechanism to Measure Enzyme Similarity

    PubMed Central

    O’Boyle, Noel M.; Holliday, Gemma L.; Almonacid, Daniel E.; Mitchell, John B.O.

    2012-01-01

The concept of reaction similarity has been well-studied in terms of the overall transformation associated with a reaction, but not in terms of mechanism. We present the first method to give a quantitative measure of the similarity of reactions based upon their explicit mechanisms. Two approaches are presented to measure the similarity between individual steps of mechanisms: a fingerprint-based approach which incorporates relevant information on each mechanistic step, and an approach based only on bond formation, cleavage and changes in order. The overall similarity for two reaction mechanisms is then calculated using the Needleman-Wunsch alignment algorithm. An analysis of MACiE, a database of enzyme mechanisms, using our measure of similarity identifies some examples of convergent evolution of chemical mechanism. In many cases mechanism similarity is not reflected by similarity according to the EC system of enzyme classification. In particular, little mechanistic information is conveyed by the class level of the EC. PMID:17400244
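The alignment step can be sketched directly. In the toy version below, each mechanism is a list of steps and each step a set of bond-change labels (hypothetical labels, not MACiE's actual encoding); step similarity is a Jaccard index, and the overall score is standard Needleman-Wunsch with a gap penalty.

```python
def step_similarity(a, b):
    """Similarity between two mechanistic steps: a Jaccard index over
    their sets of bond changes (formation / cleavage / order change)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def align_mechanisms(m1, m2, gap=-0.5):
    """Needleman-Wunsch global alignment score for two mechanisms,
    each given as a list of bond-change sets, one per step."""
    n, m = len(m1), len(m2)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                      # leading gaps in m2
    for j in range(1, m + 1):
        F[0][j] = j * gap                      # leading gaps in m1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + step_similarity(m1[i-1], m2[j-1]),
                          F[i-1][j] + gap,     # gap in m2
                          F[i][j-1] + gap)     # gap in m1
    return F[n][m]

# Two hypothetical three-step mechanisms sharing their first two steps.
mech_a = [{"form C-O"}, {"cleave C-H"}, {"form O-H"}]
mech_b = [{"form C-O"}, {"cleave C-H"}, {"cleave N-H"}]
print(align_mechanisms(mech_a, mech_a))  # identical mechanisms: 3.0
print(align_mechanisms(mech_a, mech_b))  # two of three steps shared: 2.0
```

A fingerprint-based step similarity, as in the paper's first approach, would slot into `step_similarity` without changing the alignment itself.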

  11. Entrepreneur achievement. Liaoning province.

    PubMed

    Zhao, R

    1994-03-01

This paper reports the successful entrepreneurial endeavors of members of a 20-person women's group in Liaoning Province, China. Jing Yuhong, a member of the Family Planning Association at Shileizi Village, Dalian City, provided the basis for their achievements by first building an entertainment/study room in her home to encourage married women to learn family planning. Once stocked with books, magazines, pamphlets, and other materials on family planning and agricultural technology, dozens of married women in the neighborhood flocked voluntarily to the room. Yuhong also set out to give these women a way to earn their own income as a means of helping them gain greater equality with their husbands and exert greater control over their personal reproductive and social lives. She gave a section of her farming land to the women's group, loaned approximately US$5200 to group members to help them generate income from small business initiatives, built a livestock shed in her garden for the group to raise marmots, and erected an awning behind her house under which mushrooms could be grown. The investment yielded $12,000 in the first year, allowing each woman to keep more than $520 in dividends. Members soon began going to fairs in the capital and other places to learn about the outside world, and have successfully ventured out on their own to generate individual incomes. Ten out of twenty women engaged in these income-generating activities asked for and got the one-child certificate. PMID:12287775

  12. Solving the accuracy-diversity dilemma via directed random walks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Shi, Kerui; Guo, Qiang

    2012-01-01

Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which achieve high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendations, but the most useful individual recommendations are hard to find among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is inversely related to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperform the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance.

  13. Solving the accuracy-diversity dilemma via directed random walks.

    PubMed

    Liu, Jian-Guo; Shi, Kerui; Guo, Qiang

    2012-01-01

Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which achieve high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendations, but the most useful individual recommendations are hard to find among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is inversely related to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperform the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance. PMID:22400636
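The degree effect described above can be seen in a few lines. The sketch below is a toy two-step directed random walk on a small user-object bipartite graph, not the paper's full algorithm: the walk from user i to user j is normalized by the source degree k_i, so the similarity is asymmetric and shrinks as the source degree grows.

```python
def directed_similarity(A, i, j):
    """Two-step directed random-walk similarity from user i to user j.
    A[u][o] = 1 if user u collected object o.  The walker leaves i with
    probability 1/k_i per collected object, then moves from object o to
    user j with probability 1/k_o, so the result scales as 1/k_i."""
    n_users, n_objects = len(A), len(A[0])
    k_i = sum(A[i])
    if k_i == 0:
        return 0.0
    s = 0.0
    for o in range(n_objects):
        if A[i][o] and A[j][o]:
            k_o = sum(A[u][o] for u in range(n_users))
            s += 1.0 / (k_i * k_o)
    return s

# Users 0 and 1 share two objects, but user 1 has a much larger degree.
A = [[1, 1, 0, 0, 0],
     [1, 1, 1, 1, 1],
     [0, 0, 0, 1, 1]]
print(directed_similarity(A, 0, 1))  # small-degree source: larger value
print(directed_similarity(A, 1, 0))  # large-degree source: smaller value
```

Reversing the walk direction therefore reweights similarities in favor of small-degree (niche-oriented) users, which is the lever the paper uses to trade accuracy against diversity.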

  14. Imputation Accuracy from Low to Moderate Density Single Nucleotide Polymorphism Chips in a Thai Multibreed Dairy Cattle Population

    PubMed Central

    Jattawa, Danai; Elzo, Mauricio A.; Koonawootrittriron, Skorn; Suwanasopee, Thanathip

    2016-01-01

The objective of this study was to investigate the accuracy of imputation from low density (LDC) to moderate density SNP chips (MDC) in a Thai Holstein-Other multibreed dairy cattle population. Dairy cattle with complete pedigree information (n = 1,244) from 145 dairy farms were genotyped with GeneSeek GGP20K (n = 570), GGP26K (n = 540) and GGP80K (n = 134) chips. After checking for single nucleotide polymorphism (SNP) quality, 17,779 SNP markers in common between the GGP20K, GGP26K, and GGP80K were used to represent MDC. Animals were divided into two groups, a reference group (n = 912) and a test group (n = 332). The SNP markers chosen for the test group were those located in positions corresponding to GeneSeek GGP9K (n = 7,652). The LDC to MDC genotype imputation was carried out using three different software packages, namely Beagle 3.3 (population-based algorithm), FImpute 2.2 (combined family- and population-based algorithms) and Findhap 4 (combined family- and population-based algorithms). Imputation accuracies within and across chromosomes were calculated as ratios of correctly imputed SNP markers to overall imputed SNP markers. Imputation accuracy for the three software packages ranged from 76.79% to 93.94%. FImpute had higher imputation accuracy (93.94%) than Findhap (84.64%) and Beagle (76.79%). Imputation accuracies were similar and consistent across chromosomes for FImpute, but not for Findhap and Beagle. Most chromosomes that showed either high (73%) or low (80%) imputation accuracies were the same chromosomes that had above and below average linkage disequilibrium (LD; defined here as the correlation between pairs of adjacent SNP within chromosomes less than or equal to 1 Mb apart). Results indicated that FImpute was more suitable than Findhap and Beagle for genotype imputation in this Thai multibreed population. Perhaps additional increments in imputation accuracy could be achieved by increasing the completeness of pedigree information. PMID:26949946

  15. Imputation Accuracy from Low to Moderate Density Single Nucleotide Polymorphism Chips in a Thai Multibreed Dairy Cattle Population.

    PubMed

    Jattawa, Danai; Elzo, Mauricio A; Koonawootrittriron, Skorn; Suwanasopee, Thanathip

    2016-04-01

The objective of this study was to investigate the accuracy of imputation from low density (LDC) to moderate density SNP chips (MDC) in a Thai Holstein-Other multibreed dairy cattle population. Dairy cattle with complete pedigree information (n = 1,244) from 145 dairy farms were genotyped with GeneSeek GGP20K (n = 570), GGP26K (n = 540) and GGP80K (n = 134) chips. After checking for single nucleotide polymorphism (SNP) quality, 17,779 SNP markers in common between the GGP20K, GGP26K, and GGP80K were used to represent MDC. Animals were divided into two groups, a reference group (n = 912) and a test group (n = 332). The SNP markers chosen for the test group were those located in positions corresponding to GeneSeek GGP9K (n = 7,652). The LDC to MDC genotype imputation was carried out using three different software packages, namely Beagle 3.3 (population-based algorithm), FImpute 2.2 (combined family- and population-based algorithms) and Findhap 4 (combined family- and population-based algorithms). Imputation accuracies within and across chromosomes were calculated as ratios of correctly imputed SNP markers to overall imputed SNP markers. Imputation accuracy for the three software packages ranged from 76.79% to 93.94%. FImpute had higher imputation accuracy (93.94%) than Findhap (84.64%) and Beagle (76.79%). Imputation accuracies were similar and consistent across chromosomes for FImpute, but not for Findhap and Beagle. Most chromosomes that showed either high (73%) or low (80%) imputation accuracies were the same chromosomes that had above and below average linkage disequilibrium (LD; defined here as the correlation between pairs of adjacent SNP within chromosomes less than or equal to 1 Mb apart). Results indicated that FImpute was more suitable than Findhap and Beagle for genotype imputation in this Thai multibreed population. Perhaps additional increments in imputation accuracy could be achieved by increasing the completeness of pedigree information. PMID:26949946

  16. The Homogeneity of School Achievement.

    ERIC Educational Resources Information Center

    Cahan, Sorel

    Since the measurement of school achievement involves the administration of achievement tests to various grades on various subjects, both grade level and subject matter contribute to within-school achievement variations. To determine whether achievement test scores vary most among different fields within a grade level, or within fields among…

  17. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

Existing simulation work emphasizes procedural verification, focusing on the simulation models rather than the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Because accuracy is the key to credibility assessment and fidelity studies, an all-round discussion of the accuracy of distributed simulation systems themselves is needed. First, the major elements of distributed simulation systems are summarized; these serve as the basis for defining, classifying and describing their accuracy. In Part 2, a comprehensive framework for the accuracy of distributed simulation systems is presented, making it easier to analyze and assess their uncertainty. In Part 3, the concept of accuracy is decomposed into four factors, each analyzed in turn. In Part 4, building on a formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. A real distributed simulation system based on HLA is then used as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to the accuracy analysis of distributed simulation systems.

  18. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  19. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  20. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

GPS orbit accuracy is examined using several evaluation procedures. Unmodeled effects that correlate with the eclipsing of the sun are shown to exist. The ability to obtain geodetic results with an accuracy of 1-2 parts in 10^8 or better has not diminished.

  1. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  2. Individual Differences in Eyewitness Recall Accuracy.

    ERIC Educational Resources Information Center

    Berger, James D.; Herringer, Lawrence G.

    1991-01-01

    Presents study results comparing college students' self-evaluation of recall accuracy to actual recall of detail after viewing a crime scenario. Reports that self-reported ability to remember detail correlates with accuracy in memory of specifics. Concludes that people may have a good indication early in the eyewitness situation of whether they…

  3. Scientific Sources' Perception of Network News Accuracy.

    ERIC Educational Resources Information Center

    Moore, Barbara; Singletary, Michael

    Recent polls seem to indicate that many Americans rely on television as a credible and primary source of news. To test the accuracy of this news, a study examined three networks' newscasts of science news, the attitudes of the science sources toward reporting in their field, and the factors related to accuracy. The Vanderbilt News Archives Index…

  4. Accuracy of Carbohydrate Counting in Adults.

    PubMed

    Meade, Lisa T; Rushton, Wanda E

    2016-07-01

    In Brief This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  5. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  6. Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories

    NASA Technical Reports Server (NTRS)

    Kay-Bunnell, Linda

    2004-01-01

    The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.
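    The refinement step described above, folding additional measurements into a preliminary orbit solution through a Kalman filter, can be sketched in scalar form. The state and noise values below are illustrative only, not the paper's actual orbit filter:

    ```python
    def kalman_update(x, P, z, R):
        """One scalar Kalman measurement update.
        x: state estimate, P: its variance,
        z: new measurement, R: measurement variance."""
        K = P / (P + R)          # Kalman gain
        x_new = x + K * (z - x)  # blend prediction with measurement
        P_new = (1 - K) * P      # variance always shrinks
        return x_new, P_new

    # Each added observation tightens the estimate, mirroring how
    # longer tracking durations improve orbit determination accuracy.
    x, P = 100.0, 25.0
    for z in [102.0, 101.0, 103.0]:
        x, P = kalman_update(x, P, z, 4.0)
    ```

    Because the variance P shrinks with every update, an observatory with weaker geometry (less parallax per unit time) can still converge to a comparable accuracy by accumulating more measurements, which is the trade-off the abstract describes.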

  7. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  8. Optimizing the geometrical accuracy of curvilinear meshes

    NASA Astrophysics Data System (ADS)

    Toulorge, Thomas; Lambrechts, Jonathan; Remacle, Jean-François

    2016-04-01

    This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh that is subsequently curved without regard to the validity of the high order elements. An optimization procedure is then used both to untangle invalid elements and to optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a particular role played by the enhanced smoothness of the mesh boundary.
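    The Hausdorff distance used above to score geometrical accuracy can be computed for discretely sampled curves with a brute-force sketch like the following (the point sets and sampling density are illustrative; production meshing codes use far more efficient estimates, which is precisely the paper's motivation):

    ```python
    import math

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two point sets
        sampled along curves (brute force, fine for small sets)."""
        def directed(P, Q):
            return max(min(math.dist(p, q) for q in Q) for p in P)
        return max(directed(A, B), directed(B, A))

    # A straight segment vs. a copy displaced by 0.1:
    curve = [(x / 10, 0.0) for x in range(11)]
    shifted = [(x / 10, 0.1) for x in range(11)]
    d = hausdorff(curve, shifted)
    ```

    The double max-min makes every point of each curve accountable to the other, so a single badly placed mesh edge dominates the score; that is why the metric is a natural target for the optimization but too expensive to evaluate inside the inner loop.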

  9. Testing the accuracy of synthetic stellar libraries

    NASA Astrophysics Data System (ADS)

    Martins, Lucimara P.; Coelho, Paula

    2007-11-01

    One of the main ingredients of stellar population synthesis models is a library of stellar spectra. Both empirical and theoretical libraries are used for this purpose, and the question of which one is preferable is still debated in the literature. Empirical and theoretical libraries have been improved significantly over the years, and many libraries have become available lately. However, it is not clear from the literature what the advantages of using each of these new libraries are, or how far models lag behind observations. Here we compare in detail some of the major theoretical libraries available in the literature with observations, aiming at detecting weaknesses and strengths from the stellar population modelling point of view. Our test is twofold: we compared model predictions and observations for broad-band colours and for high-resolution spectral features. Concerning the broad-band colours, we measured the stellar colour given by three recent sets of model atmospheres and flux distributions, and compared them with a recent UBVRIJHK calibration which is mostly based on empirical data. We found that the models can reproduce with reasonable accuracy the stellar colours for a fair interval in effective temperatures and gravities. The exceptions are (1) the U - B colour, where the models are typically redder than the observations, and (2) the very cool stars in general (V - K >~ 3). Castelli & Kurucz is the set of models that best reproduces the bluest colours (U - B, B - V), while Gustafsson et al. and Brott & Hauschildt more accurately predict the visual colours. The three sets of models perform in a similar way for the infrared colours. Concerning the high-resolution spectral features, we measured 35 spectral indices defined in the literature on three high-resolution synthetic libraries, and compared them with the observed measurements given by three empirical libraries. The measured indices cover the wavelength range from ~3500 to ~8700Å. We…

  10. The adaptive accuracy of flowers: measurement and microevolutionary patterns

    PubMed Central

    Armbruster, W. Scott; Hansen, Thomas F.; Pélabon, Christophe; Pérez-Barrales, Rocío; Maad, Johanne

    2009-01-01

    Background and Aims From Darwin's time onward, biologists have thought about adaptation as evolution toward optimal trait values, but they have not usually assessed the relative importance of the distinct causes of deviations from optima. This problem is investigated here by measuring adaptive inaccuracy (phenotypic deviation from the optimum), using flower pollination as an adaptive system. Methods Adaptive accuracy is shown to have at least three distinct components, two of which are optimality (deviation of the mean from the optimum) and precision (trait variance). We then describe adaptive accuracy of both individuals and populations. Individual inaccuracy comprises the deviation of the genotypic target (the mean phenotype of a genotype grown in a range of environments) from the optimum and the phenotypic variation around that genotypic target (phenotypic imprecision). Population inaccuracy has three basic components: deviation of the population mean from the optimum, variance in the genotypic targets and phenotypic imprecision. In addition, a fourth component is proposed, namely within-population variation in the optimum. These components are directly estimable, have additive relationships, and allow exploration of the causes of adaptive inaccuracy of both individuals and populations. Adaptive accuracy of a sample of flowers is estimated, relating floral phenotypes controlling pollen deposition on pollinators to adaptive optima defined as the site most likely to get pollen onto stigmas (male inaccuracy). Female inaccuracy is defined as the deviation of the position of stigma contact from the expected location of pollen on pollinators. Key Results A surprising amount of variation in estimated accuracy within and among similar species is found. Some of this variation is generated by developmental changes in positions of stigmas or anthers during anthesis (the floral receptive period), which can cause dramatic change in accuracy estimates. 
There seem to be trends…

  11. Mapping shorelines to subpixel accuracy using Landsat imagery

    NASA Astrophysics Data System (ADS)

    Abileah, Ron; Vignudelli, Stefano; Scozzari, Andrea

    2013-04-01

    A promising method to accurately map the shoreline of oceans, lakes, reservoirs, and rivers is proposed and verified in this work. The method is applied to multispectral satellite imagery in two stages. The first stage is a classification of each image pixel into land/water categories using the conventional 'dark pixel' method. The approach presented here makes use of a single shortwave IR (SWIR) image band, if available. It is well known that SWIR has the least water-leaving radiance and relatively little sensitivity to water pollutants and suspended sediments. It is generally the darkest (over water) and most reliable single band for land-water discrimination. The boundary of the water cover map determined in stage 1 underestimates the water cover and often misses the true shoreline by up to one pixel. A more accurate shoreline is obtained by connecting the center points of pixels with exactly a 50-50 mix of water and land. Stage 2 then finds the 50-50 mix points. In the proposed method, image data are interpolated and up-sampled to ten times the original resolution. The local gradient in radiance is used to find the direction to the shore, and the path along that direction is searched for the interpolated pixel closest to a 50-50 mix. Landsat images with 30m resolution, processed by this method, may thus provide a shoreline accurate to 3m. Compared to similar approaches in the literature, the proposed method discriminates sub-pixels crossed by the shoreline using a criterion based on the absolute value of radiance rather than its gradient. Preliminary experimentation with the algorithm shows that 10m accuracy is easily achieved and is often better than 5m. The proposed method can be used to study long-term shoreline changes by exploiting the 30 years of archived, world-wide coverage Landsat imagery. Landsat imagery is free and easily accessible for downloading. Some applications that exploit the Landsat dataset and…
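    A minimal one-dimensional sketch of the stage-2 idea: interpolate and up-sample a radiance profile tenfold, then pick the fractional pixel index closest to the 50-50 mix value. The profile and threshold below are made up for illustration; the real method works on 2-D imagery along the shore-normal direction:

    ```python
    def subpixel_crossing(values, threshold, upsample=10):
        """Linearly interpolate a 1-D radiance profile and return the
        fractional pixel index whose value is closest to the
        threshold (the 50-50 land/water mix)."""
        best_i, best_diff = 0.0, float("inf")
        n = (len(values) - 1) * upsample
        for k in range(n + 1):
            t = k / upsample                     # fractional index
            i = min(int(t), len(values) - 2)     # left neighbour
            v = values[i] + (t - i) * (values[i + 1] - values[i])
            if abs(v - threshold) < best_diff:
                best_i, best_diff = t, abs(v - threshold)
        return best_i

    # Dark water (0.1) jumps to bright land (0.9) between pixels 2
    # and 3, so the 0.5 level sits at fractional index 2.5.
    profile = [0.1, 0.1, 0.1, 0.9, 0.9]
    crossing = subpixel_crossing(profile, 0.5)
    ```

    With ten-times up-sampling, a 30m Landsat pixel is resolved to 3m steps along the profile, which is the accuracy claim the abstract makes.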

  12. Evaluating Whole Chemical Mixtures and Sufficient Similarity

    EPA Science Inventory

    This PowerPoint presentation supports a presentation describing dose-response assessment for complex chemical mixtures, including deriving reference doses for mixtures and evaluating sufficient similarity among chemical mixtures.

  13. High accuracy ground target location using loitering munitions platforms

    NASA Astrophysics Data System (ADS)

    Wang, Zhifei; Wang, Hua; Han, Jing

    2011-08-01

    Precise ground target localization is an interesting problem relevant not only to military but also to civilian applications, and this is expected to be an emerging field with many potential uses. Ground target location using Loitering Munitions (LM) requires estimation of aircraft position and attitude to a high degree of accuracy, and data derived by processing sensor images can supplement other navigation sensor information and increase the reliability and accuracy of navigation estimates during this flight phase. This paper presents a method for high accuracy ground target localization using LM equipped with a video camera sensor. The proposed method is based on a satellite or aerial image matching technique. In order to acquire the ground target position intelligently and rapidly, and to improve localization accuracy by estimating the target position jointly with the systematic LM and camera attitude measurement errors, several techniques are proposed. First, ground target geo-location based on ray tracing was used for comparison against our approach; with the proposed methods, the transformation from pixel to world coordinates can be computed. A Hough transform was then used for image alignment, and a median filter was applied to remove small details that are visible in the sensed image but not in the reference image. Finally, a novel edge detection method and an image matching algorithm based on bifurcation extraction were proposed. The method does not require accurate knowledge of the aircraft position and attitude or high performance sensors; it is therefore especially suitable for LM, which cannot carry accurate sensors because of their limited payload weight and power resources. The results of simulation experiments and theoretical analysis demonstrate that high accuracy ground target localization is achieved with low performance sensors, and in a timely manner. The method is used in…

  14. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM

    PubMed Central

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2015-01-01

    Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for the prediction of protein structural class, and it mainly uses protein primary sequence, predicted secondary structure sequence, and position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. Then a 700-dimensional (700D) feature vector is constructed and the dimension is decreased to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with the existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences. PMID:26788119
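    The PCA step that projects the 700D feature vector down to 224D can be sketched with an SVD-based projection. The matrix sizes below are small stand-ins chosen for brevity, not the paper's actual 700D features:

    ```python
    import numpy as np

    def pca_reduce(X, k):
        """Project the rows of X onto the top-k principal components."""
        Xc = X - X.mean(axis=0)                    # center each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                       # scores in reduced space

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 12))   # stand-in for the 700D feature vectors
    Z = pca_reduce(X, 4)            # stand-in for the 224D reduction
    ```

    Keeping only the leading components discards directions of low variance, which both shrinks the feature vector and tends to suppress noise before the classifier is trained.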

  15. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM.

    PubMed

    Liang, Yunyun; Liu, Sanyang; Zhang, Shengli

    2015-01-01

    Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant for the prediction of protein structural class, and it mainly uses protein primary sequence, predicted secondary structure sequence, and position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. Then a 700-dimensional (700D) feature vector is constructed and the dimension is decreased to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with the existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences. PMID:26788119

  16. Large-Scale Chemical Similarity Networks for Target Profiling of Compounds Identified in Cell-Based Chemical Screens

    PubMed Central

    Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.

    2015-01-01

    Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis and have limited capabilities for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomic, genetic, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
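    A chemical similarity network of the kind CSNAP builds can be sketched by thresholding pairwise fingerprint similarity. The Tanimoto coefficient and the toy fingerprints below are illustrative, not CSNAP's actual pipeline:

    ```python
    def tanimoto(fp1, fp2):
        """Tanimoto coefficient between two binary fingerprints,
        each given as the set of its 'on' bit positions."""
        inter = len(fp1 & fp2)
        return inter / (len(fp1) + len(fp2) - inter)

    # Compounds whose fingerprints exceed a similarity cutoff are
    # connected; connected clusters approximate chemotypes.
    fps = {"a": {1, 2, 3, 4}, "b": {2, 3, 4, 5}, "c": {7, 8}}
    edges = [(x, y) for x in fps for y in fps
             if x < y and tanimoto(fps[x], fps[y]) >= 0.5]
    ```

    Working on the whole network at once, rather than querying one ligand at a time, is what lets a consensus chemotype emerge from a large, structurally diverse hit list.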

  17. The Role of Feedback on Studying, Achievement and Calibration.

    ERIC Educational Resources Information Center

    Chu, Stephanie T. L.; Jamieson-Noel, Dianne L.; Winne, Philip H.

    One set of hypotheses examined in this study was that various types of feedback (outcome, process, and corrective) supply different information about performance and have different effects on studying processes and on achievement. Another set of hypotheses concerned students' calibration, their accuracy in predicting and postdicting achievement…

  18. Two Phase Non-Rigid Multi-Modal Image Registration Using Weber Local Descriptor-Based Similarity Metrics and Normalized Mutual Information

    PubMed Central

    Yang, Feng; Ding, Mingyue; Zhang, Xuming; Wu, Yi; Hu, Jiani

    2013-01-01

    Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and sum of squared differences (SSD) cannot achieve either high registration accuracy or high registration efficiency. To address this problem, we propose a novel two phase non-rigid multi-modal image registration method by combining Weber local descriptor (WLD) based similarity metrics with the normalized mutual information (NMI) using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the former phase, the second phase is focused on getting accurate transformation parameters related to the small deformation using the NMI. Extensive experiments on T1, T2 and PD weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms the registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computation efficiency. PMID:23765270

  19. RAPSearch: a fast protein similarity search tool for short reads

    PubMed Central

    2011-01-01

    Background Next Generation Sequencing (NGS) is producing enormous corpora of short DNA reads, affecting emerging fields like metagenomics. Protein similarity search, a key step in annotating protein-coding genes in these short reads and identifying their biological functions, faces daunting challenges because of the sheer size of short read datasets. Results We developed a fast protein similarity search tool, RAPSearch, that utilizes a reduced amino acid alphabet and a suffix array to detect seeds of flexible length. For the short reads (translated in 6 frames) we tested, RAPSearch achieved a ~20-90 times speedup compared to BLASTX. RAPSearch missed only a small fraction (~1.3-3.2%) of BLASTX similarity hits, but it also discovered additional homologous proteins (~0.3-2.1%) that BLASTX missed. By contrast, BLAT, a tool that is even slightly faster than RAPSearch, suffered a significant loss of sensitivity compared to RAPSearch and BLAST. Conclusions RAPSearch is implemented as open-source software and is accessible at http://omics.informatics.indiana.edu/mg/RAPSearch. It enables faster protein similarity search. The application of RAPSearch in metagenomics has also been demonstrated. PMID:21575167
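    The reduced amino acid alphabet idea can be sketched as a residue-grouping map. The grouping below is a common illustrative scheme based on physicochemical similarity, not necessarily RAPSearch's actual alphabet:

    ```python
    # Group residues with similar chemistry so that seed matches
    # tolerate conservative substitutions (illustrative grouping).
    GROUPS = {"ILVM": "I", "FYW": "F", "KRH": "K", "DE": "D",
              "STNQ": "S", "AG": "A", "C": "C", "P": "P"}
    REDUCE = {aa: code for group, code in GROUPS.items() for aa in group}

    def reduce_seq(seq):
        """Map a protein sequence onto the reduced alphabet."""
        return "".join(REDUCE[aa] for aa in seq)

    # Conservative substitutions (L->V, K->R) collapse to the same
    # reduced string, so a seed still matches across the variants.
    ```

    Collapsing the 20-letter alphabet this way makes seeds match homologs despite point substitutions, while the suffix array keeps the lookup fast; the small sensitivity loss quoted in the abstract is the price of that collapsing.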

  20. Acute anxiety impairs accuracy in identifying photographed faces.

    PubMed

    Attwood, Angela S; Penton-Voak, Ian S; Burton, A Mike; Munafò, Marcus R

    2013-08-01

    We investigated whether acutely induced anxiety modifies the ability to match photographed faces. Establishing the extent to which anxiety affects face-matching accuracy is important because of the relevance of face-matching performance to critical security-related applications. Participants (N = 28) completed the Glasgow Face Matching Test twice, once during a 20-min inhalation of medical air and once during a similar inhalation of air enriched with 7.5% CO2, which is a validated method for inducing acute anxiety. Anxiety degraded performance, but only with respect to hits, not false alarms. This finding provides further support for the dissociation between the ability to accurately identify a genuine match between faces and the ability to identify the lack of a match. Problems with the accuracy of facial identification are not resolved even when viewers are presented with a good photographic image of a face, and identification inaccuracy may be heightened when viewers are experiencing acute anxiety. PMID:23780726

  1. Considerations for using research data to verify clinical data accuracy.

    PubMed

    Fort, Daniel; Weng, Chunhua; Bakken, Suzanne; Wilcox, Adam B

    2014-01-01

    Collected to support clinical decisions and processes, clinical data may be subject to validity issues when used for research. The objective of this study is to examine methods and issues in summarizing and evaluating the accuracy of clinical data as compared to primary research data. We hypothesized that research survey data on a patient cohort could serve as a reference standard for uncovering potential biases in clinical data. We compared the summary statistics between clinical and research datasets. Seven clinical variables, i.e., height, weight, gender, ethnicity, systolic and diastolic blood pressure, and diabetes status, were included in the study. Our results show that the clinical data and research data had similar summary statistical profiles, but there are detectable differences in definitions and measurements for individual variables such as height, diastolic blood pressure, and diabetes status. We discuss the implications of these results and confirm the important considerations for using research data to verify clinical data accuracy. PMID:25717415

  2. Solving the apparent diversity-accuracy dilemma of recommender systems

    PubMed Central

    Zhou, Tao; Kuscsik, Zoltán; Liu, Jian-Guo; Medo, Matúš; Wakeling, Joseph Rushton; Zhang, Yi-Cheng

    2010-01-01

    Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations. PMID:20176968
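    The accuracy-diversity trade-off can be sketched as a tunable blend of two scoring functions. The linear blend below is a simplification for illustration; the paper's hybrid couples diffusion-based algorithms within the recommendation process itself:

    ```python
    def hybrid_scores(accuracy_scores, diversity_scores, lam):
        """Blend an accuracy-focused score with a diversity-favoring
        score; lam=1 is pure accuracy, lam=0 pure diversity."""
        return {item: lam * accuracy_scores[item]
                + (1 - lam) * diversity_scores[item]
                for item in accuracy_scores}

    # Toy scores: a popular item rates high on accuracy, a niche
    # item rates high on diversity.
    popular = {"blockbuster": 0.9, "niche": 0.3}
    novel = {"blockbuster": 0.2, "niche": 0.8}
    ```

    Sweeping the tuning parameter traces out a frontier between the two objectives; the paper's result is that an appropriately tuned hybrid can improve both at once, rather than merely trading one for the other.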

  3. Solving the apparent diversity-accuracy dilemma of recommender systems.

    PubMed

    Zhou, Tao; Kuscsik, Zoltán; Liu, Jian-Guo; Medo, Matús; Wakeling, Joseph Rushton; Zhang, Yi-Cheng

    2010-03-01

    Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations. PMID:20176968

  4. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    PubMed

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that with the appropriate modeling the VIAScan® system offers a workable method for predicting lean meat yield automatically. PMID:22061323
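    The model comparison reported above as R² values can be reproduced in outline with an ordinary least-squares fit; adding predictors (as when the GR measure is replaced by the 8 VIAScan® measures) can only raise the in-sample R². The data below are synthetic stand-ins, not the study's carcass measurements:

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit of y on the columns
        of X (a column of ones is added for the intercept)."""
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    rng = np.random.default_rng(1)
    hcw = rng.normal(20, 3, 200)       # stand-in: hot carcass weight
    gr = rng.normal(10, 2, 200)        # stand-in: GR tissue depth
    lean = 0.5 * hcw - 0.8 * gr + rng.normal(0, 2, 200)

    r2_one = r_squared(hcw[:, None], lean)            # HCW alone
    r2_two = r_squared(np.column_stack([hcw, gr]), lean)
    ```

    The residual standard deviation quoted in the abstract is simply the standard deviation of `resid`; the study's further step of fitting principal components instead of raw measures stabilizes the coefficients when predictors are correlated.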

  5. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
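    One way to read the anatomy-aware extension of the Dice coefficient is as a pixel-weighted overlap in which pixels inside critical anatomical zones carry larger weights. The weighting below is an illustrative interpretation, not the paper's exact formulation:

    ```python
    def weighted_dice(seg, truth, weights):
        """Dice coefficient where each pixel id carries an anatomical
        weight (reduces to plain Dice when all weights are 1).
        seg/truth: sets of pixel ids; weights: pixel id -> weight."""
        def w(pixels):
            return sum(weights.get(p, 1.0) for p in pixels)
        return 2 * w(seg & truth) / (w(seg) + w(truth))

    truth = {1, 2, 3, 4}
    seg = {2, 3, 4, 5}                 # misses pixel 1, adds pixel 5
    plain = weighted_dice(seg, truth, {})
    critical = weighted_dice(seg, truth, {1: 5.0})  # pixel 1 is critical
    ```

    Missing an up-weighted pixel lowers the score far more than missing an ordinary one, so two segmentations with identical plain Dice can rank differently once anatomy is taken into account, which is the reordering effect the abstract reports.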

  6. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  7. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  8. The NASA High Accuracy Fuel Flowmeter (HAFF) Development Program

    NASA Technical Reports Server (NTRS)

    Hobart, H. F.

    1983-01-01

    The high accuracy fuel flowmeter development program is described. A flightworthy meter that measures mass flowrate of aircraft fuels to within ±0.25% of reading over a 50:1 range of flow is developed. A study of measurement techniques to achieve this goal yielded three candidates: (1) a dual turbine flowmeter with density and viscosity compensation; (2) an angular momentum flowmeter with a motor-driven, spring-restrained turbine and viscosity shroud; and (3) a vortex precession flowmeter with density and viscosity compensation. An experimental study of each technique was completed and the first two candidates were selected for prototype development.

  9. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short-stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016. PMID:26553748

  10. The Impact of Reading Achievement on Overall Academic Achievement

    ERIC Educational Resources Information Center

    Churchwell, Dawn Earheart

    2009-01-01

    This study examined the relationship between reading achievement and achievement in other subject areas. The purpose of this study was to determine if there was a correlation between reading scores as measured by the Standardized Test for the Assessment of Reading (STAR) and academic achievement in language arts, math, science, and social studies…

  11. Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement

    ERIC Educational Resources Information Center

    Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina

    2015-01-01

    The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…

  12. Predicting Mathematics Achievement: The Influence of Prior Achievement and Attitudes

    ERIC Educational Resources Information Center

    Hemmings, Brian; Grootenboer, Peter; Kay, Russell

    2011-01-01

    Achievement in mathematics is inextricably linked to future career opportunities, and therefore, understanding those factors that influence achievement is important. This study sought to examine the relationships among attitude towards mathematics, ability and mathematical achievement. This examination was also supported by a focus on gender…

  13. Evaluating the accuracy of molecular diagnostic testing for canine visceral leishmaniasis using latent class analysis.

    PubMed

    Solcà, Manuela da Silva; Bastos, Leila Andrade; Guedes, Carlos Eduardo Sampaio; Bordoni, Marcelo; Borja, Lairton Souza; Larangeira, Daniela Farias; da Silva Estrela Tuy, Pétala Gardênia; Amorim, Leila Denise Alves Ferreira; Nascimento, Eliane Gomes; de Sá Oliveira, Geraldo Gileno; dos-Santos, Washington Luis Conrado; Fraga, Deborah Bittencourt Mothé; Veras, Patrícia Sampaio Tavares

    2014-01-01

    Host tissues affected by Leishmania infantum have differing degrees of parasitism. Previously, the use of different biological tissues to detect L. infantum DNA in dogs has provided variable results. The present study was conducted to evaluate the accuracy of molecular diagnostic testing (qPCR) in dogs from an endemic area for canine visceral leishmaniasis (CVL) by determining which tissue type provided the highest rate of parasite DNA detection. Fifty-one symptomatic dogs were tested for CVL using serological, parasitological and molecular methods. Latent class analysis (LCA) was performed for accuracy evaluation of these methods. qPCR detected parasite DNA in 100% of these animals from at least one of the following tissues: splenic and bone marrow aspirates, lymph node and skin fragments, blood and conjunctival swabs. Using latent variable as gold standard, the qPCR achieved a sensitivity of 95.8% (CI 90.4-100) in splenic aspirate; 79.2% (CI 68-90.3) in lymph nodes; 77.3% (CI 64.5-90.1) in skin; 75% (CI 63.1-86.9) in blood; 50% (CI 30-70) in bone marrow; 37.5% (CI 24.2-50.8) in left-eye; and 29.2% (CI 16.7-41.6) in right-eye conjunctival swabs. The accuracy of qPCR using splenic aspirates was further evaluated in a random larger sample (n = 800), collected from dogs during a prevalence study. The specificity achieved by qPCR was 76.7% (CI 73.7-79.6) for splenic aspirates obtained from the greater sample. The sensitivity accomplished by this technique was 95% (CI 93.5-96.5) that was higher than those obtained for the other diagnostic tests and was similar to that observed in the smaller sampling study. This confirms that the splenic aspirate is the most effective type of tissue for detecting L. infantum infection. Additionally, we demonstrated that LCA could be used to generate a suitable gold standard for comparative CVL testing. PMID:25076494
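The sensitivity figures above are proportions of truly infected animals detected, each reported with a confidence interval. A small sketch of how such an estimate and interval are formed from test counts; the counts below are hypothetical (chosen only so the point estimate matches the 95.8% splenic-aspirate figure), and the simple Wald interval is a common approximation, not necessarily the method the authors used:

```python
import math

def sensitivity_ci(true_pos, false_neg, z=1.96):
    """Sensitivity with a Wald 95% CI (a common, if rough, approximation)."""
    n = true_pos + false_neg
    p = true_pos / n
    half = z * math.sqrt(p * (1.0 - p) / n)      # normal-approximation half-width
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 46 of 48 infected dogs detected by qPCR in one tissue.
sens, lo, hi = sensitivity_ci(true_pos=46, false_neg=2)
print(f"sensitivity = {sens:.1%} (CI {lo:.1%}-{hi:.1%})")
```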

  14. Evaluating the Accuracy of Molecular Diagnostic Testing for Canine Visceral Leishmaniasis Using Latent Class Analysis

    PubMed Central

    Solcà, Manuela da Silva; Bastos, Leila Andrade; Guedes, Carlos Eduardo Sampaio; Bordoni, Marcelo; Borja, Lairton Souza; Larangeira, Daniela Farias; da Silva Estrela Tuy, Pétala Gardênia; Amorim, Leila Denise Alves Ferreira; Nascimento, Eliane Gomes; de Sá Oliveira, Geraldo Gileno; dos-Santos, Washington Luis Conrado; Fraga, Deborah Bittencourt Mothé; Veras, Patrícia Sampaio Tavares

    2014-01-01

    Host tissues affected by Leishmania infantum have differing degrees of parasitism. Previously, the use of different biological tissues to detect L. infantum DNA in dogs has provided variable results. The present study was conducted to evaluate the accuracy of molecular diagnostic testing (qPCR) in dogs from an endemic area for canine visceral leishmaniasis (CVL) by determining which tissue type provided the highest rate of parasite DNA detection. Fifty-one symptomatic dogs were tested for CVL using serological, parasitological and molecular methods. Latent class analysis (LCA) was performed for accuracy evaluation of these methods. qPCR detected parasite DNA in 100% of these animals from at least one of the following tissues: splenic and bone marrow aspirates, lymph node and skin fragments, blood and conjunctival swabs. Using latent variable as gold standard, the qPCR achieved a sensitivity of 95.8% (CI 90.4–100) in splenic aspirate; 79.2% (CI 68–90.3) in lymph nodes; 77.3% (CI 64.5–90.1) in skin; 75% (CI 63.1–86.9) in blood; 50% (CI 30–70) in bone marrow; 37.5% (CI 24.2–50.8) in left-eye; and 29.2% (CI 16.7–41.6) in right-eye conjunctival swabs. The accuracy of qPCR using splenic aspirates was further evaluated in a random larger sample (n = 800), collected from dogs during a prevalence study. The specificity achieved by qPCR was 76.7% (CI 73.7–79.6) for splenic aspirates obtained from the greater sample. The sensitivity accomplished by this technique was 95% (CI 93.5–96.5) that was higher than those obtained for the other diagnostic tests and was similar to that observed in the smaller sampling study. This confirms that the splenic aspirate is the most effective type of tissue for detecting L. infantum infection. Additionally, we demonstrated that LCA could be used to generate a suitable gold standard for comparative CVL testing. PMID:25076494

  15. Accuracy of direct genomic values in Holstein bulls and cows using subsets of SNP markers

    PubMed Central

    2010-01-01

    Background At the current price, the use of high-density single nucleotide polymorphisms (SNP) genotyping assays in genomic selection of dairy cattle is limited to applications involving elite sires and dams. The objective of this study was to evaluate the use of low-density assays to predict direct genomic value (DGV) on five milk production traits, an overall conformation trait, a survival index, and two profit index traits (APR, ASI). Methods Dense SNP genotypes were available for 42,576 SNP for 2,114 Holstein bulls and 510 cows. A subset of 1,847 bulls born between 1955 and 2004 was used as a training set to fit models with various sets of pre-selected SNP. A group of 297 bulls born between 2001 and 2004 and all cows born between 1992 and 2004 were used to evaluate the accuracy of DGV prediction. Ridge regression (RR) and partial least squares regression (PLSR) were used to derive prediction equations and to rank SNP based on the absolute value of the regression coefficients. Four alternative strategies were applied to select subset of SNP, namely: subsets of the highest ranked SNP for each individual trait, or a single subset of evenly spaced SNP, where SNP were selected based on their rank for ASI, APR or minor allele frequency within intervals of approximately equal length. Results RR and PLSR performed very similarly to predict DGV, with PLSR performing better for low-density assays and RR for higher-density SNP sets. When using all SNP, DGV predictions for production traits, which have a higher heritability, were more accurate (0.52-0.64) than for survival (0.19-0.20), which has a low heritability. The gain in accuracy using subsets that included the highest ranked SNP for each trait was marginal (5-6%) over a common set of evenly spaced SNP when at least 3,000 SNP were used. Subsets containing 3,000 SNP provided more than 90% of the accuracy that could be achieved with a high-density assay for cows, and 80% of the high-density assay for young bulls
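Ridge regression (RR), one of the two prediction methods above, has a simple closed form: shrink marker-effect estimates toward zero and score accuracy as the correlation between predicted and observed values in a validation set. A minimal sketch on synthetic genotypes; the dimensions, lambda value, and effect sizes are illustrative, far smaller than the 42,576-SNP panel in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_valid, n_snp = 200, 50, 500
# Genotypes coded 0/1/2 copies of the minor allele (synthetic).
X = rng.integers(0, 3, (n_train + n_valid, n_snp)).astype(float)
true_effects = rng.normal(0.0, 0.1, n_snp)
y = X @ true_effects + rng.normal(0.0, 1.0, n_train + n_valid)

Xt, yt = X[:n_train], y[:n_train]
lam = 10.0
# Closed-form ridge solution: beta = (X'X + lam*I)^-1 X'y
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snp), Xt.T @ yt)

dgv = X[n_train:] @ beta                          # direct genomic values
accuracy = np.corrcoef(dgv, y[n_train:])[0, 1]    # validation correlation
print(f"DGV accuracy (validation correlation): {accuracy:.2f}")
```

Restricting to a subset of SNP, as in the study's low-density scenarios, amounts to dropping columns of `X` (e.g. keeping those with the largest `abs(beta)`) and refitting.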

  16. Diagnostic accuracy of ultrasonography, MRI and MR arthrography in the characterisation of rotator cuff disorders: a systematic review and meta-analysis

    PubMed Central

    Roy, Jean-Sébastien; Braën, Caroline; Leblond, Jean; Desmeules, François; Dionne, Clermont E; MacDermid, Joy C; Bureau, Nathalie J; Frémont, Pierre

    2015-01-01

    Background Different diagnostic imaging modalities, such as ultrasonography (US), MRI, MR arthrography (MRA) are commonly used for the characterisation of rotator cuff (RC) disorders. Since the most recent systematic reviews on medical imaging, multiple diagnostic studies have been published, most using more advanced technological characteristics. The first objective was to perform a meta-analysis on the diagnostic accuracy of medical imaging for characterisation of RC disorders. Since US is used at the point of care in environments such as sports medicine, a secondary analysis assessed accuracy by radiologists and non-radiologists. Methods A systematic search in three databases was conducted. Two raters performed data extraction and evaluation of risk of bias independently, and agreement was achieved by consensus. Hierarchical summary receiver-operating characteristic package was used to calculate pooled estimates of included diagnostic studies. Results Diagnostic accuracy of US, MRI and MRA in the characterisation of full-thickness RC tears was high with overall estimates of sensitivity and specificity over 0.90. As for partial RC tears and tendinopathy, overall estimates of specificity were also high (>0.90), while sensitivity was lower (0.67–0.83). Diagnostic accuracy of US was similar whether a trained radiologist, sonographer or orthopaedist performed it. Conclusions Our results show the diagnostic accuracy of US, MRI and MRA in the characterisation of full-thickness RC tears. Since full thickness tear constitutes a key consideration for surgical repair, this is an important characteristic when selecting an imaging modality for RC disorder. When considering accuracy, cost, and safety, US is the best option. PMID:25677796

  17. Stability of similarity measurements for bipartite networks.

    PubMed

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee the stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrixes of two data samples which are randomly divided from original data sets. Results show that, the fifteen measurements can be well classified into three clusters according to their stabilities, and measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation, and find that the unstable similarities would recommend false information to users, and the performance of recommendation would be largely improved by using stable similarity measurements. This work provides a novel dimension to analyze and evaluate similarity measurements, which can further find applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on. PMID:26725688
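The stability test described above can be sketched directly: split the bipartite data into two random halves, compute an item-item similarity matrix from each half, and compare the two matrices. This toy version uses a random synthetic network and cosine similarity, with Pearson correlation between the matrices as the stability score (the paper's own measures and comparison statistic may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
users, items = 400, 30
# Random user-item bipartite adjacency matrix (synthetic).
adj = (rng.random((users, items)) < 0.2).astype(float)

def cosine_sim(mat):
    """Item-item cosine similarity from a user-item matrix."""
    norms = np.linalg.norm(mat, axis=0)
    norms[norms == 0] = 1.0                      # guard against empty columns
    return (mat.T @ mat) / np.outer(norms, norms)

half = rng.permutation(users)                    # random split of the users
s1 = cosine_sim(adj[half[: users // 2]])
s2 = cosine_sim(adj[half[users // 2 :]])

iu = np.triu_indices(items, k=1)                 # compare off-diagonal entries
stability = np.corrcoef(s1[iu], s2[iu])[0, 1]
print(f"similarity-matrix stability (Pearson r): {stability:.2f}")
```

Repeating this for each of the fifteen candidate measures and ranking by the resulting score is the essence of the comparison reported above.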

  18. Documents Similarity Measurement Using Field Association Terms.

    ERIC Educational Resources Information Center

    Atlam, El-Sayed; Fuketa, M.; Morita, K.; Aoe, Jun-ichi

    2003-01-01

    Discussion of text analysis and information retrieval and measurement of document similarity focuses on a new text manipulation system called FA (field association)-Sim that is useful for retrieving information in large heterogeneous texts and for recognizing content similarity in text excerpts. Discusses recall and precision, automatic indexing…

  19. Marking Student Programs Using Graph Similarity

    ERIC Educational Resources Information Center

    Naude, Kevin A.; Greyling, Jean H.; Vogts, Dieter

    2010-01-01

    We present a novel approach to the automated marking of student programming assignments. Our technique quantifies the structural similarity between unmarked student submissions and marked solutions, and is the basis by which we assign marks. This is accomplished through an efficient novel graph similarity measure ("AssignSim"). Our experiments…

  20. Stability of similarity measurements for bipartite networks

    PubMed Central

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee the stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrixes of two data samples which are randomly divided from original data sets. Results show that, the fifteen measurements can be well classified into three clusters according to their stabilities, and measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation, and find that the unstable similarities would recommend false information to users, and the performance of recommendation would be largely improved by using stable similarity measurements. This work provides a novel dimension to analyze and evaluate similarity measurements, which can further find applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on. PMID:26725688

  1. Similarity Measures for Boolean Search Request Formulations.

    ERIC Educational Resources Information Center

    Radecki, Tadeusz

    1982-01-01

    Proposes a means for determining the similarity between search request formulations in online information retrieval systems, and discusses the use of similarity measures for clustering search formulations and document files in such systems. Experimental results using the proposed methods are presented in three tables. A reference list is provided.…

  2. Perceived Similarity, Proactive Adjustment, and Organizational Socialization

    ERIC Educational Resources Information Center

    Kammeyer-Mueller, John D.; Livingston, Beth A.; Liao, Hui

    2011-01-01

    The present study explores how perceived demographic and attitudinal similarity can influence proactive behavior among organizational newcomers. We propose that newcomers who perceive themselves as similar to their co-workers will be more willing to seek new information or build relationships, which in turn will lead to better long-term…

  3. Attitude Similarity, Topic Importance, and Psychotherapeutic Attraction

    ERIC Educational Resources Information Center

    Cheney, Thomas

    1975-01-01

    The effect of attitude similarity and topic importance on attraction was studied by exposing 75 prison inmates, incarcerated for public intoxication, to varying attitudes of a psychotherapist. Subjects were more attracted to the therapist after receiving alcohol items regardless of degree of similarity expressed. (Author)

  4. Some Effects of Similarity Self-Disclosure

    ERIC Educational Resources Information Center

    Murphy, Kevin C.; Strong, Stanley R.

    1972-01-01

    College males were interviewed about how college had altered their friendships, values, and plans. The interviewers diclosed experiences and feelings similar to those revealed by the students. Results support Byrne's Law of Similarity in generating interpersonal attraction in the interview and suggest that the timing of self-disclosures is…

  5. Stability of similarity measurements for bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Hou, Lei; Pan, Xue; Guo, Qiang; Zhou, Tao

    2016-01-01

    Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee the stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrixes of two data samples which are randomly divided from original data sets. Results show that, the fifteen measurements can be well classified into three clusters according to their stabilities, and measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation, and find that the unstable similarities would recommend false information to users, and the performance of recommendation would be largely improved by using stable similarity measurements. This work provides a novel dimension to analyze and evaluate similarity measurements, which can further find applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on.

  6. Geometric accuracy in airborne SAR images

    NASA Technical Reports Server (NTRS)

    Blacknell, D.; Quegan, S.; Ward, I. A.; Freeman, A.; Finley, I. P.

    1989-01-01

    Uncorrected across-track motions of a synthetic aperture radar (SAR) platform can cause both a severe loss of azimuthal positioning accuracy in, and defocusing of, the resultant SAR image. It is shown how the results of an autofocus procedure can be incorporated in the azimuth processing to produce a fully focused image that is geometrically accurate in azimuth. Range positioning accuracy is also discussed, leading to a comprehensive treatment of all aspects of geometric accuracy. The system considered is an X-band SAR.

  7. High accuracy calibration of the fiber spectroradiometer

    NASA Astrophysics Data System (ADS)

    Wu, Zhifeng; Dai, Caihong; Wang, Yanfei; Chen, Binhua

    2014-11-01

    Compared to large scanning spectroradiometers, the compact and convenient fiber spectroradiometer is widely used in many fields, such as remote sensing, aerospace monitoring, and solar irradiance measurement. High accuracy calibration should be performed before use, addressing wavelength accuracy, background environment noise, the nonlinear effect, the bandwidth, stray light, and other factors. A wavelength lamp and a tungsten lamp are frequently used to calibrate the fiber spectroradiometer. Wavelength differences can easily be reduced through software or calculation. However, the nonlinear effect and the bandwidth can still affect the measurement accuracy significantly.

  8. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
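The gamma statistic mentioned above (Goodman-Kruskal gamma) measures rank-order agreement between confidence judgments and actual performance: pairs of items where higher confidence accompanies a correct answer are concordant, the reverse are discordant. A minimal sketch with hypothetical (confidence, correct) pairs:

```python
from itertools import combinations

def goodman_kruskal_gamma(pairs):
    """Gamma = (concordant - discordant) / (concordant + discordant)."""
    conc = disc = 0
    for (c1, k1), (c2, k2) in combinations(pairs, 2):
        s = (c1 - c2) * (k1 - k2)    # sign tells concordant vs discordant
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (conc + disc) if conc + disc else 0.0

# Hypothetical data: (confidence judgment, 1 = answered correctly).
judgments = [(0.9, 1), (0.8, 1), (0.7, 0), (0.4, 1), (0.3, 0), (0.1, 0)]
print(f"gamma = {goodman_kruskal_gamma(judgments):.2f}")
```

The discrimination and bias indexes contrasted with gamma in the abstract are computed from the same confidence-correctness pairs but summarize hit and false-alarm rates rather than pairwise rank agreement.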

  9. Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

    PubMed Central

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A.

    2014-01-01

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of the assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in positions, and under a degree in angles, can be achieved even considering that the terrestrial platform is operating in less than favorable environments. PMID:25375757


  10. Improving Delivery Accuracy of Stereotactic Body Radiotherapy to a Moving Tumor Using Simplified Volumetric Modulated Arc Therapy

    PubMed Central

    Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong

    2016-01-01

    Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199

  11. Criteria for dynamic similarity in bouncing gaits.

    PubMed

    Bullimore, Sharon R; Donelan, J Maxwell

    2008-01-21

    Animals of different sizes tend to move in a dynamically similar manner when travelling at speeds corresponding to equal values of a dimensionless parameter (DP) called the Froude number. Consequently, the Froude number has been widely used for defining equivalent speeds and predicting speeds of locomotion by extinct species and on other planets. However, experiments using simulated reduced gravity have demonstrated that equality of the Froude number does not guarantee dynamic similarity. This has cast doubt upon the usefulness of the Froude number in locomotion research. Here we use dimensional analysis of the planar spring-mass model, combined with Buckingham's Pi-Theorem, to demonstrate that four DPs must be equal for dynamic similarity in bouncing gaits such as trotting, hopping and bipedal running. This can be reduced to three DPs by applying the constraint of maintaining a constant average speed of locomotion. Sensitivity analysis indicates that all of these DPs are important for predicting dynamic similarity. We show that the reason humans do not run in a dynamically similar manner at equal Froude number in different levels of simulated reduced gravity is that dimensionless leg stiffness decreases as gravity increases. The reason that the Froude number can predict dynamic similarity in Earth gravity is that dimensionless leg stiffness and dimensionless vertical landing speed are both independent of size. In conclusion, although equal Froude number is not sufficient for dynamic similarity, it is a necessary condition. Therefore, to detect fundamental differences in locomotion, animals of different sizes should be compared at equal Froude number, so that they can be as close to dynamic similarity as possible. More generally, the concept of dynamic similarity provides a powerful framework within which similarities and differences in locomotion can be interpreted. PMID:17983630
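Two of the dimensionless parameters discussed above are easy to state concretely: the Froude number, Fr = v² / (g·L), and dimensionless leg stiffness, here taken as k·L / (m·g) (a standard nondimensionalization of spring-mass leg stiffness; the paper's exact definitions may differ in detail). A small sketch showing how equal Froude number defines equivalent speeds across body sizes, with illustrative values:

```python
def froude_number(speed, leg_length, g=9.81):
    """Fr = v^2 / (g * L): dimensionless speed for comparing body sizes."""
    return speed ** 2 / (g * leg_length)

def dimensionless_leg_stiffness(k, leg_length, mass, g=9.81):
    """k * L / (m * g): one of the extra parameters needed for similarity."""
    return k * leg_length / (mass * g)

# A human (leg 0.9 m) running at 3 m/s, and the speed at which a smaller
# animal (leg 0.3 m) would match that Froude number:
fr_human = froude_number(3.0, 0.9)
equiv_speed = (fr_human * 9.81 * 0.3) ** 0.5
print(f"Fr = {fr_human:.2f}, equivalent small-animal speed = {equiv_speed:.2f} m/s")
```

Equal Fr makes the two gaits comparable, but, as the abstract argues, full dynamic similarity additionally requires matching the other dimensionless parameters, such as leg stiffness.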

  12. Vibrationally averaged post Born-Oppenheimer isotopic dipole moment calculations approaching spectroscopic accuracy

    NASA Astrophysics Data System (ADS)

    Arapiraca, A. F. C.; Jonsson, Dan; Mohallem, J. R.

    2011-12-01

    We report an upgrade of the Dalton code to include post Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10^-4 debye in the calculations of isotopic dipole moments. Calculations on the self-consistent field level present this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values.

  13. Vibrationally averaged post Born-Oppenheimer isotopic dipole moment calculations approaching spectroscopic accuracy.

    PubMed

    Arapiraca, A F C; Jonsson, Dan; Mohallem, J R

    2011-12-28

    We report an upgrade of the Dalton code to include post Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10(-4) debye in the calculations of isotopic dipole moments. Calculations on the self-consistent field level present this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values. PMID:22225162

  14. Empathic Embarrassment Accuracy in Autism Spectrum Disorder.

    PubMed

    Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G

    2015-06-01

    Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043

  15. Coding accuracy on the psychophysical scale

    PubMed Central

    Kostal, Lubomir; Lansky, Petr

    2016-01-01

    Sensory neurons are often reported to adjust their coding accuracy to the stimulus statistics. The observed match is not always perfect and the maximal accuracy does not align with the most frequent stimuli. As an alternative to a physiological explanation we show that the match critically depends on the chosen stimulus measurement scale. More generally, we argue that if we measure the stimulus intensity on the scale which is proportional to the perception intensity, an improved adjustment in the coding accuracy is revealed. The unique feature of stimulus units based on the psychophysical scale is that the coding accuracy can be meaningfully compared for different stimuli intensities, unlike in the standard case of a metric scale. PMID:27021783

  16. Measuring the Accuracy of Diagnostic Systems.

    ERIC Educational Resources Information Center

    Swets, John A.

    1988-01-01

    Discusses the relative operating characteristic analysis of signal detection theory as a measure of diagnostic accuracy. Reports representative values of this measure in several fields. Compares how problems in these fields are handled. (CW)
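    The relative operating characteristic is commonly summarized by the area under the ROC curve, which equals the probability that a randomly chosen positive (signal) case outscores a randomly chosen negative (noise) case. A minimal, library-free sketch (ours, for illustration only):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 corresponds to chance-level diagnosis and 1.0 to perfect discrimination, which is what makes it comparable across the fields Swets surveys.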

  17. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.

  18. Nonverbal self-accuracy in interpersonal interaction.

    PubMed

    Hall, Judith A; Murphy, Nora A; Mast, Marianne Schmid

    2007-12-01

    Four studies measure participants' accuracy in remembering, without forewarning, their own nonverbal behavior after an interpersonal interaction. Self-accuracy for smiling, nodding, gazing, hand gesturing, and self-touching is scored by comparing the participants' recollections with coding based on videotape. Self-accuracy is above chance and of modest magnitude on average. Self-accuracy is greatest for smiling; intermediate for nodding, gazing, and gesturing; and lowest for self-touching. It is higher when participants focus attention away from the self (learning as much as possible about the partner, rearranging the furniture in the room, evaluating the partner, smiling and gazing at the partner) than when participants are more self-focused (getting acquainted, trying to make a good impression on the partner, being evaluated by the partner, engaging in more self-touching). The contributions of cognitive demand and affective state are discussed. PMID:18000102

  19. Two perspectives on similarity between words

    NASA Astrophysics Data System (ADS)

    Frisch, Stefan A.

    2003-10-01

    This presentation examines the similarity between words from both bottom-up (phonetic) and top-down (phonological/psycholinguistic) perspectives. From the phonological perspective, the influence of structure on similarity is explored using metalinguistic acceptability judgments for multisyllabic nonwords. Results from an experiment suggest that subjects try to align novel words with known words in order to maximize similarities while minimizing dissimilarities. This finding parallels results from psychology on similarity judgments for visual scenes. From the phonetic perspective, the influence of similar gestures on speech error rates is examined using ultrasound measurement of tongue position. In a pilot experiment, subjects produced tongue twisters containing words where onset and vowel phonemes had similar gestures (e.g., tip, comb) and where the onset and vowel had dissimilar gestures (e.g., tube, keep). Preliminary results suggest that misarticulations are more frequent in the context of dissimilar gestures (e.g., in the tongue twister tip cape keep tape, error rates are higher for /k/ than /t/). These errors appear to be gestural interactions rather than errors at the phonemic or featural level of phonological spellout. Together, these two experiments indicate that similarity relations between words are found at multiple levels, many of which are potentially relevant to the structure of phonological systems.

  20. The baryonic self similarity of dark matter

    SciTech Connect

    Alard, C.

    2014-06-20

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, the halo similarity is affected by the baryonic feedback. By using momentum-driven winds as a model to represent the baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from the observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark matter and baryon dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^{1/4}. These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  1. Measure of Node Similarity in Multilayer Networks

    PubMed Central

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H.; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dis-similarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceships. PMID:27300084

  2. Gait Signal Analysis with Similarity Measure

    PubMed Central

    Shin, Seungsoo

    2014-01-01

    Human gait classification was carried out with the help of a purpose-designed similarity measure. Gait signals were collected with a hardware setup comprising an all-in-one sensor, a control unit, and a connected notebook. Each gait signal was treated as high-dimensional data and analyzed via a heuristic technique, the similarity measure. Patterns such as walking, sitting, standing, and stepping up were obtained experimentally. The analysis identified the relation between overlapped and nonoverlapped data, illustrated the similarity measure analysis, and compared it with a conventional similarity measure. The nonoverlapped-data analysis provided the key to assessing similarity in high-dimensional data; the analysis was designed with neighborhood information taken into consideration. The proposed similarity measure was applied to identify the behavior patterns of different persons and different behaviors of the same person. The analysis can be extended to a health monitoring system, especially for elderly persons. PMID:25110724

  3. Measure of Node Similarity in Multilayer Networks.

    PubMed

    Mollgaard, Anders; Zettler, Ingo; Dammeyer, Jesper; Jensen, Mogens H; Lehmann, Sune; Mathiesen, Joachim

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dis-similarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceships. PMID:27300084

  4. Fitting magnetic field gradient with Heisenberg-scaling accuracy

    PubMed Central

    Zhang, Yong-Liang; Wang, Huan; Jing, Li; Mu, Liang-Zhu; Fan, Heng

    2014-01-01

    The linear function is possibly the simplest and most widely used relation appearing in various areas of our world. A linear relation can generally be determined by the least-square linear fitting (LSLF) method from several measured quantities, as when detecting the gradient of a magnetic field. Here, we propose a quantum fitting scheme to estimate the magnetic field gradient using N atomic spins prepared in a W state. Our scheme combines quantum multi-parameter estimation with the least-square linear fitting method to achieve the quantum Cramér-Rao bound (QCRB). We show that the estimated quantity achieves Heisenberg-scaling accuracy. Combining quantum metrology with data fitting in this way provides a new method for fast, high-precision measurements. PMID:25487218
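    For contrast with the quantum scheme, the classical LSLF slope estimator that the paper builds on is a closed-form expression. A hedged sketch in our own notation (illustrative, not the authors' code):

```python
def lslf_slope(xs, ys):
    """Least-square linear fit slope: estimates the gradient b in y = a + b*x,
    e.g. a magnetic field gradient from field values measured at known positions."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# A field B(x) = 1.0 + 0.5*x sampled at four positions recovers gradient 0.5.
```

    The quantum advantage the paper reports is in how the estimator's variance scales with N, not in the fitting formula itself.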

  5. Research Reports on Academic Achievement.

    ERIC Educational Resources Information Center

    Latts, Sander; And Others

    1969-01-01

    Four counselors studied the relation between achievement and choice of major, achievement and motivation, counseling and motivation, and achievement and employment. To see if those with definite majors or career choices in mind did better than those without, 300 students were tested according to the certainty of their choice. No significant…

  6. Cherokee Culture and School Achievement.

    ERIC Educational Resources Information Center

    Brown, Anthony D.

    1980-01-01

    Compares the effect of cooperative and competitive behaviors of Cherokee and Anglo American elementary school students on academic achievement. Suggests changes in teaching techniques and lesson organization that might raise academic achievement while taking into consideration tribal traditions that limit scholastic achievement in an…

  7. How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy

    PubMed Central

    Weaver, Steven A.; Ucar, Zennure; Bettinger, Pete; Merry, Krista

    2015-01-01

    understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection. PMID:25923667

  8. Improved personalized recommendation based on a similarity network

    NASA Astrophysics Data System (ADS)

    Wang, Ximeng; Liu, Yun; Xiong, Fei

    2016-08-01

    A recommender system helps individual users find the preferred items rapidly and has attracted extensive attention in recent years. Many successful recommendation algorithms are designed on bipartite networks, such as network-based inference or heat conduction. However, most of these algorithms define the resource-allocation methods for an average allocation. That is not reasonable because average allocation cannot indicate the user choice preference and the influence between users which leads to a series of non-personalized recommendation results. We propose a personalized recommendation approach that combines the similarity function and bipartite network to generate a similarity network that improves the resource-allocation process. Our model introduces user influence into the recommender system and states that the user influence can make the resource-allocation process more reasonable. We use four different metrics to evaluate our algorithms for three benchmark data sets. Experimental results show that the improved recommendation on a similarity network can obtain better accuracy and diversity than some competing approaches.
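    The "average allocation" baseline the authors improve on is the classic two-step network-based inference (ProbS) on a bipartite network. A compact sketch in our own notation (illustrative, assuming every user and item has at least one link):

```python
import numpy as np

def nbi_scores(A, user):
    """Network-based inference (ProbS): two-step resource allocation on a
    user-item bipartite network with uniform ("average") allocation.
    A is the users x items adjacency matrix; returns item scores for `user`.
    Assumes all user and item degrees are positive."""
    k_items = A.sum(axis=0)            # item degrees
    k_users = A.sum(axis=1)            # user degrees
    # W[i, j]: fraction of item j's resource that ends up on item i after
    # spreading items -> users -> items; each column of W sums to 1.
    W = (A / k_users[:, None]).T @ A / k_items[None, :]
    return W @ A[user]                 # spread the user's collected items
```

    The paper's contribution is to replace the uniform splits above with weights drawn from a similarity network, so that influential users and preferred items receive more of the resource.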

  9. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code are compared within the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving a sensitivity of 95% and a specificity of 82.8%.
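    The reported figures are standard confusion-matrix quantities. A small sketch of how they relate (the counts below are chosen only to reproduce percentages like those reported, not taken from the paper):

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on malicious files) and specificity
    (recall on benign files) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# e.g. 95 of 100 malicious and 82 of 100 benign test files classified correctly
acc, sens, spec = metrics(tp=95, fn=5, tn=82, fp=18)
```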

  10. Kodak DCS200: a camera for high-accuracy measurements?

    NASA Astrophysics Data System (ADS)

    Gruen, Armin; Maas, Hans-Gerd; Keller, Andrea

    1995-09-01

    The digital high-resolution still-video camera Kodak DCS200 has reached a high degree of popularity among photogrammetrists within a very short time. Consisting of a mirror reflex camera, a high resolution CCD sensor, A/D conversion, power supply, and data storage capacity for 50 images, it can basically be considered a comfortable, autonomous device for digital image data acquisition, especially for industrial applications and for architectural photogrammetry. First tests of the camera showed a high precision potential: 1/20-1/30 pixel in image space could be achieved in several applications, and with large self-calibrating networks relative precisions of 1:100,000 and better have been reported. To be able to make more detailed statements on the accuracy potential of the camera, a thorough accuracy test was performed at ETH Zurich by taking 150 images of a 186-target 3D testfield. Although the precision estimates of this large block were exceptionally good, strong systematic object deformations were found in comparison with theodolite-measured reference coordinates of the testfield points. The reasons for these deformations are most probably temporal instabilities of some camera parameters, which could make the use of this camera very problematic for high accuracy applications. It is argued that these instabilities are caused by the weak fixture of the CCD-chip to the camera body. In this context it is often overlooked that this camera was not developed for precise measurement applications but rather for professional photographers.

  11. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper shall review the history and development of real time DGPS services prior to then describing the design of a high accuracy GPS commercial augmentation system and service currently delivering over a wide area to users of precise positioning products. The infrastructure and system shall be explained in relation to the need for high accuracy and high integrity of positioning for users. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real time system shall be shown in various regions and modes to outline the current achievable accuracies. Having described and established the current GPS based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system and services, the paper will identify possible key applications and the main user communities for sub decimetre level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described and associated with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications and their effect upon the business case for precise positioning services.

  12. Prediction of OCR accuracy using simple image features

    SciTech Connect

    Blando, L.R.; Kanai, Junichi; Nartker, T.A.

    1995-04-01

    A classifier for predicting the character accuracy of a given page achieved by any Optical Character Recognition (OCR) system is presented. This classifier is based on measuring the amount of white speckle, the amount of character fragments, and overall size information in the page. No output from the OCR system is used. The given page is classified as either good quality (i.e., high OCR accuracy expected) or poor (i.e., low OCR accuracy expected). Six OCR systems processed two different sets of test data: a set of 439 pages obtained from technical and scientific documents and a set of 200 pages obtained from magazines. For every system, approximately 85% of the pages in each data set were correctly predicted. The performance of this classifier is also compared with the ideal-case performance of a prediction method based upon the number of reject markers in OCR generated text. In several cases, this method matched or exceeded the performance of the reject based approach.
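    In the same spirit as the classifier described (the published thresholds and exact feature definitions are not given here), a page-quality predictor reduces to a decision rule over the measured image features. The thresholds below are purely illustrative:

```python
def predict_quality(speckle_ratio, fragment_ratio,
                    speckle_max=0.05, fragment_max=0.08):
    """Classify a page as 'good' (high OCR accuracy expected) or 'poor'
    (low OCR accuracy expected) from pre-computed image features:
    the fraction of white speckle and of character fragments on the page.
    Threshold values here are hypothetical, not the published ones."""
    if speckle_ratio <= speckle_max and fragment_ratio <= fragment_max:
        return "good"
    return "poor"
```

    The key property of such a predictor is that it uses no OCR output at all, unlike reject-marker-based prediction.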

  13. High-accuracy particle sizing by interferometric particle imaging

    NASA Astrophysics Data System (ADS)

    Qieni, Lü; Wenhua, Jin; Tong, Lü; Xiang, Wang; Yimo, Zhang

    2014-02-01

    A method for high-accuracy estimation of the fringe number/fringe frequency of an interferogram, based on erosion matching and the Fourier transform, is proposed. Edge images of the particle interference pattern and of the particle mask image are first detected by an erosion operation and subtracted from their respective original images; the particle center coordinates are then extracted by 2D correlation of the two edge images. The interference pattern of each particle can then be isolated using the center coordinate and the shape and size of the particle image. The fringe number/fringe spacing of each particle's interferogram is extracted by Fourier transform and a modified Rife algorithm, yielding sub-pixel accuracy in the extracted frequency. Performance is demonstrated by numerical simulation and experimental measurement: the measurement uncertainty is ±0.91 μm and the relative error 1.13% for a standard particle of diameter 45 μm. The results show that the presented algorithm achieves high accuracy for particle sizing as well as location measurement.

  14. Students' Achievement Goals, Learning-Related Emotions and Academic Achievement.

    PubMed

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836

  15. Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement

    PubMed Central

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836

  16. Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.

    2012-07-01

    This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based Smartphone as well as a compact laser scanner Hokuyo URG-04LX. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.

  17. Interpersonal attraction and personality: what is attractive--self similarity, ideal similarity, complementarity or attachment security?

    PubMed

    Klohnen, Eva C; Luo, Shanhong

    2003-10-01

    Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator. PMID:14561124

  18. Evaluating Similarity Measures for Brain Image Registration

    PubMed Central

    Razlighi, Q. R.; Kehtarnavaz, N.; Yousefi, S.

    2013-01-01

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration. PMID:24039378

  19. Similarity Theory of Withdrawn Water Temperature Experiment

    PubMed Central

    2015-01-01

    Selective withdrawal from a thermal stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model test was also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype were suggested. The conversion of withdrawn water temperature between the model and the prototype was proposed. Meanwhile, the fundamental theory of temperature distribution conversion was firstly proposed, which could significantly improve the experiment efficiency when the basic temperature of the model was different from the prototype. Based on the similarity theory, an experiment was performed on the withdrawn water temperature which was verified by numerical method. PMID:26065020

  20. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    EPA Science Inventory

    Mulltilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...
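    The band-sharing similarity index S referred to above is conventionally S = 2·n_ab / (n_a + n_b), where n_ab is the number of fingerprint bands shared by two individuals. A minimal sketch (our implementation, assuming bands are represented as comparable labels):

```python
def similarity_index(bands_a, bands_b):
    """Band-sharing coefficient S = 2*n_ab / (n_a + n_b): the fraction of
    DNA fingerprint bands shared between individuals a and b."""
    set_a, set_b = set(bands_a), set(bands_b)
    shared = len(set_a & set_b)
    return 2.0 * shared / (len(set_a) + len(set_b))
```

    S ranges from 0 (no shared bands) to 1 (identical band sets), which is what makes it usable as a test statistic for population subdivision.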

  1. Similarity and singularity in adhesive elastohydrodynamic touchdown

    NASA Astrophysics Data System (ADS)

    Carlson, Andreas; Mahadevan, L.

    2016-01-01

    We consider the dynamics of an elastic sheet as it starts to adhere to a wall, a process that is limited by the viscous squeeze flow of the intervening liquid. Elastohydrodynamic lubrication theory allows us to derive a partial differential equation coupling the elastic deformation of the sheet, the microscopic van der Waals adhesion, and viscous thin film flow. We use a combination of numerical simulations of the governing equation and a scaling analysis to describe the self-similar touchdown of the sheet as it approaches the wall. An analysis of the equation in terms of similarity variables in the vicinity of the touchdown event shows that only the fundamental similarity solution is observed in the time-dependent numerical simulations, consistent with the fact that it alone is stable. Our analysis generalizes similar approaches for rupture in capillary thin film hydrodynamics and suggests experimentally verifiable predictions for a new class of singular flows linking elasticity, hydrodynamics, and adhesion.

  2. On self-similarity of crack layer

    NASA Technical Reports Server (NTRS)

    Botsis, J.; Kunin, B.

    1987-01-01

    The crack layer (CL) theory of Chudnovsky (1986), based on principles of thermodynamics of irreversible processes, employs a crucial hypothesis of self-similarity. The self-similarity hypothesis states that the value of the damage density at a point x of the active zone at a time t coincides with that at the corresponding point in the initial (t = 0) configuration of the active zone, the correspondence being given by a time-dependent affine transformation of the space variables. In this paper, the implications of the self-similarity hypothesis for quasi-static CL propagation are investigated using polystyrene as a model material and examining the evolution of damage distribution along the trailing edge, which is approximated by a straight segment perpendicular to the crack path. The results support the self-similarity hypothesis adopted by the CL theory.

  3. Media segmentation using self-similarity decomposition

    NASA Astrophysics Data System (ADS)

    Foote, Jonathan T.; Cooper, Matthew L.

    2003-01-01

    We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the self-similarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music is summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.
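    The kernel-correlation step described above can be sketched numerically. The following is a minimal illustration (feature dimensions, kernel width, and the synthetic two-section "audio" are all made up for the example, not taken from the paper): a checkerboard kernel slid along the diagonal of a cosine self-similarity matrix produces a novelty score that peaks at a section boundary.

```python
import numpy as np

def novelty_curve(features, kernel_width=4):
    """Novelty score from a self-similarity matrix (Foote-style sketch).

    features: (n_frames, n_dims) array of per-frame spectral features.
    """
    # Cosine similarity matrix between all pairs of frames.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    S = unit @ unit.T

    # Checkerboard kernel: +1 on within-segment blocks, -1 on cross blocks.
    w = kernel_width
    sign = np.sign(np.arange(2 * w) - (w - 0.5))
    K = np.outer(sign, sign)

    # Correlate the kernel along the main diagonal of S.
    n = len(S)
    nov = np.zeros(n)
    for i in range(w, n - w):
        nov[i] = np.sum(K * S[i - w:i + w, i - w:i + w])
    return nov

# Two homogeneous "sections" with different spectra -> boundary near frame 20.
rng = np.random.default_rng(0)
a = rng.normal(0, 0.1, (20, 8)) + np.array([1, 0, 0, 0, 0, 0, 0, 0.0])
b = rng.normal(0, 0.1, (20, 8)) + np.array([0, 1, 0, 0, 0, 0, 0, 0.0])
nov = novelty_curve(np.vstack([a, b]))
print(int(np.argmax(nov)))  # peak lands near the section boundary (frame ~20)
```

    Segment boundaries are then taken as peaks of the novelty curve; clustering the per-segment statistics follows as a separate step.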

  4. Self-similarity in Laplacian growth

    SciTech Connect

    Mineev-weinstein, Mark; Zabrodin, Anton; Abanov, Artem

    2008-01-01

    We consider Laplacian Growth of self-similar domains in different geometries. Self-similarity determines the analytic structure of the Schwarz function of the moving boundary. The knowledge of this analytic structure allows us to derive the integral equation for the conformal map. It is shown that solutions to the integral equation also obey a second-order differential equation, namely the 1D Schroedinger equation with a sinh⁻² potential. The solutions, which are expressed through the Gauss hypergeometric function, characterize the geometry of self-similar patterns in a wedge. We also find the potential for the Coulomb gas representation of the self-similar Laplacian growth in a wedge and calculate the corresponding free energy.

  5. Self-similarity in active colloid motion

    NASA Astrophysics Data System (ADS)

    Constant, Colin; Sukhov, Sergey; Dogariu, Aristide

    The self-similarity of displacements among randomly evolving systems has been used to describe the foraging patterns of animals and predict the growth of financial systems. At micron scales, the motion of colloidal particles can be analyzed by sampling their spatial displacement in time. For self-similar systems in equilibrium, the mean squared displacement increases linearly in time. However, external forces can take the system out of equilibrium, creating active colloidal systems, and making this evolution more complex. A moment scaling spectrum of the distribution of particle displacements quantifies the degree of self-similarity in the colloid motion. We will demonstrate that, by varying the temporal and spatial characteristics of the external forces, one can control the degree of self-similarity in active colloid motion.

  6. Interpersonal Congruency, Attitude Similarity, and Interpersonal Attraction

    ERIC Educational Resources Information Center

    Touhey, John C.

    1975-01-01

    As no experimental study has examined the effects of congruency on attraction, the present investigation orthogonally varied attitude similarity and interpersonal congruency in order to compare the two independent variables as determinants of interpersonal attraction. (Author/RK)

  7. Identification and classification of similar looking food grains

    NASA Astrophysics Data System (ADS)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers, taking as a case study the identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color (Hue, Saturation, Value; HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet-texture, and their combined model. An SVM model is designed for the color-HSV features on the same set of samples. For the ANN-based models, classification accuracies range from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model, and from 92% to 97% for the combined model. The color-HSV-based SVM model achieves classification accuracies from 80% to 90%. Training time for the SVM-based model is substantially less than for the ANN on the same set of images.

  8. Stimulus-based similarity and the recognition of spoken words

    NASA Astrophysics Data System (ADS)

    Auer, Edward T.

    2003-10-01

    Spoken word recognition has been hypothesized to be achieved via a competitive process amongst perceptually similar lexical candidates in the mental lexicon. In this process, lexical candidates are activated as a function of their perceived similarity to the spoken stimulus. The evidence supporting this hypothesis has largely come from studies of auditory word recognition. In this talk, evidence from our studies of visual spoken word recognition will be reviewed. Visual speech provides the opportunity to highlight the importance of stimulus-driven perceptual similarity because it presents a different pattern of segmental similarity than is afforded by auditory speech degraded by noise. Our results are consistent with stimulus-driven activation followed by competition as a general spoken word recognition mechanism. In addition, results will be presented from recent investigations of the direct prediction of perceptual similarity from measurements of spoken stimuli. High levels of correlation have been observed between the predicted and perceptually obtained distances for a large set of spoken consonants. These results support the hypothesis that the perceptual structure of English consonants and vowels is predicted by stimulus structure without the need for an intervening level of abstract linguistic representation. [Research supported by NSF IIS 9996088 and NIH DC04856.]

  9. Efficient Set Similarity Joins Using Min-prefixes

    NASA Astrophysics Data System (ADS)

    Ribeiro, Leonardo A.; Härder, Theo

    Identification of all objects in a dataset whose similarity is not less than a specified threshold is of major importance for management, search, and analysis of data. Set similarity joins are commonly used to implement this operation; they scale to large datasets and are versatile to represent a variety of similarity notions. Most set similarity join methods proposed so far present two main phases at a high level of abstraction: candidate generation producing a set of candidate pairs and verification applying the actual similarity measure to the candidates and returning the correct answer. Previous work has primarily focused on the reduction of candidates, where candidate generation presented the major effort to obtain better pruning results. Here, we propose an opposite approach. We drastically decrease the computational cost of candidate generation by dynamically reducing the number of indexed objects at the expense of increasing the workload of the verification phase. Our experimental findings show that this trade-off is advantageous: we consistently achieve substantial speed-ups as compared to previous algorithms.
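    The two phases described above, candidate generation via a prefix index followed by exact verification, can be sketched for Jaccard similarity. This is a generic prefix-filtering illustration, not the authors' min-prefix variant: for threshold t, two sets can only reach Jaccard ≥ t if their prefixes (in a global token order) share at least one token, so only such pairs are verified.

```python
import math

def jaccard(a, b):
    return len(a & b) / len(a | b)

def prefix_join(records, threshold):
    """Set similarity self-join with prefix filtering (illustrative sketch)."""
    # Canonical global token order (frequency order in real systems).
    order = {tok: i for i, tok in
             enumerate(sorted({t for r in records for t in r}))}
    index, candidates = {}, set()
    for rid, rec in enumerate(records):
        tokens = sorted(rec, key=order.get)
        # Prefix length for Jaccard threshold t on a set of size n:
        # n - ceil(t * n) + 1 tokens suffice to guarantee candidate overlap.
        plen = len(tokens) - math.ceil(threshold * len(tokens)) + 1
        for tok in tokens[:plen]:
            for other in index.get(tok, []):
                candidates.add((other, rid))   # candidate generation
            index.setdefault(tok, []).append(rid)
    # Verification phase: apply the actual similarity measure.
    return {(i, j) for i, j in candidates
            if jaccard(records[i], records[j]) >= threshold}

recs = [{"a", "b", "c", "d"}, {"a", "b", "c", "e"}, {"x", "y", "z"}]
print(prefix_join(recs, 0.6))  # {(0, 1)}: Jaccard 3/5 = 0.6
```

    Shortening the indexed prefix, as the paper proposes, shrinks the index and candidate set at the cost of more work in the verification phase.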

  10. Mechanisms for similarity matching in disparity measurement

    PubMed Central

    Goutcher, Ross; Hibbard, Paul B.

    2014-01-01

    Early neural mechanisms for the measurement of binocular disparity appear to operate in a manner consistent with cross-correlation-like processes. Consequently, cross-correlation, or cross-correlation-like procedures have been used in a range of models of disparity measurement. Using such procedures as the basis for disparity measurement creates a preference for correspondence solutions that maximize the similarity between local left and right eye image regions. Here, we examine how observers’ perception of depth in an ambiguous stereogram is affected by manipulations of luminance and orientation-based image similarity. Results show a strong effect of coarse-scale luminance similarity manipulations, but a relatively weak effect of finer-scale manipulations of orientation similarity. This is in contrast to the measurements of depth obtained from a standard cross-correlation model. This model shows strong effects of orientation similarity manipulations and weaker effects of luminance similarity. In order to account for these discrepancies, the standard cross-correlation approach may be modified to include an initial spatial frequency filtering stage. The performance of this adjusted model most closely matches human psychophysical data when spatial frequency filtering favors coarser scales. This is consistent with the operation of disparity measurement processes where spatial frequency and disparity tuning are correlated, or where disparity measurement operates in a coarse-to-fine manner. PMID:24409163
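    The "standard cross-correlation model" idea can be illustrated in one dimension: slide a left-eye patch across candidate shifts of the right-eye signal and take the shift maximizing normalized cross-correlation as the disparity. This toy sketch (signal length, window, and disparity all invented) omits the spatial frequency pre-filtering stage the authors propose.

```python
import numpy as np

rng = np.random.default_rng(2)
left = rng.normal(size=200)
true_disparity = 7
right = np.roll(left, true_disparity)   # right view = shifted left view

def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

window = left[80:120]                   # local left-eye patch
shifts = range(-15, 16)
scores = [ncc(window, right[80 + d:120 + d]) for d in shifts]
best = shifts[int(np.argmax(scores))]
print(best)  # 7: the correspondence maximizing local image similarity
```

    Pre-filtering `left` and `right` to favor coarse spatial frequencies before correlating, as in the adjusted model, biases the winning match toward coarse-scale luminance similarity.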

  11. Predicting Odor Perceptual Similarity from Odor Structure

    PubMed Central

    Weiss, Tali; Frumin, Idan; Khan, Rehan M.; Sobel, Noam

    2013-01-01

    To understand the brain mechanisms of olfaction we must understand the rules that govern the link between odorant structure and odorant perception. Natural odors are in fact mixtures made of many molecules, and there is currently no method to look at the molecular structure of such odorant-mixtures and predict their smell. In three separate experiments, we asked 139 subjects to rate the pairwise perceptual similarity of 64 odorant-mixtures ranging in size from 4 to 43 mono-molecular components. We then tested alternative models to link odorant-mixture structure to odorant-mixture perceptual similarity. Whereas a model that considered each mono-molecular component of a mixture separately provided a poor prediction of mixture similarity, a model that represented the mixture as a single structural vector provided consistent correlations between predicted and actual perceptual similarity (r≥0.49, p<0.001). An optimized version of this model yielded a correlation of r = 0.85 (p<0.001) between predicted and actual mixture similarity. In other words, we developed an algorithm that can look at the molecular structure of two novel odorant-mixtures, and predict their ensuing perceptual similarity. That this goal was attained using a model that considers the mixtures as a single vector is consistent with a synthetic rather than analytical brain processing mechanism in olfaction. PMID:24068899
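    The contrast between per-component and single-vector mixture models can be sketched abstractly. In the toy example below, the molecular descriptors are one-hot placeholders (not real chemical descriptors): each mixture is summed into one normalized structural vector and mixtures are compared by vector angle, so mixtures sharing components score as more similar.

```python
import numpy as np

def mixture_vector(component_descriptors):
    """Represent a mixture as one normalized sum of its components' vectors."""
    v = np.sum(component_descriptors, axis=0)
    return v / np.linalg.norm(v)

def mixture_similarity(mix_a, mix_b):
    """Cosine similarity between two mixtures' structural vectors."""
    return float(mixture_vector(mix_a) @ mixture_vector(mix_b))

mols = np.eye(6)                       # 6 toy "molecules", one-hot descriptors
mix1 = mols[[0, 1, 2]]                 # mixtures as rows of component vectors
mix2 = mols[[0, 1, 3]]                 # shares two components with mix1
mix3 = mols[[3, 4, 5]]                 # shares none with mix1

print(round(mixture_similarity(mix1, mix2), 2))  # 0.67
print(round(mixture_similarity(mix1, mix3), 2))  # 0.0
```

    Real descriptor vectors are dense, so the model captures graded structural overlap rather than simple component counting.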

  12. Hydrologic similarity, comparative hydrology and hydrologic extremes

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Laaha, G.; Koffler, D.; Singh, R.

    2012-04-01

    Recent years have brought a renewed focus on the issue of hydrologic similarity. What makes two catchments similar, and what can we do with this understanding? The reason this issue is so important lies at least partially in the need for generalization of results in a scientific field that is limited by the large heterogeneity of our environment. The issue of hydrologic similarity is of course as old as hydrology itself; however, we believe that taking stock is needed from time to time to guide comparative hydrology efforts that have the potential to bring structure into the field of catchment hydrology. Apart from that, catchment similarity is the rationale behind any attempt to predict streamflow at ungauged basins, and a better understanding and definition of hydrologic similarity will enhance our ability to estimate water resources in the absence of stream gauges. In this talk we focus on signatures of hydrologic extremes, i.e. flood and low flow characteristics of streamflow. Can similarity concepts relate catchment behavior under both high and low flow extremes? To what extent do our understanding and our predictive capability regarding hydrologic extremes benefit from a holistic view of individual catchments, and from a comparative analysis between catchments? We will review different studies and present a meta-analysis to highlight the proven and potential benefits of taking a broader view.

  13. Magnus expansion and in-medium similarity renormalization group

    NASA Astrophysics Data System (ADS)

    Morris, T. D.; Parzuchowski, N. M.; Bogner, S. K.

    2015-09-01

    We present an improved variant of the in-medium similarity renormalization group (IM-SRG) based on the Magnus expansion. In the new formulation, one solves flow equations for the anti-Hermitian operator that, upon exponentiation, yields the unitary transformation of the IM-SRG. The resulting flow equations can be solved using a first-order Euler method without any loss of accuracy, resulting in substantial memory savings and modest computational speedups. Since one obtains the unitary transformation directly, the transformation of additional operators beyond the Hamiltonian can be accomplished with little additional cost, in sharp contrast to the standard formulation of the IM-SRG. Ground state calculations of the homogeneous electron gas (HEG) and 16O nucleus are used as test beds to illustrate the efficacy of the Magnus expansion.

  14. Low-dose cardiac imaging: reducing exposure but not accuracy.

    PubMed

    Small, Gary R; Chow, Benjamin J W; Ruddy, Terrence D

    2012-01-01

    Cardiac imaging techniques that use ionizing radiation have become an integral part of current cardiology practice. However, concern has arisen that ionizing radiation exposure, even at the low levels used for medical imaging, is associated with the risk of cancer. From a single diagnostic cardiac imaging procedure, such risks are low. On a population basis, however, malignancies become more likely on account of stochastic effects being more probable as the number of procedures performed increases. In light of this, and owing to professional and industrial commitment to the as low as reasonably achievable (ALARA) principle, over the last decade major strides have been made to reduce radiation dose in cardiac imaging. Dose-reduction strategies have been most pronounced in cardiac computed tomography. This was important since computed tomography has rapidly become a widely used diagnostic alternative to invasive coronary angiography, and initial protocols were associated with relatively high radiation exposures. Advances have also been made in nuclear cardiology and in invasive coronary angiography, and these reductions in patient exposure have all been achieved with maintenance of image quality and accuracy. Improvements in imaging camera technology, image acquisition protocols and image processing have led to reductions in patient radiation exposure without compromising imaging diagnostic accuracy. PMID:22149528

  15. Achievement as Resistance: The Development of a Critical Race Achievement Ideology among Black Achievers

    ERIC Educational Resources Information Center

    Carter, Dorinda J.

    2008-01-01

    In this article, Dorinda Carter examines the embodiment of a critical race achievement ideology in high-achieving black students. She conducted a yearlong qualitative investigation of the adaptive behaviors that nine high-achieving black students developed and employed to navigate the process of schooling at an upper-class, predominantly white,…

  16. Data supporting the high-accuracy haplotype imputation using unphased genotype data as the references.

    PubMed

    Li, Wenzhi; Xu, Wei; He, Shaohua; Ma, Li; Song, Qing

    2016-09-01

    The data presented in this article are related to the research article entitled "High-accuracy haplotype imputation using unphased genotype data as the references", which reports that unphased genotype data can be used as a reference for haplotype imputation [1]. This article reports the implementation generation pipelines and the results of performance comparisons between the different implementations (A, B, and C) and between HiFi and three major imputation software tools. Our data showed that the three implementations perform similarly in accuracy, with implementation-B slightly but consistently more accurate than A and C. HiFi performed better on haplotype imputation accuracy, while the three other software tools performed slightly better on genotype imputation accuracy. These data may provide a strategy for choosing an optimal phasing pipeline and software for different studies. PMID:27595130

  17. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what is beneath the surface, or function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking systems is thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system is assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy depends on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has similar characteristics and values to those of the previous work, thus validating this method for tracker analysis.

  18. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution. PMID:27494227

  19. Accuracy of stream habitat interpolations across spatial scales

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart

    2013-01-01

    Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased, but was maintained up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
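    The coarsening experiment has a simple numerical analogue. The sketch below (entirely synthetic: a smooth "depth" field, and a block-hold reconstruction standing in for natural-neighbor interpolation) subsamples a fine grid at increasingly coarse cell sizes, reconstructs the field, and shows RMSE growing as resolution coarsens, mirroring the decay pattern reported above.

```python
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n] / n
depth = np.sin(4 * np.pi * x) * np.cos(3 * np.pi * y)  # fine-grid baseline

def rmse_at_scale(step):
    """Subsample every `step`-th cell, reconstruct by block hold, score RMSE."""
    coarse = depth[::step, ::step]
    recon = np.repeat(np.repeat(coarse, step, axis=0), step, axis=1)[:n, :n]
    return float(np.sqrt(np.mean((recon - depth) ** 2)))

errors = [rmse_at_scale(s) for s in (2, 4, 8, 16, 32)]
print(all(a < b for a, b in zip(errors, errors[1:])))  # RMSE grows: True
```

    Plotting log(RMSE) against log(cell size) for such a field yields the near-linear decay-of-accuracy pattern that motivates reusing one dataset across scales.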

  20. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most transponder tracking systems are still in an early stage of development and not yet ready for clinical use, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy, which includes transponder technology. However, it has not yet been used for computer-assisted interventions (CAI) in general, nor been assessed for accuracy in a standardized manner. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides a precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.

  1. Prefrontal consolidation supports the attainment of fear memory accuracy

    PubMed Central

    Vieira, Philip A.; Lovelace, Jonathan W.; Corches, Alex; Rashid, Asim J.; Josselyn, Sheena A.

    2014-01-01

    The neural mechanisms underlying the attainment of fear memory accuracy for appropriate discriminative responses to aversive and nonaversive stimuli are unclear. Considerable evidence indicates that coactivator of transcription and histone acetyltransferase cAMP response element binding protein (CREB) binding protein (CBP) is critically required for normal neural function. CBP hypofunction leads to severe psychopathological symptoms in humans and cognitive abnormalities in genetic mutant mice, with severity dependent on the neural locus and developmental time of the gene inactivation. Here, we showed that an acute hypofunction of CBP in the medial prefrontal cortex (mPFC) results in a disruption of fear memory accuracy in mice. In addition, interruption of CREB function in the mPFC also leads to a deficit in auditory discrimination of fearful stimuli. While mice with deficient CBP/CREB signaling in the mPFC maintain normal responses to aversive stimuli, they exhibit abnormal responses to similar but nonrelevant stimuli when compared to control animals. These data indicate that improvement of fear memory accuracy involves mPFC-dependent suppression of fear responses to nonrelevant stimuli. Evidence from a context discriminatory task and a newly developed task that depends on the ability to distinguish discrete auditory cues indicated that CBP-dependent neural signaling within the mPFC circuitry is an important component of the mechanism for disambiguating the meaning of fear signals with two opposing values: aversive and nonaversive. PMID:25031365

  2. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
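    The latent-variable decomposition above can be illustrated with a toy simulation: each rating is a shared case effect plus independent random error, so averaging across readings cancels the random component and raises ROC area. All variances below are invented for the illustration, not the paper's estimates; AUC is computed with the rank-sum (Mann-Whitney) identity.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cases, n_readings = 500, 6
truth = np.repeat([0, 1], n_cases // 2)            # abnormal or not
case_signal = truth * 1.0 + rng.normal(0, 1, n_cases)   # shared case effect
ratings = case_signal[:, None] + rng.normal(0, 1.5, (n_cases, n_readings))

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic (no ties assumed)."""
    ranks = scores.argsort().argsort() + 1
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

single = auc(ratings[:, 0], truth)                 # one reading
averaged = auc(ratings.mean(axis=1), truth)        # mean of six readings
print(averaged > single)  # True: averaging improves accuracy
```

    Averaging over replications of one reader removes only within-reader error; averaging over different readers also removes reader-idiosyncratic case effects, which is why the paper reports a larger gain in the latter case.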

  3. Activity-relevant similarity values for fingerprints and implications for similarity searching

    PubMed Central

    Jasial, Swarit; Hu, Ye; Vogt, Martin; Bajorath, Jürgen

    2016-01-01

    A largely unsolved problem in chemoinformatics is the issue of how calculated compound similarity relates to activity similarity, which is central to many applications. In general, activity relationships are predicted from calculated similarity values. However, there is no solid scientific foundation to bridge between calculated molecular and observed activity similarity. Accordingly, the success rate of identifying new active compounds by similarity searching is limited. Although various attempts have been made to establish relationships between calculated fingerprint similarity values and biological activities, none of these has yielded generally applicable rules for similarity searching. In this study, we have addressed the question of molecular versus activity similarity in a more fundamental way. First, we have evaluated if activity-relevant similarity value ranges could in principle be identified for standard fingerprints and distinguished from similarity resulting from random compound comparisons. Then, we have analyzed if activity-relevant similarity values could be used to guide typical similarity search calculations aiming to identify active compounds in databases. It was found that activity-relevant similarity values can be identified as a characteristic feature of fingerprints. However, it was also shown that such values cannot be reliably used as thresholds for practical similarity search calculations. In addition, the analysis presented herein helped to rationalize differences in fingerprint search performance. PMID:27127620
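    The core comparison, fingerprint similarity among same-activity-class compounds versus random compound pairs, can be sketched with synthetic binary fingerprints (the fingerprint model below is invented, not a real chemical fingerprint) and the Tanimoto (Jaccard) coefficient standard in similarity searching.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two boolean fingerprint vectors."""
    both = np.sum(a & b)
    return both / (np.sum(a) + np.sum(b) - both)

rng = np.random.default_rng(5)
template = rng.random(1024) < 0.3          # shared "substructure" pattern

def analogue():
    """Perturbed copy of the template: same synthetic activity class."""
    flips = rng.random(1024) < 0.05
    return template ^ flips

actives = [analogue() for _ in range(50)]
randoms = [rng.random(1024) < 0.3 for _ in range(50)]

active_sims = [tanimoto(actives[i], actives[i + 1]) for i in range(49)]
random_sims = [tanimoto(randoms[i], randoms[i + 1]) for i in range(49)]
print(np.mean(active_sims) > np.mean(random_sims))  # True
```

    The two similarity distributions separate on average, which is the "activity-relevant value range" effect; the paper's point is that their overlap still makes any single value unreliable as a search threshold.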

  4. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
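    A bit-flip fault model of the kind described can be emulated in a few lines. The sketch below (fault rate and usage purely illustrative, not the paper's emulator) may flip one random bit of a float64's 64-bit pattern on each call, which is the mechanism used to inject hardware-like faults into selected small-scale computations.

```python
import struct
import random

def faulty(value, fault_rate, rng):
    """Return `value`, or with probability `fault_rate` flip one random bit
    of its IEEE 754 double representation."""
    if rng.random() >= fault_rate:
        return value
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    bits ^= 1 << rng.randrange(64)         # flip one of the 64 bits
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

rng = random.Random(6)
print(faulty(1.0, 0.0, rng) == 1.0)        # no faults at rate 0: True
corrupted = [faulty(1.0, 1.0, rng) for _ in range(5)]
print(any(v != 1.0 for v in corrupted))    # flips perturb the value: True
```

    Restricting such a wrapper to arithmetic on the small, near-truncation scales, while keeping large-scale arithmetic exact, mirrors the scale-separation strategy tested in the paper.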

  5. Similarity Metrics for Closed Loop Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Yang, Lee C.; Bedrossian, Naz; Hall, Robert A.

    2008-01-01

    To what extent and in what ways can two closed-loop dynamic systems be said to be "similar?" This question arises in a wide range of dynamic systems modeling and control system design applications. For example, bounds on error models are fundamental to controller optimization with modern control design methods. Metrics such as the structured singular value are direct measures of the degree to which properties such as stability or performance are maintained in the presence of specified uncertainties or variations in the plant model. Similarly, controls-related areas such as system identification, model reduction, and experimental model validation employ measures of similarity between multiple realizations of a dynamic system. Each area has its tools and approaches, with each tool more or less suited for one application or the other. Similarity in the context of closed-loop model validation via flight test is subtly different from error measures in the typical controls-oriented application. Whereas similarity in a robust control context relates to plant variation and its attendant effect on stability and performance, in this context similarity metrics are sought that assess the relevance of a dynamic system test for the purpose of validating the stability and performance of a "similar" dynamic system. Similarity in the context of system identification is much more relevant than are robust control analogies in that errors between one dynamic system (the test article) and another (the nominal "design" model) are sought for the purpose of bounding the validity of a model for control design and analysis. Yet system identification typically involves open-loop plant models which are independent of the control system (with the exception of limited developments in closed-loop system identification, which is nonetheless focused on obtaining open-loop plant models from closed-loop data). 
Moreover the objectives of system identification are not the same as a flight test and

  6. Accuracy of polyp localization at colonoscopy

    PubMed Central

    O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald

    2016-01-01

    Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist’s assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85% (95% confidence interval [CI] 60-96%). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100%, ascending 77% (CI 65-90%), transverse 84% (CI 75-92%), descending 56% (CI 32-81%), sigmoid 88% (CI 79-97%), rectum 96% (CI 90-101%). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of
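    The intervals quoted above are confidence intervals for binomial proportions. The abstract does not state which interval the authors used; a common choice, sketched here with illustrative counts, is the Wilson score interval:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. 240 of 282 polyps localized correctly (illustrative numbers)
lo, hi = wilson_ci(240, 282)
print(f"accuracy {240/282:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```

    Unlike the simple normal approximation, the Wilson interval cannot extend past 0% or 100%.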

  7. Academic Achievement in Children With Oral Clefts Versus Unaffected Siblings

    PubMed Central

    Wehby, George L.; Barron, Sheila; Romitti, Paul A.; Ansley, Timothy N.; Speltz, Matthew L.

    2014-01-01

    Objective To compare academic achievement in children with oral-facial clefts (OFC) with their unaffected siblings. Methods 256 children with OFC were identified from the Iowa Registry for Congenital and Inherited Disorders, and 387 unaffected siblings were identified from birth certificates. These data were linked to Iowa Testing Programs achievement data. We compared academic achievement in children with OFC with their unaffected siblings using linear regression models, adjusted for potential confounders. In post hoc analyses, we explored modifiers of siblings’ academic performance. Results Achievement scores were similar between children with OFC and their siblings. Children with cleft palate only were significantly more likely to use special education than their unaffected siblings. Siblings’ academic achievement was inversely related to distance in birth order and age from the affected child. Conclusion Children with OFC and their siblings received similar achievement scores. Younger siblings, in particular, may share a vulnerability to poor academic outcomes. PMID:24993102

  8. Self-Similar Compressible Free Vortices

    NASA Technical Reports Server (NTRS)

    vonEllenrieder, Karl

    1998-01-01

    Lie group methods are used to find both exact and numerical similarity solutions for compressible perturbations to an incompressible, two-dimensional, axisymmetric vortex reference flow. The reference flow vorticity satisfies an eigenvalue problem for which the solutions are a set of two-dimensional, self-similar, incompressible vortices. These solutions are augmented by deriving a conserved quantity for each eigenvalue, and identifying a Lie group which leaves the reference flow equations invariant. The partial differential equations governing the compressible perturbations to these reference flows are also invariant under the action of the same group. The similarity variables found with this group are used to determine the decay rates of the velocities and thermodynamic variables in the self-similar flows, and to reduce the governing partial differential equations to a set of ordinary differential equations. The ODEs are solved analytically and numerically for a Taylor vortex reference flow, and numerically for an Oseen vortex reference flow. The solutions are used to examine the dependencies of the temperature, density, entropy, dissipation and radial velocity on the Prandtl number. Also, experimental data on compressible free vortex flow are compared to the analytical results, the evolution of vortices from initial states which are not self-similar is discussed, and the energy transfer in a slightly-compressible vortex is considered.

  9. Efficient Video Similarity Measurement and Search

    SciTech Connect

    Cheung, S-C S

    2002-12-19

    The amount of information on the world wide web has grown enormously since its creation in 1990. Duplication of content is inevitable because there is no central management on the web. Studies have shown that many similar versions of the same text documents can be found throughout the web. This redundancy problem is more severe for multimedia content such as web video sequences, as they are often stored in multiple locations and different formats to facilitate downloading and streaming. Similar versions of the same video can also be found, unknown to content creators, when web users modify and republish original content using video editing tools. Identifying similar content can benefit many web applications and content owners. For example, it will reduce the number of similar answers to a web search and identify inappropriate use of copyright content. This dissertation presents a system architecture and corresponding algorithms to efficiently measure, search, and organize similar video sequences found in any large database, such as the web.

  10. Image fusion using bi-directional similarity

    NASA Astrophysics Data System (ADS)

    Bai, Chunshan; Luo, Xiaoyan

    2015-05-01

    Infrared images are widely used in practical applications to capture abundant scene information. However, enhancing an infrared image with a visible image remains challenging. In this paper, we propose an effective fusion method based on bidirectional similarity. The proposed method seeks an optimal solution from among many feasible solutions without introducing an intermediate image. We impose prior constraints that express the requirements of image fusion, namely preserving both the salient characteristics of the infrared image and the spatial detail of the visible image. In the iterative step, we use a matrix of squared differences between images, which we call the bidirectional similarity distance, to integrate the image holding the most information. From the bidirectional similarity distance we obtain transitive images, which are then fused according to their weights. Experimental results show that, compared with traditional image fusion algorithms, fusion images produced by the bidirectional similarity algorithm are greatly improved in subjective visual quality, entropy, and structural similarity index measurement. We believe the proposed scheme can find wide application.
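    A minimal reading of the "bidirectional similarity distance" described above is the squared difference between a fused image and each source image, accumulated in both directions. The sketch below is not the paper's algorithm (the iterative optimization and priors are omitted); it only illustrates the cost and why extra detail-preserving constraints are needed, since the plain average already minimizes it:

```python
import numpy as np

def bidir_distance(fused, ir, vis):
    """Mean squared difference of the fused image to both sources."""
    return float(((fused - ir) ** 2).mean() + ((fused - vis) ** 2).mean())

rng = np.random.default_rng(1)
ir, vis = rng.random((16, 16)), rng.random((16, 16))

avg = (ir + vis) / 2.0               # unconstrained minimizer of this cost
d_avg = bidir_distance(avg, ir, vis)
d_ir  = bidir_distance(ir,  ir, vis) # taking one source verbatim scores worse
print(f"avg: {d_avg:.4f}  ir-only: {d_ir:.4f}")
```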

  11. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provides benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  12. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.
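    The frequency-stability language above (white FM, flicker FM, averaging over 30-100 days) is conventionally summarized with the Allan deviation. The following is a generic overlapping Allan deviation estimator for fractional-frequency data, not the paper's analytic uncertainty calculation; for white FM noise the deviation falls off as 1/sqrt(tau), which is why long averaging intervals help frequency transfer:

```python
import numpy as np

def adev(y, m):
    """Overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    ym = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample running means
    d = ym[m:] - ym[:-m]                               # differences of adjacent averages
    return float(np.sqrt(0.5 * np.mean(d * d)))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 100_000)      # white FM noise, sigma_y(tau0) = 1
print(adev(y, 1), adev(y, 16))         # second value ~ 1/sqrt(16) of the first
```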

  13. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested for both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. StepWatch accuracy on the stairs was significantly lower (p < 0.001) than on the other surfaces in a repeated-measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use. PMID:23341318

  14. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  15. A bootstrap method for assessing classification accuracy and confidence for agricultural land use mapping in Canada

    NASA Astrophysics Data System (ADS)

    Champagne, Catherine; McNairn, Heather; Daneshfar, Bahram; Shang, Jiali

    2014-06-01

    Land cover and land use classifications from remote sensing are increasingly becoming institutionalized framework data sets for monitoring environmental change. As such, the need for robust statements of classification accuracy is critical. This paper describes a method to estimate confidence in classification model accuracy using a bootstrap approach. Using this method, it was found that classification accuracy and confidence, while closely related, can be used in complementary ways: to provide additional information on map accuracy, to define groups of classes, and to inform future reference sampling strategies. Overall classification accuracy increases with the number of fields surveyed, while the width of the classification confidence bounds decreases. Individual class accuracies and confidence were non-linearly related to the number of fields surveyed. Results indicate that some classes can be estimated accurately and confidently with fewer samples, whereas others require larger reference data sets to achieve satisfactory results. This approach is an improvement over other approaches for estimating class accuracy and confidence, as it uses repetitive sampling to produce a more realistic estimate of the range in classification accuracy and confidence that can be obtained with different reference data inputs.
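    The percentile bootstrap the abstract describes can be sketched in a few lines. The data below are simulated stand-ins for reference fields and classified labels, not the crop-mapping data from the study:

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for overall classification accuracy:
    resample the reference samples with replacement and recompute."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(y_true == y_pred)), (float(lo), float(hi))

# Simulated reference labels vs. a map that is right ~85% of the time
rng = np.random.default_rng(42)
y_true = rng.integers(0, 4, 300)
y_pred = np.where(rng.random(300) < 0.85, y_true, rng.integers(0, 4, 300))
acc, (lo, hi) = bootstrap_accuracy_ci(y_true, y_pred)
print(f"accuracy {acc:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

    Per-class intervals follow the same pattern, restricted to the resampled rows of each class.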

  16. Humans Process Dog and Human Facial Affect in Similar Ways

    PubMed Central

    Schirmer, Annett; Seow, Cui Shan; Penney, Trevor B.

    2013-01-01

    Humans share aspects of their facial affect with other species such as dogs. Here we asked whether untrained human observers with and without dog experience are sensitive to these aspects and recognize dog affect with better-than-chance accuracy. Additionally, we explored similarities in the way observers process dog and human expressions. The stimulus material comprised naturalistic facial expressions of pet dogs and human infants obtained through positive (i.e., play) and negative (i.e., social isolation) provocation. Affect recognition was assessed explicitly in a rating task using full face images and images cropped to reveal the eye region only. Additionally, affect recognition was assessed implicitly in a lexical decision task using full faces as primes and emotional words and pseudowords as targets. We found that untrained human observers rated full face dog expressions from the positive and negative condition more accurately than would be expected by chance. Although dog experience was unnecessary for this effect, it significantly facilitated performance. Additionally, we observed a range of similarities between human and dog face processing. First, the facial expressions of both species facilitated lexical decisions to affectively congruous target words suggesting that their processing was equally automatic. Second, both dog and human negative expressions were recognized from both full and cropped faces. Third, female observers were more sensitive to affective information than were male observers and this difference was comparable for dog and human expressions. Together, these results extend existing work on cross-species similarities in facial emotions and provide evidence that these similarities are naturally exploited when humans interact with dogs. PMID:24023954

  17. Achieving Excellence in Urban Schools: Pitfalls, Pratfalls, and Evolving Opportunities

    ERIC Educational Resources Information Center

    Taylor, Jerome

    2005-01-01

    If effects of education reforms in the future are similar to effects of education reforms in the past, it may take between 65 and 780 years to close racial achievement gaps in math and science (Taylor, in preparation). From a review of the literature, I found that educational reforms, directed toward eliminating achievement differences associated…

  18. An automated method for the evaluation of the pointing accuracy of sun-tracking devices

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dietmar J.; Rieder, Harald E.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz

    2016-04-01

    The accuracy of measurements of solar radiation (direct and diffuse radiation) depends significantly on the accuracy of the operational sun-tracking device. Thus rigid targets for instrument performance and operation are specified for international monitoring networks, such as the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices fulfilling these accuracy targets are available from various instrument manufacturers; however, none of the commercially available systems comprises a secondary accuracy control system allowing platform operators to independently validate the pointing accuracy of sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent and cost-effective method for evaluating the pointing accuracy of sun-tracking devices. We detail the monitoring system setup, its design and specifications, and results from its application to the sun-tracking system operated at the Austrian RADiation network (ARAD) site Kanzelhöhe Observatory (KSO). Results from KSO-STREAMS (for mid-March to mid-June 2015) show that the tracking accuracy of the device operated at KSO lies well within BSRN specifications (i.e., 0.1 degree accuracy). We contrast results during clear-sky and partly cloudy conditions, documenting sun-tracking performance at manufacturer-specified accuracies for active tracking (0.02 degrees), and highlight accuracies achieved during passive tracking, i.e., periods with less than 300 W m^-2 direct radiation. Furthermore we detail limitations to tracking surveillance during overcast conditions and periods of partial solar limb coverage by clouds.
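    The scalar pointing error compared against the 0.1 degree BSRN target is the great-circle angle between the commanded and observed sun directions. A generic sketch (not KSO-STREAMS code; the offsets are invented):

```python
import numpy as np

def pointing_error_deg(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two (azimuth, elevation)
    directions -- the scalar pointing error of a tracker."""
    a1, e1, a2, e2 = np.radians([az1, el1, az2, el2])
    cos_d = (np.sin(e1) * np.sin(e2)
             + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
    return float(np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0))))

# e.g. tracker reads 0.05 deg off in azimuth at 30 deg elevation
err = pointing_error_deg(180.00, 30.0, 180.05, 30.0)
print(f"pointing error: {err:.4f} deg")
```

    Note that an azimuth offset shrinks as cos(elevation) when projected onto the sky, which is why the error here is below the raw 0.05 degree offset.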

  19. Similarity in seismogeodynamics on different scales

    NASA Astrophysics Data System (ADS)

    Ruzhich, V. V.; Psakhie, S. G.; Levina, E. A.; Dimaki, A. V.; Astafurov, S. V.; Shilko, E. V.

    2015-10-01

    Long-term research into the preparation of earthquakes of different energies with M = 3.5-7.9 within the Baikal rift zone shows that they are similar to each other and to microquakes with E = 1-10^3 J initiated on tectonic fault fragments in natural experiments. Moreover, detailed studies of slickensides of dimensions 1-10^3 m^2 in tectonic faults also demonstrate their physicomechanical similarity to each other and to nano- and microscale contact patches of different materials in laboratory experiments. The research results confirm the conclusion that there exists a similarity in the laws of contact interaction of different solids, including their transition from stick to dynamic slip, from nanoscopic to geodynamic scales.

  20. Percolation in Self-Similar Networks

    NASA Astrophysics Data System (ADS)

    Serrano, M. Ángeles; Krioukov, Dmitri; Boguñá, Marián

    2011-01-01

    We provide a simple proof that graphs in a general class of self-similar networks have zero percolation threshold. The considered self-similar networks include random scale-free graphs with given expected node degrees and zero clustering, scale-free graphs with finite clustering and metric structure, growing scale-free networks, and many real networks. The proof and the derivation of the giant component size do not require the assumption that networks are treelike. Our results rely only on the observation that self-similar networks possess a hierarchy of nested subgraphs whose average degree grows with their depth in the hierarchy. We conjecture that this property is pivotal for percolation in networks.
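    The objects involved (bond percolation, giant component size) are easy to simulate. The toy below uses a plain random graph, which has a nonzero threshold, unlike the self-similar scale-free networks of the paper, whose threshold is zero; it only illustrates how a giant-component fraction is measured:

```python
import random

class DSU:
    """Union-find over n nodes, tracking component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def giant_fraction(n, edges, keep_p, rng):
    """Keep each edge with probability keep_p (bond percolation) and
    return the largest connected component as a fraction of n."""
    dsu = DSU(n)
    for u, v in edges:
        if rng.random() < keep_p:
            dsu.union(u, v)
    return max(dsu.size[dsu.find(i)] for i in range(n)) / n

rng = random.Random(0)
n = 2000
edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(2 * n)]  # mean degree ~4
sup = giant_fraction(n, edges, 0.90, rng)   # supercritical: giant component
sub = giant_fraction(n, edges, 0.05, rng)   # subcritical: only small fragments
print(f"keep 90%: giant = {sup:.2f};  keep 5%: giant = {sub:.3f}")
```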

  1. Similar biotherapeutic products: overview and reflections.

    PubMed

    Desanvicente-Celis, Zayrho; Gomez-Lopez, Arley; Anaya, Juan-Manuel

    2012-12-01

    Biotherapeutic products (BPs) have revolutionized medicine, changing the way we treat several pathologies such as autoimmune diseases and cancer, among others. Herein, we present an overview of similar BPs (SBPs), also called biosimilars, including the manufacturing process and regulatory aspects involved. The objective of developing an SBP is to manufacture a molecule that is highly similar to a reference BP by conducting a comparability exercise (CE) that can demonstrate similar safety and efficacy. This CE consists of quality as well as nonclinical and clinical evaluation. A case-by-case analysis approach guided by scientific and objective standards must be the foundation for the SBP approval process. The establishment of a balance between a comprehensive CE for SBPs and their reference BPs, and the design of cost-effective strategies to provide better access to BPs, should be the key goal for national regulatory authorities. PMID:23240752

  2. Similarity issues of confluence of fuzzy relations

    NASA Astrophysics Data System (ADS)

    Kuhr, Tomas; Vychodil, Vilem

    2012-04-01

    We study similarity issues of confluence and related properties of fuzzy relations. The ordinary notions of divergence, convergence, convertibility, and confluence are essential properties of relations which appear in abstract rewriting systems. In a graded setting, we can introduce analogous notions related to the idea of approximate rewriting. This paper is a continuation of our previous paper (Belohlavek et al. 2010), where we have introduced such notions using residuated structures of truth degrees, leaving the ordinary notions a particular case when the underlying structure of truth degrees is the two-valued Boolean algebra. In this paper, we focus on similarity issues of confluence and present formulas which provide approximations of degrees of confluence and related properties based on similarity of the original fuzzy relations.

  3. Unmet Promise: Raising Minority Achievement. The Achievement Gap.

    ERIC Educational Resources Information Center

    Johnston, Robert C.; Viadero, Debra

    2000-01-01

    This first in a four-part series on why academic achievement gaps persist discusses how to raise minority achievement. It explains how earlier progress in closing the gap has stalled, while at the same time, the greater diversity of student populations and the rapid growth of the Hispanic population and of other ethnic groups have reshaped the…

  4. To Achieve or Not to Achieve: The Question of Women.

    ERIC Educational Resources Information Center

    Gilmore, Beatrice

    Questionnaire and projective data from 323 women aged 18 to 50 were analyzed in order to study the relationships of need achievement and motive to avoid success to age, sex role ideology, and stage in the family cycle. Family background and educational variables were also considered. Level of need achievement was found to be significantly related…

  5. Mathematics Achievement in High- and Low-Achieving Secondary Schools

    ERIC Educational Resources Information Center

    Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza

    2015-01-01

    This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…

  6. Accuracies of Incoming Radiation: Calibrations of Total Solar Irradiance Instruments

    NASA Astrophysics Data System (ADS)

    Kopp, G.; Harber, D.; Heuerman, K.

    2009-04-01

    All of the energy tracked by the GEWEX Radiative Flux Assessment and the driving energy for Earth climate is incident at the top of the Earth's atmosphere as solar radiation. The total solar irradiance (TSI) has been monitored continually for over 30 years from space. Continuity of these measurements has enabled the creation of composite time series from which the radiative forcing inputs to climate models are derived and solar forcing sensitivities are determined. None of the ten spaceborne TSI instruments contributing to the solar climate data record have been calibrated or validated end-to-end for irradiance accuracy under flight-like conditions, and calibration inaccuracies contribute to seemingly large offsets between the TSI values reported by each instrument. The newest of the flight TSI instruments, the SOlar Radiation and Climate Experiment (SORCE) Total Irradiance Monitor (TIM), measures lower solar irradiance than prior instruments. I will review the accuracies of flight TSI instruments, discuss possible causes for the offsets between them, and describe a recently built calibration facility to improve the accuracies of future TSI instruments. The TSI Radiometer Facility (TRF) enables end-to-end comparisons of TSI instruments to a NIST-calibrated cryogenic radiometer. For the first time, TSI instruments can be validated directly against a cryogenic radiometer under flight-like conditions for measuring irradiance (rather than merely optical power) at solar power levels while under vacuum. The TRF not only validates TSI instrument accuracy, but also can help diagnose the causes of offsets between different instruments. This facility recently validated the accuracy of the TIM to be launched this year on NASA's Glory mission, establishing a baseline that can link the Glory/TIM to future TSI instruments via this ground-based comparison. Similar tests on the TRF with a ground-based SORCE/TIM support the lower TSI values measured by the SORCE flight unit. 
These

  7. Energy expenditure prediction via a footwear-based physical activity monitor: Accuracy and comparison to other devices

    NASA Astrophysics Data System (ADS)

    Dannecker, Kathryn

    2011-12-01

    Accurately estimating free-living energy expenditure (EE) is important for monitoring or altering energy balance and quantifying levels of physical activity. The use of accelerometers to monitor physical activity and estimate physical activity EE is common in both research and consumer settings. Recent advances in physical activity monitors include the ability to identify specific activities (e.g. stand vs. walk), which has resulted in improved EE estimation accuracy. Recently, a multi-sensor footwear-based physical activity monitor that is capable of achieving 98% activity identification accuracy has been developed. However, no study has assessed the EE estimation accuracy of this monitor or compared it to other similar devices. Purpose. To determine the accuracy of physical activity EE estimation of a footwear-based physical activity monitor that uses an embedded accelerometer and insole pressure sensors, and to compare this accuracy against a variety of research and consumer physical activity monitors. Methods. Nineteen adults (10 male, 9 female), mass: 75.14 (17.1) kg, BMI: 25.07 (4.6) kg/m^2 (mean (SD)), completed a four-hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as three physical activity monitoring devices used in research: hip-mounted Actical and Actigraph accelerometers and a multi-accelerometer IDEEA device with sensors secured to the limb and chest. In addition, participants wore two consumer devices: Philips DirectLife and Fitbit. Each individual performed a series of randomly assigned and ordered postures/activities including lying, sitting (quietly and using a computer), standing, walking, stepping, cycling, sweeping, as well as a period of self-selected activities. We developed branched (i.e. activity-specific) linear regression models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Results. The shoe

  8. Quantifying the similarities within fold space.

    PubMed

    Harrison, Andrew; Pearl, Frances; Mott, Richard; Thornton, Janet; Orengo, Christine

    2002-11-01

    We have used GRATH, a graph-based structure comparison algorithm, to map the similarities between the different folds observed in the CATH domain structure database. Statistical analysis of the distributions of the fold similarities has allowed us to assess the significance of any similarity. Therefore we have examined whether it is best to represent folds as discrete entities or whether, in fact, a more accurate model would be a continuum wherein folds overlap via common motifs. To do this we have introduced a new statistical measure of fold similarity, termed gregariousness. For a particular fold, gregariousness measures how many other folds have a significant structural overlap with that fold, typically comprising 40% or more of the larger structure. Gregarious folds often contain commonly occurring super-secondary structural motifs, such as beta-meanders, greek keys, alpha-beta plait motifs or alpha-hairpins, which match similar motifs in other folds. Apart from one example, all the most gregarious folds, matching 20% or more of the other folds in the database, are alpha-beta proteins. They also occur in highly populated architectural regions of fold space, adopting sandwich-like arrangements containing two or more layers of alpha-helices and beta-strands. Domains that exhibit low gregariousness are those that have very distinctive folds, with few common motifs or with motifs that are packed in unusual arrangements. Most of the superhelices exhibit low gregariousness despite containing some commonly occurring super-secondary structural motifs. In these folds, the common motifs are combined in an unusual way and represent a small proportion of the fold (<10%). Our results suggest that fold space may be considered as continuous for some architectural arrangements (e.g. alpha-beta sandwiches), in that super-secondary motifs can be used to link neighbouring fold groups. However, in other regions of fold space much more discrete topologies are observed with
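    Under the definition given above, gregariousness is a per-fold count over a pairwise overlap matrix. A sketch with invented overlap values (not CATH data):

```python
import numpy as np

def gregariousness(overlap, threshold=0.4):
    """overlap[i, j] = fraction of the larger of folds i, j covered by
    their common match; count folds j != i with overlap >= threshold."""
    m = np.asarray(overlap, dtype=float)
    hits = (m >= threshold) & ~np.eye(len(m), dtype=bool)
    return hits.sum(axis=1)

# Invented 4-fold example: fold 0 overlaps everything, fold 3 very little
overlap = np.array([
    [1.00, 0.50, 0.60, 0.45],
    [0.50, 1.00, 0.42, 0.10],
    [0.60, 0.42, 1.00, 0.20],
    [0.45, 0.10, 0.20, 1.00],
])
print(gregariousness(overlap))   # -> [3 2 2 1]
```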

  9. Quantifying the similarity of seismic polarizations

    NASA Astrophysics Data System (ADS)

    Jones, Joshua P.; Eaton, David W.; Caffagni, Enrico

    2016-02-01

    Assessing the similarities of seismic attributes can help identify tremor, low signal-to-noise (S/N) signals and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values or known seismic sources. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in S/N ratio. Measuring polarization similarity allows easy identification of site noise and sensor misalignment and can help identify coherent noise and emergent or low S/N phase arrivals. Dissimilar azimuths during phase arrivals indicate misaligned horizontal components, dissimilar incidence angles during phase arrivals indicate misaligned vertical components and dissimilar linear polarization may indicate a secondary noise source. Using records of the Mw = 8.3 Sea of Okhotsk earthquake, from Canadian National Seismic Network broad-band sensors in British Columbia and Yukon Territory, Canada, and a vertical borehole array at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Time-frequency polarization similarities of borehole data suggest that a coherent noise source may have persisted above 8 Hz several months after peak resource extraction from a `flowback' type hydraulic fracture.
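    A single-window sketch of histogram-based polarization similarity is below. The histogram-intersection score and the folding of azimuths onto [0, 180) to handle axial ambiguity are illustrative choices; the paper's weighted, sliding-window, time-frequency formulation is more elaborate:

```python
import numpy as np

def hist_similarity(theta_a, theta_b, bins=36, weights_a=None, weights_b=None):
    """Histogram-intersection similarity of two sets of polarization
    azimuths on [0, 180) deg (axial data: theta and theta + 180 deg
    describe the same polarization axis). Returns 1.0 for identical
    distributions, 0.0 for disjoint ones."""
    ha, _ = np.histogram(np.mod(theta_a, 180.0), bins=bins,
                         range=(0.0, 180.0), weights=weights_a)
    hb, _ = np.histogram(np.mod(theta_b, 180.0), bins=bins,
                         range=(0.0, 180.0), weights=weights_b)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

rng = np.random.default_rng(3)
same = hist_similarity(rng.normal(45, 5, 1000), rng.normal(45, 5, 1000))
diff = hist_similarity(rng.normal(45, 5, 1000), rng.normal(135, 5, 1000))
print(round(same, 2), round(diff, 2))
```

    Dissimilar azimuth histograms across stations during a phase arrival would, per the abstract, flag misaligned horizontal components.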

  10. Some more similarities between Peirce and Skinner.

    PubMed

    Moxley, Roy A

    2002-01-01

    C. S. Peirce is noted for pioneering a variety of views, and the case is made here for the similarities and parallels between his views and B. F. Skinner's radical behaviorism. In addition to parallels previously noted, these similarities include an advancement of experimental science, a behavioral psychology, a shift from nominalism to realism, an opposition to positivism, a selectionist account for strengthening behavior, the importance of a community of selves, a recursive approach to method, and the probabilistic nature of truth. Questions are raised as to the extent to which Skinner's radical behaviorism, as distinguished from his S-R positivism, may be seen as an extension of Peirce's pragmatism. PMID:22478387
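
    (No code example applies to this historical discussion; however, the probabilistic notion of truth mentioned above has a simple frequentist illustration: an estimate that converges toward a limit as evidence accumulates. This sketch is purely illustrative and not drawn from the paper.)

```python
import random

# Frequentist illustration: the running relative frequency of heads
# approaches the true probability as trials accumulate.
random.seed(42)
true_p = 0.3
heads = 0
estimates = []
for n in range(1, 10_001):
    heads += random.random() < true_p
    estimates.append(heads / n)

early_error = abs(estimates[9] - true_p)      # after 10 trials
late_error = abs(estimates[-1] - true_p)      # after 10,000 trials
print(late_error < 0.05)  # the late estimate sits close to the true value
```
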

  12. Accuracy and Efficiency in Fixed-Point Neural ODE Solvers.

    PubMed

    Hopkins, Michael; Furber, Steve

    2015-10-01

    Simulation of neural behavior on digital architectures often requires the solution of ordinary differential equations (ODEs) at each step of the simulation. For some neural models, this is a significant computational burden, so efficiency is important. Accuracy is also relevant because solutions can be sensitive to model parameterization and time step. These issues are emphasized on fixed-point processors like the ARM unit used in the SpiNNaker architecture. Using the Izhikevich neural model as an example, we explore some solution methods, showing how specific techniques can be used to find balanced solutions. We have investigated a number of important and related issues, such as introducing explicit solver reduction (ESR) for merging an explicit ODE solver and autonomous ODE into one algebraic formula, with benefits for both accuracy and speed; a simple, efficient mechanism for cancelling the cumulative lag in state variables caused by threshold crossing between time steps; an exact result for the membrane potential of the Izhikevich model with the other state variable held fixed. Parametric variations of the Izhikevich neuron show both similarities and differences in terms of algorithms and arithmetic types that perform well, making an overall best solution challenging to identify, but we show that particular cases can be improved significantly using the techniques described. Using a 1 ms simulation time step and 32-bit fixed-point arithmetic to promote real-time performance, one of the second-order Runge-Kutta methods looks to be the best compromise; Midpoint for speed or Trapezoid for accuracy. SpiNNaker offers an unusual combination of low energy use and real-time performance, so some compromises on accuracy might be expected. However, with a careful choice of approach, results comparable to those of general-purpose systems should be possible in many realistic cases. PMID:26313605
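
    As a rough illustration of the trade-offs the paper studies, here is a fixed-step second-order Runge-Kutta (midpoint) integration of the standard Izhikevich model at a 1 ms time step, with floating point standing in for SpiNNaker's 32-bit fixed-point arithmetic (the ESR and lag-cancellation techniques are not reproduced):

```python
# Izhikevich model: dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u),
# with reset v <- c, u <- u + d when v crosses 30 mV.
def izhikevich_midpoint_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, h=1.0):
    def f(v, u):
        return 0.04 * v * v + 5.0 * v + 140.0 - u + I, a * (b * v - u)

    dv1, du1 = f(v, u)                                  # slope at window start
    dv2, du2 = f(v + 0.5 * h * dv1, u + 0.5 * h * du1)  # slope at midpoint
    v, u = v + h * dv2, u + h * du2
    if v >= 30.0:                                       # threshold crossing
        return c, u + d, True
    return v, u, False

v, u, spikes = -65.0, -13.0, 0
for _ in range(1000):  # 1 s of simulated time with tonic input
    v, u, fired = izhikevich_midpoint_step(v, u, I=10.0)
    spikes += fired
print(spikes > 0)  # the neuron fires repeatedly under this input
```
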

  13. Decreased interoceptive accuracy following social exclusion.

    PubMed

    Durlik, Caroline; Tsakiris, Manos

    2015-04-01

    The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592

  14. Training in timing improves accuracy in golf.

    PubMed

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  15. Modeling of cw OIL energy performance based on similarity criteria

    NASA Astrophysics Data System (ADS)

    Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.

    2012-01-01

    A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of active medium in the resonator to the O2(1Δ) reduction time at the infinitely large intraresonator intensity. Criterion Π is the ratio of small-signal gain to threshold gain. Criterion Λ is the relaxation to excitation rate ratio for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria are located in the intervals: τd=5-8, Π=3-8 and Λ<=0.01.
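
    The quoted operating intervals can be expressed as a trivial check (a sketch using only the numbers stated in the abstract; the function name is hypothetical):

```python
def oil_criteria_ok(tau_d, Pi, Lam):
    """Effective cw OIL power extraction per the stated intervals:
    tau_d in [5, 8], Pi in [3, 8], Lam <= 0.01."""
    return 5.0 <= tau_d <= 8.0 and 3.0 <= Pi <= 8.0 and Lam <= 0.01

print(oil_criteria_ok(6.0, 5.0, 0.005))  # inside all three intervals
print(oil_criteria_ok(6.0, 5.0, 0.05))   # relaxation too fast relative to excitation
```
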

  16. Self-similar Isochoric Implosions for Fast Ignition

    NASA Astrophysics Data System (ADS)

    Clark, Daniel

    2005-10-01

    Fast Ignition (FI) exploits the ignition of a dense, uniform fuel assembly by an external energy source to achieve high gain. However, in conventional ICF implosions, the fuel assembles as a dense shell surrounding a low density, high-pressure hotspot. Such configurations are far from optimal for FI. Here, it is shown that a self-similar spherical implosion of the type studied by Guderley [Luftfahrtforschung 19, 302 (1942).] and later Meyer-ter-Vehn & Schalk [Z. Naturforsch. 37a, 955 (1982).] may be employed to implode dense, uniform fuel assemblies with minimal energy wastage in forming a hotspot. The connection to "realistic" (i.e., non-self-similar) implosion schemes using laser or X-ray drive is also investigated.

  17. Affective Processes and Academic Achievement.

    ERIC Educational Resources Information Center

    Feshbach, Norma Deitch; Feshbach, Seymour

    1987-01-01

    Data indicate that for girls, affective dispositional factors (empathy, depressive affectivity, aggression, and self-concept) are intimately linked to cognitive development and academic achievement. (PCB)

  18. Attribution theory in science achievement

    NASA Astrophysics Data System (ADS)

    Craig, Martin

    Recent research reveals consistent lags in American students' science achievement scores. Not only are the scores lower in the United States compared to other developed nations, but even within the United States, too many students are well below science proficiency scores for their grade levels. The current research addresses this problem by examining potential malleable factors that may predict science achievement in twelfth graders using 2009 data from the National Assessment of Educational Progress (NAEP). Principal component factor analysis was conducted to determine the specific items that contribute to each overall factor. A series of multiple regressions was then run to assess the predictive value of each of these factors for science achievement. All significant factors were ultimately examined together (also using multiple regression) to identify the most powerful predictors of science achievement, the results of which suggested interventions to strengthen students' science achievement scores and encourage persistence in the sciences at the college level and beyond. Although there is a variety of research highlighting how students in the US are falling behind other developed nations in science and math achievement, as yet, little research has addressed ways of intervening to address this gap. The current research is a starting point, seeking to identify malleable factors that contribute to science achievement. More specifically, this research examined the types of attributions that predict science achievement in twelfth grade students.

  19. An automatic and accurate x-ray tube focal spot/grid alignment system for mobile radiography: System description and alignment accuracy

    SciTech Connect

    Gauntt, David M.; Barnes, Gary T.

    2010-12-15

    Purpose: A mobile radiography automatic grid alignment system (AGAS) has been developed by modifying a commercially available mobile unit. The objectives of this article are to describe the modifications and operation and to report on the accuracy with which the focal spot is aligned to the grid and the time required to achieve the alignment. Methods: The modifications include an optical target arm attached to the grid tunnel, a video camera attached to the collimator, a motion control system with six degrees of freedom to position the collimator and x-ray tube, and a computer to control the system. The video camera and computer determine the grid position, and then the motion control system drives the x-ray focal spot to the center of the grid focal axis. The accuracy of the alignment of the focal spot with the grid and the time required to achieve alignment were measured both in laboratory tests and in clinical use. Results: For a typical exam, the modified unit automatically aligns the focal spot with the grid in less than 10 s, with an accuracy of better than 4 mm. The results of the speed and accuracy tests in clinical use were similar to the results in laboratory tests. Comparison patient chest images are presented--one obtained with a standard mobile radiographic unit without a grid and the other obtained with the modified unit and a 15:1 grid. The 15:1 grid images demonstrate a marked improvement in image quality compared to the nongrid images with no increase in patient dose. Conclusions: The mobile radiography AGAS produces images of significantly improved quality compared to nongrid images with alignment times of less than 10 s and no increase in patient dose.

  20. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  1. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends in the main on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Considering the 420 comets that had been seen before Halley's era of orbital calculation (1695 - 1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: do all the orbits listed by Halley have the same accuracy, and how accurate was Halley's method of calculation?

  2. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  3. Charge Detection Mass Spectrometry with Almost Perfect Charge Accuracy.

    PubMed

    Keifer, David Z; Shinholt, Deven L; Jarrold, Martin F

    2015-10-20

    Charge detection mass spectrometry (CDMS) is a single-particle technique where the masses of individual ions are determined from simultaneous measurement of each ion's mass-to-charge ratio (m/z) and charge. CDMS has many desirable features: it has no upper mass limit, no mass discrimination, and it can analyze complex mixtures. However, the charge is measured directly, and the poor accuracy of the charge measurement has severely limited the mass resolution achievable with CDMS. Since the charge is quantized, it needs to be measured with sufficient accuracy to assign each ion to its correct charge state. This goal has now been largely achieved. By reducing the pressure to extend the trapping time and by implementing a novel analysis method that improves the signal-to-noise ratio and compensates for imperfections in the charge measurement, the uncertainty has been reduced to less than 0.20 e rmsd (root-mean-square deviation). With this unprecedented precision peaks due to different charge states are resolved in the charge spectrum. Further improvement can be achieved by quantizing the charge (rounding the measured charge to the nearest integer) and culling ions with measured charges midway between the integral values. After ions with charges more than one standard deviation from the mean are culled, the fraction of ions assigned to the wrong charge state is estimated to be 6.4 × 10^(-5) (i.e., less than 1 in 15 000). Since almost all remaining ions are assigned to their correct charge state, the uncertainty in the mass is now almost entirely limited by the uncertainty in the m/z measurement. PMID:26418830
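
    The quantize-and-cull step described above can be sketched as follows (a minimal illustration; the one-standard-deviation culling threshold follows the abstract, but the helper name and measured values are hypothetical):

```python
import numpy as np

def assign_charges(measured, sigma=0.2, cull_sd=1.0):
    """Round measured charges to the nearest integer charge state; cull ions
    whose measured charge lies more than cull_sd standard deviations from it."""
    measured = np.asarray(measured, dtype=float)
    nearest = np.rint(measured)
    keep = np.abs(measured - nearest) <= cull_sd * sigma
    return nearest[keep].astype(int), keep

# The ion at 100.55 e sits midway between charge states and is culled
charges, kept = assign_charges([99.95, 100.12, 100.55, 101.08], sigma=0.2)
print(list(charges))
```
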

  4. Estimating the self-similar exponent of broad-sense self-similar processes

    NASA Astrophysics Data System (ADS)

    Zheng, Jing; Zhang, Guijun; Tong, Changqing

    2016-02-01

    In this paper, a new algorithm about the self-similar exponent of self-similar processes is introduced which is used to explore long memory in financial time series. This method can work for more general broad-sense self-similar processes. We prove that this algorithm performs much better than the classical methods.
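
    The abstract does not reproduce the new algorithm, but the classical methods it is benchmarked against include the aggregated-variance estimator, which works roughly like this (a sketch, assuming the input is the increment series of an H-self-similar process):

```python
import numpy as np

def aggregated_variance_H(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate H from an increment series: the variance of block means
    scales as m**(2H - 2), so a log-log fit of variance against block
    size m has slope 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(x) // m
        block_means = x[: n * m].reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(block_means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return slope / 2.0 + 1.0

rng = np.random.default_rng(1)
H = aggregated_variance_H(rng.normal(size=100_000))  # white noise: H ~ 0.5
print(abs(H - 0.5) < 0.1)
```
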

  5. Fuzzy similarity index for discrimination of EEG signals.

    PubMed

    Ubeyli, Elif Derya

    2006-01-01

    In this study, a new approach based on the computation of a fuzzy similarity index was presented for discrimination of electroencephalogram (EEG) signals. The EEG, a highly complex signal, is one of the most common sources of information used to study brain function and neurological disorders. The analyzed EEG signals consisted of five sets (set A-healthy volunteer, eyes open; set B-healthy volunteer, eyes closed; set C-seizure-free intervals of five patients from hippocampal formation of opposite hemisphere; set D-seizure-free intervals of five patients from epileptogenic zone; set E-epileptic seizure segments). The EEG signals were considered as chaotic signals and this consideration was tested successfully by the computation of Lyapunov exponents. The computed Lyapunov exponents were used to represent the EEG signals. The aim of the study is to discriminate the EEG signals by the combination of Lyapunov exponents and the fuzzy similarity index. Toward achieving this aim, fuzzy sets were obtained from the feature sets (Lyapunov exponents) of the signals under study. The results demonstrated that the similarity between the fuzzy sets of the studied signals indicated the variabilities in the EEG signals. Thus, the fuzzy similarity index could discriminate the healthy EEG segments (sets A and B) and the other three types of segments (sets C, D, and E) recorded from epileptic patients. PMID:17945895
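
    One simple, standard fuzzy-set similarity index is shown here for illustration (the paper's exact construction is not reproduced in the abstract, and the membership grades below are hypothetical):

```python
import numpy as np

def fuzzy_similarity_index(mu_a, mu_b):
    """Similarity of two fuzzy sets given their membership grades:
    S(A, B) = sum(min) / sum(max), in [0, 1], with 1 meaning identical sets."""
    mu_a, mu_b = np.asarray(mu_a, dtype=float), np.asarray(mu_b, dtype=float)
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

# Hypothetical membership grades derived from Lyapunov-exponent features
healthy = [0.9, 0.8, 0.1]
seizure = [0.2, 0.3, 0.9]
print(fuzzy_similarity_index(healthy, healthy))           # identical sets: 1.0
print(round(fuzzy_similarity_index(healthy, seizure), 3))  # dissimilar sets
```
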

  6. Identification and sorting of regular textures according to their similarity

    NASA Astrophysics Data System (ADS)

    Hernández Mesa, Pilar; Anastasiadis, Johannes; Puente León, Fernando

    2015-05-01

    Regardless of whether mosaics, material surfaces or skin surfaces are inspected, their texture plays an important role. Texture is a property that is hard to describe using words but can easily be described in pictures. Furthermore, a huge amount of digital images containing a visual description of textures already exists. However, this information becomes useless if there are no appropriate methods to browse the data. In addition, depending on the given task some properties like scale, rotation or intensity invariance are desired. In this paper we propose to analyze texture images according to their characteristic pattern. First a classification approach is proposed to separate regular from non-regular textures. The second stage will focus on regular textures, suggesting a method to sort them according to their similarity. Different features will be extracted from the texture in order to describe its scale, orientation, texel and the texel's relative position. Depending on the desired invariance of the visual characteristics (like the texture's scale or the texel's form invariance) the comparison of the features between images will be weighted and combined to define the degree of similarity between them. Tuning the weighting parameters allows this search algorithm to be easily adapted to the requirements of the desired task. Not only can the total invariance of desired parameters be adjusted; the weighting of the parameters may also be modified to adapt to an application-specific type of similarity. This search method has been evaluated using different textures and similarity criteria, achieving very promising results.

  7. Predicting spatial similarity of freshwater fish biodiversity

    PubMed Central

    Azaele, Sandro; Muneepeerakul, Rachata; Maritan, Amos; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2009-01-01

    A major issue in modern ecology is to understand how ecological complexity at broad scales is regulated by mechanisms operating at the organismic level. What specific underlying processes are essential for a macroecological pattern to emerge? Here, we analyze the analytical predictions of a general model suitable for describing the spatial biodiversity similarity in river ecosystems, and benchmark them against the empirical occurrence data of freshwater fish species collected in the Mississippi–Missouri river system. Encapsulating immigration, emigration, and stochastic noise, and without resorting to species abundance data, the model is able to reproduce the observed probability distribution of the Jaccard similarity index at any given distance. In addition to providing an excellent agreement with the empirical data, this approach accounts for heterogeneities of different subbasins, suggesting a strong dependence of biodiversity similarity on their respective climates. Strikingly, the model can also predict the actual probability distribution of the Jaccard similarity index for any distance when considering just a relatively small sample. The proposed framework supports the notion that simplified macroecological models are capable of predicting fundamental patterns—a theme at the heart of modern community ecology. PMID:19359481
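
    The Jaccard similarity index against which the model's predictions are benchmarked is straightforward to compute from occurrence data (a minimal sketch with invented species lists):

```python
def jaccard(site_a, site_b):
    """Jaccard similarity index: shared species / total distinct species."""
    a, b = set(site_a), set(site_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical occurrence lists for two sampling sites
upstream = {"catfish", "bass", "gar", "sturgeon"}
downstream = {"catfish", "bass", "shad"}
print(jaccard(upstream, downstream))  # 2 shared species of 5 distinct = 0.4
```
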

  8. Explaining Sibling Similarities: Perceptions of Sibling Influences

    ERIC Educational Resources Information Center

    Whiteman, Shawn D.; McHale, Susan M.; Crouter, Ann C.

    2007-01-01

    This study examined older siblings' influence on their younger brothers and sisters by assessing the connections between youth's perceptions of sibling influence and sibling similarities in four domains: Risky behavior, peer competence, sports interests, and art interests. Participants included two adolescent-age siblings (firstborn age M=17.34;…

  9. Similarity of Experience and Empathy in Preschoolers.

    ERIC Educational Resources Information Center

    Barnett, Mark A.

    The present study examined the role of similarity of experience in young children's affective reactions to others. Some preschoolers played one of two games (Puzzle Board or Buckets) and were informed that they had either failed or succeeded; others merely observed the games being played and were given no evaluative feedback. Subsequently, each…

  10. Fuzzy similarity measures for ultrasound tissue characterization

    NASA Astrophysics Data System (ADS)

    Emara, Salem M.; Badawi, Ahmed M.; Youssef, Abou-Bakr M.

    1995-03-01

    Computerized ultrasound tissue characterization has become an objective means for diagnosis of diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver from a normal one, by visual inspection from the ultrasound images. The visual criteria for differentiating diffused diseases is rather confusing and highly dependent upon the sonographer's experience. The need for computerized tissue characterization is thus justified to quantitatively assist the sonographer for accurate differentiation and to minimize the degree of risk from erroneous interpretation. In this paper we used the fuzzy similarity measure as an approximate reasoning technique to find the maximum degree of matching between an unknown case defined by a feature vector and a family of prototypes (knowledge base). The feature vector used for the matching process contains 8 quantitative parameters (textural, acoustical, and speckle parameters) extracted from the ultrasound image. The steps done to match an unknown case with the family of prototypes (cirr, fatty, normal) are: Choosing the membership functions for each parameter, then obtaining the fuzzification matrix for the unknown case and the family of prototypes, then by the linguistic evaluation of two fuzzy quantities we obtain the similarity matrix, then by a simple aggregation method and the fuzzy integrals we obtain the degree of similarity. Finally, we find that the similarity measure results are comparable to the neural network classification techniques and it can be used in medical diagnosis to determine the pathology of the liver and to monitor the extent of the disease.

  11. Using Similarity to Find Length and Area.

    ERIC Educational Resources Information Center

    Sandefur, James T.

    1994-01-01

    Shows a way in which algebra and geometry can be used together to find the lengths and areas of spirals. This method develops better understanding of shapes, similarity, and mathematical connections in students. Discusses spirals embedded in triangles and squares, the Pythagorean theorem, and the area of regular polygons. (MKR)

  12. 7 CFR 51.1997 - Similar type.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Similar type. 51.1997 Section 51.1997 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF...

  13. Similarity of Science Textbooks: A Content Analysis

    ERIC Educational Resources Information Center

    Yost, Michael

    1973-01-01

    Studied the similarity of the astronomy portion in five science textbooks at the fourth through sixth grade levels by comparing students' responses to text authors' requirements. Concluded that the texts had more in common across grade levels than within grade levels. (CC)

  14. Measuring the Perceptual Similarity of Pitch Contours.

    ERIC Educational Resources Information Center

    Hermes, Dik J.

    1998-01-01

    This study investigated the effectiveness of four different methods for measuring the similarity of pitch contours. The correlation coefficient between two normalized contours was the best method; however, if pitch range is important, the mean distance and the root-mean-square distance should be considered first in automatic training in…

  15. Classification of wheat: Badhwar profile similarity technique

    NASA Technical Reports Server (NTRS)

    Austin, W. W.

    1980-01-01

    The Badwar profile similarity classification technique used successfully for classification of corn was applied to spring wheat classifications. The software programs and the procedures used to generate full-scene classifications are presented, and numerical results of the acreage estimations are given.

  16. Great Apes' Capacities to Recognize Relational Similarity

    ERIC Educational Resources Information Center

    Haun, Daniel B. M.; Call, Josep

    2009-01-01

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding…

  17. Measuring structural similarity in large online networks.

    PubMed

    Shi, Yongren; Macy, Michael

    2016-09-01

    Structural similarity based on bipartite graphs can be used to detect meaningful communities, but the networks have been tiny compared to massive online networks. Scalability is important in applications involving tens of millions of individuals with highly skewed degree distributions. Simulation analysis holding underlying similarity constant shows that two widely used measures - Jaccard index and cosine similarity - are biased by the distribution of out-degree in web-scale networks. However, an alternative measure, the Standardized Co-incident Ratio (SCR), is unbiased. We apply SCR to members of Congress, musical artists, and professional sports teams to show how massive co-following on Twitter can be used to map meaningful affiliations among cultural entities, even in the absence of direct connections to one another. Our results show how structural similarity can be used to map cultural alignments and demonstrate the potential usefulness of social media data in the study of culture, politics, and organizations across the social and behavioral sciences. PMID:27480374
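
    For illustration, the three measures can be sketched on sets of follower IDs (the SCR expression below is one plausible reading of "observed co-incidence standardized by the count expected under independence", not necessarily the authors' exact definition; the follower sets are hypothetical):

```python
def jaccard(a, b):
    """Shared followers over total distinct followers."""
    return len(a & b) / len(a | b)

def cosine(a, b):
    """Shared followers normalized by the geometric mean of the set sizes."""
    return len(a & b) / (len(a) * len(b)) ** 0.5

def scr(a, b, n_users):
    """Assumed Standardized Co-incident Ratio: observed co-followers divided
    by the number expected if both follower sets were drawn independently
    from a population of n_users people."""
    expected = len(a) * len(b) / n_users
    return len(a & b) / expected

# Two accounts in a 1000-user population, with 50 co-followers
a = set(range(200))
b = set(range(150, 250))
print(jaccard(a, b))             # 50 / 250 = 0.2
print(round(cosine(a, b), 3))    # 50 / sqrt(200 * 100)
print(scr(a, b, n_users=1000))   # 50 observed vs 20 expected = 2.5
```
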

  18. The Case of the Similar Trees.

    ERIC Educational Resources Information Center

    Meyer, Rochelle Wilson

    1982-01-01

    A possible logical flaw based on similar triangles is discussed with the Sherlock Holmes mystery, "The Muskgrave Ritual." The possible flaw has to do with the need for two trees to have equal growth rates over a 250-year period in order for the solution presented to work. (MP)

  19. Black hole physics: More similar than knot

    NASA Astrophysics Data System (ADS)

    Gómez, José L.

    2016-08-01

    The detection of a discrete knot of particle emission from the active galaxy M81* reveals that black hole accretion is self-similar with regard to mass, producing the same knotty jets irrespective of black hole mass and accretion rate.

  20. Theoretical Accuracy of Along-Track Displacement Measurements from Multiple-Aperture Interferometry (MAI)

    PubMed Central

    Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei

    2014-01-01

    The measurement of precise along-track displacements has been made with multiple-aperture interferometry (MAI). The empirical accuracies of the MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, the estimated empirical accuracies cannot be generalized to any interferometric pair because they largely depend on the processing parameters and the coherence of the SAR data used. A theoretical formula is given to calculate the expected MAI measurement accuracy according to the system and processing parameters and the interferometric coherence. In this paper, we have investigated the expected MAI measurement accuracy on the basis of the theoretical formula for the existing X-, C- and L-band satellite SAR systems. The similarity between the expected and empirical MAI measurement accuracies has been tested as well. Expected accuracies of about 2–3 cm and 3–4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3–4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated using the theoretical formula. PMID:25251408
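
    A sketch of the standard error-propagation form such a theoretical accuracy can take (an assumed form, not necessarily the paper's exact expression: multi-look interferometric phase noise scaled by an assumed MAI phase-to-displacement sensitivity of 4πn/l, with l the effective antenna length and n the fractional aperture separation):

```python
from math import pi, sqrt

def mai_accuracy(antenna_len, n_frac, coherence, n_looks):
    """Expected along-track accuracy (metres): multi-look phase noise
    sqrt(1 - g**2) / (g * sqrt(2 * N)) scaled by the assumed sensitivity
    factor l / (4 * pi * n)."""
    sigma_phi = sqrt(1.0 - coherence ** 2) / (coherence * sqrt(2.0 * n_looks))
    return antenna_len / (4.0 * pi * n_frac) * sigma_phi

# e.g. a 10 m effective antenna, half-aperture separation, coherence 0.8, 100 looks
print(mai_accuracy(10.0, 0.5, 0.8, 100) < 0.1)  # centimetre-level regime
```

    Note how the accuracy degrades as coherence γ falls, consistent with the γ = 0.8 figures quoted above being best-case values.
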