Geomagnetic matching navigation algorithm based on robust estimation
NASA Astrophysics Data System (ADS)
Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan
2017-08-01
The outliers in the geomagnetic survey data seriously affect the precision of the geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the outliers influence is investigated in this paper. First, the weight function is designed and its principle of the robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion for geomagnetic information, a mathematical expression of the longitude, latitude and heading errors is acquired. The robust target function is obtained by the weight function and the mathematical expression. Then the geomagnetic matching problem is converted to the solutions of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% compared to the conventional mean square difference (MSD) algorithm, and is decreased to 18.39% to the conventional iterative contour matching algorithm when the outlier is 40nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400nT.
ERIC Educational Resources Information Center
Keiffer, Elizabeth Ann
2011-01-01
A differential item functioning (DIF) simulation study was conducted to explore the type and level of impact that contamination had on type I error and power rates in DIF analyses when the suspect item favored the same or opposite group as the DIF items in the matching subtest. Type I error and power rates were displayed separately for the…
ERIC Educational Resources Information Center
Monahan, Patrick O.; Ankenmann, Robert D.
2010-01-01
When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katsumi Marukawa; Kazuki Nakashima; Masashi Koga
1994-12-31
This paper presents a paper form processing system with an error correcting function for reading handwritten kanji strings. In the paper form processing system, names and addresses are important key data, and especially this paper takes up an error correcting method for name and address recognition. The method automatically corrects errors of the kanji OCR (Optical Character Reader) with the help of word dictionaries and other knowledge. Moreover, it allows names and addresses to be written in any style. The method consists of word matching {open_quotes}furigana{close_quotes} verification for name strings, and address approval for address strings. For word matching, kanjimore » name candidates are extracted by automaton-type word matching. In {open_quotes}furigana{close_quotes} verification, kana candidate characters recognized by the kana OCR are compared with kana`s searched from the name dictionary based on kanji name candidates, given by the word matching. The correct name is selected from the results of word matching and furigana verification. Also, the address approval efficiently searches for the right address based on a bottom-up procedure which follows hierarchical relations from a lower placename to a upper one by using the positional condition among the placenames. We ascertained that the error correcting method substantially improves the recognition rate and processing speed in experiments on 5,032 forms.« less
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
Estimating error rates for firearm evidence identifications in forensic science.
Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan
2018-03-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
The Chandra Source Catalog 2.0: Early Cross-matches
NASA Astrophysics Data System (ADS)
Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
Cross-matching the Chandra Source Catalog (CSC) with other catalogs presents considerable challenges, since the Point Spread Function (PSF) of the Chandra X-ray Observatory varies significantly over the field of view. For the second release of the CSC (CSC2) we have been developing a cross-match tool that is based on the Bayesian algorithms by Budavari, Heinis, and Szalay (ApJ 679, 301 and 705, 739), making use of the error ellipses for the derived positions of the sources.However, calculating match probabilities only on the basis of error ellipses breaks down when the PSFs are significantly different. Not only can bonafide matches easily be missed, but the scene is also muddied by ambiguous multiple matches. These are issues that are not commonly addressed in cross-match tools. We have applied a satisfactory modification to the algorithm that, although not perfect, ameliorates the problems for the vast majority of such cases.We will present some early cross-matches of the CSC2 catalog with obvious candidate catalogs and report on the determination of the absolute astrometric error of the CSC2 based on such cross-matches.This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
ERIC Educational Resources Information Center
Preston, Jonathan L.; Felsenfeld, Susan; Frost, Stephen J.; Mencl, W. Einar; Fulbright, Robert K.; Grigorenko, Elena L.; Landi, Nicole; Seki, Ayumi; Pugh, Kenneth R.
2012-01-01
Purpose: To examine neural response to spoken and printed language in children with speech sound errors (SSE). Method: Functional magnetic resonance imaging was used to compare processing of auditorily and visually presented words and pseudowords in 17 children with SSE, ages 8;6[years;months] through 10;10, with 17 matched controls. Results: When…
Kuwabara, Masaru; Mansouri, Farshad A.; Buckley, Mark J.
2014-01-01
Monkeys were trained to select one of three targets by matching in color or matching in shape to a sample. Because the matching rule frequently changed and there were no cues for the currently relevant rule, monkeys had to maintain the relevant rule in working memory to select the correct target. We found that monkeys' error commission was not limited to the period after the rule change and occasionally occurred even after several consecutive correct trials, indicating that the task was cognitively demanding. In trials immediately after such error trials, monkeys' speed of selecting targets was slower. Additionally, in trials following consecutive correct trials, the monkeys' target selections for erroneous responses were slower than those for correct responses. We further found evidence for the involvement of the cortex in the anterior cingulate sulcus (ACCs) in these error-related behavioral modulations. First, ACCs cell activity differed between after-error and after-correct trials. In another group of ACCs cells, the activity differed depending on whether the monkeys were making a correct or erroneous decision in target selection. Second, bilateral ACCs lesions significantly abolished the response slowing both in after-error trials and in error trials. The error likelihood in after-error trials could be inferred by the error feedback in the previous trial, whereas the likelihood of erroneous responses after consecutive correct trials could be monitored only internally. These results suggest that ACCs represent both context-dependent and internally detected error likelihoods and promote modes of response selections in situations that involve these two types of error likelihood. PMID:24872558
Tsay, Anthony J; Giummarra, Melita J
2016-07-01
Awareness of limb position is derived primarily from muscle spindles and higher-order body representations. Although chronic pain appears to be associated with motor and proprioceptive disturbances, it is not clear if this is due to disturbances in position sense, muscle spindle function, or central representations of the body. This study examined position sense errors, as an indicator of spindle function, in participants with unilateral chronic limb pain. The sample included 15 individuals with upper limb pain, 15 with lower limb pain, and 15 sex- and age-matched pain-free control participants. A 2-limb forearm matching task in blindfolded participants, and a single-limb pointer task, with the reference limb hidden from view, was used to assess forearm position sense. Position sense was determined after muscle contraction or stretch, intended to induce a high or low spindle activity in the painful and nonpainful limbs, respectively. Unilateral upper and lower limb chronic pain groups produced position errors comparable with healthy control participants for position matching and pointer tasks. The results indicate that the painful and nonpainful limb are involved in limb-matching. Lateralized pain, whether in the arm or leg, does not influence forearm position sense. Painful and nonpainful limbs are involved in bilateral limb-matching. Muscle spindle function appears to be preserved in the presence of chronic pain. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
A direct approach to the design of linear multivariable systems
NASA Technical Reports Server (NTRS)
Agrawal, B. L.
1974-01-01
Design of multivariable systems is considered and design procedures are formulated in the light of the most recent work on model matching. The word model matching is used exclusively to mean matching the input-output behavior of two systems. The term is used in the frequency domain to indicate the comparison of two transfer matrices containing transfer functions as elements. Design methods where non-interaction is not used as a criteria were studied. Two design methods are considered. The first method of design is based solely upon the specification of generalized error coefficients for each individual transfer function of the overall system transfer matrix. The second design method is called the pole fixing method because all the system poles are fixed at preassigned positions. The zeros of terms either above or below the diagonal are partially fixed via steady state error coefficients. The advantages and disadvantages of each method are discussed and an example is worked to demonstrate their uses. The special cases of triangular decoupling and minimum constraints are discussed.
Type I Error Inflation for Detecting DIF in the Presence of Impact
ERIC Educational Resources Information Center
DeMars, Christine E.
2010-01-01
In this brief explication, two challenges for using differential item functioning (DIF) measures when there are large group differences in true proficiency are illustrated. Each of these difficulties may lead to inflated Type I error rates, for very different reasons. One problem is that groups matched on observed score are not necessarily well…
Concentrating on beauty: sexual selection and sociospatial memory.
Becker, D Vaughn; Kenrick, Douglas T; Guerin, Stephen; Maner, Jon K
2005-12-01
In three experiments, location memory for faces was examined using a computer version of the matching game Concentration. Findings suggested that physical attractiveness led to more efficient matching for female faces but not for male faces. Study 3 revealed this interaction despite allowing participants to initially see, attend to, and match the attractive male faces in the first few turns. Analysis of matching errors suggested that, compared to other targets, attractive women were less confusable with one another. Results are discussed in terms of the different functions that attractiveness serves for men and women.
Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment
ERIC Educational Resources Information Center
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…
ERIC Educational Resources Information Center
Carlstedt, Roland A.
2004-01-01
A line-bisecting test was administered to 250 highly skilled right-handed athletes and a control group of 60 right-handed age matched non-athletes. Results revealed that athletes made overwhelmingly more rightward errors than non-athletes, who predominantly bisected lines to the left of the veridical center. These findings were interpreted in the…
Designing an algorithm to preserve privacy for medical record linkage with error-prone data.
Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen
2014-01-20
Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
Molecular Dynamics Information Improves cis-Peptide-Based Function Annotation of Proteins.
Das, Sreetama; Bhadra, Pratiti; Ramakumar, Suryanarayanarao; Pal, Debnath
2017-08-04
cis-Peptide bonds, whose occurrence in proteins is rare but evolutionarily conserved, are implicated to play an important role in protein function. This has led to their previous use in a homology-independent, fragment-match-based protein function annotation method. However, proteins are not static molecules; dynamics is integral to their activity. This is nicely epitomized by the geometric isomerization of cis-peptide to trans form for molecular activity. Hence we have incorporated both static (cis-peptide) and dynamics information to improve the prediction of protein molecular function. Our results show that cis-peptide information alone cannot detect functional matches in cases where cis-trans isomerization exists but 3D coordinates have been obtained for only the trans isomer or when the cis-peptide bond is incorrectly assigned as trans. On the contrary, use of dynamics information alone includes false-positive matches for cases where fragments with similar secondary structure show similar dynamics, but the proteins do not share a common function. Combining the two methods reduces errors while detecting the true matches, thereby enhancing the utility of our method in function annotation. A combined approach, therefore, opens up new avenues of improving existing automated function annotation methodologies.
Role of color memory in successive color constancy.
Ling, Yazhu; Hurlbert, Anya
2008-06-01
We investigate color constancy for real 2D paper samples using a successive matching paradigm in which the observer memorizes a reference surface color under neutral illumination and after a temporal interval selects a matching test surface under the same or different illumination. We find significant effects of the illumination, reference surface, and their interaction on the matching error. We characterize the matching error in the absence of illumination change as the "pure color memory shift" and introduce a new index for successive color constancy that compares this shift against the matching error under changing illumination. The index also incorporates the vector direction of the matching errors in chromaticity space, unlike the traditional constancy index. With this index, we find that color constancy is nearly perfect.
Correcting pervasive errors in RNA crystallography through enumerative structure prediction.
Chou, Fang-Chieh; Sripakdeevong, Parin; Dibrov, Sergey M; Hermann, Thomas; Das, Rhiju
2013-01-01
Three-dimensional RNA models fitted into crystallographic density maps exhibit pervasive conformational ambiguities, geometric errors and steric clashes. To address these problems, we present enumerative real-space refinement assisted by electron density under Rosetta (ERRASER), coupled to Python-based hierarchical environment for integrated 'xtallography' (PHENIX) diffraction-based refinement. On 24 data sets, ERRASER automatically corrects the majority of MolProbity-assessed errors, improves the average R(free) factor, resolves functionally important discrepancies in noncanonical structure and refines low-resolution models to better match higher-resolution models.
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
Tian, Zengshan; Xu, Kunjie; Yu, Xiang
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349
Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo
2014-01-01
This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
Multiple levels of bilingual language control: evidence from language intrusions in reading aloud.
Gollan, Tamar H; Schotter, Elizabeth R; Gomez, Joanne; Murillo, Mayra; Rayner, Keith
2014-02-01
Bilinguals rarely produce words in an unintended language. However, we induced such intrusion errors (e.g., saying el instead of he) in 32 Spanish-English bilinguals who read aloud single-language (English or Spanish) and mixed-language (haphazard mix of English and Spanish) paragraphs with English or Spanish word order. These bilinguals produced language intrusions almost exclusively in mixed-language paragraphs, and most often when attempting to produce dominant-language targets (accent-only errors also exhibited reversed language-dominance effects). Most intrusion errors occurred for function words, especially when they were not from the language that determined the word order in the paragraph. Eye movements showed that fixating a word in the nontarget language increased intrusion errors only for function words. Together, these results imply multiple mechanisms of language control, including (a) inhibition of the dominant language at both lexical and sublexical processing levels, (b) special retrieval mechanisms for function words in mixed-language utterances, and (c) attentional monitoring of the target word for its match with the intended language.
Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media
Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.
2009-01-01
Green's functions for radar waves propagating in heterogeneous 2.5D media might be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties might vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Greens functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions might be used for forward modeling and full waveform inversion. ?? 2009 Society of Exploration Geophysicists. All rights reserved.
Pranata, Adrian; Perraton, Luke; El-Ansary, Doa; Clark, Ross; Fortin, Karine; Dettmann, Tim; Brandham, Robert; Bryant, Adam
2017-07-01
The ability to control lumbar extensor force output is necessary for daily activities. However, it is unknown whether this ability is impaired in chronic low back pain patients. Similarly, it is unknown whether lumbar extensor force control is related to the disability levels of chronic low back pain patients. Thirty-three chronic low back pain and 20 healthy people performed lumbar extension force-matching task where they increased and decreased their force output to match a variable target force within 20%-50% maximal voluntary isometric contraction. Force control was quantified as the root-mean-square-error between participants' force output and target force across the entire, during the increasing and decreasing portions of the force curve. Within- and between-group differences in force-matching error and the relationship between back pain group's force-matching results and their Oswestry Disability Index scores were assessed using ANCOVA and linear regression respectively. Back pain group demonstrated more overall force-matching error (mean difference=1.60 [0.78, 2.43], P<0.01) and more force-matching error while increasing force output (mean difference=2.19 [1.01, 3.37], P<0.01) than control group. The back pain group demonstrated more force-matching error while increasing than decreasing force output (mean difference=1.74, P<0.001, 95%CI [0.87, 2.61]). A unit increase in force-matching error while decreasing force output is associated with a 47% increase in Oswestry score in back pain group (R 2 =0.19, P=0.006). Lumbar extensor muscle force control is compromised in chronic low back pain patients. Force-matching error predicts disability, confirming the validity of our force control protocol for chronic low back pain patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on sampling theorem and Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law that the positional error of a filter can be expressed as a function of fractional position and wave number is found. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given each volumetric image containing different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain of wave number ranges based on the Fourier spectrum analysis. Finally, a novel software is developed and series of validation experiments were carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters have been estimated using particle swarm optimization (PSO), enhancing the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts, which include extraction and fine matching of interest points in the images, establishment of cost function, based on Kruppa equations and optimization of PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method of maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of optimization pushed to a local optimum, LiDAR data was used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
Hart, Heledd; Lim, Lena; Mehta, Mitul A.; Curtis, Charles; Xu, Xiaohui; Breen, Gerome; Simmons, Andrew; Mirza, Kah; Rubia, Katya
2018-01-01
Childhood maltreatment is associated with error hypersensitivity. We examined the effect of childhood abuse and abuse-by-gene (5-HTTLPR, MAOA) interaction on functional brain connectivity during error processing in medication/drug-free adolescents. Functional connectivity was compared, using generalized psychophysiological interaction (gPPI) analysis of functional magnetic resonance imaging (fMRI) data, between 22 age- and gender-matched medication-naïve and substance abuse-free adolescents exposed to severe childhood abuse and 27 healthy controls, while they performed an individually adjusted tracking stop-signal task, designed to elicit 50% inhibition failures. During inhibition failures, abused participants relative to healthy controls exhibited reduced connectivity between right and left putamen, bilateral caudate and anterior cingulate cortex (ACC), and between right supplementary motor area (SMA) and right inferior and dorsolateral prefrontal cortex. Abuse-related connectivity abnormalities were associated with longer abuse duration. No group differences in connectivity were observed for successful inhibition. The findings suggest that childhood abuse is associated with decreased functional connectivity in fronto-cingulo-striatal networks during error processing. Furthermore that the severity of connectivity abnormalities increases with abuse duration. Reduced connectivity of error detection networks in maltreated individuals may be linked to constant monitoring of errors in order to avoid mistakes which, in abusive contexts, are often associated with harsh punishment. PMID:29434543
Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data
Pal, Doyel; Chen, Tingting; Khethavath, Praveen
2014-01-01
Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In records linkage, protecting the patients’ privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are only restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm named the Error-Tolerant Linking Algorithm, that allows the error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers, which meets the regulation of Health Insurance Portability and Accountability Act. It does not require a third party and it is secure in that neither entity can learn the records in the other’s database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Conclusions Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other’s database. PMID:25600786
On the tautology of the matching law in consumer behavior analysis.
Curry, Bruce; Foxall, Gordon R; Sigurdsson, Valdimar
2010-05-01
Matching analysis has often attracted the criticism that it is formally tautological and hence empirically unfalsifiable, a problem that particularly affects translational attempts to extend behavior analysis into new areas. An example is consumer behavior analysis where application of matching in natural settings requires the inference of ratio-based relationships between amount purchased and amount spent. This gives rise to the argument that matching is an artifact of the way in which the alleged independent and dependent variables are defined and measured. We argue that the amount matching law would be tautological only in extreme circumstances (those in which prices or quantities move strictly in proportion); this is because of the presence of an error term in the matching function which arises from aggregation, particularly aggregation over brands. Cost matching is a viable complement of amount matching which avoids this tautology but a complete explanation of consumer choice requires a viable measure of amount matching also. This necessitates a more general solution to the problem of tautology in matching. In general, the fact that there remain doubts about the functional form of the matching equation itself implies the absence of a tautology. In proposing a general solution to the problem of assumed tautology in matching, the paper notes the experiences of matching researchers in another translation field, sports behavior. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.
2016-01-01
Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
Position sense at the human elbow joint measured by arm matching or pointing.
Tsay, Anthony; Allen, Trevor J; Proske, Uwe
2016-10-01
Position sense at the human elbow joint has traditionally been measured in blindfolded subjects using a forearm matching task. Here we compare position errors in a matching task with errors generated when the subject uses a pointer to indicate the position of a hidden arm. Evidence from muscle vibration during forearm matching supports a role for muscle spindles in position sense. We have recently shown using vibration, as well as muscle conditioning, which takes advantage of muscle's thixotropic property, that position errors generated in a forearm pointing task were not consistent with a role by muscle spindles. In the present study we have used a form of muscle conditioning, where elbow muscles are co-contracted at the test angle, to further explore differences in position sense measured by matching and pointing. For fourteen subjects, in a matching task where the reference arm had elbow flexor and extensor muscles contracted at the test angle and the indicator arm had its flexors conditioned at 90°, matching errors lay in the direction of flexion by 6.2°. After the same conditioning of the reference arm and extension conditioning of the indicator at 0°, matching errors lay in the direction of extension (5.7°). These errors were consistent with predictions based on a role by muscle spindles in determining forearm matching outcomes. In the pointing task subjects moved a pointer to align it with the perceived position of the hidden arm. After conditioning of the reference arm as before, pointing errors all lay in a more extended direction than the actual position of the arm by 2.9°-7.3°, a distribution not consistent with a role by muscle spindles. We propose that in pointing muscle spindles do not play the major role in signalling limb position that they do in matching, but that other sources of sensory input should be given consideration, including afferents from skin and joint.
Functional Defects in Color Vision in Patients With Choroideremia.
Jolly, Jasleen K; Groppe, Markus; Birks, Jacqueline; Downes, Susan M; MacLaren, Robert E
2015-10-01
To characterize defects in color vision in patients with choroideremia. Prospective cohort study. Thirty patients with choroideremia (41 eyes) and 10 age-matched male controls (19 eyes) with visual acuity of ≥6/36 attending outpatient clinics in Oxford Eye Hospital underwent color vision testing with the Farnsworth-Munsell 100 hue test, visual acuity testing, and autofluorescence imaging. To exclude changes caused by degeneration of the fovea, a subgroup of 14 patients with a visual acuity ≥6/6 was analyzed. Calculated color vision total error scores were compared between the groups and related to a range of factors using a random-effects model. Mean color vision total error scores were 120 (95% confidence interval [CI] 92, 156) in the ≥6/6 choroideremia group, 206 (95% CI 161, 266) in the <6/6 visual acuity choroideremia group, and 47 (95% CI 32, 69) in the control group. Covariate analysis showed a significant difference in color vision total error score between the groups (P < .001 between each group). Patients with choroideremia have a functional defect in color vision compared with age-matched controls. The color vision defect deteriorates as the degeneration encroaches on the fovea. The presence of an early functional defect in color vision provides a useful biomarker against which to assess successful gene transfer in gene therapy trials. Copyright © 2015 Elsevier Inc. All rights reserved.
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Since the large amount of surface-related multiple existed in the marine data would influence the results of data processing and interpretation seriously, many researchers had attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a little number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function on these basis points to reconstruct the matching array; third, build constraint solving equation based on the relationships of predicted multiple, matching coefficients, and actual data; finally, use the BFGS algorithm to iterate and realize the fast-solving sparse constraint of multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between predicted multiple and original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.
Spelling in adolescents with dyslexia: errors and modes of assessment.
Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc
2014-01-01
In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia. © Hammill Institute on Disabilities 2012.
Evaluation of the CATSIB DIF Procedure in a Pretest Setting
ERIC Educational Resources Information Center
Nandakumar, Ratna; Roussos, Louis
2004-01-01
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the IBTEST "regression correction." The…
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except for the location at control points, RPM cannot estimate the consistent correspondence between two images because RPM is a unidirectional image matching approach. Therefore, it is an important issue to make an improvement in image registration based on RPM. Methods In our work, a consistent image registration approach based on the point sets matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point sets and the target point sets in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced to the cost function of RPM and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets, medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than RPM. Especially, the topology of transformations is preserved well for our algorithm for the large deformation between point sets. Moreover, the distance errors of our algorithm are similar to that of RPM, and they maintain a downward trend as whole, which demonstrates the convergence of our algorithm. The registration errors for image registrations are evaluated also. Again, our algorithm achieves the lower registration errors in same iteration number. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformation. For registration of lung slices and individual brain slices, large or small determinant of the Jacobian matrix of the deformation fields are observed. Conclusions Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors of the forward and the reverse transformations between two images. PMID:25559889
NASA Astrophysics Data System (ADS)
Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.
1992-08-01
Localized wavefront performance analysis (LWPA) is a system that allows the full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by the generation of an imaging performance map-analogous to a map of the optic pupil error-using a local OTF. The resulting performance map a function of transfer function spatial frequency is directly relatable to the primary viewing condition of the end-user. In addition to optimizing quality for the viewer it will be seen that the system has the potential for an improved matching of the optical and electronic bandpass of the imager and for the development of more realistic acceptance specifications. 1. LOCAL WAVEFRONT PERFORMANCE ANALYSIS The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF) it can be used for maximally efficient specification and correction of imaging system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to difference of the average of the errors in separated regions of the pupil. Figure
Mathematics skills in good readers with hydrocephalus.
Barnes, Marcia A; Pengelly, Sarah; Dennis, Maureen; Wilkinson, Margaret; Rogers, Tracey; Faulkner, Heather
2002-01-01
Children with hydrocephalus have poor math skills. We investigated the nature of their arithmetic computation errors by comparing written subtraction errors in good readers with hydrocephalus, typically developing good readers of the same age, and younger children matched for math level to the children with hydrocephalus. Children with hydrocephalus made more procedural errors (although not more fact retrieval or visual-spatial errors) than age-matched controls; they made the same number of procedural errors as younger, math-level matched children. We also investigated a broad range of math abilities, and found that children with hydrocephalus performed more poorly than age-matched controls on tests of geometry and applied math skills such as estimation and problem solving. Computation deficits in children with hydrocephalus reflect delayed development of procedural knowledge. Problems in specific math domains such as geometry and applied math, were associated with deficits in constituent cognitive skills such as visual spatial competence, memory, and general knowledge.
NASA Astrophysics Data System (ADS)
Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.
1994-09-01
A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.
Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion
Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and weighted subregion matching method to improve the performance of iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, magnitude probability distribution function (MPDF) based strategy to reduce dimensionality of feature element, and compounded strategy combined OPDF and MPDF to further select optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to accelerate achieve different sub-region's weights and then weighted different subregions' matching scores to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, andMMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317
Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning
2014-01-01
In this paper, we propose three discriminative feature selection strategies and weighted subregion matching method to improve the performance of iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described, which are orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, magnitude probability distribution function (MPDF) based strategy to reduce dimensionality of feature element, and compounded strategy combined OPDF and MPDF to further select optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted sub-region matching fusion. Particle swarm optimization is utilized to accelerate achieve different sub-region's weights and then weighted different subregions' matching scores to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error Concealment (EC) is a technique at the decoder side to hide the transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because they are used for various applications such as video-telephone, video-conference, TV, DVD, internet video streaming, video games etc .Retransmission-based and resilient-based methods, are also used for error removal. But these methods add delay and redundant data. So error concealment is the best option for error hiding. In this paper, the error concealment methods such as Block Matching error concealment algorithm is compared with Frequency Selective Extrapolation algorithm. Both the works are based on concealment of manually error video frames as input. The parameter used for objective quality measurement was PSNR (Peak Signal to Noise Ratio) and SSIM(Structural Similarity Index). The original video frames along with error video frames are compared with both the Error concealment algorithms. According to simulation results, Frequency Selective Extrapolation is showing better quality measures such as 48% improved PSNR and 94% increased SSIM than Block Matching Algorithm.
Artificial neural network implementation of a near-ideal error prediction controller
NASA Technical Reports Server (NTRS)
Mcvey, Eugene S.; Taylor, Lynore Denise
1992-01-01
A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited, (i.e., not random) near ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller include pattern recognition developments and fast-time simulation which are applicable to the analysis of linear sampled data type systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error responses be known for a particular input and modeled plant. These responses are used in the error prediction controller. An analysis was done on the general dynamic behavior that results from including a digital error predictor in a control loop and these were compared to those including the near-ideal Neural Network error predictor. This analysis was done for a second and third order system.
The successively temporal error concealment algorithm using error-adaptive block matching principle
NASA Astrophysics Data System (ADS)
Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun
2014-09-01
Generally, the temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although RB can be the search pattern to find the best-match block of another CB, RB is not the same as its original block (OB). The error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. The tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block with the first place of recovering order is recovered in tier-1, and remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for the RB as the search pattern to recover remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms on average PSNR by 0.3 dB on the packet error rate of 20% at least.
Discriminability limits in spatio-temporal stereo block matching.
Jain, Ankit K; Nguyen, Truong Q
2014-05-01
Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems
NASA Astrophysics Data System (ADS)
Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen
2017-06-01
In this paper, the performances of mixed free-space optical (FSO)/radio frequency (RF) systems are presented based on the decode-and-forward relaying. The Exponentiated Weibull fading channel with pointing error effect is adopted for the atmospheric fluctuation of FSO channel and the RF link undergoes the Nakagami-m fading. We derived the analytical expression for cumulative distribution function (CDF) of equivalent signal-to-noise ratio (SNR). The novel mathematical presentations of outage probability and average bit-error-rate (BER) are developed based on the Meijer's G function. The analytical results show an accurately match to the Monte-Carlo simulation results. The outage and BER performance for the mixed system by decode-and-forward relay are investigated considering atmospheric turbulence and pointing error condition. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.
Goharpey, Nahal; Crewther, David P; Crewther, Sheila G
2013-12-01
This study investigated the developmental trajectory of problem solving ability in children with intellectual disability (ID) of different etiologies (Down Syndrome, Idiopathic ID or low functioning Autism) as measured on the Raven's Colored Progressive Matrices test (RCPM). Children with typical development (TD) and children with ID were matched on total correct performance (i.e., non-verbal mental age) on the RCPM. RCPM total correct performance and the sophistication of error types were found to be associated with receptive vocabulary in all participants, suggesting that verbal ability plays a role in more sophisticated problem solving tasks. Children with ID made similar errors on the RCPM as younger children with TD as well as more positional error types. This result suggests that children with ID who are deficient in their cognitive processing resort to developmentally immature problem solving strategies when unable to determine the correct answer. Overall, the findings support the use of RCPM as a valid means of matching intellectual capacity of children with TD and ID. Copyright © 2013 Elsevier Ltd. All rights reserved.
Coarse-graining errors and numerical optimization using a relative entropy framework
NASA Astrophysics Data System (ADS)
Chaimovich, Aviel; Shell, M. Scott
2011-03-01
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
Which skills and factors better predict winning and losing in high-level men's volleyball?
Peña, Javier; Rodríguez-Guerra, Jorge; Buscà, Bernat; Serra, Núria
2013-09-01
The aim of this study was to determine which skills and factors better predicted the outcomes of regular season volleyball matches in the Spanish "Superliga" and were significant for obtaining positive results in the game. The study sample consisted of 125 matches played during the 2010-11 Spanish men's first division volleyball championship. Matches were played by 12 teams composed of 148 players from 17 different nations from October 2010 to March 2011. The variables analyzed were the result of the game, team category, home/away court factors, points obtained in the break point phase, number of service errors, number of service aces, number of reception errors, percentage of positive receptions, percentage of perfect receptions, reception efficiency, number of attack errors, number of blocked attacks, attack points, percentage of attack points, attack efficiency, and number of blocks performed by both teams participating in the match. The results showed that the variables of team category, points obtained in the break point phase, number of reception errors, and number of blocked attacks by the opponent were significant predictors of winning or losing the matches. Odds ratios indicated that the odds of winning a volleyball match were 6.7 times greater for the teams belonging to higher rankings and that every additional point in Complex II increased the odds of winning a match by 1.5 times. Every reception and blocked ball error decreased the possibility of winning by 0.6 and 0.7 times, respectively.
Kapalková, Svetlana; Slančová, Daniela
2017-01-01
This study compared a sample of children with primary language impairment (PLI) and typically developing age-matched children using the crosslinguistic lexical tasks (CLT-SK). We also compared the PLI children with typically developing language-matched younger children who were matched on the basis of receptive vocabulary. Overall, statistical testing showed that the vocabulary of the PLI children was significantly different from the vocabulary of the age-matched children, but not statistically different from the younger children who were matched on the basis of their receptive vocabulary size. Qualitative analysis of the correct answers revealed that the PLI children showed higher rigidity compared to the younger language-matched children who are able to use more synonyms or derivations across word class in naming tasks. Similarly, an examination of the children's naming errors indicated that the language-matched children exhibited more semantic errors, whereas PLI children showed more associative errors.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficiently conditional sampling method -- threshold-independent BaySAC (BAYes SAmpling Consensus) and employs the error metric of average point-to-surface residual to reduce the random measurement error and then approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function that is used to determine the optimum model to reduce the influence of human factors and improve the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show the st.dev of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to less iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
Syntactic Development in Children with Hemispherectomy: The I-, D-, And C-Systems
ERIC Educational Resources Information Center
Curtiss, S.; Schaeffer, J.
2005-01-01
This study reports on functional morpheme (I, D, and C) production in the spontaneous speech of five pairs of children who have undergone hemispherectomy, matching each pair for etiology and age at symptom onset, surgery, and testing. Our results show that following left hemispherectomy (LH), children evidence a greater error rate in the use of…
Craig, Megan; Trauner, Doris
2018-02-01
We aimed to characterize differences in the use of language in children with specific language impairment and high-functioning autism by analyzing verbal responses on standardized tests. The overall goal was to provide clinicians with additional tools with which to aid in distinguishing the two neurodevelopmental disorders. This study included 16 children with specific language impairment, 28 children with high-functioning autism, and 52 typically developing participants between the ages of six and 14. Groups were matched for age, and specific language impairment and high-functioning autism groups were matched on verbal and performance IQ. Responses from standardized tests were examined for response length, grammatical errors, filler words, perseverations, revisions (repeated attempts to begin or continue a sentence), off-topic attention shifts (lapses in attention to the task), and rambling. Data were analyzed using parametric and nonparametric methods. Specific language impairment responses were longer and contained more filler words than did those of the other two groups, whereas high-functioning autism responses exhibited more grammatical errors, off-topic attention shifts, and rambling. Specific language impairment and high-functioning autism responses showed higher rates of perseveration compared with controls. There were no significant differences in revisions among the three groups. Differences in language patterns of participants with specific language impairment and high-functioning autism may be useful to the clinician in helping to differentiate isolated language impairment from high-functioning autism. The results also support the conclusion that the two conditions are separable, and each exhibits a different pattern of language dysfunction. Copyright © 2017 Elsevier Inc. All rights reserved.
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching points detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching points detection method based on Knearest- neighbour (K-NN) graph (KGD) is proposed in this method to obtain robust and high accuracy result. The KGD method starts with the construction of the K-NN graph in one image. K-NN graph can be first generated for each matching points and its K nearest matching points. Local transformation model for each matching point is then obtained by using its K nearest matching points. The error of each matching point is computed by using its transformation model. Last, L matching points with largest error are identified false matching points and removed. This process is iterative until all errors are smaller than the given threshold. In addition, KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of KGD method, RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
The effect of memory and context changes on color matches to real objects.
Allred, Sarah R; Olkkonen, Maria
2015-07-01
Real-world color identification tasks often require matching the color of objects between contexts and after a temporal delay, thus placing demands on both perceptual and memory processes. Although the mechanisms of matching colors between different contexts have been widely studied under the rubric of color constancy, little research has investigated the role of long-term memory in such tasks or how memory interacts with color constancy. To investigate this relationship, observers made color matches to real study objects that spanned color space, and we independently manipulated the illumination impinging on the objects, the surfaces in which objects were embedded, and the delay between seeing the study object and selecting its color match. Adding a 10-min delay increased both the bias and variability of color matches compared to a baseline condition. These memory errors were well accounted for by modeling memory as a noisy but unbiased version of perception constrained by the matching methods. Surprisingly, we did not observe significant increases in errors when illumination and surround changes were added to the 10-minute delay, although the context changes alone did elicit significant errors.
The statistical evaluation of duct tape end match as physical evidence
NASA Astrophysics Data System (ADS)
Chan, Ka Lok
Duct tapes are often submitted to crime laboratories as evidence associated with abductions, homicides, or construction of explosive devices. As a result, trace evidence examiners are often asked to analyze and compare commercial duct tapes so that they can establish possible evidentiary links. Duct tape end matches are believed to be the strongest association between exemplar and question samples because they are considered as evidence with unique individual characteristics. While end match analysis and comparison have long been undertaken by trace evidence examiners, there is a significant lack of scientific research for associating two or more segments of duct tapes. This study is designed to obtain statistical inferences on the uniqueness of duct tape tears. Three experiments were devised to compile the basis for a statistical assessment of the probability of duct tape end matches along with a proposed error rate. In one experiment, we conducted the equivalent of 10,000 end match examinations with an error rate of 0%. In the second experiment, we performed 2,704 end match examinations having 0% error rate. In the third experiment, using duct tape by an Elmendorf Tear tester, we conducted 576 end match examinations with an error rate of 0% and having all samples correctly associated. The results of this study indicate that end matches are distinguishable among a single roll of duct tape and between two different rolls of duct tape having very similar surface features and weave pattern.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
NASA Technical Reports Server (NTRS)
Michalopoulos, C. D.
1976-01-01
An analysis of one and multidegree of freedom systems with classical damping is presented. Definition and minimization of error functions for each system are discussed. Systems with classical and nonclassical normal modes are studied, and results for first order perturbation are given. An alternative method of matching power spectral densities is provided, and numerical results are reviewed.
ERIC Educational Resources Information Center
DeMars, Christine E.
2009-01-01
The Mantel-Haenszel (MH) and logistic regression (LR) differential item functioning (DIF) procedures have inflated Type I error rates when there are large mean group differences, short tests, and large sample sizes.When there are large group differences in mean score, groups matched on the observed number-correct score differ on true score,…
ERIC Educational Resources Information Center
Ambridge, Ben; Bannard, Colin; Jackson, Georgina H.
2015-01-01
Children with Autism Spectrum Disorder (ASD) aged 11-13 (N = 16) and an IQ-matched typically developing (TD) group aged 7-12 (N = 16) completed a graded grammaticality judgment task, as well as a standardized test of cognitive function. In a departure from previous studies, the judgment task involved verb argument structure overgeneralization…
Coarse-graining errors and numerical optimization using a relative entropy framework.
Chaimovich, Aviel; Shell, M Scott
2011-03-07
The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
Using string alignment in a query-by-humming system for real world applications
NASA Astrophysics Data System (ADS)
Sailer, Christian
2005-09-01
Though query by humming (i.e., retrieving music or information about music by singing a characteristic melody) has been a popular research topic during the past decade, few approaches have reached a level of usefulness beyond mere scientific interest. One of the main problems is the inherent contradiction between error tolerance and dicriminative power in conventional melody matching algorithms that rely on a melody contour approach to handle intonation or transcription errors. Adopting the string matching/alignment techniques from bioinformatics to melody sequences allows to directly assess the similarity between two melodies. This method takes an MPEG-7 compliant melody sequence (i.e., a list of note intervals and length ratios) as query and evaluates the steps necessary to transform it into the reference sequence. By introducing a musically founded cost-of-replace function and an adequate post processing, this method yields a measure for melodic similarity. Thus it is possible to construct a query by humming system that can properly discriminate between thousands of melodies and still be sufficiently error tolerant to be used by untrained singers. The robustness has been verified in extensive tests and real world applications.
Leveraging pattern matching to solve SRAM verification challenges at advanced nodes
NASA Astrophysics Data System (ADS)
Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan
2018-03-01
Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.
Fixed-interval matching-to-sample: intermatching time and intermatching error runs1
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
Nonreflective Conditions for Perfectly Matched Layer in Computational Aeroacoustics
NASA Astrophysics Data System (ADS)
Choung, Hanahchim; Jang, Seokjong; Lee, Soogab
2018-05-01
In computational aeroacoustics, boundary conditions such as radiation, outflow, or absorbing boundary conditions are critical issues in that they can affect the entire solution of the computation. Among these types of boundary conditions, the perfectly matched layer boundary condition, which has been widely used in computational fluid dynamics and computational aeroacoustics, is developed by augmenting the additional term in the original governing equations by an absorption function so as to stably absorb the outgoing waves. Even if the perfectly matched layer is analytically a perfectly nonreflective boundary condition, spurious waves occur at the interface, since the analysis is performed in discretized space. Hence, this study is focused on factors that affect numerical errors from perfectly matched layer to find the optimum conditions for nonreflective PML. Through a mathematical approach, a minimum width of perfectly matched layer and an optimum absorption coefficient are suggested. To validate the prediction of the analysis, numerical simulations are performed in a generalized coordinate system, as well as in a Cartesian coordinate system.
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, Harris corner detection algorithm is used to extract the feature points of two images. Secondly, use Normalized Cross Correlation (NCC) function to perform the approximate matching of feature points, and the initial feature pair is obtained. Then, according to the parallax constraint condition, the initial feature pair is preprocessed by K-means clustering algorithm, which is used to remove the feature point pairs with obvious errors in the approximate matching process. Finally, adopt Random Sample Consensus (RANSAC) algorithm to optimize the feature points to obtain the final feature point matching result, and the fast and accurate image registration is realized. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of the image matching while ensuring the real-time performance of the algorithm.
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.
Thipphavong, David P
2016-09-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
Thipphavong, David P.
2017-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Thipphavong, David P.
2016-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
A mathematical approach to beam matching
Manikandan, A; Nandy, M; Gossman, M S; Sureka, C S; Ray, A; Sujatha, N
2013-01-01
Objective: This report provides the mathematical commissioning instructions for the evaluation of beam matching between two different linear accelerators. Methods: Test packages were first obtained including an open beam profile, a wedge beam profile and a depth–dose curve, each from a 10×10 cm2 beam. From these plots, a spatial error (SE) and a percentage dose error were introduced to form new plots. These three test package curves and the associated error curves were then differentiated in space with respect to dose for a first and second derivative to determine the slope and curvature of each data set. The derivatives, also known as bandwidths, were analysed to determine the level of acceptability for the beam matching test described in this study. Results: The open and wedged beam profiles and depth–dose curve in the build-up region were determined to match within 1% dose error and 1-mm SE at 71.4% and 70.8% for of all points, respectively. For the depth–dose analysis specifically, beam matching was achieved for 96.8% of all points at 1%/1 mm beyond the depth of maximum dose. Conclusion: To quantify the beam matching procedure in any clinic, the user needs to merely generate test packages from their reference linear accelerator. It then follows that if the bandwidths are smooth and continuous across the profile and depth, there is greater likelihood of beam matching. Differentiated spatial and percentage variation analysis is appropriate, ideal and accurate for this commissioning process. Advances in knowledge: We report a mathematically rigorous formulation for the qualitative evaluation of beam matching between linear accelerators. PMID:23995874
On the assimilation set-up of ASCAT soil moisture data for improving streamflow catchment simulation
NASA Astrophysics Data System (ADS)
Loizu, Javier; Massari, Christian; Álvarez-Mozos, Jesús; Tarpanelli, Angelica; Brocca, Luca; Casalí, Javier
2018-01-01
Assimilation of remotely sensed surface soil moisture (SSM) data into hydrological catchment models has been identified as a means to improve streamflow simulations, but reported results vary markedly depending on the particular model, catchment and assimilation procedure used. In this study, the influence of key aspects, such as the type of model, re-scaling technique and SSM observation error considered, were evaluated. For this aim, Advanced SCATterometer ASCAT-SSM observations were assimilated through the ensemble Kalman filter into two hydrological models of different complexity (namely MISDc and TOPLATS) run on two Mediterranean catchments of similar size (750 km2). Three different re-scaling techniques were evaluated (linear re-scaling, variance matching and cumulative distribution function matching), and SSM observation error values ranging from 0.01% to 20% were considered. Four different efficiency measures were used for evaluating the results. Increases in Nash-Sutcliffe efficiency (0.03-0.15) and efficiency indices (10-45%) were obtained, especially when linear re-scaling and observation errors within 4-6% were considered. This study found out that there is a potential to improve streamflow prediction through data assimilation of remotely sensed SSM in catchments of different characteristics and with hydrological models of different conceptualizations schemes, but for that, a careful evaluation of the observation error and re-scaling technique set-up utilized is required.
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precision representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
Fast-match on particle swarm optimization with variant system mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuang; Fang, Xin; Chen, Jie
2018-03-01
Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations, which can match the target with maximum similarity without knowing the target gesture. It depends on the minimum Sum-of-Absolute-Differences (SAD) error to obtain the best affine transformation. The algorithm is widely used in the field of matching images because of its fastness and robustness. In this paper, our approach is to search an approximate affine transformation over Particle Swarm Optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses memory function. Each particle is given a random speed and flows throughout the 2D affine transformation space. To accelerate the algorithm and improve the abilities of seeking the global excellent result, we have introduced the variant system mechanism on this basis. The benefit is that we can avoid matching with huge amount of potential transformations and falling into local optimal condition, so that we can use a few transformations to approximate the optimal solution. The experimental results prove that our method has a faster speed and a higher accuracy performance with smaller affine transformation space.
AGILE: Autonomous Global Integrated Language Exploitation
2009-12-01
combination, including METEOR-based alignment (with stemming and WordNet synonym matching) and GIZA ++ based alignment. So far, we have not seen any...parse trees and a detailed analysis of how function words operate in translation. This program lets us fix alignment errors that systems like GIZA ...correlates better with Pyramid than with Responsiveness scoring (i.e., it is a more precise, careful, measure) • BE generally outperforms ROUGE
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors, however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be included into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)' by applying these to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres, with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
ERIC Educational Resources Information Center
LeBlanc, Judith M.
A sequence of studies compared two types of discrimination formation: errorless learning and trial-and-error procedures. The subjects were three boys and five girls from a university preschool. The children performed the experimental tasks at a typical match-to-sample apparatus with one sample window above and four match (response) windows below.…
Consensus of satellite cluster flight using an energy-matching optimal control method
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Zhou, Liang; Zhang, Bo
2017-11-01
This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed errors dynamics models of a single satellite and the cluster. Next a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (period-delayed errors state consensus problem) is transformed to the stability of a set of matrices with the same low dimension. Based on the consensus region theory in the research of multi-agent system consensus issues, the coupling gain can be obtained to satisfy the requirement of consensus region and decouple the satellite cluster information topology and the feedback control gain matrix, which can be determined by Linear quadratic regulator (LQR) optimal method. This method can realize the consensus of satellite cluster period-delayed errors, leading to the consistency of semi-major axes (SMA) and the energy-matching of satellite cluster. Then satellites can emerge the global coordinative cluster behavior. Finally the feasibility and effectiveness of the present energy-matching optimal consensus for satellite cluster flight is verified through numerical simulations.
Speeding up 3D speckle tracking using PatchMatch
NASA Astrophysics Data System (ADS)
Zontak, Maria; O'Donnell, Matthew
2016-03-01
Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions.1 This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch, 2 a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.
Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers
ERIC Educational Resources Information Center
Abu-rabia, Salim; Taha, Haitham
2004-01-01
This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…
Haptic spatial matching in near peripersonal space.
Kaas, Amanda L; Mier, Hanneke I van
2006-04-01
Research has shown that haptic spatial matching at intermanual distances over 60 cm is prone to large systematic errors. The error pattern has been explained by the use of reference frames intermediate between egocentric and allocentric coding. This study investigated haptic performance in near peripersonal space, i.e. at intermanual distances of 60 cm and less. Twelve blindfolded participants (six males and six females) were presented with two turn bars at equal distances from the midsagittal plane, 30 or 60 cm apart. Different orientations (vertical/horizontal or oblique) of the left bar had to be matched by adjusting the right bar to either a mirror symmetric (/ \\) or parallel (/ /) position. The mirror symmetry task can in principle be performed accurately in both an egocentric and an allocentric reference frame, whereas the parallel task requires an allocentric representation. Results showed that parallel matching induced large systematic errors which increased with distance. Overall error was significantly smaller in the mirror task. The task difference also held for the vertical orientation at 60 cm distance, even though this orientation required the same response in both tasks, showing a marked effect of task instruction. In addition, men outperformed women on the parallel task. Finally, contrary to our expectations, systematic errors were found in the mirror task, predominantly at 30 cm distance. Based on these findings, we suggest that haptic performance in near peripersonal space might be dominated by different mechanisms than those which come into play at distances over 60 cm. Moreover, our results indicate that both inter-individual differences and task demands affect task performance in haptic spatial matching. Therefore, we conclude that the study of haptic spatial matching in near peripersonal space might reveal important additional constraints for the specification of adequate models of haptic spatial performance.
Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Y.; Song, Y.; Lu, J.
2018-05-01
Semi-global matching(SGM) performs the dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and with the expansion of the disparity search range, the accuracy and efficiency of the algorithm drastically decrease. This paper presents a dense matching algorithm by integrating SIFT and SGM. It takes the successful matching pairs matched by SIFT as control points to direct the path in dynamic programming with truncating error propagation. Besides, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. The experimental results based on Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off the error propagation, reduce disparity search range and improve matching accuracy.
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator overmore » the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.« less
mBEEF-vdW: Robust fitting of error estimation density functionals
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...
2016-06-15
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator overmore » the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.« less
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
Decreasing Errors in Reading-Related Matching to Sample Using a Delayed-Sample Procedure
ERIC Educational Resources Information Center
Doughty, Adam H.; Saunders, Kathryn J.
2009-01-01
Two men with intellectual disabilities initially demonstrated intermediate accuracy in two-choice matching-to-sample (MTS) procedures. A printed-letter identity MTS procedure was used with 1 participant, and a spoken-to-printed-word MTS procedure was used with the other participant. Errors decreased substantially under a delayed-sample procedure,…
Matching on the Disease Risk Score in Comparative Effectiveness Research of New Treatments
Wyss, Richard; Ellis, Alan R.; Brookhart, M. Alan; Funk, Michele Jonsson; Girman, Cynthia J.; Simpson, Ross J.; Stürmer, Til
2016-01-01
Purpose We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. Methods We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. Results In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. Conclusions When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. PMID:26112690
Matching on the disease risk score in comparative effectiveness research of new treatments.
Wyss, Richard; Ellis, Alan R; Brookhart, M Alan; Jonsson Funk, Michele; Girman, Cynthia J; Simpson, Ross J; Stürmer, Til
2015-09-01
We use simulations and an empirical example to evaluate the performance of disease risk score (DRS) matching compared with propensity score (PS) matching when controlling large numbers of covariates in settings involving newly introduced treatments. We simulated a dichotomous treatment, a dichotomous outcome, and 100 baseline covariates that included both continuous and dichotomous random variables. For the empirical example, we evaluated the comparative effectiveness of dabigatran versus warfarin in preventing combined ischemic stroke and all-cause mortality. We matched treatment groups on a historically estimated DRS and again on the PS. We controlled for a high-dimensional set of covariates using 20% and 1% samples of Medicare claims data from October 2010 through December 2012. In simulations, matching on the DRS versus the PS generally yielded matches for more treated individuals and improved precision of the effect estimate. For the empirical example, PS and DRS matching in the 20% sample resulted in similar hazard ratios (0.88 and 0.87) and standard errors (0.04 for both methods). In the 1% sample, PS matching resulted in matches for only 92.0% of the treated population and a hazard ratio and standard error of 0.89 and 0.19, respectively, while DRS matching resulted in matches for 98.5% and a hazard ratio and standard error of 0.85 and 0.16, respectively. When PS distributions are separated, DRS matching can improve the precision of effect estimates and allow researchers to evaluate the treatment effect in a larger proportion of the treated population. However, accurately modeling the DRS can be challenging compared with the PS. Copyright © 2015 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawahara, D; Tsuda, S.; Section of Radiation Therapy, Department of Clinical Support, Hiroshima University Hospital Health
2014-06-01
Purpose: IGRT based on the bone matching may produce a larger target positioning error in terms of the reproducibility of the expiration breath hold. Therefore, the feasibility of the 3D image matching between planning CT image and pretreatment CBCT image based on the diaphragm matching was investigated. Methods: In fifteen-nine liver SBRT cases, Lipiodol, uptake after TACE was outlined as the marker of the tumor. The relative coordinate of the isocenter obtained by the contrast matching was defined as the reference coordinate. The target positioning difference between diaphragm matching and bone matching were evaluated by the relative coordinate of themore » isocenter from the reference coordinate obtained by each matching technique. In addition, we evaluated PTV margins by van Herk setup margin formula. Results: The target positioning error by the diaphragm matching and the bone matching was 1.31±0.83 and 3.10±2.80 mm in the cranial-caudal(C-C) direction, 1.04±0.95 and 1.62±1.02 mm in the anterior-posterior(A-P) direction, 0.93±1.19 and 1.12±0.94 mm in the left-right(L-R) direction, respectively. The positioning error by the diaphragm matching was significantly smaller than the bone matching in the C-C direction (p<0.05). The setup margin of diaphragm matching and bone matching that we had calculated based on van Herk margin formula was 4.5mm and 6.2mm(C-C), and 3.6mm and 6.3mm(A-P), and 2.6mm and 4.5mm(L-R), respectively. Conclusion: IGRT based on a diaphragm matching could be one alternative image matching technique for the positioning of the patients with liver tumor.« less
NASA Astrophysics Data System (ADS)
Park, Sang-Gon; Jeong, Dong-Seok
2000-12-01
In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by the UESA (Unimodal Error Surface Assumption) where the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real world video sequences are centered at the position of zero motion. But these BMAs, especially in large motion, are easily trapped into the local minima and result in poor matching accuracy. So, we propose a new motion estimation algorithm using the spatial correlation among the neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity with DS (Diamond Search), but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as that of FS (Full Search), even for the large motion with half the computational load.
Upper Limb Asymmetry in the Sense of Effort Is Dependent on Force Level
Mitchell, Mark; Martin, Bernard J.; Adamo, Diane E.
2017-01-01
Previous studies have shown that asymmetries in upper limb sensorimotor function are dependent on the source of sensory and motor information, hand preference and differences in hand strength. Further, the utilization of sensory and motor information and the mode of control of force may differ between the right hand/left hemisphere and left hand/right hemisphere systems. To more clearly understand the unique contribution of hand strength and intrinsic differences to the control of grasp force, we investigated hand/hemisphere differences when the source of force information was encoded at two different force levels corresponding to a 20 and 70% maximum voluntary contraction or the right and left hand of each participant. Eleven, adult males who demonstrated a stronger right than left maximum grasp force were requested to match a right or left hand 20 or 70% maximal voluntary contraction reference force with the opposite hand. During the matching task, visual feedback corresponding to the production of the reference force was available and then removed when the contralateral hand performed the match. The matching relative force error was significantly different between hands for the 70% MVC reference force but not for the 20% MVC reference force. Directional asymmetries, quantified as the matching force constant error, showed right hand overshoots and left undershoots were force dependent and primarily due to greater undershoots when matching with the left hand the right hand reference force. Findings further suggest that the interaction between internal sources of information, such as efferent copy and proprioception, as well as hand strength differences appear to be hand/hemisphere system dependent. Investigations of force matching tasks under conditions whereby force level is varied and visual feedback of the reference force is available provides critical baseline information for building effective interventions for asymmetric (stroke-related, Parkinson’s Disease) and symmetric (Amyotrophic Lateral Sclerosis) upper limb recovery of neurological conditions where the various sources of sensory – motor information have been significantly altered by the disease process. PMID:28491047
Upper Limb Asymmetry in the Sense of Effort Is Dependent on Force Level.
Mitchell, Mark; Martin, Bernard J; Adamo, Diane E
2017-01-01
Previous studies have shown that asymmetries in upper limb sensorimotor function are dependent on the source of sensory and motor information, hand preference and differences in hand strength. Further, the utilization of sensory and motor information and the mode of control of force may differ between the right hand/left hemisphere and left hand/right hemisphere systems. To more clearly understand the unique contribution of hand strength and intrinsic differences to the control of grasp force, we investigated hand/hemisphere differences when the source of force information was encoded at two different force levels corresponding to a 20 and 70% maximum voluntary contraction or the right and left hand of each participant. Eleven, adult males who demonstrated a stronger right than left maximum grasp force were requested to match a right or left hand 20 or 70% maximal voluntary contraction reference force with the opposite hand. During the matching task, visual feedback corresponding to the production of the reference force was available and then removed when the contralateral hand performed the match. The matching relative force error was significantly different between hands for the 70% MVC reference force but not for the 20% MVC reference force. Directional asymmetries, quantified as the matching force constant error, showed right hand overshoots and left undershoots were force dependent and primarily due to greater undershoots when matching with the left hand the right hand reference force. Findings further suggest that the interaction between internal sources of information, such as efferent copy and proprioception, as well as hand strength differences appear to be hand/hemisphere system dependent. Investigations of force matching tasks under conditions whereby force level is varied and visual feedback of the reference force is available provides critical baseline information for building effective interventions for asymmetric (stroke-related, Parkinson's Disease) and symmetric (Amyotrophic Lateral Sclerosis) upper limb recovery of neurological conditions where the various sources of sensory - motor information have been significantly altered by the disease process.
Human matching performance of genuine crime scene latent fingerprints.
Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J
2014-02-01
There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints, per se, but a result of their ability to identify the highly similar, but nonmatching fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees-despite their lack of qualification and average 3.5 years experience-performed about as accurately as qualified experts who had an average 17.5 years experience. New trainees-despite their 5-week, full-time training course or their 6 months experience-were not any better than novices at discriminating matching and similar nonmatching prints, they were just more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance. The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in court, but what the findings mean for reasoning about expert performance in the wild is an open, empirical, and epistemological question.
Frontal lobe function in chess players.
Nejati, Majid; Nejati, Vahid
2012-01-01
Chess is considered as a cognitive game because of severe engagement of the mental resources during playing. The purpose of this study is evaluation of frontal lobe function of chess players with matched non-players. Wisconsin Card Sorting Test (WCST) data showed no difference between the player and non-player groups in preservation error and completed categories but surprisingly showed significantly lower grade of the player group in correct response. Our data reveal that chess players don't have any preference in any stage of Stroop test. Chess players don't have any preference in selective attention, inhibition and executive cognitive function. Chess players' have lower shifting abilities than non-players.
The role of visual spatial attention in adult developmental dyslexia.
Collis, Nathan L; Kohnen, Saskia; Kinoshita, Sachiko
2013-01-01
The present study investigated the nature of visual spatial attention deficits in adults with developmental dyslexia, using a partial report task with five-letter, digit, and symbol strings. Participants responded by a manual key press to one of nine alternatives, which included other characters in the string, allowing an assessment of position errors as well as intrusion errors. The results showed that the dyslexic adults performed significantly worse than age-matched controls with letter and digit strings but not with symbol strings. Both groups produced W-shaped serial position functions with letter and digit strings. The dyslexics' deficits with letter string stimuli were limited to position errors, specifically at the string-interior positions 2 and 4. These errors correlated with letter transposition reading errors (e.g., reading slat as "salt"), but not with the Rapid Automatized Naming (RAN) task. Overall, these results suggest that the dyslexic adults have a visual spatial attention deficit; however, the deficit does not reflect a reduced span in visual-spatial attention, but a deficit in processing a string of letters in parallel, probably due to difficulty in the coding of letter position.
Optical phase-locked loop (OPLL) for free-space laser communications with heterodyne detection
NASA Technical Reports Server (NTRS)
Win, Moe Z.; Chen, Chien-Chung; Scholtz, Robert A.
1991-01-01
Several advantages of coherent free-space optical communications are outlined. Theoretical analysis is formulated for an OPLL disturbed by shot noise, modulation noise, and frequency noise consisting of a white component, a 1/f component, and a 1/f-squared component. Each of the noise components is characterized by its associated power spectral density. It is shown that the effect of modulation depends only on the ratio of loop bandwidth and data rate, and is negligible for an OPLL with loop bandwidth smaller than one fourth the data rate. Total phase error variance as a function of loop bandwidth is displayed for several values of carrier signal to noise ratio. Optimal loop bandwidth is also calculated as a function of carrier signal to noise ratio. An OPLL experiment is performed, where it is shown that the measured phase error variance closely matches the theoretical predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung
2013-03-15
Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopymore » images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. Results: The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. Conclusions: The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With the submillimeter and subdegree accuracy, the WEMAC method was considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.« less
NASA Astrophysics Data System (ADS)
Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji
Focusing on the Personal Handy-phone System (PHS) positioning service used in physical distribution logistics, a positioning error offset method for improving positioning accuracy is invented. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. In this study, an error offset method is developed, which learns patterns of positioning results (latitude and longitude) containing errors and the highest signal strength at major logistic points in advance, and matches them with new data measured in actual distribution processes according to the Mahalanobis distance. Then the matching resolution is improved to 1/40 that of the conventional error offset method.
A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning.
Loonis, Roman F; Brincat, Scott L; Antzoulatos, Evan G; Miller, Earl K
2017-10-11
A meta-analysis of non-human primates performing three different tasks (Object-Match, Category-Match, and Category-Saccade associations) revealed signatures of explicit and implicit learning. Performance improved equally following correct and error trials in the Match (explicit) tasks, but it improved more after correct trials in the Saccade (implicit) task, a signature of explicit versus implicit learning. Likewise, error-related negativity, a marker for error processing, was greater in the Match (explicit) tasks. All tasks showed an increase in alpha/beta (10-30 Hz) synchrony after correct choices. However, only the implicit task showed an increase in theta (3-7 Hz) synchrony after correct choices that decreased with learning. In contrast, in the explicit tasks, alpha/beta synchrony increased with learning and decreased thereafter. Our results suggest that explicit versus implicit learning engages different neural mechanisms that rely on different patterns of oscillatory synchrony. Copyright © 2017 Elsevier Inc. All rights reserved.
Doyle, Caoilainn; Smeaton, Alan F.; Roche, Richard A. P.; Boran, Lorraine
2018-01-01
To elucidate the core executive function profile (strengths and weaknesses in inhibition, updating, and switching) associated with dyslexia, this study explored executive function in 27 children with dyslexia and 29 age matched controls using sensitive z-mean measures of each ability and controlled for individual differences in processing speed. This study found that developmental dyslexia is associated with inhibition and updating, but not switching impairments, at the error z-mean composite level, whilst controlling for processing speed. Inhibition and updating (but not switching) error composites predicted both dyslexia likelihood and reading ability across the full range of variation from typical to atypical. The predictive relationships were such that those with poorer performance on inhibition and updating measures were significantly more likely to have a diagnosis of developmental dyslexia and also demonstrate poorer reading ability. These findings suggest that inhibition and updating abilities are associated with developmental dyslexia and predict reading ability. Future studies should explore executive function training as an intervention for children with dyslexia as core executive functions appear to be modifiable with training and may transfer to improved reading ability. PMID:29892245
Transfer Alignment Error Compensator Design Based on Robust State Estimation
NASA Astrophysics Data System (ADS)
Lyou, Joon; Lim, You-Chol
This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship’s roll and pitch. Major error sources for velocity and attitude matching are lever arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and dominant Y-axis flexure, and by augmenting the delay state and flexure state into conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors considerably.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus
2017-03-01
Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, such a framework does not exist for the joint design of a SMA and a SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependency of the error bounds on the SLA and SMA parameters is studied and it is recommended that parameters should be chosen to assure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.
mBEEF-vdW: Robust fitting of error estimation density functionals
NASA Astrophysics Data System (ADS)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
Braden, B Blair; Smith, Christopher J; Thompson, Amiee; Glaspy, Tyler K; Wood, Emily; Vatsa, Divya; Abbott, Angela E; McGee, Samuel C; Baxter, Leslie C
2017-12-01
There is a rapidly growing group of aging adults with autism spectrum disorder (ASD) who may have unique needs, yet cognitive and brain function in older adults with ASD is understudied. We combined functional and structural neuroimaging and neuropsychological tests to examine differences between middle-aged men with ASD and matched neurotypical (NT) men. Participants (ASD, n = 16; NT, n = 17) aged 40-64 years were well-matched according to age, IQ (range: 83-131), and education (range: 9-20 years). Middle-age adults with ASD made more errors on an executive function task (Wisconsin Card Sorting Test) but performed similarly to NT adults on tests of delayed verbal memory (Rey Auditory Verbal Learning Test) and local visual search (Embedded Figures Task). Independent component analysis of a functional MRI working memory task (n-back) completed by most participants (ASD = 14, NT = 17) showed decreased engagement of a cortico-striatal-thalamic-cortical neural network in older adults with ASD. Structurally, older adults with ASD had reduced bilateral hippocampal volumes, as measured by FreeSurfer. Findings expand our understanding of ASD as a lifelong condition with persistent cognitive and functional and structural brain differences evident at middle-age. Autism Res 2017, 10: 1945-1959. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. We compared cognitive abilities and brain measures between 16 middle-age men with high-functioning autism spectrum disorder (ASD) and 17 typical middle-age men to better understand how aging affects an older group of adults with ASD. Men with ASD made more errors on a test involving flexible thinking, had less activity in a flexible thinking brain network, and had smaller volume of a brain structure related to memory than typical men. We will follow these older adults over time to determine if aging changes are greater for individuals with ASD. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-02-04
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle's location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research; an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent.
Jiménez, Felipe; Monzón, Sergio; Naranjo, Jose Eugenio
2016-01-01
Vehicle positioning is a key factor for numerous information and assistance applications that are included in vehicles and for which satellite positioning is mainly used. However, this positioning process can result in errors and lead to measurement uncertainties. These errors come mainly from two sources: errors and simplifications of digital maps and errors in locating the vehicle. From that inaccurate data, the task of assigning the vehicle’s location to a link on the digital map at every instant is carried out by map-matching algorithms. These algorithms have been developed to fulfil that need and attempt to amend these errors to offer the user a suitable positioning. In this research; an algorithm is developed that attempts to solve the errors in positioning when the Global Navigation Satellite System (GNSS) signal reception is frequently lost. The algorithm has been tested with satisfactory results in a complex urban environment of narrow streets and tall buildings where errors and signal reception losses of the GPS receiver are frequent. PMID:26861320
Why a simulation system doesn`t match the plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, R.
1998-03-01
Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the `70s and `80s, there are many reasons why a process simulation doesn`t match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulationsmore » do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects of measurement error, and misapplication effects or set-up error.« less
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regressionmore » method. The DLC effect on system load is also considered in this method.« less
Towards a Next-Generation Catalogue Cross-Match Service
NASA Astrophysics Data System (ADS)
Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium
2015-09-01
We have been developing in the past several catalogue cross-match tools. On one hand the CDS XMatch service (Pineau et al. 2011), able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server. On the other hand, as part of the European project ARCHES1, we have been developing a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and which computes probabilities of association based on a novel statistical framework. Although the two approaches have been managed so far as different tracks, the need for next generation cross-match services dealing with both efficiency and complexity is becoming pressing with forthcoming projects which will produce huge high quality catalogues. We are addressing this challenge which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria - based on the chi-square distribution - described in Pineau et al. (2011). We formulate and test a number of Bayesian hypothesis which necessarily increases dramatically with the number of catalogues. To assign a probability to each hypotheses, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performances are achieved resorting to multi-threading, sequential reads and several tree data-structures. In addition to kd-trees, we account for heterogeneous positional errors and object's extension using M-trees. Proper-motions are supported using a modified M-tree we developed, inspired from Time Parametrized R-trees (TPR-tree). Quantitative tests in comparison with the basic cross-match will be presented.
Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.
Temel, Burcin; Mills, Greg; Metiu, Horia
2008-03-27
We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments
NASA Astrophysics Data System (ADS)
Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.
2017-09-01
We propose a real time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption at indoor spaces and uses the detected single image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in the 3D space are successfully handled through matching vanishing directions of consecutive video frames on the Gaussian sphere. Using the single image based indoor layout features for initializing the system, permitted the proposed method to perform real time layout estimation and camera localization in indoor corridor areas. For layout structural corner points matching, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel to pixel orientation differences of the matched corners, and a binary term, which measures the amount of angle differences between directly connected layout corner features. We have performed the experiments on real scenes at York University campus buildings and the available RAWSEEDS dataset. The incoming results depict that the proposed method robustly performs along with producing very limited position and orientation errors.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than the classical exhaustive algorithm, there may be an error in it: After matching the area between two images, only the pixel at the upper left corner of the matched area played part in following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixels in the big image are the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than the previous algorithm, but it is still far lower than the classical algorithm. Hence, this algorithm is still efficient.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of estimate errors by model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.
Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.
Salama, Mhd Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observation to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixel and point measurements with R(2) > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of reduced resolution MERIS image are derived with less than 12% of relative errors in heterogeneous region. The method is generic and applicable to different sensors.
Mechanical design of a power-adjustable spectacle lens frame.
Zapata, Asuncion; Barbero, Sergio
2011-05-01
Power-adjustable spectacle lenses, based on the Alvarez-Lohmann principle, can be used to provide affordable spectacles for subjective refractive errors measurement and its correction. A new mechanical frame has been designed to maximize the advantages of this technology. The design includes a mechanism to match the interpupillary distance with that of the optical centers of the lenses. The frame can be manufactured using low cost plastic injection molding techniques. A prototype has been built to test the functioning of this mechanical design.
NASA Astrophysics Data System (ADS)
Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan
2016-10-01
This paper proposed two weight functions based on principal component analysis (PCA) to reserve more colorimetric information in spectral data compression process. One weight function consisted of the CIE XYZ color-matching functions representing the characteristic of the human visual system, while another was made up of the CIE XYZ color-matching functions of human visual system and relative spectral power distribution of the CIE standard illuminant D65. The improvement obtained from the proposed two methods were tested to compress and reconstruct the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips as well as six multispectral images. The performance was evaluated by the mean values of color difference under the CIE 1931 standard colorimetric observer and the CIE standard illuminant D65 and A. The mean values of root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the proposed two methods significantly outperform the standard PCA and another two weighted PCA in the aspects of colorimetric reconstruction accuracy with very slight degradation in spectral reconstruction accuracy. In addition, weight functions with the CIE standard illuminant D65 can improve the colorimetric reconstruction accuracy compared to weight functions without the CIE standard illuminant D65.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattarivand, Mike; Summers, Clare; Robar, James
Purpose: To evaluate the validity of using spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with 6 degrees (6D) of freedom couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients basedmore » on CBCT were 3mm shift or 3° rotation (in any 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4mm. Using 3mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in sup-inf, ant-post, and roll directions which had larger standard deviations. No correlation was found between tumor distance from spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers a quick pre-treatment patient alignment. However, bone matching based on spine is not reliable for aligning lung SBRT patients who require soft tissue image registration from CBCT. Spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.« less
Concurrent variation of response bias and sensitivity in an operant-psychophysical test.
NASA Technical Reports Server (NTRS)
Terman, M.; Terman, J. S.
1972-01-01
The yes-no signal detection procedure was applied to a single-response operant paradigm in which rats discriminated between a standard auditory intensity and attenuated comparison values. The payoff matrix was symmetrical (with reinforcing brain stimulation for correct detections and brief time-out for errors), but signal probability and intensity differences were varied to generate a family of isobias and isosensitivity functions. The d' parameter remained fairly constant across a wide range of bias levels. Isobias functions deviated from a strict matching strategy as discrimination difficulty increased, although an orderly relation was maintained between signal probability value and the degree and direction of response bias.
Automated pulmonary lobar ventilation measurements using volume-matched thoracic CT and MRI
NASA Astrophysics Data System (ADS)
Guo, F.; Svenningsen, S.; Bluemke, E.; Rajchl, M.; Yuan, J.; Fenster, A.; Parraga, G.
2015-03-01
Objectives: To develop and evaluate an automated registration and segmentation pipeline for regional lobar pulmonary structure-function measurements, using volume-matched thoracic CT and MRI in order to guide therapy. Methods: Ten subjects underwent pulmonary function tests and volume-matched 1H and 3He MRI and thoracic CT during a single 2-hr visit. CT was registered to 1H MRI using an affine method that incorporated block-matching and this was followed by a deformable step using free-form deformation. The resultant deformation field was used to deform the associated CT lobe mask that was generated using commercial software. 3He-1H image registration used the same two-step registration method and 3He ventilation was segmented using hierarchical k-means clustering. Whole lung and lobar 3He ventilation and ventilation defect percent (VDP) were generated by mapping ventilation defects to CT-defined whole lung and lobe volumes. Target CT-3He registration accuracy was evaluated using region- , surface distance- and volume-based metrics. Automated whole lung and lobar VDP was compared with semi-automated and manual results using paired t-tests. Results: The proposed pipeline yielded regional spatial agreement of 88.0+/-0.9% and surface distance error of 3.9+/-0.5 mm. Automated and manual whole lung and lobar ventilation and VDP were not significantly different and they were significantly correlated (r = 0.77, p < 0.0001). Conclusion: The proposed automated pipeline can be used to generate regional pulmonary structural-functional maps with high accuracy and robustness, providing an important tool for image-guided pulmonary interventions.
ERIC Educational Resources Information Center
Shim, HyungSub; Hurley, Robert S.; Rogalski, Emily; Mesulam, M.-Marsel
2012-01-01
This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words,…
NASA Astrophysics Data System (ADS)
Bertin, Stephane; Friedrich, Heide; Delmas, Patrice; Chan, Edwin; Gimel'farb, Georgy
2015-03-01
Grain-scale monitoring of fluvial morphology is important for the evaluation of river system dynamics. Significant progress in remote sensing and computer performance allows rapid high-resolution data acquisition, however, applications in fluvial environments remain challenging. Even in a controlled environment, such as a laboratory, the extensive acquisition workflow is prone to the propagation of errors in digital elevation models (DEMs). This is valid for both of the common surface recording techniques: digital stereo photogrammetry and terrestrial laser scanning (TLS). The optimisation of the acquisition process, an effective way to reduce the occurrence of errors, is generally limited by the use of commercial software. Therefore, the removal of evident blunders during post processing is regarded as standard practice, although this may introduce new errors. This paper presents a detailed evaluation of a digital stereo-photogrammetric workflow developed for fluvial hydraulic applications. The introduced workflow is user-friendly and can be adapted to various close-range measurements: imagery is acquired with two Nikon D5100 cameras and processed using non-proprietary "on-the-job" calibration and dense scanline-based stereo matching algorithms. Novel ground truth evaluation studies were designed to identify the DEM errors, which resulted from a combination of calibration errors, inaccurate image rectifications and stereo-matching errors. To ensure optimum DEM quality, we show that systematic DEM errors must be minimised by ensuring a good distribution of control points throughout the image format during calibration. DEM quality is then largely dependent on the imagery utilised. We evaluated the open access multi-scale Retinex algorithm to facilitate the stereo matching, and quantified its influence on DEM quality. Occlusions, inherent to any roughness element, are still a major limiting factor to DEM accuracy. We show that a careful selection of the camera-to-object and baseline distance reduces errors in occluded areas and that realistic ground truths help to quantify those errors.
Plessen, Kerstin J.; Allen, Elena A.; Eichele, Heike; van Wageningen, Heidi; Høvik, Marie Farstad; Sørensen, Lin; Worren, Marius Kalsås; Hugdahl, Kenneth; Eichele, Tom
2016-01-01
Background We examined the blood-oxygen level–dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). Methods We acquired functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8–12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. Results We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions and higher RT variability, but no differences of interference control. Larger BOLD amplitude to error trials significantly predicted reduced RT variability across all participants. Neither group showed evidence of post-error response slowing; however, post-error adaptation in motor networks was significantly reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. Limitations Our study was limited by the modest sample size and imperfect matching across groups. Conclusion Our findings show a deficit in cingulo-opercular activation in children with ADHD that could relate to reduced signalling for errors. Moreover, the reduced orienting of the VAN signal may mediate deficient post-error motor adaptions. Pinpointing general performance monitoring problems to specific brain regions and operations in error processing may help to guide the targets of future treatments for ADHD. PMID:26441332
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
NOBLE - Flexible concept recognition for large-scale biomedical natural language processing.
Tseytlin, Eugene; Mitchell, Kevin; Legowski, Elizabeth; Corrigan, Julia; Chavan, Girish; Jacobson, Rebecca S
2016-01-14
Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines.
Goble, Daniel J; Mousigian, Marianne A; Brown, Susan H
2012-01-01
Perceiving the positions and movements of one's body segments (i.e., proprioception) is critical for movement control. However, this ability declines with older age as has been demonstrated by joint angle matching paradigms in the absence of vision. The aim of the present study was to explore the extent to which reduced working memory and attentional load influence older adult proprioceptive matching performance. Older adults with relatively HIGH versus LOW working memory ability as determined by backward digit span and healthy younger adults, performed memory-based elbow position matching with and without attentional load (i.e., counting by 3 s) during target position encoding. Even without attentional load, older adults with LOW digit spans (i.e., 4 digits or less) had larger matching errors than younger adults. Further, LOW older adults made significantly greater errors when attentional loads were present during proprioceptive target encoding as compared to both younger and older adults with HIGH digit span scores (i.e., 5 digits or greater). These results extend previous position matching results that suggested greater errors in older adults were due to degraded input signals from peripheral mechanoreceptors. Specifically, the present work highlights the role cognitive factors play in the assessment of older adult proprioceptive acuity using memory-based matching paradigms. Older adults with LOW working memory appear prone to compromised proprioceptive encoding, especially when secondary cognitive tasks must be concurrently executed. This may ultimately result in poorer performance on various activities of daily living.
Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors
Salama, Mhd. Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observation to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full and reduced resolutions of MERIS data. The model derived the scale difference between synthesized satellite pixel and point measurements with R2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of reduced resolution MERIS image are derived with less than 12% of relative errors in heterogeneous region. The method is generic and applicable to different sensors. PMID:22163615
Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne
2016-01-01
Unique identifiers (UID) are seen as an effective key to match identical publications across databases or identify duplicates in a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and the average number of citations per publication. The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.
Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne
2016-01-01
Unique identifiers (UID) are seen as an effective key to match identical publications across databases or identify duplicates in a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and the average number of citations per publication. The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis. PMID:27635223
Teaching Identity Matching of Braille Characters to Beginning Braille Readers
ERIC Educational Resources Information Center
Toussaint, Karen A.; Scheithauer, Mindy C.; Tiger, Jeffrey H.; Saunders, Kathryn J.
2017-01-01
We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially…
Match graph generation for symbolic indirect correlation
NASA Astrophysics Data System (ADS)
Lopresti, Daniel; Nagy, George; Joshi, Ashutosh
2006-01-01
Symbolic indirect correlation (SIC) is a new approach for bringing lexical context into the recognition of unsegmented signals that represent words or phrases in printed or spoken form. One way of viewing the SIC problem is to find the correspondence, if one exists, between two bipartite graphs, one representing the matching of the two lexical strings and the other representing the matching of the two signal strings. While perfect matching cannot be expected with real-world signals and while some degree of mismatch is allowed for in the second stage of SIC, such errors, if they are too numerous, can present a serious impediment to a successful implementation of the concept. In this paper, we describe a framework for evaluating the effectiveness of SIC match graph generation and examine the relatively simple, controlled cases of synthetic images of text strings typeset, both normally and in highly condensed fashion. We quantify and categorize the errors that arise, as well as present a variety of techniques we have developed to visualize the intermediate results of the SIC process.
Infrequent identity mismatches are frequently undetected
Goldinger, Stephen D.
2014-01-01
The ability to quickly and accurately match faces to photographs bears critically on many domains, from controlling purchase of age-restricted goods to law enforcement and airport security. Despite its pervasiveness and importance, research has shown that face matching is surprisingly error prone. The majority of face-matching research is conducted under idealized conditions (e.g., using photographs of individuals taken on the same day) and with equal proportions of match and mismatch trials, a rate that is likely not observed in everyday face matching. In four experiments, we presented observers with photographs of faces taken an average of 1.5 years apart and tested whether face-matching performance is affected by the prevalence of identity mismatches, comparing conditions of low (10 %) and high (50 %) mismatch prevalence. Like the low-prevalence effect in visual search, we observed inflated miss rates under low-prevalence conditions. This effect persisted when participants were allowed to correct their initial responses (Experiment 2), when they had to verify every decision with a certainty judgment (Experiment 3) and when they were permitted “second looks” at face pairs (Experiment 4). These results suggest that, under realistic viewing conditions, the low-prevalence effect in face matching is a large, persistent source of errors. PMID:24500751
Salgado, Eduardo; Ribeiro, Fernando; Oliveira, José
2015-06-01
The demands to which football players are exposed during the match may augment the risk of injury by decreasing the sense of joint position. This study aimed to assess the effect of pre-participation warm-up and fatigue induced by an official football match on the knee-joint-position sense of football players. Fourteen semi-professional male football players (mean age: 25.9±4.6 years old) volunteered in this study. The main outcome measures were rate of perceived exertion and knee-joint-position sense assessed at rest, immediately after a standard warm-up (duration 25 min), and immediately after a competitive football match (90 minutes duration). Perceived exertion increased significantly from rest to the other assessments (rest: 8.6±2.0; after warm-up: 12.1±2.1; after football match: 18.5±1.3; p<0.001). Compared to rest, absolute angular error decreased significantly after the warm-up (4.1°±2.2° vs. 2.0°±1.0°; p=0.0045). After the match, absolute angular error (8.7°±3.8°) increased significantly comparatively to both rest (p=0.001) and the end of warm-up (p<0.001). Relative error showed directional bias with an underestimation of the target position, which was higher after the football match compared to both rest (p<0.001) and after warm-up (p<0.001). The results indicate that knee-joint-position sense acuity was increased by pre-participation warm-up exercise and was decreased by football match-induced fatigue. Warm-up exercises could contribute to knee injury prevention, whereas the deleterious effect of match-induced fatigue on the sensorimotor system could ultimately contribute to knee instability and injury. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of estimate errors by model-matching phase-estimation methods such as rotor-flux state-observers, back EMF state-observers, and back EMF disturbance-observers, for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.
Personal identification based on prescription eyewear.
Berg, Gregory E; Collins, Randall S
2007-03-01
This study presents a web-based tool that can be used to assist in identification of unknown individuals using spectacle prescriptions. Currently, when lens prescriptions are used in forensic identifications, investigators are constrained to a simple "match" or "no-match" judgment with an antemortem prescription. It is not possible to evaluate the strength of the conclusion, or rather, the potential or real error rates associated with the conclusion. Three databases totaling over 385,000 individual prescriptions are utilized in this study to allow forensic analysts to easily determine the strength of individuation of a spectacle match to antemortem records by calculating the frequency at which the observed prescription occurs in various U.S. populations. Optical refractive errors are explained, potential states and combinations of refractive errors are described, measuring lens corrections is discussed, and a detailed description of the databases is presented. The practical application of this system is demonstrated using two recent forensic identifications. This research provides a valuable personal identification tool that can be used in cases where eyeglass portions are recovered in forensic contexts.
Lowrey, Catherine R; Strzalkowski, Nick D J; Bent, Leah R
2010-11-12
Previous research has shown that skin is capable of providing kinesthetic cues at particular joints but we are unsure how these cues are used by the central nervous system. The current study attempted to identify the role of skin on the dorsum of the ankle during a joint matching task. A 30cm patch of skin was anesthetized and matching accuracy in a passive joint matching task was compared before and after skin anesthetization. Goniometers were used to measure ankle angular displacement. Four target angles were used in the matching task, 7° of dorsiflexion, 7°, 14° and 21° of plantarflexion. We hypothesized that, based on the location of skin anesthetized, only the plantarflexion matching tasks would be affected. Absolute error (accuracy) increased significantly for all angles when the skin was anesthetized. Directional error indicated that overall subjects tended to undershoot the target angles, significantly more so for 21° of plantarflexion when the skin was anesthetized. Following anesthetization, variable error (measure of task difficulty) increased significantly at 7° of dorsiflexion and 21° of plantarflexion. These results indicate that the subjects were less accurate and more variable when skin sensation was reduced suggesting that skin information plays an important role in kinesthesia at the ankle. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Carroll, T. A.; Strassmeier, K. G.
2014-03-01
Context. In recent years, we have seen a rapidly growing number of stellar magnetic field detections for various types of stars. Many of these magnetic fields are estimated from spectropolarimetric observations (Stokes V) by using the so-called center-of-gravity (COG) method. Unfortunately, the accuracy of this method rapidly deteriorates with increasing noise and thus calls for a more robust procedure that combines signal detection and field estimation. Aims: We introduce an estimation method that provides not only the effective or mean longitudinal magnetic field from an observed Stokes V profile but also uses the net absolute polarization of the profile to obtain an estimate of the apparent (i.e., velocity resolved) absolute longitudinal magnetic field. Methods: By combining the COG method with an orthogonal-matching-pursuit (OMP) approach, we were able to decompose observed Stokes profiles with an overcomplete dictionary of wavelet-basis functions to reliably reconstruct the observed Stokes profiles in the presence of noise. The elementary wave functions of the sparse reconstruction process were utilized to estimate the effective longitudinal magnetic field and the apparent absolute longitudinal magnetic field. A multiresolution analysis complements the OMP algorithm to provide a robust detection and estimation method. Results: An extensive Monte-Carlo simulation confirms the reliability and accuracy of the magnetic OMP approach where a mean error of under 2% is found. Its full potential is obtained for heavily noise-corrupted Stokes profiles with signal-to-noise variance ratios down to unity. In this case a conventional COG method yields a mean error for the effective longitudinal magnetic field of up to 50%, whereas the OMP method gives a maximum error of 18%. It is, moreover, shown that even in the case of very small residual noise on a level between 10-3 and 10-5, a regime reached by current multiline reconstruction techniques, the conventional COG method incorrectly interprets a large portion of the residual noise as a magnetic field, with values of up to 100 G. The magnetic OMP method, on the other hand, remains largely unaffected by the noise, regardless of the noise level the maximum error is no greater than 0.7 G.
A GPU-based symmetric non-rigid image registration method in human lung.
Haghighi, Babak; D Ellingwood, Nathan; Yin, Youbing; Hoffman, Eric A; Lin, Ching-Long
2018-03-01
Quantitative computed tomography (QCT) of the lungs plays an increasing role in identifying sub-phenotypes of pathologies previously lumped into broad categories such as chronic obstructive pulmonary disease and asthma. Methods for image matching and linking multiple lung volumes have proven useful in linking structure to function and in the identification of regional longitudinal changes. Here, we seek to improve the accuracy of image matching via the use of a symmetric multi-level non-rigid registration employing an inverse consistent (IC) transformation whereby images are registered both in the forward and reverse directions. To develop the symmetric method, two similarity measures, the sum of squared intensity difference (SSD) and the sum of squared tissue volume difference (SSTVD), were used. The method is based on a novel generic mathematical framework to include forward and backward transformations, simultaneously, eliminating the need to compute the inverse transformation. Two implementations were used to assess the proposed method: a two-dimensional (2-D) implementation using synthetic examples with SSD, and a multi-core CPU and graphics processing unit (GPU) implementation with SSTVD for three-dimensional (3-D) human lung datasets (six normal adults studied at total lung capacity (TLC) and functional residual capacity (FRC)). Success was evaluated in terms of the IC transformation consistency serving to link TLC to FRC. 2-D registration on synthetic images, using both symmetric and non-symmetric SSD methods, and comparison of displacement fields showed that the symmetric method gave a symmetrical grid shape and reduced IC errors, with the mean values of IC errors decreased by 37%. Results for both symmetric and non-symmetric transformations of human datasets showed that the symmetric method gave better results for IC errors in all cases, with mean values of IC errors for the symmetric method lower than the non-symmetric methods using both SSD and SSTVD. The GPU version demonstrated an average of 43 times speedup and ~5.2 times speedup over the single-threaded and 12-threaded CPU versions, respectively. Run times with the GPU were as fast as 2 min. The symmetric method improved the inverse consistency, aiding the use of image registration in the QCT-based evaluation of the lung.
Illusory conjunctions of pitch and duration in unfamiliar tone sequences.
Thompson, W F; Hall, M D; Pressing, J
2001-02-01
In 3 experiments, the authors examined short-term memory for pitch and duration in unfamiliar tone sequences. Participants were presented a target sequence consisting of 2 tones (Experiment 1) or 7 tones (Experiments 2 and 3) and then a probe tone. Participants indicated whether the probe tone matched 1 of the target tones in both pitch and duration. Error rates were relatively low if the probe tone matched 1 of the target tones or if it differed from target tones in pitch, duration, or both. Error rates were remarkably high, however, if the probe tone combined the pitch of 1 target tone with the duration of a different target tone. The results suggest that illusory conjunctions of these dimensions frequently occur. A mathematical model is presented that accounts for the relative contribution of pitch errors, duration errors, and illusory conjunctions of pitch and duration.
Shields, Richard K.; Dudley-Javoroski, Shauna; Boaldin, Kathryn M.; Corey, Trent A.; Fog, Daniel B.; Ruen, Jacquelyn M.
2012-01-01
Objectives To determine (1) the error attributable to external tibia-length measurements by using peripheral quantitative computed tomography (pQCT) and (2) the effect these errors have on scan location and tibia trabecular bone mineral density (BMD) after spinal cord injury (SCI). Design Blinded comparison and criterion standard in matched cohorts. Setting Primary care university hospital. Participants Eight able-bodied subjects underwent tibia length measurement. A separate cohort of 7 men with SCI and 7 able-bodied age-matched male controls underwent pQCT analysis. Interventions Not applicable. Main Outcome Measures The projected worst-case tibia-length–measurement error translated into a pQCT slice placement error of ±3mm. We collected pQCT slices at the distal 4% tibia site, 3mm proximal and 3mm distal to that site, and then quantified BMD error attributable to slice placement. Results Absolute BMD error was greater for able-bodied than for SCI subjects (5.87mg/cm3 vs 4.5mg/cm3). However, the percentage error in BMD was larger for SCI than able-bodied subjects (4.56% vs 2.23%). Conclusions During cross-sectional studies of various populations, BMD differences up to 5% may be attributable to variation in limb-length–measurement error. PMID:17023249
A Semantic Analysis of XML Schema Matching for B2B Systems Integration
ERIC Educational Resources Information Center
Kim, Jaewook
2011-01-01
One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered atmore » points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.« less
Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran
2015-10-01
Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performances of the authors' method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
An Attempt at Matching Waking Events Into Dream Reports by Independent Judges
Wang, Jia Xi; Shen, He Yong
2018-01-01
Correlations between memories and dreaming has typically been studied by linking conscious experiences and dream reports, which has illustrated that dreaming reflects waking life events, thoughts, and emotions. As some research suggests that sleep has a function of memory consolidation, and dreams reflect this, researching this relationship further may uncover more useful insights. However, most related research has been conducted using the self-report method which asks participants to judge the relationship between their own conscious experiences and dreams. This method may cause errors when the research purpose is to make comparisons between different groups, because individual differences cannot be balanced out when the results are compared among groups. Based on a knowledge of metaphors and symbols, we developed two operationalized definitions for independent judges to match conscious experiences and dreams, the descriptive incorporation and the metaphorical incorporation, and tested their reliability for the matching purpose. Two independent judges were asked to complete a linking task for 212 paired event-dreams. Results showed almost half dreams can be matched by independent judges, and the independent-judge method could provide similar proportions for the linking task, when compared with the self-report method. PMID:29681873
Robotic identification of kinesthetic deficits after stroke.
Semrau, Jennifer A; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P
2013-12-01
Kinesthesia, the sense of body motion, is essential to proper control and execution of movement. Despite its importance for activities of daily living, no current clinical measures can objectively measure kinesthetic deficits. The goal of this study was to use robotic technology to quantify prevalence and severity of kinesthetic deficits of the upper limb poststroke. Seventy-four neurologically intact subjects and 113 subjects with stroke (62 left-affected, 51 right-affected) performed a robot-based kinesthetic matching task with vision occluded. The robot moved the most affected arm at a preset speed, direction, and magnitude. Subjects were instructed to mirror-match the movement with their opposite arm (active arm). A large number of subjects with stroke were significantly impaired on measures of kinesthesia. We observed impairments in ability to match movement direction (69% and 49% impaired for left- and right-affected subjects, respectively) and movement magnitude (42% and 31%). We observed impairments to match movement speed (32% and 27%) and increased response latencies (48% and 20%). Movement direction errors and response latencies were related to clinical measures of function, motor recovery, and dexterity. Using a robotic approach, we found that 61% of acute stroke survivors (n=69) had kinesthetic deficits. Additionally, these deficits were highly related to existing clinical measures, suggesting the importance of kinesthesia in day-to-day function. Our methods allow for more sensitive, accurate, and objective identification of kinesthetic deficits after stroke. With this information, we can better inform clinical treatment strategies to improve poststroke rehabilitative care and outcomes.
Processing of ICARTT Data Files Using Fuzzy Matching and Parser Combinators
NASA Technical Reports Server (NTRS)
Rutherford, Matthew T.; Typanski, Nathan D.; Wang, Dali; Chen, Gao
2014-01-01
In this paper, the task of parsing and matching inconsistent, poorly formed text data through the use of parser combinators and fuzzy matching is discussed. An object-oriented implementation of the parser combinator technique is used to allow for a relatively simple interface for adapting base parsers. For matching tasks, a fuzzy matching algorithm with Levenshtein distance calculations is implemented to match string pair, which are otherwise difficult to match due to the aforementioned irregularities and errors in one or both pair members. Used in concert, the two techniques allow parsing and matching operations to be performed which had previously only been done manually.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...
Kaplan, Johanna S; Erickson, Kristine; Luckenbaugh, David A; Weiland-Fiedler, Petra; Geraci, Marilla; Sahakian, Barbara J; Charney, Dennis; Drevets, Wayne C; Neumeister, Alexander
2006-10-01
Neuropsychological studies have provided evidence for deficits in psychiatric disorders, such as schizophrenia and mood disorders. However, neuropsychological function in Panic Disorder (PD) or PD with a comorbid diagnosis of Major Depressive Disorder (MDD) has not been comprehensively studied. The present study investigated neuropsychological functioning in patients with PD and PD + MDD by focusing on tasks that assess attention, psychomotor speed, executive function, decision-making, and affective processing. Twenty-two unmedicated patients with PD, eleven of whom had a secondary diagnosis of MDD, were compared to twenty-two healthy controls, matched for gender, age, and intelligence on tasks of attention, memory, psychomotor speed, executive function, decision-making, and affective processing from the Cambridge Neuropsychological Test Automated Battery (CANTAB), Cambridge Gamble Task, and Affective Go/No-go Task. Relative to matched healthy controls, patients with PD + MDD displayed an attentional bias toward negatively-valenced verbal stimuli (Affective Go/No-go Task) and longer decision-making latencies (Cambridge Gamble Task). Furthermore, the PD + MDD group committed more errors on a task of memory and visual discrimination compared to their controls. In contrast, no group differences were found for PD patients relative to matched control subjects. The sample size was limited, however, all patients were drug-free at the time of testing. The PD + MDD patients demonstrated deficits on a task involving visual discrimination and working memory, and an attentional bias towards negatively-valenced stimuli. In addition, patients with comorbid depression provided qualitatively different responses in the areas of affective and decision-making processes.
Martin, Markus; Dressing, Andrea; Bormann, Tobias; Schmidt, Charlotte S M; Kümmerer, Dorothee; Beume, Lena; Saur, Dorothee; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius
2017-08-01
The study aimed to elucidate areas involved in recognizing tool-associated actions, and to characterize the relationship between recognition and active performance of tool use.We performed voxel-based lesion-symptom mapping in a prospective cohort of 98 acute left-hemisphere ischemic stroke patients (68 male, age mean ± standard deviation, 65 ± 13 years; examination 4.4 ± 2 days post-stroke). In a video-based test, patients distinguished correct tool-related actions from actions with spatio-temporal (incorrect grip, kinematics, or tool orientation) or conceptual errors (incorrect tool-recipient matching, e.g., spreading jam on toast with a paintbrush). Moreover, spatio-temporal and conceptual errors were determined during actual tool use.Deficient spatio-temporal error discrimination followed lesions within a dorsal network in which the inferior parietal lobule (IPL) and the lateral temporal cortex (sLTC) were specifically relevant for assessing functional hand postures and kinematics, respectively. Conversely, impaired recognition of conceptual errors resulted from damage to ventral stream regions including anterior temporal lobe. Furthermore, LTC and IPL lesions impacted differently on action recognition and active tool use, respectively.In summary, recognition of tool-associated actions relies on a componential network. Our study particularly highlights the dissociable roles of LTC and IPL for the recognition of action kinematics and functional hand postures, respectively. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
New Treatment of Strongly Anisotropic Scattering Phase Functions: The Delta-M+ Method
NASA Astrophysics Data System (ADS)
Stamnes, K. H.; Lin, Z.; Chen, N.; Fan, Y.; Li, W.; Stamnes, S.
2017-12-01
The treatment of strongly anisotropic scattering phase functions is still a challenge for accurate radiance computations. The new Delta-M+ method resolves this problem by introducing a reliable, fast, accurate, and easy-to-use Legendre expansion of the scattering phase function with modified moments. Delta-M+ is an upgrade of the widely-used Delta-M method that truncates the forward scattering cone into a Dirac-delta-function (a direct beam), where the + symbol indicates that it essentially matches moments above the first 2M terms. Compared with the original Delta-M method, Delta-M+ has the same computational efficiency, but the accuracy has been increased dramatically. Tests show that the errors for strongly forward-peaked scattering phase functions are greatly reduced. Furthermore, the accuracy and stability of radiance computations are also significantly improved by applying the new Delta-M+ method.
Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching
NASA Astrophysics Data System (ADS)
Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest
2017-09-01
A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 106 (k = 2) at 1 MHz and 0.5 part in 106 (k = 2) at 100 kHz is within reach.
Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.
2018-05-01
One of significant geometric distortion on images of LAPAN-A3/IPB multispectral imager is co-registration error between each color channel detector. Band co-registration distortion usually can be corrected by using several approaches, which are manual method, image matching algorithm, or sensor modeling and calibration approach. This paper develops another approach to minimize band co-registration distortion on LAPAN-A3/IPB multispectral image by using supervised modeling of image matching with respect to satellite attitude. Modeling results show that band co-registration error in across-track axis is strongly influenced by yaw angle, while error in along-track axis is fairly influenced by both pitch and roll angle. Accuracy of the models obtained is pretty good, which lies between 1-3 pixels error for each axis of each pair of band co-registration. This mean that the model can be used to correct the distorted images without the need of slower image matching algorithm, nor the laborious effort needed in manual approach and sensor calibration. Since the calculation can be executed in order of seconds, this approach can be used in real time quick-look image processing in ground station or even in satellite on-board image processing.
Ghawami, Heshmatollah; Sadeghi, Sadegh; Raghibi, Mahvash; Rahimi-Movaghar, Vafa
2017-01-01
Executive dysfunctions are among the most prevalent neurobehavioral sequelae of traumatic brain injuries (TBIs). Using culturally validated tests from the Delis-Kaplan Executive Function System (D-KEFS: Trail Making, Verbal Fluency, Design Fluency, Sorting, Twenty Questions, and Tower) and the Behavioural Assessment of the Dysexecutive Syndrome (BADS: Rule Shift Cards, Key Search, and Modified Six Elements), the current study was the first to examine executive functioning in a group of Iranian TBI patients with focal frontal contusions. Compared with a demographically matched normative sample, the frontal contusion patients showed substantial impairments, with very large effect sizes (p ≤ .003, 1.56 < d < 3.12), on all the executive measures. Controlling for respective lower-level/fundamental conditions, the differences on the highest-level executive (cognitive switching) conditions were still significant. The frontal patients also committed more errors. Patients with lateral prefrontal (LPFC) contusions were qualitatively worst. For example, only the LPFC patients committed perseverative repetition errors. Altogether, our results support the notion that the frontal lobes, specifically the lateral prefrontal regions, play a critical role in cognitive executive functioning, over and above the contributions of respective lower-level cognitive abilities. The results provide clinical evidence for validity of the cross-culturally adapted versions of the tests.
Reversal and Rotation Errors by Normal and Retarded Readers
ERIC Educational Resources Information Center
Black, F. William
1973-01-01
Reports an investigation of the incidence of and relationships among word and letter reversals in writing and Bender-Gestalt rotation errors in matched samples of normal and retarded readers. No significant diffenences were found in the two groups. (TO)
Optical Rotation Curves and Linewidths for Tully-Fisher Applications
NASA Astrophysics Data System (ADS)
Courteau, Stephane
1997-12-01
We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2 is given at the location of peak rotational velocity of a pure exponential disk. An alternative measure to V2.2 which makes no assumption about the luminosity profile or shape of the rotation curve is Vhist, the 20% width of the velocity histogram, though the match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterpart, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, APJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate, empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
Performance of Disease Risk Score Matching in Nested Case-Control Studies: A Simulation Study.
Desai, Rishi J; Glynn, Robert J; Wang, Shirley; Gagne, Joshua J
2016-05-15
In a case-control study, matching on a disease risk score (DRS), which includes many confounders, should theoretically result in greater precision than matching on only a few confounders; however, this has not been investigated. We simulated 1,000 hypothetical cohorts with a binary exposure, a time-to-event outcome, and 13 covariates. Each cohort comprised 2 subcohorts of 10,000 patients each: a historical subcohort and a concurrent subcohort. DRS were estimated in the historical subcohorts and applied to the concurrent subcohorts. Nested case-control studies were conducted in the concurrent subcohorts using incidence density sampling with 2 strategies-matching on age and sex, with adjustment for additional confounders, and matching on DRS-followed by conditional logistic regression for 9 outcome-exposure incidence scenarios. In all scenarios, DRS matching yielded lower average standard errors and mean squared errors than did matching on age and sex. In 6 scenarios, DRS matching also resulted in greater empirical power. DRS matching resulted in less relative bias than did matching on age and sex at lower outcome incidences but more relative bias at higher incidences. Post-hoc analysis revealed that the effect of DRS model misspecification might be more pronounced at higher outcome incidences, resulting in higher relative bias. These results suggest that DRS matching might increase the statistical efficiency of case-control studies, particularly when the outcome is rare. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Vilà-Balló, Adrià; Hdez-Lafuente, Prado; Rostan, Carles; Cunillera, Toni; Rodriguez-Fornells, Antoni
2014-10-01
Performance monitoring is crucial for well-adapted behavior. Offenders typically have a pervasive repetition of harmful-impulsive behaviors, despite an awareness of the negative consequences of their actions. However, the link between performance monitoring and aggressive behavior in juvenile offenders has not been closely investigated. Event-related brain potentials (ERPs) were used to investigate performance monitoring in juvenile non-psychopathic violent offenders compared with a well-matched control group. Two ERP components associated with error monitoring, error-related negativity (ERN) and error-positivity (Pe), and two components related to inhibitory processing, the stop-N2 and stop-P3 components, were evaluated using a combined flanker-stop-signal task. The results showed that the amplitudes of the ERN, the stop-N2, the stop-P3, and the standard P3 components were clearly reduced in the offenders group. Remarkably, no differences were observed for the Pe. At the behavioral level, slower stop-signal reaction times were identified for offenders, which indicated diminished inhibitory processing. The present results suggest that the monitoring of one's own behavior is affected in juvenile violent offenders. Specifically, we determined that different aspects of executive function were affected in the studied offenders, including error processing (reduced ERN) and response inhibition (reduced N2 and P3). However, error awareness and compensatory post-error adjustment processes (error correction) were unaffected. The current pattern of results highlights the role of performance monitoring in the acquisition and maintenance of externalizing harmful behavior that is frequently observed in juvenile offenders. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is greatly significant to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experiment and simulated flow curves at six hot-working temperatures. Under the second newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
Galaxy And Mass Assembly (GAMA): AUTOZ spectral redshift measurements, confidence and errors
NASA Astrophysics Data System (ADS)
Baldry, I. K.; Alpaslan, M.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Cluver, M. E.; Croom, S. M.; Davies, L. J. M.; Driver, S. P.; Gunawardhana, M. L. P.; Holwerda, B. W.; Hopkins, A. M.; Kelvin, L. S.; Liske, J.; López-Sánchez, Á. R.; Loveday, J.; Norberg, P.; Peacock, J.; Robotham, A. S. G.; Taylor, E. N.
2014-07-01
The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over 230 000 targets using the Anglo-Australian Telescope. To homogenize the redshift measurements and improve the reliability, a fully automatic redshift code was developed (AUTOZ). The measurements were made using a cross-correlation method for both the absorption- and the emission-line spectra. Large deviations in the high-pass-filtered spectra are partially clipped in order to be robust against uncorrected artefacts and to reduce the weight given to single-line matches. A single figure of merit (FOM) was developed that puts all template matches on to a similar confidence scale. The redshift confidence as a function of the FOM was fitted with a tanh function using a maximum likelihood method applied to repeat observations of targets. The method could be adapted to provide robust automatic redshifts for other large galaxy redshift surveys. For the GAMA survey, there was a substantial improvement in the reliability of assigned redshifts and in the lowering of redshift uncertainties with a median velocity uncertainty of 33 km s-1.
A study and simulation of the impact of high-order aberrations to overlay error distribution
NASA Astrophysics Data System (ADS)
Sun, G.; Wang, F.; Zhou, C.
2011-03-01
With reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based overlay (IBO) measurement tools, aberrations become the dominant influence on single machine overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with simulator and compared to experiments. Finally, the drift of every lens distortion that impact to SMO over several days was monitored and matched with the result of measurements.
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appears lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that effect contrast, such as fog, may cause significant errors in speed judgments.
Huang, Cheng-Ya; Lin, Linda L.; Hwang, Ing-Shiou
2017-01-01
The aged brain may not make good use of central resources, so dual task performance may be degraded. From the brain connectome perspective, this study investigated dual task deficits of older adults that lead to task failure of a suprapostural motor task with increasing postural destabilization. Twelve younger (mean age: 25.3 years) and 12 older (mean age: 65.8 years) adults executed a designated force-matching task from a level-surface or a stabilometer board. Force-matching error, stance sway, and event-related potential (ERP) in the preparatory period were measured. The force-matching accuracy and the size of postural sway of the older adults tended to be more vulnerable to stance configuration than that of the young adults, although both groups consistently showed greater attentional investment on the postural task as sway regularity increased in the stabilometer condition. In terms of the synchronization likelihood (SL) of the ERP, both younger and older adults had net increases in the strengths of the functional connectivity in the whole brain and in the fronto-sensorimotor network in the stabilometer condition. Also, the SL in the fronto-sensorimotor network of the older adults was greater than that of the young adults for both stance conditions. However, unlike the young adults, the older adults did not exhibit concurrent deactivation of the functional connectivity of the left temporal-parietal-occipital network for postural-suprapostural task with increasing postural load. In addition, the older adults potentiated functional connectivity of the right prefrontal area to cope with concurrent force-matching with increasing postural load. In conclusion, despite a universal negative effect on brain volume conduction, our preliminary results showed that the older adults were still capable of increasing allocation of neural sources, particularly via compensatory recruitment of the right prefrontal loop, for concurrent force-matching under the challenging postural condition. Nevertheless, dual-task performance of the older adults tended to be more vulnerable to postural load than that of the younger adults, in relation to inferior neural economy or a slow adaptation process to stance destabilization for scant dissociation of control hubs in the temporal-parietal-occipital cortex. PMID:28446874
Morphing Compression Garments for Space Medicine and Extravehicular Activity Using Active Materials.
Holschuh, Bradley T; Newman, Dava J
2016-02-01
Compression garments tend to be difficult to don/doff, due to their intentional function of squeezing the wearer. This is especially true for compression garments used for space medicine and for extravehicular activity (EVA). We present an innovative solution to this problem by integrating shape changing materials-NiTi shape memory alloy (SMA) coil actuators formed into modular, 3D-printed cartridges-into compression garments to produce garments capable of constricting on command. A parameterized, 2-spring analytic counterpressure model based on 12 garment and material inputs was developed to inform garment design. A methodology was developed for producing novel SMA cartridge systems to enable active compression garment construction. Five active compression sleeve prototypes were manufactured and tested: each sleeve was placed on a rigid cylindrical object and counterpressure was measured as a function of spatial location and time before, during, and after the application of a step voltage input. Controllable active counterpressures were measured up to 34.3 kPa, exceeding the requirement for EVA life support (29.6 kPa). Prototypes which incorporated fabrics with linear properties closely matched analytic model predictions (4.1%/-10.5% error in passive/active pressure predictions); prototypes using nonlinear fabrics did not match model predictions (errors >100%). Pressure non-uniformities were observed due to friction and the rigid SMA cartridge structure. To our knowledge this is the first demonstration of controllable compression technology incorporating active materials, a novel contribution to the field of compression garment design. This technology could lead to easy-to-don compression garments with widespread space and terrestrial applications.
Reduction of Orifice-Induced Pressure Errors
NASA Technical Reports Server (NTRS)
Plentovich, Elizabeth B.; Gloss, Blair B.; Eves, John W.; Stack, John P.
1987-01-01
Use of porous-plug orifice reduces or eliminates errors, induced by orifice itself, in measuring static pressure on airfoil surface in wind-tunnel experiments. Piece of sintered metal press-fitted into static-pressure orifice so it matches surface contour of model. Porous material reduces orifice-induced pressure error associated with conventional orifice of same or smaller diameter. Also reduces or eliminates additional errors in pressure measurement caused by orifice imperfections. Provides more accurate measurements in regions with very thin boundary layers.
Henderson, Heather A; Ono, Kim E; McMahon, Camilla M; Schwartz, Caley B; Usher, Lauren V; Mundy, Peter C
2015-02-01
The ability to regulate behaviors and emotions depends in part on the ability to flexibly monitor one's own progress toward a goal. Atypical patterns of response monitoring have been reported in individuals with autism spectrum disorders (ASD). In the current study we examined the error related negativity (ERN), an electrophysiological index of response monitoring, in relation to behavioral, social cognitive, and emotional presentation in higher functioning children (8-16 years) diagnosed with autism (HFA: N = 38) and an age- and IQ-matched sample of children without autism (COM: N = 36). Both HFA and COM participants displayed larger amplitude responses to error compared to correct response trials and these amplitudes did not differ by diagnostic group. For participants with HFA, larger ERN amplitudes were associated with more parent-reported autistic symptoms and more self-reported internalizing problems. However, across the full sample, larger ERN amplitudes were associated with better performance on theory of mind tasks. The results are discussed in terms of the utility of electrophysiological measures for understanding essential moderating processes that contribute to the spectrum of behavioral expression in the development of ASD.
Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun
2016-01-01
Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles. PMID:27782091
Executive function and decision-making in women with fibromyalgia.
Verdejo-García, Antonio; López-Torrecillas, Francisca; Calandre, Elena Pita; Delgado-Rodríguez, Antonia; Bechara, Antoine
2009-02-01
Patients with fibromyalgia (FM) typically report cognitive problems, and they state that these deficits are disturbing in everyday life. Despite these substantial subjective complaints by FM patients, very few studies have addressed objectively the effect of such aversive states on neuropsychological performance. In this study we aimed to examine possible impairment of executive function and decision-making in a sample of 36 women diagnosed with FM and 36 healthy women matched in age, education, and socio-economic status. We contrasted performance of both groups on two measures of executive functioning: the Wisconsin Card Sorting Test (WCST), which assesses cognitive flexibility skills, and the Iowa Gambling Tasks (IGT; original and variant versions), which assess emotion-based decision-making. We also examined the relationship between executive function performance and pain experience, and between executive function and personality traits of novelty-seeking, harm avoidance, reward dependence, and persistence (measured by the Temperament and Character Inventory-Revised). Results showed that on the WCST, FM women showed poorer performance than healthy comparison women on the number of categories and non-perseverative errors, but not on perseverative errors. FM patients also showed altered learning curve in the original IGT (where reward is immediate and punishment is delayed), suggesting compromised emotion-based decision-making; but not in the variant IGT (where punishment is immediate but reward is delayed), suggesting hypersensitivity to reward. Personality variables were very mildly associated with cognitive performance in FM women.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, X; Lin, J; Diwanji, T
2014-06-01
Purpose: Recently, template matching has been shown to be able to track tumor motion on cine-MRI images. However, artifacts such as deformation, rotation, and/or out-of-plane movement could seriously degrade the performance of this technique. In this work, we demonstrate the utility of multiple templates derived from different phases of tumor motion in reducing the negative effects of artifacts and improving the accuracy of template matching methods. Methods: Data from 2 patients with large tumors and significant tumor deformation were analyzed from a group of 12 patients from an earlier study. Cine-MRI (200 frames) imaging was performed while the patients weremore » instructed to breathe normally. Ground truth tumor position was established on each frame manually by a radiation oncologist. Tumor positions were also automatically determined using template matching with either single or multiple (5) templates. The tracking errors, defined as the absolute differences in tumor positions determined by the manual and automated methods, when using either single or multiple templates were compared in both the AP and SI directions, respectively. Results: Using multiple templates reduced the tracking error of template matching. In the SI direction where the tumor movement and deformation were significant, the mean tracking error decreased from 1.94 mm to 0.91 mm (Patient 1) and from 6.61 mm to 2.06 mm (Patient 2). In the AP direction where the tumor movement was small, the reduction of the mean tracking error was significant in Patient 1 (from 3.36 mm to 1.04 mm), but not in Patient 2 ( from 3.86 mm to 3.80 mm). Conclusion: This study shows the effectiveness of using multiple templates in improving the performance of template matching when artifacts like large tumor deformation or out-of-plane motion exists. Accurate tumor tracking capabilities can be integrated with MRI guided radiation therapy systems. This work was supported in part by grants from NIH/NCI CA 124766 and Varian Medical Systems, Palo Alto, CA.« less
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
Delayed match-to-sample early performance decrement in monkeys after $sup 60$Co irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruner, A.; Bogo, V.; Jones, R.K.
1975-07-01
Sixteen monkeys were trained on a delayed match-to-sample task (DMTS) based on shock avoidance and irradiated with single, whole-body exposures of from 396 to 2000 rad $sup 60$Co (midbody dose) at between 163 and 233 rad/min. Pre- to post-irradiation performance changes were assessed using a penalty-scaling measure which differentially weighted incorrect responses, response omissions, and error-omission sequences. Thirteen of the animals displayed early performance decrement, including five incapacitations, at lower doses (less than 1000 rad) than heretofore found effective. This was considered a function of task complexity, measurement sensitivity, and gamma effectiveness. The minimum effective midbody dose for inducing decrementmore » using the DMTS task was estimated to be on the order of 500 rad. The nature of early, transient performance decrement seems to reflect more of an inability to perform than an inability to perform correctly. (auth)« less
NASA Astrophysics Data System (ADS)
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
Decreased Leftward ‘Aiming’ Motor-Intentional Spatial Cuing in Traumatic Brain Injury
Wagner, Daymond; Eslinger, Paul J.; Barrett, A. M.
2016-01-01
Objective To characterize the mediation of attention and action in space following traumatic brain injury (TBI). Method Two exploratory analyses were performed to determine the influence of spatial ‘Aiming’ motor versus spatial ‘Where’ bias on line bisection in TBI participants. The first experiment compared performance according to severity and location of injury in TBI. The second experiment examined bisection performance in a larger TBI sample against a matched control group. In both experiments, participants bisected lines in near and far space using an apparatus that allowed for the fractionation of spatial Aiming versus Where error components. Results In the first experiment, participants with severe injuries tended to incur rightward error when starting from the right in far space, compared with participants with mild injuries. In the second experiment, when performance was examined at the individual level, more participants with TBI tended to incur rightward motor error compared to controls. Conclusions TBI may cause frontal-subcortical cognitive dysfunction and asymmetric motor perseveration, affecting spatial Aiming bias on line bisection. Potential effects on real-world function need further investigation. PMID:27571220
Designing to Control Flight Crew Errors
NASA Technical Reports Server (NTRS)
Schutte, Paul C.; Willshire, Kelli F.
1997-01-01
It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research in why these errors occurred, and many reports state that the design of flight deck can actually dispose humans to err. This research has led to the call for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without constraints of existing designs.) The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, and a cross reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.
Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
The accuracy and precision of digital image correlation (DIC) is a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e. image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique scale-dependent problem, subsets around each node are used for identification. A correlation criteria is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton and Lu and Cary systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow the more accurate resolution of localized deformations. Contrarily, the choice of optimal subset size has been widely studied and a general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under matched deformations with random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel by pixel patterns is ill-conditioned and requires somewhat arbitrary extensions. In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two auto-correlation metrics are used which have generally competitive objectives. Thus, each metric could be viewed as a constraint imposed upon the others, thereby precluding the achievement of their trivial solutions. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimal pattern is shown to outperform randomly generated patterns.
Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro
2016-01-25
We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base stations (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% from conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only 2-dB signal to noise ratio (SNR) penalty.
Patton, James L; Stoykov, Mary Ellen; Kovic, Mark; Mussa-Ivaldi, Ferdinando A
2006-01-01
This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate "adaptive training." Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable "after-effect." A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion--either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.
Constructing Matching Texts in Two Languages: The Application of Propositional Analysis.
ERIC Educational Resources Information Center
Valdes, Guadalupe; And Others
1984-01-01
Discusses how current procedures for selecting/constructing equivalent texts may lead to error because of their specific limitations; proposes the utilization of micro-propositional analysis coupled with word-frequency lists and readability formulas for constructing "matching" texts; presents some procedures which researchers working in…
Extensive TD-DFT Benchmark: Singlet-Excited States of Organic Molecules.
Jacquemin, Denis; Wathelet, Valérie; Perpète, Eric A; Adamo, Carlo
2009-09-08
Extensive Time-Dependent Density Functional Theory (TD-DFT) calculations have been carried out in order to obtain a statistically meaningful analysis of the merits of a large number of functionals. To reach this goal, a very extended set of molecules (∼500 compounds, >700 excited states) covering a broad range of (bio)organic molecules and dyes have been investigated. Likewise, 29 functionals including LDA, GGA, meta-GGA, global hybrids, and long-range-corrected hybrids have been considered. Comparisons with both theoretical references and experimental measurements have been carried out. On average, the functionals providing the best match with reference data are, one the one hand, global hybrids containing between 22% and 25% of exact exchange (X3LYP, B98, PBE0, and mPW1PW91) and, on the other hand, a long-range-corrected hybrid with a less-rapidly increasing HF ratio, namely LC-ωPBE(20). Pure functionals tend to be less consistent, whereas functionals incorporating a larger fraction of exact exchange tend to underestimate significantly the transition energies. For most treated cases, the M05 and CAM-B3LYP schemes deliver fairly small deviations but do not outperform standard hybrids such as X3LYP or PBE0, at least within the vertical approximation. With the optimal functionals, one obtains mean absolute deviations smaller than 0.25 eV, though the errors significantly depend on the subset of molecules or states considered. As an illustration, PBE0 and LC-ωPBE(20) provide a mean absolute error of only 0.14 eV for the 228 states related to neutral organic dyes but are completely off target for cyanine-like derivatives. On the basis of comparisons with theoretical estimates, it also turned out that CC2 and TD-DFT errors are of the same order of magnitude, once the above-mentioned hybrids are selected.
NASA Astrophysics Data System (ADS)
Dwivedi, Prashant Povel; Kumar, Challa Sesha Sai Pavan; Choi, Hee Joo; Cha, Myoungsik
2016-02-01
Random duty-cycle error (RDE) is inherent in the fabrication of ferroelectric quasi-phase-matching (QPM) gratings. Although a small RDE may not affect the nonlinearity of QPM devices, it enhances non-phase-matched parasitic harmonic generations, limiting the device performance in some applications. Recently, we demonstrated a simple method for measuring the RDE in QPM gratings by analyzing the far-field diffraction pattern obtained by uniform illumination (Dwivedi et al. in Opt Express 21:30221-30226, 2013). In the present study, we used a Gaussian beam illumination for the diffraction experiment to measure noise spectra that are less affected by the pedestals of the strong diffraction orders. Our results were compared with our calculations based on a random grating model, demonstrating improved resolution in the RDE estimation.
Error assessment in molecular dynamics trajectories using computed NMR chemical shifts.
Koes, David R; Vries, John K
2017-01-01
Accurate chemical shifts for the atoms in molecular mechanics (MD) trajectories can be obtained from quantum mechanical (QM) calculations that depend solely on the coordinates of the atoms in the localized regions surrounding atoms of interest. If these coordinates are correct and the sample size is adequate, the ensemble average of these chemical shifts should be equal to the chemical shifts obtained from NMR spectroscopy. If this is not the case, the coordinates must be incorrect. We have utilized this fact to quantify the errors associated with the backbone atoms in MD simulations of proteins. A library of regional conformers containing 169,499 members was constructed from 6 model proteins. The chemical shifts associated with the backbone atoms in each of these conformers was obtained from QM calculations using density functional theory at the B3LYP level with a 6-311+G(2d,p) basis set. Chemical shifts were assigned to each backbone atom in each MD simulation frame using a template matching approach. The ensemble average of these chemical shifts was compared to chemical shifts from NMR spectroscopy. A large systematic error was identified that affected the 1 H atoms of the peptide bonds involved in hydrogen bonding with water molecules or peptide backbone atoms. This error was highly sensitive to changes in electrostatic parameters. Smaller errors affecting the 13 C a and 15 N atoms were also detected. We believe these errors could be useful as metrics for comparing the force-fields and parameter sets used in MD simulation because they are directly tied to errors in atomic coordinates.
Teaching Common Errors in Applying a Procedure.
ERIC Educational Resources Information Center
Marcone, Stephen; Reigeluth, Charles M.
1988-01-01
Discusses study that investigated whether or not the teaching of matched examples and nonexamples in the form of common errors could improve student performance in undergraduate music theory courses. Highlights include hypotheses tested, pretests and posttests, and suggestions for further research with different age groups. (19 references)…
Modeling for Military Operational Medicine Scientific and Technical Objectives
2005-09-01
measurements and less error in interpreting the measurements since the sensor units are placed directly under armor ; and (3) new material that matches the...more accurate measurements and less error in interpreting the measurements, since the sensor units are placed directly under armor ; and (3) new
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Author Self-disclosure Compared with Pharmaceutical Company Reporting of Physician Payments.
Alhamoud, Hani A; Dudum, Ramzi; Young, Heather A; Choi, Brian G
2016-01-01
Industry manufacturers are required by the Sunshine Act to disclose payments to physicians. These data recently became publicly available, but some manufacturers prereleased their data since 2009. We tested the hypotheses that there would be discrepancies between manufacturers' and physicians' disclosures. The financial disclosures by authors of all 39 American College of Cardiology and American Heart Association guidelines between 2009 and 2012 were matched to the public disclosures of 15 pharmaceutical companies during that same period. Duplicate authors across guidelines were assessed independently. Per the guidelines, payments <$10,000 are modest and ≥$10,000 are significant. Agreement was determined using a κ statistic; Fisher's exact and Mann-Whitney tests were used to detect statistical significance. The overall agreement between author and company disclosure was poor (κ = 0.238). There was a significant difference in error rates of disclosure among companies and authors (P = .019). Of disclosures by authors, companies failed to match them with an error rate of 71.6%. Of disclosures by companies, authors failed to match them with an error rate of 54.7%. Our analysis shows a concerning level of disagreement between guideline authors' and pharmaceutical companies' disclosures. Without ability for physicians to challenge reports, it is unclear whether these discrepancies reflect undisclosed relationships with industry or errors in reporting, and caution should be advised in interpretation of data from the Sunshine Act. Copyright © 2016 Elsevier Inc. All rights reserved.
Improving posture-motor dual-task with a supraposture-focus strategy in young and elderly adults
Yu, Shu-Han
2017-01-01
In a postural-suprapostural task, appropriate prioritization is necessary to achieve task goals and maintain postural stability. A “posture-first” principle is typically favored by elderly people in order to secure stance stability, but this comes at the cost of reduced suprapostural performance. Using a postural-suprapostural task with a motor suprapostural goal, this study investigated differences between young and older adults in dual-task cost across varying task prioritization paradigms. Eighteen healthy young (mean age: 24.8 ± 5.2 years) and 18 older (mean age: 68.8 ± 3.7 years) adults executed a designated force-matching task from a stabilometer board using either a stabilometer stance (posture-focus strategy) or force-matching (supraposture-focus strategy) as the primary task. The dual-task effect (DTE: % change in dual-task condition; positive value: dual-task benefit, negative value: dual-task cost) of force-matching error and reaction time (RT), posture error, and approximate entropy (ApEn) of stabilometer movement were measured. When using the supraposture-focus strategy, young adults exhibited larger DTE values in each behavioral parameter than when using the posture-focus strategy. The older adults using the supraposture-focus strategy also attained larger DTE values for posture error, stabilometer movement ApEn, and force-matching error than when using the posture-focus strategy. These results suggest that the supraposture-focus strategy exerted an increased dual-task benefit for posture-motor dual-tasking in both healthy young and elderly adults. The present findings imply that the older adults should make use of the supraposture-focus strategy for fall prevention during dual-task execution. PMID:28151943
Improving posture-motor dual-task with a supraposture-focus strategy in young and elderly adults.
Yu, Shu-Han; Huang, Cheng-Ya
2017-01-01
In a postural-suprapostural task, appropriate prioritization is necessary to achieve task goals and maintain postural stability. A "posture-first" principle is typically favored by elderly people in order to secure stance stability, but this comes at the cost of reduced suprapostural performance. Using a postural-suprapostural task with a motor suprapostural goal, this study investigated differences between young and older adults in dual-task cost across varying task prioritization paradigms. Eighteen healthy young (mean age: 24.8 ± 5.2 years) and 18 older (mean age: 68.8 ± 3.7 years) adults executed a designated force-matching task from a stabilometer board using either a stabilometer stance (posture-focus strategy) or force-matching (supraposture-focus strategy) as the primary task. The dual-task effect (DTE: % change in dual-task condition; positive value: dual-task benefit, negative value: dual-task cost) of force-matching error and reaction time (RT), posture error, and approximate entropy (ApEn) of stabilometer movement were measured. When using the supraposture-focus strategy, young adults exhibited larger DTE values in each behavioral parameter than when using the posture-focus strategy. The older adults using the supraposture-focus strategy also attained larger DTE values for posture error, stabilometer movement ApEn, and force-matching error than when using the posture-focus strategy. These results suggest that the supraposture-focus strategy exerted an increased dual-task benefit for posture-motor dual-tasking in both healthy young and elderly adults. The present findings imply that the older adults should make use of the supraposture-focus strategy for fall prevention during dual-task execution.
Swick, Diane; Honzel, Nikki; Larsen, Jary; Ashley, Victoria; Justus, Timothy
2012-09-01
Combat veterans with post-traumatic stress disorder (PTSD) can show impairments in executive control and increases in impulsivity. The current study examined the effects of PTSD on motor response inhibition, a key cognitive control function. A Go/NoGo task was administered to veterans with a diagnosis of PTSD based on semi-structured clinical interview using DSM-IV criteria (n = 40) and age-matched control veterans (n = 33). Participants also completed questionnaires to assess self-reported levels of PTSD and depressive symptoms. Performance measures from the patients (error rates and reaction times) were compared to those from controls. PTSD patients showed a significant deficit in response inhibition, committing more errors on NoGo trials than controls. Higher levels of PTSD and depressive symptoms were associated with higher error rates. Of the three symptom clusters, re-experiencing was the strongest predictor of performance. Because the co-morbidity of mild traumatic brain injury (mTBI) and PTSD was high in this population, secondary analyses compared veterans with PTSD+mTBI (n = 30) to veterans with PTSD only (n = 10). Although preliminary, results indicated the two patient groups did not differ on any measure (p > .88). Since cognitive impairments could hinder the effectiveness of standard PTSD therapies, incorporating treatments that strengthen executive functions might be considered in the future. (JINS, 2012, 18, 1-10).
Predicting crystalline lens fall caused by accommodation from changes in wavefront error
He, Lin; Applegate, Raymond A.
2011-01-01
PURPOSE To illustrate and develop a method for estimating crystalline lens decentration as a function of accommodative response using changes in wavefront error and show the method and limitations using previously published data (2004) from 2 iridectomized monkey eyes so that clinicians understand how spherical aberration can induce coma, in particular in intraocular lens surgery. SETTINGS College of Optometry, University of Houston, Houston, USA. DESIGN Evaluation of diagnostic test or technology. METHODS Lens decentration was estimated by displacing downward the wavefront error of the lens with respect to the limiting aperture (7.0 mm) and ocular first surface wavefront error for each accommodative response (0.00 to 11.00 diopters) until measured values of vertical coma matched previously published experimental data (2007). Lens decentration was also calculated using an approximation formula that only included spherical aberration and vertical coma. RESULTS The change in calculated vertical coma was consistent with downward lens decentration. Calculated downward lens decentration peaked at approximately 0.48 mm of vertical decentration in the right eye and approximately 0.31 mm of decentration in the left eye using all Zernike modes through the 7th radial order. Calculated lens decentration using only coma and spherical aberration formulas was peaked at approximately 0.45 mm in the right eye and approximately 0.23 mm in the left eye. CONCLUSIONS Lens fall as a function of accommodation was quantified noninvasively using changes in vertical coma driven principally by the accommodation-induced changes in spherical aberration. The newly developed method was valid for a large pupil only. PMID:21700108
Reducing patient identification errors related to glucose point-of-care testing.
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
Reducing patient identification errors related to glucose point-of-care testing
Alreja, Gaurav; Setia, Namrata; Nichols, James; Pantanowitz, Liron
2011-01-01
Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT. PMID:21633490
Murphy, Alistair P; Duffield, Rob; Kellett, Aaron; Reid, Machar
2016-01-01
High-performance tennis environments aim to prepare athletes for competitive demands through simulated-match scenarios and drills. With a dearth of direct comparisons between training and tournament demands, the current investigation compared the perceptual and technical characteristics of training drills, simulated match play, and tournament matches. Data were collected from 18 high-performance junior tennis players (gender: 10 male, 8 female; age 16 ± 1.1 y) during 6 ± 2 drill-based training sessions, 5 ± 2 simulated match-play sessions, and 5 ± 3 tournament matches from each participant. Tournament matches were further distinguished by win or loss and against seeded or nonseeded opponents. Notational analysis of stroke and error rates, winners, and serves, along with rating of perceived physical exertion (RPE) and mental exertion was measured postsession. Repeated-measures analyses of variance and effect-size analysis revealed that training sessions were significantly shorter in duration than tournament matches (P < .05, d = 1.18). RPEs during training and simulated match-play sessions were lower than in tournaments (P > .05; d = 1.26, d = 1.05, respectively). Mental exertion in training was lower than in both simulated match play and tournaments (P > .05; d = 1.10, d = 0.86, respectively). Stroke rates during tournaments exceeded those observed in training (P < .05, d = 3.41) and simulated-match-play (P < .05, d = 1.22) sessions. Furthermore, the serve was used more during tournaments than simulated match play (P < .05, d = 4.28), while errors and winners were similar independent of setting (P > .05, d < 0.80). Training in the form of drills or simulated match play appeared to inadequately replicate tournament demands in this cohort of players. Coaches should be mindful of match demands to best prescribe sessions of relevant duration, as well as internal (RPE) and technical (stroke rate) load, to aid tournament preparation.
EEG Frequency Changes Prior to Making Errors in an Easy Stroop Task
Atchley, Rachel; Klee, Daniel; Oken, Barry
2017-01-01
Background: Mind-wandering is a form of off-task attention that has been associated with negative affect and rumination. The goal of this study was to assess potential electroencephalographic markers of task-unrelated thought, or mind-wandering state, as related to error rates during a specialized cognitive task. We used EEG to record frontal frequency band activity while participants completed a Stroop task that was modified to induce boredom, task-unrelated thought, and therefore mind-wandering. Methods: A convenience sample of 27 older adults (50–80 years) completed a computerized Stroop matching task. Half of the Stroop trials were congruent (word/color match), and the other half were incongruent (mismatched). Behavioral data and EEG recordings were assessed. EEG analysis focused on the 1-s epochs prior to stimulus presentation in order to compare trials followed by correct versus incorrect responses. Results: Participants made errors on 9% of incongruent trials. There were no errors on congruent trials. There was a decrease in alpha and theta band activity during the epochs followed by error responses. Conclusion: Although replication of these results is necessary, these findings suggest that potential mind-wandering, as evidenced by errors, can be characterized by a decrease in alpha and theta activity compared to on-task, accurate performance periods. PMID:29163101
Gong, Feilong; Li, Peng; Li, Bin; Zhang, Shizhen; Zhang, Xinjie; Yang, Sen; Liu, Hongbin; Wang, Wei
2018-02-01
OBJECTIVE Anterior capsulotomy (AC) is sometimes used as a last resort for treatment-refractory obsessive-compulsive disorder (OCD). Previous studies assessing neuropsychological outcomes in patients with OCD have identified several forms of cognitive dysfunction that are associated with the disease, but few have focused on changes in cognitive function in OCD patients who have undergone surgery. In the present study, the authors investigated the effects of AC on the cognitive function of patients with treatment-refractory OCD. METHODS The authors selected 14 patients with treatment-refractory OCD who had undergone bilateral AC between 2007 and 2013, 14 nonsurgically treated OCD patients, and 14 healthy control subjects for this study. The 3 groups were matched for sex, age, and education. Several neuropsychological tests, including Similarities and Block Design, which are subsets of the Wechsler Abbreviated Scale of Intelligence; Immediate and Delayed Logical Memory and Immediate and Delayed Visual Reproduction, which are subsets of the Wechsler Memory Scale-Revised; and Corrects, Categories, Perseverative Errors, Nonperseverative Errors, and Errors, subtests of the Wisconsin Card Sorting Test, were conducted in all 42 subjects at baseline and after AC, after nonsurgical treatment, or at 6-month intervals, as appropriate. The Yale-Brown Obsessive-Compulsive Scale (Y-BOCS) was used to measure OCD symptoms in all 28 OCD patients. RESULTS The Y-BOCS scores decreased significantly in both OCD groups during the 12-month follow-up period. Surgical patients showed higher levels of improvement in verbal memory, visual memory, visuospatial skills, and executive function than the nonsurgically treated OCD patients. CONCLUSIONS The findings of this study suggest that AC not only reduces OCD symptoms but also attenuates moderate cognitive deficits.
Individually identifiable body odors are produced by the gorilla and discriminated by humans.
Hepper, Peter G; Wells, Deborah L
2010-05-01
Many species produce odor cues that enable them to be identified individually, as well as providing other socially relevant information. Study of the role of odor cues in the social behavior of great apes is noticeable by its absence. Olfaction has been viewed as having little role in guiding behavior in these species. This study examined whether Western lowland gorillas produce an individually identifiable odor. Odor samples were obtained by placing cloths in the gorilla's den. A delayed matching to sample task was used with human participants (n = 100) to see if they were able to correctly match a target odor sample to a choice of either: 2 odors (the target sample and another, Experiment 1) and 6 odors (the target sample and 5 others, Experiment 2). Participants were correctly able to identify the target odor when given either 2 or 6 matches. Subjects made fewest errors when matching the odor of the silverback, whereas matching the odors of the young gorillas produced most errors. The results indicate that gorillas do produce individually identifiable body odors and introduce the possibility that odor cues may play a role in gorilla social behavior.
Security and matching of partial fingerprint recognition systems
NASA Astrophysics Data System (ADS)
Jea, Tsai-Yang; Chavan, Viraj S.; Govindaraju, Venu; Schneider, John K.
2004-08-01
Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area have made this problem important from a commercial perspective, there is also considerable interest on the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using singular ridge structures-based alignment techniques fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation, is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on one of FVC2002"s DB1 database11. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
Matching factorization theorems with an inverse-error weighting
NASA Astrophysics Data System (ADS)
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea
2018-06-01
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
Matching factorization theorems with an inverse-error weighting
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; ...
2018-04-03
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections tomore » the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H 0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.« less
Matching factorization theorems with an inverse-error weighting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe
We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections tomore » the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H 0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.« less
Golz, Jürgen; MacLeod, Donald I A
2003-05-01
We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
Illusory conjunctions reflect the time course of the attentional blink.
Botella, Juan; Privado, Jesús; de Liaño, Beatriz Gil-Gómez; Suero, Manuel
2011-07-01
Illusory conjunctions in the time domain are binding errors for features from stimuli presented sequentially but in the same spatial position. A similar experimental paradigm is employed for the attentional blink (AB), an impairment of performance for the second of two targets when it is presented 200-500 msec after the first target. The analysis of errors along the time course of the AB allows the testing of models of illusory conjunctions. In an experiment, observers identified one (control condition) or two (experimental condition) letters in a specified color, so that illusory conjunctions in each response could be linked to specific positions in the series. Two items in the target colors (red and white, embedded in distractors of different colors) were employed in four conditions defined according to whether both targets were in the same or different colors. Besides the U-shaped function for hits, the errors were analyzed by calculating several response parameters reflecting characteristics such as the average position of the responses or the attentional suppression during the blink. The several error parameters cluster in two time courses, as would be expected from prevailing models of the AB. Furthermore, the results match the predictions from Botella, Barriopedro, and Suero's (Journal of Experimental Psychology: Human Perception and Performance, 27, 1452-1467, 2001) model for illusory conjunctions.
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data in two groups, then these set of components need not have the most discriminatory power. We measured the distance between two such populations using Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method, which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the one-leave-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA, than with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
Walton, Courtney C; Shine, James M; Mowszowski, Loren; Gilat, Moran; Hall, Julie M; O'Callaghan, Claire; Naismith, Sharon L; Lewis, Simon J G
2015-05-01
Freezing of gait is a frequent and disabling symptom experienced by many patients with Parkinson's disease. A number of executive deficits have been shown to be associated with the phenomenon suggesting a common underlying pathophysiology, which as of yet remains unclear. Neuroimaging studies have also implicated the role of the cognitive control network in patients with freezing. To explore this concept, the current study examined error-monitoring as a measure of cognitive control. Thirty-four patients with and 38 without freezing of gait, who were otherwise well matched on disease severity, completed a colour-word interference task that allowed the specific assessment of error monitoring during conflict. Whilst both groups performed colour-naming and word-reading tasks equally well, those patients with freezing showed a pattern between conditions whereby they were better able to monitor performance and self-correct errors in the pure inhibition task but not after a switching rule was introduced. The novel results shown here provide insight into possible pathophysiological mechanisms involved in cognitive load and error monitoring in patients with freezing of gait. These results provide further evidence for the role of functional frontostriatal circuitry impairments in patients with freezing of gait and have implications for future studies and possible therapeutic interventions.
Inducing Speech Errors in Dysarthria Using Tongue Twisters
ERIC Educational Resources Information Center
Kember, Heather; Connaghan, Kathryn; Patel, Rupal
2017-01-01
Although tongue twisters have been widely use to study speech production in healthy speakers, few studies have employed this methodology for individuals with speech impairment. The present study compared tongue twister errors produced by adults with dysarthria and age-matched healthy controls. Eight speakers (four female, four male; mean age =…
Action Monitoring and Perfectionism in Anorexia Nervosa
ERIC Educational Resources Information Center
Pieters, Guido L. M.; de Bruijn, Ellen R. A.; Maas, Yvonne; Hulstijn, Wouter; Vandereycken, Walter; Peuskens, Joseph; Sabbe, Bernard G.
2007-01-01
To study action monitoring in anorexia nervosa, behavioral and EEG measures were obtained in underweight anorexia nervosa patients (n=17) and matched healthy controls (n=19) while performing a speeded choice-reaction task. Our main measures of interest were questionnaire outcomes, reaction times, error rates, and the error-related negativity ERP…
Counting OCR errors in typeset text
NASA Astrophysics Data System (ADS)
Sandberg, Jonathan S.
1995-03-01
Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures effect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected by the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the number of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
Sleep deprivation impairs inhibitory control during wakefulness in adult sleepwalkers.
Labelle, Marc-Antoine; Dang-Vu, Thien Thanh; Petit, Dominique; Desautels, Alex; Montplaisir, Jacques; Zadra, Antonio
2015-12-01
Sleepwalkers often complain of excessive daytime somnolence. Although excessive daytime somnolence has been associated with cognitive impairment in several sleep disorders, very few data exist concerning sleepwalking. This study aimed to investigate daytime cognitive functioning in adults diagnosed with idiopathic sleepwalking. Fifteen sleepwalkers and 15 matched controls were administered the Continuous Performance Test and Stroop Colour-Word Test in the morning after an overnight polysomnographic assessment. Participants were tested a week later on the same neuropsychological battery, but after 25 h of sleep deprivation, a procedure known to precipitate sleepwalking episodes during subsequent recovery sleep. There were no significant differences between sleepwalkers and controls on any of the cognitive tests administered under normal waking conditions. Testing following sleep deprivation revealed significant impairment in sleepwalkers' executive functions related to inhibitory control, as they made more errors than controls on the Stroop Colour-Word Test and more commission errors on the Continuous Performance Test. Sleepwalkers' scores on measures of executive functions were not associated with self-reported sleepiness or indices of sleep fragmentation from baseline polysomnographic recordings. The results support the idea that sleepwalking involves daytime consequences and suggest that these may also include cognitive impairments in the form of disrupted inhibitory control following sleep deprivation. These disruptions may represent a daytime expression of sleepwalking's pathophysiological mechanisms. © 2015 European Sleep Research Society.
Study of chromatic adaptation using memory color matches, Part I: neutral illuminants.
Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter
2017-04-03
Twelve corresponding color data sets have been obtained using the long-term memory colors of familiar objects as target stimuli. Data were collected for familiar objects with neutral, red, yellow, green and blue hues under 4 approximately neutral illumination conditions on or near the blackbody locus. The advantages of the memory color matching method are discussed in light of other more traditional asymmetric matching techniques. Results were compared to eight corresponding color data sets available in literature. The corresponding color data was used to test several linear (von Kries, RLAB, etc.) and nonlinear (Hunt & Nayatani) chromatic adaptation transforms (CAT). It was found that a simple two-step von Kries, whereby the degree of adaptation D is optimized to minimize the DEu'v' prediction errors, outperformed all other tested models for both memory color and literature corresponding color sets, whereby prediction errors were lower for the memory color sets. The predictive errors were substantially smaller than the standard uncertainty on the average observer and were comparable to what are considered just-noticeable-differences in the CIE u'v' chromaticity diagram, supporting the use of memory color based internal references to study chromatic adaptation mechanisms.
Kager, Simone; Budhota, Aamani; Deshmukh, Vishwanath A.; Kuah, Christopher W. K.; Yam, Lester H. L.; Xiang, Liming; Chua, Karen S. G.; Masia, Lorenzo; Campolo, Domenico
2017-01-01
Proprioception is a critical component for motor functions and directly affects motor learning after neurological injuries. Conventional methods for its assessment are generally ordinal in nature and hence lack sensitivity. Robotic devices designed to promote sensorimotor learning can potentially provide quantitative precise, accurate, and reliable assessments of sensory impairments. In this paper, we investigate the clinical applicability and validity of using a planar 2 degrees of freedom robot to quantitatively assess proprioceptive deficits in post-stroke participants. Nine stroke survivors and nine healthy subjects participated in the study. Participants’ hand was passively moved to the target position guided by the H-Man robot (Criterion movement) and were asked to indicate during a second passive movement towards the same target (Matching movement) when they felt that they matched the target position. The assessment was carried out on a planar surface for movements in the forward and oblique directions in the contralateral and ipsilateral sides of the tested arm. The matching performance was evaluated in terms of error magnitude (absolute and signed) and its variability. Stroke patients showed higher variability in the estimation of the target position compared to the healthy participants. Further, an effect of target was found, with lower absolute errors in the contralateral side. Pairwise comparison between individual stroke participant and control participants showed significant proprioceptive deficits in two patients. The proposed assessment of passive joint position sense was inherently simple and all participants, regardless of motor impairment level, could complete it in less than 10 minutes. Therefore, the method can potentially be carried out to detect changes in proprioceptive deficits in clinical settings. PMID:29161264
Performance on a strategy set shifting task in rats following adult or adolescent cocaine exposure
Kantak, Kathleen M.; Barlow, Nicole; Tassin, David H.; Brisotti, Madeline F.; Jordan, Chloe J
2014-01-01
Rationale Neuropsychological testing is widespread in adult cocaine abusers, but lacking in teens. Animal models may provide insight into age-related neuropsychological consequences of cocaine exposure. Objectives Determine whether developmental plasticity protects or hinders behavioral flexibility after cocaine exposure in adolescent vs. adult rats. Methods Using a yoked-triad design, one rat controlled cocaine delivery and the other two passively received cocaine or saline. Rats controlling cocaine delivery (1.0 mg/kg) self-administered for 18 sessions (starting P37 or P77), followed by 18 drug-free days. Rats next were tested in a strategy set shifting task, lasting 11–13 sessions. Results Cocaine self-administration did not differ between age groups. During initial set formation, adolescent-onset groups required more trials to reach criterion and made more errors than adult-onset groups. During the set shift phase, rats with adult-onset cocaine self-administration experience had higher proportions of correct trials and fewer perseverative + regressive errors than age-matched yoked-controls or rats with adolescent-onset cocaine self-administration experience. During reversal learning, rats with adult-onset cocaine experience (self-administered or passive) required fewer trials to reach criterion and the self-administering rats made fewer perseverative + regressive errors than yoked-saline rats. Rats receiving adolescent-onset yoked-cocaine had more trial omissions and longer lever press reaction times than age-matched rats self-administering cocaine or receiving yoked-saline. Conclusions Prior cocaine self-administration may impair memory to reduce proactive interference during set shifting and reversal learning in adult-onset but not adolescent-onset rats (developmental plasticity protective). Passive cocaine may disrupt aspects of executive function in adolescent-onset but not adult-onset rats (developmental plasticity hinders). PMID:24800898
Systematic changes in position sense accompany normal aging across adulthood.
Herter, Troy M; Scott, Stephen H; Dukelow, Sean P
2014-03-25
Development of clinical neurological assessments aimed at separating normal from abnormal capabilities requires a comprehensive understanding of how basic neurological functions change (or do not change) with increasing age across adulthood. In the case of proprioception, the research literature has failed to conclusively determine whether or not position sense in the upper limb deteriorates in elderly individuals. The present study was conducted a) to quantify whether upper limb position sense deteriorates with increasing age, and b) to generate a set of normative data that can be used for future comparisons with clinical populations. We examined position sense in 209 healthy males and females between the ages of 18 and 90 using a robotic arm position-matching task that is both objective and reliable. In this task, the robot moved an arm to one of nine positions and subjects attempted to mirror-match that position with the opposite limb. Measures of position sense were recorded by the robotic apparatus in hand-and joint-based coordinates, and linear regressions were used to quantify age-related changes and percentile boundaries of normal behaviour. For clinical comparisons, we also examined influences of sex (male versus female) and test-hand (dominant versus non-dominant) on all measures of position sense. Analyses of hand-based parameters identified several measures of position sense (Variability, Shift, Spatial Contraction, Absolute Error) with significant effects of age, sex, and test-hand. Joint-based parameters at the shoulder (Absolute Error) and elbow (Variability, Shift, Absolute Error) also exhibited significant effects of age and test-hand. The present study provides strong evidence that several measures of upper extremity position sense exhibit declines with age. Furthermore, this data provides a basis for quantifying when changes in position sense are related to normal aging or alternatively, pathology.
Passport Officers’ Errors in Face Matching
White, David; Kemp, Richard I.; Jenkins, Rob; Matheson, Michael; Burton, A. Mike
2014-01-01
Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of ‘fraudulent’ photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately – though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection. PMID:25133682
Passport officers' errors in face matching.
White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike
2014-01-01
Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.
Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network
NASA Astrophysics Data System (ADS)
Sang, Nong; Zhang, Tianxu
1997-12-01
Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method to perform point patten relaxation matching invariant to rotations and scale changes and the method to perform this matching by the Hopfield neural network. In addition, we show that the method presented can be tolerant to small random error.
Hybrid Correlation Algorithms. A Bridge Between Feature Matching and Image Correlation,
1979-11-01
spa- tially into groups of pixels. The intensity level preprocessing is designed to compensate for any biases or gain changes in the system ; whereas...number of error sources that affect the performance of the system . It would be desirable to lump these errors into ge- neric categories in discussing... system performance rather than treat- ing each error source separately. Such a generic categorization should possess the following properties: 1. The
VizieR Online Data Catalog: Vela Junior (RX J0852.0-4622) HESS image (HESS+, 2018)
NASA Astrophysics Data System (ADS)
H. E. S. S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Andersson, T.; Anguener, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernloehr, K.; Blackwell, R.; Boettcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Buechele, M.; Bulik, T.; Capasso, M.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Chretien, M.; Coffaro, M.; Colafrancesco, S.; Cologna, G.; Condon, B.; Conrad, J.; Cui, Y.; Davids, I. D.; Decock, J.; Degrange, B.; Deil, C.; Devin, J.; Dewilt, P.; Dirson, L.; Djannati-Atai, A.; Domainko, W.; Donath, A.; Drury, L. O'c.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Foerster, A.; Funk, S.; Fuessling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jogler, T.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzynski, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khelifi, B.; Kieffer, M.; King, J.; Klepser, S.; Klochkov, D.; Kluzniak, W.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krueger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lefranc, V.; Lemiere, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; Lopez-Coto, R.; Lypova, I.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Mora, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Niederwanger, F.; Niemiec J.; Oakes, L.; O'Brien, P.; Odaka, H.; Oettl, S.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poon, H.; Prokhorov, D.; Prokoph, H.; Puehlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reimer, A.; Reimer, O.; Renaud, M.; de Los Reyes, R.; Richter, S.; Rieger, F.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schuessler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, L.; Steenkamp, R.; Stegmann, C.; Stycz, K.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der, Walt D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Voelk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Woernlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Ziegler, A.; Zywucka, N.
2018-03-01
skymap.fit: H.E.S.S. excess skymap in FITS format of the region comprising Vela Junior and its surroundings. The excess map has been corrected for the gradient of exposure and smoothed with a Gaussian function of width 0.08° to match the analysis point spread function, matching the procedure applied to derive the maps in Fig. 1. sp_stat.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent statistical uncertainties at 1 sigma confidence level. The covariance matrix of the fit is also included in the format: c11 c12 c_13 c21 c22 c_23 c31 c32 c_33 where the subindices represent the following parameters of the power-law with exponential cut-off (ECPL) formula in Tab. 2: 1: flux normalization (Phi0) 2: spectral index (Gamma) 3: inverse of the cutoff energy (lambda=1/Ecut) The units for the covariance matrix are the same as for the fit parameters. Notice that, while the fit parameters section of the file shows E_cut as parameter, the fit was done in lambda=1/Ecut; hence the covariance matrix shows the values for lambda in TeV-1. sp_syst.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent systematic uncertainties at 1 sigma confidence level. The integral fluxes for several energy ranges are also included. (4 data files).
Sexing California Clapper Rails using morphological measurements
Overton, Cory T.; Casazza, Michael L.; Takekawa, John Y.; Rohmer, Tobias M.
2009-01-01
California Clapper Rails (Rallus longirostris obsoletus) have monomorphic plumage, a trait that makes identification of sex difficult without extensive behavioral observation or genetic testing. Using 31 Clapper Rails (22 females, 9 males), caught in south San Francisco Bay, CA, and using easily measurable morphological characteristics, we developed a discriminant function to distinguish sex. We then validated this function on 33 additional rails. Seven morphological measurements were considered, resulting in three which were selected in the discriminate function: culmen length, tarsometatarsus length, and flat wing length. We had no classification errors for the development or testing datasets either with resubstitution or cross-validation procedures. Male California Clapper Rails were 6-22% larger than females for individual morphological traits, and the largest difference was in body mass. Variables in our discriminant function closely match variables developed for sexing Clapper Rails of Gulf Coast populations. However, a universal discriminant function to sex all Clapper Rail subspecies is not likely because of large and inconsistent differences in morphological traits among subspecies.
Signal-Detection Analyses of Conditional Discrimination and Delayed Matching-to-Sample Performance
ERIC Educational Resources Information Center
Alsop, Brent
2004-01-01
Quantitative analyses of stimulus control and reinforcer control in conditional discriminations and delayed matching-to-sample procedures often encounter a problem; it is not clear how to analyze data when subjects have not made errors. The present article examines two common methods for overcoming this problem. Monte Carlo simulations of…
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
Just, Beth Haenke; Marc, David; Munns, Megan; Sandefer, Ryan
2016-01-01
Patient identification matching problems are a major contributor to data integrity issues within electronic health records. These issues impede the improvement of healthcare quality through health information exchange and care coordination, and contribute to deaths resulting from medical errors. Despite best practices in the area of patient access and medical record management to avoid duplicating patient records, duplicate records continue to be a significant problem in healthcare. This study examined the underlying causes of duplicate records using a multisite data set of 398,939 patient records with confirmed duplicates and analyzed multiple reasons for data discrepancies between those record matches. The field that had the greatest proportion of mismatches (nondefault values) was the middle name, accounting for 58.30 percent of mismatches. The Social Security number was the second most frequent mismatch, occurring in 53.54 percent of the duplicate pairs. The majority of the mismatches in the name fields were the result of misspellings (53.14 percent in first name and 33.62 percent in last name) or swapped last name/first name, first name/middle name, or last name/middle name pairs. The use of more sophisticated technologies is critical to improving patient matching. However, no amount of advanced technology or increased data capture will completely eliminate human errors. Thus, the establishment of policies and procedures (such as standard naming conventions or search routines) for front-end and back-end staff to follow is foundational for the overall data integrity process. Training staff on standard policies and procedures will result in fewer duplicates created on the front end and more accurate duplicate record matching and merging on the back end. Furthermore, monitoring, analyzing trends, and identifying errors that occur are proactive ways to identify data integrity issues. PMID:27134610
A graph theoretic approach to scene matching
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1991-01-01
The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit or a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of sementation errors.
Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors
ERIC Educational Resources Information Center
Mitchell, Colter
2010-01-01
Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…
Phonological Spelling and Reading Deficits in Children with Spelling Disabilities
ERIC Educational Resources Information Center
Friend, Angela; Olson, Richard K.
2008-01-01
Spelling errors in the Wide Range Achievement Test were analyzed for 77 pairs of children, each of which included one older child with spelling disability (SD) and one spelling-level-matched younger child with normal spelling ability from the Colorado Learning Disabilities Research Center database. Spelling error analysis consisted of a percent…
Sentence Recall by Children With SLI Across Two Nonmainstream Dialects of English
McDonald, Janet L.; Seidel, Christy M.; Hegarty, Michael
2016-01-01
Purpose The inability to accurately recall sentences has proven to be a clinical marker of specific language impairment (SLI); this task yields moderate-to-high levels of sensitivity and specificity. However, it is not yet known if these results hold for speakers of dialects whose nonmainstream grammatical productions overlap with those that are produced at high rates by children with SLI. Method Using matched groups of 70 African American English speakers and 36 Southern White English speakers and dialect-strategic scoring, we examined children's sentence recall abilities as a function of their dialect and clinical status (SLI vs. typically developing [TD]). Results For both dialects, the SLI group earned lower sentence recall scores than the TD group with sensitivity and specificity values ranging from .80 to .94, depending on the analysis. Children with SLI, as compared with TD controls, manifested lower levels of verbatim recall, more ungrammatical recalls when the recall was not exact, and higher levels of error on targeted functional categories, especially those marking tense. Conclusion When matched groups are examined and dialect-strategic scoring is used, sentence recall yields moderate-to-high levels of diagnostic accuracy to identify SLI within speakers of nonmainstream dialects of English. PMID:26501934
de Jonge, Maretha; Kemner, Chantal; Naber, Fabienne; van Engeland, Herman
2009-04-01
Superior performance on block design tasks is reported in autistic individuals, although it is not consistently found in high-functioning individuals or individuals with Asperger Syndrome. It is assumed to reflect weak central coherence: an underlying cognitive deficit, which might also be part of the genetic makeup of the disorder. We assessed block design reconstruction skills in high-functioning individuals with autism spectrum disorders (ASD) from multi-incidence families and in their parents. Performance was compared to relevant matched control groups. We used a task that was assumed to be highly sensitive to subtle performance differences. We did not find individuals with ASD to be significantly faster on this task than the matched control group, not even when the difference between reconstruction time of segmented and pre-segmented designs was compared. However, we found individuals with ASD to make fewer errors during the process of reconstruction which might indicate some dexterity in mental segmentation. However, parents of individuals with ASD did not perform better on the task than control parents. Therefore, based on our data, we conclude that mental segmentation ability as measured with a block design reconstruction task is not a neurocognitive marker or endophenotype useful in genetic studies.
Nikdel, Ali; Braatz, Richard D; Budman, Hector M
2018-05-01
Dynamic flux balance analysis (DFBA) has become an instrumental modeling tool for describing the dynamic behavior of bioprocesses. DFBA involves the maximization of a biologically meaningful objective subject to kinetic constraints on the rate of consumption/production of metabolites. In this paper, we propose a systematic data-based approach for finding both the biological objective function and a minimum set of active constraints necessary for matching the model predictions to the experimental data. The proposed algorithm accounts for the errors in the experiments and eliminates the need for ad hoc choices of objective function and constraints as done in previous studies. The method is illustrated for two cases: (1) for in silico (simulated) data generated by a mathematical model for Escherichia coli and (2) for actual experimental data collected from the batch fermentation of Bordetella Pertussis (whooping cough).
Design of pulse waveform for waveform division multiple access UWB wireless communication system.
Yin, Zhendong; Wang, Zhirui; Liu, Xiaohui; Wu, Zhilu
2014-01-01
A new multiple access scheme, Waveform Division Multiple Access (WDMA) based on the orthogonal wavelet function, is presented. After studying the correlation properties of different categories of single wavelet functions, the one with the best correlation property will be chosen as the foundation for combined waveform. In the communication system, each user is assigned to different combined orthogonal waveform. Demonstrated by simulation, combined waveform is more suitable than single wavelet function to be a communication medium in WDMA system. Due to the excellent orthogonality, the bit error rate (BER) of multiuser with combined waveforms is so close to that of single user in a synchronous system. That is to say, the multiple access interference (MAI) is almost eliminated. Furthermore, even in an asynchronous system without multiuser detection after matched filters, the result is still pretty ideal and satisfactory by using the third combination mode that will be mentioned in the study.
Systems, methods and apparatus for pattern matching in procedure development and verification
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, a formal specification is pattern-matched from scenarios, the formal specification is analyzed, and flaws in the formal specification are corrected. The systems, methods and apparatus may include pattern-matching an equivalent formal model from an informal specification. Such a model can be analyzed for contradictions, conflicts, use of resources before the resources are available, competition for resources, and so forth. From such a formal model, an implementation can be automatically generated in a variety of notations. The approach can improve the resulting implementation, which, in some embodiments, is provably equivalent to the procedures described at the outset, which in turn can improve confidence that the system reflects the requirements, and in turn reduces system development time and reduces the amount of testing required of a new system. Moreover, in some embodiments, two or more implementations can be "reversed" to appropriate formal models, the models can be combined, and the resulting combination checked for conflicts. Then, the combined, error-free model can be used to generate a new (single) implementation that combines the functionality of the original separate implementations, and may be more likely to be correct.
Rasch Analysis of the Student Refractive Error and Eyeglass Questionnaire
Crescioni, Mabel; Messer, Dawn H.; Warholak, Terri L.; Miller, Joseph M.; Twelker, J. Daniel; Harvey, Erin M.
2014-01-01
Purpose To evaluate and refine a newly developed instrument, the Student Refractive Error and Eyeglasses Questionnaire (SREEQ), designed to measure the impact of uncorrected and corrected refractive error on vision-related quality of life (VRQoL) in school-aged children. Methods. A 38 statement instrument consisting of two parts was developed: Part A relates to perceptions regarding uncorrected vision and Part B relates to perceptions regarding corrected vision and includes other statements regarding VRQoL with spectacle correction. The SREEQ was administered to 200 Native American 6th through 12th grade students known to have previously worn and who currently require eyeglasses. Rasch analysis was conducted to evaluate the functioning of the SREEQ. Statements on Part A and Part B were analyzed to examine the dimensionality and constructs of the questionnaire, how well the items functioned, and the appropriateness of the response scale used. Results Rasch analysis suggested two items be eliminated and the measurement scale for matching items be reduced from a 4-point response scale to a 3-point response scale. With these modifications, categorical data were converted to interval level data, to conduct an item and person analysis. A shortened version of the SREEQ was constructed with these modifications, the SREEQ-R, which included the statements that were able to capture changes in VRQoL associated with spectacle wear for those with significant refractive error in our study population. Conclusions While the SREEQ Part B appears to be a have less than optimal reliability to assess the impact of spectacle correction on VRQoL in our student population, it is also able to detect statistically significant differences from pretest to posttest on both the group and individual levels to show that the instrument can assess the impact that glasses have on VRQoL. Further modifications to the questionnaire, such as those included in the SREEQ-R, could enhance its functionality. PMID:24811844
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor missions' environments would be unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping approach using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association processes. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprised of feature-to-feature and point-to-point approaches. This algorithm aims at mitigating errors accumulation using the key frame technique, which is inspired from video streaming broadcast process. The algorithm depends on the iterative closest point algorithm during the lack of linear features which is typically exhibited in unstructured environments. The algorithm switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The performance of the algorithm exhibits promising navigational, mapping results and very short computational time, that indicates the potential use of the new algorithm with real-time systems.
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in the aspect of precision. PMID:26308003
Divided attention and driving: a pilot study using virtual reality technology.
Lengenfelder, Jean; Schultheis, Maria T; Al-Shihabi, Talal; Mourant, Ronald; DeLuca, John
2002-02-01
Virtual reality (VR) was used to investigate the influence of divided attention (simple versus complex) on driving performance (speed control). Three individuals with traumatic brain injury (TBI) and three healthy controls (HC), matched for age, education, and gender, were examined. Preliminary results revealed no differences on driving speed between TBI and HC. In contrast, TBI subjects demonstrated a greater number of errors on a secondary task performed while driving. The findings suggest that VR may provide an innovative medium for direct evaluation of basic cognitive functions (ie, divided attention) and its impact on everyday tasks (ie, driving) not previously available through traditional neuropsychological measures.
Information for Successful Interaction with Autonomous Systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Johnson, Kathy A.
2003-01-01
Interaction in heterogeneous mission operations teams is not well matched to classical models of coordination with autonomous systems. We describe methods of loose coordination and information management in mission operations. We describe an information agent and information management tool suite for managing information from many sources, including autonomous agents. We present an integrated model of levels of complexity of agent and human behavior, which shows types of information processing and points of potential error in agent activities. We discuss the types of information needed for diagnosing problems and planning interactions with an autonomous system. We discuss types of coordination for which designs are needed for autonomous system functions.
Spline curve matching with sparse knot sets
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2004-01-01
This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points such that is reliable primarily for dense point sets, we use deformation energy of thin-plate-spline mapping between sparse knot points and normalized local...
Toward Tense as a Clinical Marker of Specific Language Impairment in English-Speaking Children.
ERIC Educational Resources Information Center
Rice, Mabel L.; Wexler, Kenneth
1996-01-01
Comparison of the speech of 37 preschool children with speech-language impairment (SLI), 40 language-matched children, and 45 age-matched children found that errors in a set of morphemes marking tense characterized the SLI children. Evidence supporting the use of these morphemes as clinical markers for SLI is offered. (DB)
NASA Astrophysics Data System (ADS)
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of the shipborne weapons and equipments. It is therefore important to study the characteristics of the coupling error. In this paper, an comprehensive investigation on the coupling error is reported, which has a potential of deducting the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted for verifying the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy; Siroux, Valérie
2013-09-01
Errors in address geocodes may affect estimates of the effects of air pollution on health. We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant's address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: -0.56, -6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: -0.14, -3.24). The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted -2.81 (95% CI: -0.26, -5.35) using NavTEQ, or 2.08 (95% CI -4.63, 0.47, p = 0.11) using Google Maps]. Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
...) defines ``Bona Fide Error'' as: (1) The inaccurate conveyance or execution of any term of an order... Participant that submitted the order to the Matching System or the customer of the Participant that submitted... the execution of customer orders that have been placed in error, provided that the following...
ERIC Educational Resources Information Center
Worth, Sarah
2003-01-01
This study compared responses of 16 pupils either with or without autistic spectrum disorder (ASD) and matched for gender and verbal ability. Subjects' responses to various pictures were categorized. Results suggested errors made by the two groups differed both quantitatively and qualitatively. Errors made by pupils with ASD were largely…
Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization
ERIC Educational Resources Information Center
Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.
2006-01-01
Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…
Influence of Additive and Multiplicative Structure and Direction of Comparison on the Reversal Error
ERIC Educational Resources Information Center
González-Calero, José Antonio; Arnau, David; Laserna-Belenguer, Belén
2015-01-01
An empirical study has been carried out to evaluate the potential of word order matching and static comparison as explanatory models of reversal error. Data was collected from 214 undergraduate students who translated a set of additive and multiplicative comparisons expressed in Spanish into algebraic language. In these multiplicative comparisons…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... reviews and tenant interviews that cannot be accomplished with remote monitoring or HUD data systems. This... errors. On-site tenant interviews, file reviews, third-party income verifications, and income matching... completed in 1996 and found that about one- half of the errors measured using on-site tenant interviews and...
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 21 by 1" GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simult&neous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
Sentence imitation as a marker of SLI in Czech: disproportionate impairment of verbs and clitics.
Smolík, Filip; Vávru, Petra
2014-06-01
The authors examined sentence imitation as a potential clinical marker of specific language impairment (SLI) in Czech and its use to identify grammatical markers of SLI. Children with SLI and the age- and language-matched control groups (total N = 57) were presented with a sentence imitation task, a receptive vocabulary task, and digit span and nonword repetition tasks. Sentence imitations were scored for accuracy and error types. A separate count of inaccuracies for individual part-of-speech categories was performed. Children with SLI had substantially more inaccurate imitations than the control groups. The differences in the memory measures could not account for the differences between children with SLI and the control groups in imitation accuracy, even though they accounted for the differences between the language-matched and age-matched control groups. The proportion of grammatical errors was larger in children with SLI than in the control groups. The categories that were most affected in imitations of children with SLI were verbs and clitics. Sentence imitation is a sensitive marker of SLI. Verbs and clitics are the most vulnerable categories in Czech SLI. The pattern of errors suggests that impaired syntactic representations are the most likely source of difficulties in children with SLI.
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightening-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
Muscle Synergies May Improve Optimization Prediction of Knee Contact Forces During Walking
Walter, Jonathan P.; Kinney, Allison L.; Banks, Scott A.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Fregly, Benjamin J.
2014-01-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameters values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values. PMID:24402438
Muscle synergies may improve optimization prediction of knee contact forces during walking.
Walter, Jonathan P; Kinney, Allison L; Banks, Scott A; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Fregly, Benjamin J
2014-02-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameters values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values.
[A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].
Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki
2016-03-01
The quality assurance (QA) system that simultaneously quantifies the position and duration of an (192)Ir source (dwell position and time) was developed and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions to verify and quantify dwell position and time by using a web camera. The web camera records 30 images per second in a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera at real time. The source position and duration were quantified with the movie using in-house software which was applied with a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. It was evident from the verification of the system that the mean of step size errors was 0.31±0.1 mm and that of dwell time errors 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes and dwell time errors with an accuracy of 0.1% in more than 10.0 s of the planned time. This system is to provide quick verification and quantification of the dwell position and time with high accuracy at various dwell positions without depending on the step size.
Reading difficulties in Albanian.
Avdyli, Rrezarta; Cuetos, Fernando
2012-10-01
Albanian is an Indo-European language with a shallow orthography, in which there is an absolute correspondence between graphemes and phonemes. We aimed to know reading strategies used by Albanian disabled children during word and pseudoword reading. A pool of 114 Kosovar reading disabled children matched with 150 normal readers aged 6 to 11 years old were tested. They had to read 120 stimuli varied in lexicality, frequency, and length. The results in terms of reading accuracy as well as in reading times show that both groups were affected by lexicality and length effects. In both groups, length and lexicality effects were significantly modulated by school year being greater in early grades and later diminish in length and just the opposite in lexicality. However, the reading difficulties group was less accurate and slower than the control group across all school grades. Analyses of the error patterns showed that phonological errors, when the letter replacement leading to new nonwords, are the most common error type in both groups, although as grade rises, visual errors and lexicalizations increased more in the control group than the reading difficulties group. These findings suggest that Albanian normal children use both routes (lexical and sublexical) from the beginning of reading despite of the complete regularity of Albanian, while children with reading difficulties start using sublexical reading and the lexical reading takes more time to acquire, but finally both routes are functional.
Fault isolation through no-overhead link level CRC
Chen, Dong; Coteus, Paul W.; Gara, Alan G.
2007-04-24
A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent crc is kept of all data sent from one processor to another, and received from one processor to another. At the end of each checkpoint, the crcs are compared. If they do not match, there was an error. The crcs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is done on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
Moors, Pieter
2015-01-01
In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.
Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy
2010-12-01
In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.
Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo
2016-05-04
Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.
NASA Astrophysics Data System (ADS)
Xu, Xin; Zhang, Qingsong; Muller, Richard P.; Goddard, William A.
2005-01-01
We derive here the form for the exact exchange energy density for a density that decays with Gaussian-type behavior at long range. This functional is intermediate between the B88 and the PW91 exchange functionals. Using this modified functional to match the form expected for Gaussian densities, we propose the X3LYP extended functional. We find that X3LYP significantly outperforms Becke three parameter Lee-Yang-Parr (B3LYP) for describing van der Waals and hydrogen bond interactions, while performing slightly better than B3LYP for predicting heats of formation, ionization potentials, electron affinities, proton affinities, and total atomic energies as validated with the extended G2 set of atoms and molecules. Thus X3LYP greatly enlarges the field of applications for density functional theory. In particular the success of X3LYP in describing the water dimer (with Re and De within the error bars of the most accurate determinations) makes it an excellent candidate for predicting accurate ligand-protein and ligand-DNA interactions.
NASA Astrophysics Data System (ADS)
Kukkonen, M.; Maltamo, M.; Packalen, P.
2017-08-01
Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
Comparing Four Age Model Techniques using Nine Sediment Cores from the Iberian Margin
NASA Astrophysics Data System (ADS)
Lisiecki, L. E.; Jones, A. M.; Lawrence, C.
2017-12-01
Interpretations of paleoclimate records from ocean sediment cores rely on age models, which provide estimates of age as a function of core depth. Here we compare four methods used to generate age models for sediment cores for the past 140 kyr. The first method is based on radiocarbon dating using the Bayesian statistical software, Bacon [Blaauw and Christen, 2011]. The second method aligns benthic δ18O to a target core using the probabilistic alignment algorithm, HMM-Match, which also generates age uncertainty estimates [Lin et al., 2014]. The third and fourth methods are planktonic δ18O and sea surface temperature (SST) alignments to the same target core, using the alignment algorithm Match [Lisiecki and Lisiecki, 2002]. Unlike HMM-Match, Match requires parameter tuning and does not produce uncertainty estimates. The results of these four age model techniques are compared for nine high-resolution cores from the Iberian margin. The root mean square error between the individual age model results and each core's average estimated age is 1.4 kyr. Additionally, HMM-Match and Bacon age estimates agree to within uncertainty and have similar 95% confidence widths of 1-2 kyr for the highest resolution records. In one core, the planktonic and SST alignments did not fall within the 95% confidence intervals from HMM-Match. For this core, the surface proxy alignments likely produce more reliable results due to millennial-scale SST variability and the presence of several gaps in the benthic δ18O data. Similar studies of other oceanographic regions are needed to determine the spatial extents over which these climate proxies may be stratigraphically correlated.
Cognitive performance in women with fibromyalgia: A case-control study.
Pérez de Heredia-Torres, Marta; Huertas-Hoyas, Elisabet; Máximo-Bocanegra, Nuria; Palacios-Ceña, Domingo; Fernández-De-Las-Peñas, César
2016-10-01
This study aimed to evaluate the differences in cognitive skills between women with fibromyalgia and healthy women, and the correlations between functional independence and cognitive limitations. A cross-sectional study was performed. Twenty women with fibromyalgia and 20 matched controls participated. Outcomes included the Numerical Pain Rating Scale, the Functional Independence Measure, the Fibromyalgia Impact Questionnaire and Gradior © software. The Student's t-test and the Spearman's rho test were applied to the data. Women affected required a greater mean time (P < 0.020) and maximum time (P < 0.015) during the attention test than the healthy controls. In the memory test they displayed greater execution errors (P < 0.001), minimal time (P < 0.001) and mean time (P < 0.001) whereas, in the perception tests, they displayed a greater mean time (P < 0.009) and maximum time (P < 0.048). Correlations were found between the domains of the functional independence measure and the cognitive abilities assessed. Women with fibromyalgia exhibited a decreased cognitive ability compared to healthy controls, which negatively affected the performance of daily activities, such as upper limb dressing, feeding and personal hygiene. Patients required more time to perform activities requiring both attention and perception, decreasing their functional independence. Also, they displayed greater errors when performing activities requiring the use of memory. Occupational therapists treating women with fibromyalgia should consider the negative impact of possible cognitive deficits on the performance of daily activities and offer targeted support strategies. © 2016 Occupational Therapy Australia.
Conversion disorder in children and adolescents: a disorder of cognitive control.
Kozlowska, Kasia; Palmer, Donna M; Brown, Kerri J; Scher, Stephen; Chudleigh, Catherine; Davies, Fiona; Williams, Leanne M
2015-03-01
To assess cognitive function in children and adolescents presenting with acute conversion symptoms. Fifty-seven participants aged 8.5-18 years (41 girls and 16 boys) with conversion symptoms and 57 age- and gender-matched healthy controls completed the IntegNeuro neurocognitive battery, an estimate of intelligence, and self-report measures of subjective emotional distress. Participants with conversion symptoms showed poorer performance within attention, executive function, and memory domains. Poorer performance was reflected in more errors on specific tests: Switching of Attention (t(79) = 2.17, p = .03); Verbal Interference (t(72) = 2.64, p = .01); Go/No-Go (t(73) = 2.20, p = .03); Memory Recall and Verbal Learning (interference errors for memory recall; t(61) = 3.13, p < .01); and short-delay recall (t(75) = 2.05, p < .01) and long-delay recall (t(62) = 2.24, p = .03). Poorer performance was also reflected in a reduced span of working memory on the Digit Span Test for both forward recall span (t(103) = -3.64, p < .001) and backward recall span (t(100) = -3.22, p < .01). There was no difference between participants and controls on IQ estimate (t(94) = -589, p = .56), and there was no correlation between cognitive function and perceived distress. Children and adolescents with acute conversion symptoms have a reduced capacity to manipulate and retain information, to block interfering information, and to inhibit responses, all of which are required for effective attention, executive function, and memory. © 2014 The British Psychological Society.
Monjo, Florian; Forestier, Nicolas
2018-04-01
This study was designed to explore the effects of intrafusal thixotropy, a property affecting muscle spindle sensitivity, on the sense of force. For this purpose, psychophysical measurements of force perception were performed using an isometric force matching paradigm of elbow flexors consisting of matching different force magnitudes (5, 10 and 20% of subjects' maximal voluntary force). We investigated participants' capacity to match these forces after their indicator arm had undergone voluntary isometric conditioning contractions known to alter spindle thixotropy, i.e., contractions performed at long ('hold long') or short muscle lengths ('hold short'). In parallel, their reference arm was conditioned at the intermediate muscle length ('hold-test') at which the matchings were performed. The thixotropy hypothesis predicts that estimation errors should only be observed at low force levels (up to 10% of the maximal voluntary force) with overestimation of the forces produced following 'hold short' conditioning and underestimation following 'hold long' conditioning. We found the complete opposite, especially following 'hold-short' conditioning where subjects underestimated the force they generated with similar relative error magnitudes across force levels. In a second experiment, we tested the hypothesis that estimation errors depended on the degree of afferent-induced facilitation using the Kohnstamm phenomenon as a probe of motor pathway excitability. Because the stronger post-effects were observed following 'hold-short' conditioning, it appears that the conditioning-induced excitation of spindle afferents leads to force misjudgments by introducing a decoupling between the central effort and the cortical motor outputs.
Teng, Cindy; Otero, Marcela; Geraci, Marilla; Blair, R J R; Pine, Daniel S; Grillon, Christian; Blair, Karina S
2016-03-30
There is preliminary data indicating that patients with generalized anxiety disorder (GAD) show impairment on decision-making tasks requiring the appropriate representation of reinforcement value. The current study aimed to extend this literature using the passive avoidance (PA) learning task, where the participant has to learn to respond to stimuli that engender reward and avoid responding to stimuli that engender punishment. Six stimuli engendering reward and six engendering punishment are presented once per block for 10 blocks of trials. Thirty-nine medication-free patients with GAD and 29 age-, IQ and gender matched healthy comparison individuals performed the task. In addition, indexes of social functioning as assessed by the Global Assessment of Functioning (GAF) scale were obtained to allow for correlational analyzes of potential relations between cognitive and social impairments. The results revealed a Group-by-Error Type-by-Block interaction; patients with GAD committed significantly more commission (passive avoidance) errors than comparison individuals in the later blocks (blocks 7,8, and 9). In addition, the extent of impairment on these blocks was associated with their functional impairment as measured by the GAF scale. These results link GAD with anomalous decision-making and indicate that a potential problem in reinforcement representation may contribute to the severity of expression of their disorder. Copyright © 2016. Published by Elsevier Ireland Ltd.
Ansems, G E; Allen, T J; Proske, U
2006-01-01
When blindfolded subjects match the position of their forearms in the vertical plane they rely on signals coming from the periphery as well as from the central motor command. The command signal provides a positional cue from the accompanying effort sensation required to hold the arm against gravity. Here we have asked, does a centrally generated effort signal contribute to position sense in the horizontal plane, where gravity cannot play a role? Blindfolded subjects were required to match forearm position for the unloaded arm and when flexors or extensors were bearing 10%, 25% or 40% of maximum loads. Before each match the reference arm was conditioned by contracting elbow muscles while the arm was held flexed or extended. For the unloaded arm conditioning led to a consistent pattern of errors which was attributed to signals from flexor and extensor muscle spindles. When elbow muscles were loaded the errors from conditioning converged, presumably because the spindles had become coactivated through the fusimotor system during the load-bearing contraction. However, this convergence was seen only when subjects supported a static load. When they moved the load differences in errors from conditioning persisted. Muscle vibration during load bearing or moving a load did not alter the distribution of errors. It is concluded that for position sense of an unloaded arm in the horizontal plane the brain relies on signals from muscle spindles. When the arm is loaded, an additional signal of central origin contributes, but only if the load is moved. PMID:16873408
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Points (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the accuracy of bathymetry required by IHO Standards for Hydrographic Surveys Special Publication No.44.
Dror, Itiel E; Wertheim, Kasey; Fraser-Mackenzie, Peter; Walajtys, Jeff
2012-03-01
Experts play a critical role in forensic decision making, even when cognition is offloaded and distributed between human and machine. In this paper, we investigated the impact of using Automated Fingerprint Identification Systems (AFIS) on human decision makers. We provided 3680 AFIS lists (a total of 55,200 comparisons) to 23 latent fingerprint examiners as part of their normal casework. We manipulated the position of the matching print in the AFIS list. The data showed that latent fingerprint examiners were affected by the position of the matching print in terms of false exclusions and false inconclusives. Furthermore, the data showed that false identification errors were more likely at the top of the list and that such errors occurred even when the correct match was present further down the list. These effects need to be studied and considered carefully, so as to optimize human decision making when using technologies such as AFIS. © 2011 American Academy of Forensic Sciences.
Pragmatics abilities in narrative production: a cross-disorder comparison.
Norbury, Courtenay Frazier; Gemmell, Tracey; Paul, Rhea
2014-05-01
We aimed to disentangle contributions of socio-pragmatic and structural language deficits to narrative competence by comparing the narratives of children with autism spectrum disorder (ASD; n = 25), non-autistic children with language impairments (LI; n = 23), and children with typical development (TD; n = 27). Groups were matched for age (6½ to 15 years; mean: 10;6) and non-verbal ability; ASD and TD groups were matched on standardized language scores. Despite distinct clinical presentation, children with ASD and LI produced similarly simple narratives that lacked semantic richness and omitted important story elements, when compared to TD peers. Pragmatic errors were common across groups. Within the LI group, pragmatic errors were negatively correlated with story macrostructure scores and with an index of semantic-pragmatic relevance. For the group with ASD, pragmatic errors consisted of comments that, though extraneous, did not detract from the gist of the narrative. These findings underline the importance of both language and socio-pragmatic skill for producing coherent, appropriate narratives.
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim
2016-11-01
In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm2), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, the compact microstrip cells, an field effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operation frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested using the numerical data by four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and a small error between the predicted values and numerical solution is obtained.
Improving Dual-Task Control With a Posture-Second Strategy in Early-Stage Parkinson Disease.
Huang, Cheng-Ya; Chen, Yu-An; Hwang, Ing-Shiou; Wu, Ruey-Meei
2018-03-31
To examine the task prioritization effects on postural-suprapostural dual-task performance in patients with early-stage Parkinson disease (PD) without clinically observed postural symptoms. Cross-sectional study. Participants performed a force-matching task while standing on a mobile platform, and were instructed to focus their attention on either the postural task (posture-first strategy) or the force-matching task (posture-second strategy). University research laboratory. Individuals (N=16) with early-stage PD who had no clinically observed postural symptoms. Not applicable. Dual-task change (DTC; percent change between single-task and dual-task performance) of posture error, posture approximate entropy (ApEn), force error, and reaction time (RT). Positive DTC values indicate higher postural error, posture ApEn, force error, and force RT during dual-task conditions compared with single-task conditions. Compared with the posture-first strategy, the posture-second strategy was associated with smaller DTC of posture error and force error, and greater DTC of posture ApEn. In contrast, greater DTC of force RT was observed under the posture-second strategy. Contrary to typical recommendations, our results suggest that the posture-second strategy may be an effective dual-task strategy in patients with early-stage PD who have no clinically observed postural symptoms in order to reduce the negative effect of dual tasking on performance and facilitate postural automaticity. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Dusek, Wolfgang; Pierscionek, Barbara K; McClelland, Julie F
2010-05-25
To describe and compare visual function measures of two groups of school age children (6-14 years of age) attending a specialist eyecare practice in Austria; one group referred to the practice from educational assessment centres diagnosed with reading and writing difficulties and the other, a clinical age-matched control group. Retrospective clinical data from one group of subjects with reading difficulties (n = 825) and a clinical control group of subjects (n = 328) were examined.Statistical analysis was performed to determine whether any differences existed between visual function measures from each group (refractive error, visual acuity, binocular status, accommodative function and reading speed and accuracy). Statistical analysis using one way ANOVA demonstrated no differences between the two groups in terms of refractive error and the size or direction of heterophoria at distance (p > 0.05). Using predominately one way ANOVA and chi-square analyses, those subjects in the referred group were statistically more likely to have poorer distance visual acuity, an exophoric deviation at near, a lower amplitude of accommodation, reduced accommodative facility, reduced vergence facility, a reduced near point of convergence, a lower AC/A ratio and a slower reading speed than those in the clinical control group (p < 0.05). This study highlights the high proportions of visual function anomalies in a group of children with reading difficulties in an Austrian population. It confirms the importance of a full assessment of binocular visual status in order to detect and remedy these deficits in order to prevent the visual problems continuing to impact upon educational development.
Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization.
Zhang, You; Ren, Lei; Vergalasova, Irina; Yin, Fang-Fang
2017-01-01
Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from 2 major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beams' eye view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study. For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV-MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.
Rew, Mary Beth; Robbins, Jooke; Mattila, David; Palsbøll, Per J; Bérube, Martine
2011-04-01
Genetic identification of individuals is now commonplace, enabling the application of tagging methods to elusive species or species that cannot be tagged by traditional methods. A key aspect is determining the number of loci required to ensure that different individuals have non-matching multi-locus genotypes. Closely related individuals are of particular concern because of elevated matching probabilities caused by their recent co-ancestry. This issue may be addressed by increasing the number of loci to a level where full siblings (the relatedness category with the highest matching probability) are expected to have non-matching multi-locus genotypes. However, increasing the number of loci to meet this "full-sib criterion" greatly increases the laboratory effort, which in turn may increase the genotyping error rate resulting in an upward-biased mark-recapture estimate of abundance as recaptures are missed due to genotyping errors. We assessed the contribution of false matches from close relatives among 425 maternally related humpback whales, each genotyped at 20 microsatellite loci. We observed a very low (0.5-4%) contribution to falsely matching samples from pairs of first-order relatives (i.e., parent and offspring or full siblings). The main contribution to falsely matching individuals from close relatives originated from second-order relatives (e.g., half siblings), which was estimated at 9%. In our study, the total number of observed matches agreed well with expectations based upon the matching probability estimated for unrelated individuals, suggesting that the full-sib criterion is overly conservative, and would have required a 280% relative increase in effort. We suggest that, under most circumstances, the overall contribution to falsely matching samples from close relatives is likely to be low, and hence applying the full-sib criterion is unnecessary. In those cases where close relatives may present a significant issue, such as unrepresentative sampling, we propose three different genotyping strategies requiring only a modest increase in effort, which will greatly reduce the number of false matches due to the presence of related individuals.
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor missions’ environments would be unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping approach using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association processes. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprised of feature-to-feature and point-to-point approaches. This algorithm aims at mitigating errors accumulation using the key frame technique, which is inspired from video streaming broadcast process. The algorithm depends on the iterative closest point algorithm during the lack of linear features which is typically exhibited in unstructured environments. The algorithm switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The performance of the algorithm exhibits promising navigational, mapping results and very short computational time, that indicates the potential use of the new algorithm with real-time systems. PMID:28481285
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.
1987-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.
Errors made by animals in memory paradigms are not always due to failure of memory.
Wilkie, D M; Willson, R J; Carr, J A
1999-01-01
It is commonly assumed that errors in animal memory paradigms such as delayed matching to sample, radial mazes, and food-cache recovery are due to failures in memory for information necessary to perform the task successfully. A body of research, reviewed here, suggests that this is not always the case: animals sometimes make errors despite apparently being able to remember the appropriate information. In this paper a case study of this phenomenon is described, along with a demonstration of a simple procedural modification that successfully reduced these non-memory errors, thereby producing a better measure of memory.
A 3D terrain reconstruction method of stereo vision based quadruped robot navigation system
NASA Astrophysics Data System (ADS)
Ge, Zhuo; Zhu, Ying; Liang, Guanhao
2017-01-01
To provide 3D environment information for the quadruped robot autonomous navigation system during walking through rough terrain, based on the stereo vision, a novel 3D terrain reconstruction method is presented. In order to solve the problem that images collected by stereo sensors have large regions with similar grayscale and the problem that image matching is poor at real-time performance, watershed algorithm and fuzzy c-means clustering algorithm are combined for contour extraction. Aiming at the problem of error matching, duel constraint with region matching and pixel matching is established for matching optimization. Using the stereo matching edge pixel pairs, the 3D coordinate algorithm is estimated according to the binocular stereo vision imaging model. Experimental results show that the proposed method can yield high stereo matching ratio and reconstruct 3D scene quickly and efficiently.
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2003-01-01
Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points such that is reliable primarily for dense point sets, we use deformation energy of...
NASA Technical Reports Server (NTRS)
Delp, P.; Crossman, E. R. F. W.; Szostak, H.
1972-01-01
The automobile-driver describing function for lateral position control was estimated for three subjects from frequency response analysis of straight road test results. The measurement procedure employed an instrumented full size sedan with known steering response characteristics, and equipped with a lateral lane position measuring device based on video detection of white stripe lane markings. Forcing functions were inserted through a servo driven double steering wheel coupling the driver to the steering system proper. Random appearing, Gaussian, and transient time functions were used. The quasi-linear models fitted to the random appearing input frequency response characterized the driver as compensating for lateral position error in a proportional, derivative, and integral manner. Similar parameters were fitted to the Gabor transformed frequency response of the driver to transient functions. A fourth term corresponding to response to lateral acceleration was determined by matching the time response histories of the model to the experimental results. The time histories show evidence of pulse-like nonlinear behavior during extended response to step transients which appear as high frequency remnant power.
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shapes and surface roughness, which are considered as key indicators in understanding Arctic sea-ice, can be measured from the digital surface model (DSM) of the target area. Unmanned aerial vehicle (UAV) flying at low altitudes enables theoretically accurate DSM generation. However, the characteristics of sea-ice with textureless surface and incessant motion make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of search window to analyze the matching results of DSM generated and distinguishes incorrect matches. Experimental results showed that the sea-ice DSM produced large errors along the textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
Maeda, Yoshikazu; Sato, Yoshitaka; Minami, Hiroki; Yasukawa, Yutaka; Yamamoto, Kazutaka; Tamamura, Hiroyasu; Shibata, Satoshi; Bou, Sayuri; Sasaki, Makoto; Tameshige, Yuji; Kume, Kyo; Ooto, Hiroshi; Kasahara, Shigeru; Shimizu, Yasuhiro; Saga, Yusuke; Omoya, Akira; Saitou, Makoto
2018-05-01
To evaluate the effectiveness of CT image-guided proton radiotherapy for prostate cancer by analyzing the positioning uncertainty and assessing daily dose change due to anatomical variations. Patients with prostate cancer were treated by opposed lateral proton beams based on a passive scattering method using an in-room CT image-guided system. The system employs a single couch for both CT scanning and beam delivery. The patient was positioned by matching the boundary between the prostate and the rectum's anterior region identified in the CT images to the corresponding boundary in the simulator images after bone matching. We acquired orthogonal kV x-ray images after couch movement and confirmed the body position by referring to the bony structure prior to treatment. In offline analyses, we contoured the targeted anatomical structures on 375 sets of daily in-room CT images for 10 patients. The uncertainty of the image-matching procedure was evaluated using the prostate contours and actual couch corrections. We also performed dose calculations using the same set of CT images, and evaluated daily change of dose-volume histograms (DVHs) to compare the effectiveness of the treatment using prostate matching to the bone-matching procedure. The isocenter shifts by prostate matching after bone matching were 0.5 ± 1.8 and -0.8 ± 2.6 mm along the superior-inferior (SI) and anterior-posterior (AP) directions, respectively. The body movement errors (σ) after couch movement were 0.7, 0.5, and 0.3 mm along the lateral, SI and AP direction, respectively, for 30 patients. The estimated errors (σ) in the prostate matching were 1.0 and 1.3 mm, and, in conjunction with the movement errors, the total positioning uncertainty was estimated to be 1.0 and 1.4 mm along the SI and AP directions, respectively. Daily DVH analyses showed that in the prostate matching, 98.7% and 86.1% of the total 375 irradiations maintained a dose condition of V 95% > 95% for the prostate and a dose constraint of V 77% < 18% for the rectum, whereas 90.4% and 66.1% of the total irradiations did so when bone matching was used. The dose constraint of the rectum and dose coverage of the prostate were better maintained by prostate matching than bone matching (P < 0.001). The daily variation in the dose to the seminal vesicles (SVs) was large, and only 40% of the total irradiations maintained the initial planned values of V 95% for high-risk treatment. Nevertheless, the deviations from the original value were -4 ± 7% and -5 ± 11% in the prostate and bone matching, respectively, and a better dose coverage of the SV was achieved by the prostate matching. The correction of repositioning along the AP and SI direction from conventional bone matching in CT image-guided proton therapy was found to be effective to maintain the dose constraint of the rectum and the dose coverage of the prostate. This work indicated that prostate cancer treatment by prostate matching using CT image guidance may be effective to reduce the rectal complications and achieve better tumor control of the prostate. However, an adaptive approach is desirable to maintain better dose coverage of the SVs. © 2018 American Association of Physicists in Medicine.
1980-02-01
formula for predictinq the number of errors during system testing. The equation he presents is B V/ ECRIT where B is the number of 19 ’R , errors...expected, V is the volume, and ECRIT is "the mean number of elementary discriminations between potential errors in programming" (p. 85). E CRIT can also...prediction of delivered bugs is: "V VX 2 B = V/ ECRIT -3- 13,824 2.3 McCabe’s Complexity Metric Thomas McCabe (1976) defined complexity in relation to
Brief Report: Cognitive Control of Social and Nonsocial Visual Attention in Autism
ERIC Educational Resources Information Center
DiCriscio, Antoinette Sabatino; Miller, Stephanie J.; Hanna, Eleanor K.; Kovac, Megan; Turner-Brown, Lauren; Sasson, Noah J.; Sapyta, Jeffrey; Troiani, Vanessa; Dichter, Gabriel S.
2016-01-01
Prosaccade and antisaccade errors in the context of social and nonsocial stimuli were investigated in youth with autism spectrum disorder (ASD; n = 19) a matched control sample (n = 19), and a small sample of youth with obsessive compulsive disorder (n = 9). Groups did not differ in error rates in the prosaccade condition for any stimulus…
Preliminary study of injection transients in the TPS storage ring
NASA Astrophysics Data System (ADS)
Chen, C. H.; Liu, Y. C.; Y Chen, J.; Chiu, M. S.; Tseng, F. H.; Fann, S.; Liang, C. C.; Huang, C. S.; Y Lee, T.; Y Chen, B.; Tsai, H. J.; Luo, G. H.; Kuo, C. C.
2017-07-01
An optimized injection efficiency is related to a perfect match between the pulsed magnetic fields in the storage ring and transfer line extraction in the TPS. However, misalignment errors, hardware output errors and leakage fields are unavoidable. We study the influence of injection transients on the stored TPS beam and discuss solutions to compensate these. Related simulations and measurements will be presented.
Color matching of fabric blends: hybrid Kubelka-Munk + artificial neural network based method
NASA Astrophysics Data System (ADS)
Furferi, Rocco; Governi, Lapo; Volpe, Yary
2016-11-01
Color matching of fabric blends is a key issue for the textile industry, mainly due to the rising need to create high-quality products for the fashion market. The process of mixing together differently colored fibers to match a desired color is usually performed by using some historical recipes, skillfully managed by company colorists. More often than desired, the first attempt in creating a blend is not satisfactory, thus requiring the experts to spend efforts in changing the recipe with a trial-and-error process. To confront this issue, a number of computer-based methods have been proposed in the last decades, roughly classified into theoretical and artificial neural network (ANN)-based approaches. Inspired by the above literature, the present paper provides a method for accurate estimation of spectrophotometric response of a textile blend composed of differently colored fibers made of different materials. In particular, the performance of the Kubelka-Munk (K-M) theory is enhanced by introducing an artificial intelligence approach to determine a more consistent value of the nonlinear function relationship between the blend and its components. Therefore, a hybrid K-M+ANN-based method capable of modeling the color mixing mechanism is devised to predict the reflectance values of a blend.
Huang, Pei; Tan, Yu-Yan; Liu, Dong-Qiang; Herzallah, Mohammad M; Lapidow, Elizabeth; Wang, Ying; Zang, Yu-Feng; Gluck, Mark A; Chen, Sheng-Di
2017-07-01
Asymmetric onset of motor symptoms in PD can affect cognitive function. We examined whether motor-symptom laterality could affect feedback-based associative learning and explored its underlying neural mechanism by functional magnetic resonance imaging in PD patients. We recruited 63 early-stage medication-naïve PD patients (29 left-onset medication-naïve patients, 34 right-onset medication-naïve patients) and 38 matched normal controls. Subjects completed an acquired equivalence task (including acquisition, retention, and generalization) and resting-state functional magnetic resonance imaging scans. Learning accuracy and response time in each phase of the task were recorded for behavioral measures. Regional homogeneity was used to analyze resting-state functional magnetic resonance imaging data, with regional homogeneity lateralization to evaluate hemispheric functional asymmetry in the striatum. Left-onset patients made significantly more errors in acquisition (feedback-based associative learning) than right-onset patients and normal controls, whereas right-onset patients performed as well as normal controls. There was no significant difference among these three groups in the accuracy of either retention or generalization phase. The three groups did not show significant differences in response time. In the left-onset group, there was an inverse relationship between acquisition errors and regional homogeneity in the right dorsal rostral putamen. There were no significant regional homogeneity changes in either the left or the right dorsal rostral putamen in right-onset patients when compared to controls. Motor-symptom laterality could affect feedback-based associative learning in PD, with left-onset medication-naïve patients being selectively impaired. Dysfunction in the right dorsal rostral putamen may underlie the observed deficit in associative learning in patients with left-sided onset.© 2016 International Parkinson and Movement Disorder Society. © 2017 International Parkinson and Movement Disorder Society.
Mak, Anselm; Ren, Tao; Fu, Erin Hui-yun; Cheak, Alicia Ai-cia; Ho, Roger Chun-man
2012-06-01
To study the functional brain activation signals before and after sufficient disease control in patients with systemic lupus erythematosus (SLE) without clinical neuropsychiatric symptoms. Blood-oxygen-level-dependent signals during event-related functional magnetic resonance imaging brain were recorded, while 14 new-onset SLE patients and 14 demographically and intelligence quotient matched healthy controls performed the computer-based Wisconsin card sorting test for assessing executive function, which probes strategic planning and goal-directed task performance during feedback evaluation (FE) and response selection (RS), respectively. Composite beta maps were constructed by a general linear model to identify regions of cortical activation. Blood-oxygen-level-dependent functional magnetic resonance imaging signals were compared between (1) new-onset SLE patients and healthy controls and (2) SLE patients before and after sufficient control of their disease activity. During RS, SLE patients demonstrated significantly higher activation than healthy controls in both caudate bodies and Brodmann area (BA) 9 to enhance event anticipation, attention, and working memory, respectively, to compensate for the reduced activation during FE in BA6, 13, 24, and 32, which serve complex motor planning and decision-making, sensory integration, error detection, and conflict processing, respectively. Despite significant reduction of SLE activity, BA32 was activated during RS to compensate for reduced activation during FE in BA6, 9, 37, and 23/32, which serve motor planning, response inhibition and attention, color processing and word recognition, error detection, and conflict evaluation, respectively. Even without clinically overt neuropsychiatric symptoms, SLE patients recruited additional pathways to execute goal-directed tasks to compensate for their reduced strategic planning skill despite clinically sufficient disease control. Copyright © 2012 Elsevier Inc. All rights reserved.
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
SummaryDespite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
Programmable Infusion Pumps in ICUs: An Analysis of Corresponding Adverse Drug Events
Bower, Anthony G.; Paddock, Susan M.; Hilborne, Lee H.; Wallace, Peggy; Rothschild, Jeffrey M.; Griffin, Anne; Fairbanks, Rollin J.; Carlson, Beverly; Panzer, Robert J.; Brook, Robert H.
2007-01-01
Background Patients in intensive care units (ICUs) frequently experience adverse drug events involving intravenous medications (IV-ADEs), which are often preventable. Objectives To determine how frequently preventable IV-ADEs in ICUs match the safety features of a programmable infusion pump with safety software (“smart pump”) and to suggest potential improvements in smart-pump design. Design Using retrospective medical-record review, we examined preventable IV-ADEs in ICUs before and after 2 hospitals replaced conventional pumps with smart pumps. The smart pumps alerted users when programmed to deliver duplicate infusions or continuous-infusion doses outside hospital-defined ranges. Participants 4,604 critically ill adults at 1 academic and 1 nonacademic hospital. Measurements Preventable IV-ADEs matching smart-pump features and errors involved in preventable IV-ADEs. Results Of 100 preventable IV-ADEs identified, 4 involved errors matching smart-pump features. Two occurred before and 2 after smart-pump implementation. Overall, 29% of preventable IV-ADEs involved overdoses; 37%, failures to monitor for potential problems; and 45%, failures to intervene when problems appeared. Error descriptions suggested that expanding smart pumps’ capabilities might enable them to prevent more IV-ADEs. Conclusion The smart pumps we evaluated are unlikely to reduce preventable IV-ADEs in ICUs because they address only 4% of them. Expanding smart-pump capabilities might prevent more IV-ADEs. PMID:18095043
Sequential effects in pigeon delayed matching-to-sample performance.
Roitblat, H L; Scopatz, R A
1983-04-01
Pigeons were tested in a three-alternative delayed matching-to-sample task in which second-choices were permitted following first-choice errors. Sequences of responses both within and between trials were examined in three experiments. The first experiment demonstrates that the sample information contained in first-choice errors is not sufficient to account for the observed pattern of second choices. This result implies that second-choices following first-choice errors are based on a second examination of the contents of working memory. Proactive interference was found in the second experiment in the form of a dependency, beyond that expected on the basis of trial independent response bias, of first-choices from one trial on the first-choice emitted on the previous trial. Samples from the previous trial were not found to exert a significant influence on later trials. The magnitude of the intertrial association (Experiment 3) did not depend on the duration of the intertrial interval. In contrast, longer intertrial intervals and longer sample durations did facilitate choice accuracy, by strengthening the association between current samples and choices. These results are incompatible with a trace-decay and competition model; they suggest strongly that multiple influences act simultaneously and independently to control delayed matching-to-sample responding. These multiple influences include memory for the choice occurring on the previous trial, memory for the sample, and general effects of trial spacing.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wave front sensing for solar telescopes is commonly implemented with the Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels, these errors are called as systematic bias errors Sjödahl (1994). These errors are caused due to the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image1. The performance of cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that that the pyramid fit is the most robust to pixel locking effects. The RMS error analysis study reveals that the threshold centre of gravity behaves better in low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the first step delivered initial position. The generation of these sub-pixel grid based region of interest images is achieved with the bi-cubic interpolation. The correlation matching with sub-pixel grid technique was previously reported in electronic speckle photography Sjö'dahl (1994). This technique is applied here for the solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of the original pixel grid based correlation matching in a large field of view and a sub-pixel interpolated image grid based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction), when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in the solar telescopes, particularly, for measuring large dynamic image shifts involved open loop adaptive optics. Also, by choosing an appropriate increment of image sampling in trade-off between the computational speed limitation and the aimed sub-pixel image shift accuracy, it can be employed in closed loop adaptive optics. The study is extended to three other class of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to submit for the Optical Express journal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry B.; Geyer, Amy; Borrego, David
Purpose: To investigate the benefits and limitations of patient-phantom matching for determining organ dose during fluoroscopy guided interventions. Methods: In this study, 27 CT datasets representing patients of different sizes and genders were contoured and converted into patient-specific computational models. Each model was matched, based on height and weight, to computational phantoms selected from the UF hybrid patient-dependent series. In order to investigate the influence of phantom type on patient organ dose, Monte Carlo methods were used to simulate two cardiac projections (PA/left lateral) and two abdominal projections (RAO/LPO). Organ dose conversion coefficients were then calculated for each patient-specific andmore » patient-dependent phantom and also for a reference stylized and reference hybrid phantom. The coefficients were subsequently analyzed for any correlation between patient-specificity and the accuracy of the dose estimate. Accuracy was quantified by calculating an absolute percent difference using the patient-specific dose conversion coefficients as the reference. Results: Patient-phantom matching was shown most beneficial for estimating the dose to heavy patients. In these cases, the improvement over using a reference stylized phantom ranged from approximately 50% to 120% for abdominal projections and for a reference hybrid phantom from 20% to 60% for all projections. For lighter individuals, patient-phantom matching was clearly superior to using a reference stylized phantom, but not significantly better than using a reference hybrid phantom for certain fields and projections. Conclusions: The results indicate two sources of error when patients are matched with phantoms: Anatomical error, which is inherent due to differences in organ size and location, and error attributed to differences in the total soft tissue attenuation. For small patients, differences in soft tissue attenuation are minimal and are exceeded by inherent anatomical differences. For large patients, difference in soft tissue attenuation can be large. In these cases, patient-phantom matching proves most effective as differences in soft tissue attenuation are mitigated. With increasing obesity rates, overweight patients will continue to make up a growing fraction of all patients undergoing medical imaging. Thus, having phantoms that better represent this population represents a considerable improvement over previous methods. In response to this study, additional phantoms representing heavier weight percentiles will be added to the UFHADM and UFHADF patient-dependent series.« less
Color discrimination performance in patients with Alzheimer's disease.
Salamone, Giovanna; Di Lorenzo, Concetta; Mosti, Serena; Lupo, Federica; Cravello, Luca; Palmer, Katie; Musicco, Massimo; Caltagirone, Carlo
2009-01-01
Visual deficits are frequent in Alzheimer's disease (AD), yet little is known about the nature of these disturbances. The aim of the present study was to investigate color discrimination in patients with AD to determine whether impairment of this visual function is a cognitive or perceptive/sensory disturbance. A cross-sectional clinical study was conducted in a specialized dementia unit on 20 patients with mild/moderate AD and 21 age-matched normal controls. Color discrimination was measured by the Farnsworth-Munsell 100 hue test. Cognitive functioning was measured with the Mini-Mental State Examination (MMSE) and a comprehensive battery of neuropsychological tests. The scores obtained on the color discrimination test were compared between AD patients and controls adjusting for global and domain-specific cognitive performance. Color discrimination performance was inversely related to MMSE score. AD patients had a higher number of errors in color discrimination than controls (mean +/- SD total error score: 442.4 +/- 84.5 vs. 304.1 +/- 45.9). This trend persisted even after adjustment for MMSE score and cognitive performance on specific cognitive domains. A specific reduction of color discrimination capacity is present in AD patients. This deficit does not solely depend upon cognitive impairment, and involvement of the primary visual cortex and/or retinal ganglionar cells may be contributory.
Self-Interaction Error in Density Functional Theory: An Appraisal.
Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G
2018-05-03
Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung
NASA Astrophysics Data System (ADS)
Holman, Beverley F.; Cuplov, Vesna; Hutton, Brian F.; Groves, Ashley M.; Thielemans, Kris
2016-04-01
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant 18F-FDG and 18F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
Li, Wenxun; Matin, Leonard
2005-03-01
Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees topbackward to 30 degrees topforward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18 degrees range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: The near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane their large difference is responsible for the inaccuracies of the midfrontal-plane point. The results are inconsistent with the widely-held but controversial theory that visual spatial information employed for perception and action are dissociated and different with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
Tran, Tammy T; Speck, Caroline L; Pisupati, Aparna; Gallagher, Michela; Bakker, Arnold
2017-01-01
Increased fMRI activation in the hippocampus is recognized as a signature characteristic of the amnestic mild cognitive impairment (aMCI) stage of Alzheimer's disease (AD). Previous work has localized this increased activation to the dentate gyrus/CA3 subregion of the hippocampus and showed a correlation with memory impairments in those patients. Increased hippocampal activation has also been reported in carriers of the ApoE-4 allelic variation independently of mild cognitive impairment although these findings were not localized to a hippocampal subregion. To assess the ApoE-4 contribution to increased hippocampal fMRI activation, patients with aMCI genotyped for ApoE-4 status and healthy age-matched control participants completed a high-resolution fMRI scan while performing a memory task designed to tax hippocampal subregion specific functions. Consistent with previous reports, patients with aMCI showed increased hippocampal activation in the left dentate gyrus/CA3 region of the hippocampus as well as memory task errors attributable to this subregion. However, this increased fMRI activation in the hippocampus did not differ between ApoE-4 carriers and ApoE-4 non-carriers and the proportion of memory errors attributable to dentate gyrus/CA3 function did not differ between ApoE-4 carriers and ApoE-4 non-carriers. These results indicate that increased fMRI activation of the hippocampus observed in patients with aMCI is independent of ApoE-4 status and that ApoE-4 does not contribute to the dysfunctional hippocampal activation or the memory errors attributable to this subregion in these patients.
Bakic, Jasmina; Pourtois, Gilles; Jepma, Marieke; Duprat, Romain; De Raedt, Rudi; Baeken, Chris
2017-01-01
Major depressive disorder (MDD) creates debilitating effects on a wide range of cognitive functions, including reinforcement learning (RL). In this study, we sought to assess whether reward processing as such, or alternatively the complex interplay between motivation and reward might potentially account for the abnormal reward-based learning in MDD. A total of 35 treatment resistant MDD patients and 44 age matched healthy controls (HCs) performed a standard probabilistic learning task. RL was titrated using behavioral, computational modeling and event-related brain potentials (ERPs) data. MDD patients showed comparable learning rate compared to HCs. However, they showed decreased lose-shift responses as well as blunted subjective evaluations of the reinforcers used during the task, relative to HCs. Moreover, MDD patients showed normal internal (at the level of error-related negativity, ERN) but abnormal external (at the level of feedback-related negativity, FRN) reward prediction error (RPE) signals during RL, selectively when additional efforts had to be made to establish learning. Collectively, these results lend support to the assumption that MDD does not impair reward processing per se during RL. Instead, it seems to alter the processing of the emotional value of (external) reinforcers during RL, when additional intrinsic motivational processes have to be engaged. © 2016 Wiley Periodicals, Inc.
Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications
2005-04-01
coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.
Performance of biometric quality measures.
Grother, Patrick; Tabassi, Elham
2007-04-01
We document methods for the quantitative evaluation of systems that produce a scalar summary of a biometric sample's quality. We are motivated by a need to test claims that quality measures are predictive of matching performance. We regard a quality measurement algorithm as a black box that converts an input sample to an output scalar. We evaluate it by quantifying the association between those values and observed matching results. We advance detection error trade-off and error versus reject characteristics as metrics for the comparative evaluation of sample quality measurement algorithms. We proceed this with a definition of sample quality, a description of the operational use of quality measures. We emphasize the performance goal by including a procedure for annotating the samples of a reference corpus with quality values derived from empirical recognition scores.
Vercruyssen, Anina; Wuyts, Celine; Loosveldt, Geert
2017-09-01
Interviewer characteristics affect nonresponse and measurement errors in face-to-face surveys. Some studies have shown that mismatched sociodemographic characteristics - for example gender - affect people's behavior when interacting with an interviewer at the door and during the survey interview, resulting in more nonresponse. We investigate the effect of sociodemographic (mis)matching on nonresponse in two successive rounds of the European Social Survey in Belgium. As such, we replicate the analyses of the effect of (mis)matching gender and age on unit nonresponse on the one hand, and of gender, age and education level (mis)matching on item nonresponse on the other hand. Recurring effects of sociodemographic (mis)match are found for both unit and item nonresponse. Copyright © 2017 Elsevier Inc. All rights reserved.
Evidence for a distributed hierarchy of action representation in the brain
Grafton, Scott T.; de C. Hamilton, Antonia F.
2007-01-01
Complex human behavior is organized around temporally distal outcomes. Behavioral studies based on tasks such as normal prehension, multi-step object use and imitation establish the existence of relative hierarchies of motor control. The retrieval errors in apraxia also support the notion of a hierarchical model for representing action in the brain. In this review, three functional brain imaging studies of action observation using the method of repetition suppression are used to identify a putative neural architecture that supports action understanding at the level of kinematics, object centered goals and ultimately, motor outcomes. These results, based on observation, may match a similar functional anatomic hierarchy for action planning and execution. If this is true, then the findings support a functional anatomic model that is distributed across a set of interconnected brain areas that are differentially recruited for different aspects of goal oriented behavior, rather than a homogeneous mirror neuron system for organizing and understanding all behavior. PMID:17706312
Supermodeling by Synchronization of Alternative SPEEDO Models
NASA Astrophysics Data System (ADS)
Duane, Gregory; Selten, Frank
2016-04-01
The supermodeling approach, wherein different imperfect models of the same objective process are dynamically combined in run-time to reduce systematic error, is tested using SPEEDO - a primitive equation atmospheric model coupled to the CLIO ocean model. Three versions of SPEEDO are defined by parameters that differ in a range that arguably mimics differences among state-of-the-art climate models. A fourth model is taken to represent truth. The "true" ocean drives all three model atmospheres. The three models are also connected to one another at every level, with spatially uniform nudging coefficients that are trained so that the three models, which synchronize with one another, also synchronize with truth when data is continuously assimilated, as in weather prediction. The SPEEDO supermodel is evaluated in weather-prediction mode, with nudging to truth. It is found that the supemodel performs better than any of the three models and marginally better than the best weighted average of the outputs of the three models run separately. To evaluate the utility for climate projection, parameters corresponding to green house gas levels are changed in truth and in the three models. The supermodel formed with inter-model connections from the present-CO2 runs no longer give the optimal configuration for the supermodel in the doubled-CO2 realm, but the supermodel with the previously trained connections is still useful as compared to the separate models or averages of their outputs. In ongoing work, a training algorithm is examined that attempts to match the blocked-zonal index cycle of the SPEEDO model atmosphere to truth, rather than simply minimizing the RMS error in the various fields. Such an approach comes closer to matching the model attractor to the true attractor - the desired effect in climate projection - rather than matching instantaneous states. Gradient descent in a cost function defined over a finite temporal window can indeed be done efficiently. Preliminary results are presented for a crudely defined index cycle.
A Novel Analog Reasoning Paradigm: New Insights in Intellectually Disabled Patients.
Curie, Aurore; Brun, Amandine; Cheylus, Anne; Reboul, Anne; Nazir, Tatjana; Bussy, Gérald; Delange, Karine; Paulignan, Yves; Mercier, Sandra; David, Albert; Marignier, Stéphanie; Merle, Lydie; de Fréminville, Bénédicte; Prieur, Fabienne; Till, Michel; Mortemousque, Isabelle; Toutain, Annick; Bieth, Eric; Touraine, Renaud; Sanlaville, Damien; Chelly, Jamel; Kong, Jian; Ott, Daniel; Kassai, Behrouz; Hadjikhani, Nouchine; Gollub, Randy L; des Portes, Vincent
2016-01-01
Intellectual Disability (ID) is characterized by deficits in intellectual functions such as reasoning, problem-solving, planning, abstract thinking, judgment, and learning. As new avenues are emerging for treatment of genetically determined ID (such as Down's syndrome or Fragile X syndrome), it is necessary to identify objective reliable and sensitive outcome measures for use in clinical trials. We developed a novel visual analogical reasoning paradigm, inspired by the Progressive Raven's Matrices, but appropriate for Intellectually Disabled patients. This new paradigm assesses reasoning and inhibition abilities in ID patients. We performed behavioural analyses for this task (with a reaction time and error rate analysis, Study 1) in 96 healthy controls (adults and typically developed children older than 4) and 41 genetically determined ID patients (Fragile X syndrome, Down syndrome and ARX mutated patients). In order to establish and quantify the cognitive strategies used to solve the task, we also performed an eye-tracking analysis (Study 2). Down syndrome, ARX and Fragile X patients were significantly slower and made significantly more errors than chronological age-matched healthy controls. The effect of inhibition on error rate was greater than the matrix complexity effect in ID patients, opposite to findings in adult healthy controls. Interestingly, ID patients were more impaired by inhibition than mental age-matched healthy controls, but not by the matrix complexity. Eye-tracking analysis made it possible to identify the strategy used by the participants to solve the task. Adult healthy controls used a matrix-based strategy, whereas ID patients used a response-based strategy. Furthermore, etiologic-specific reasoning differences were evidenced between ID patients groups. We suggest that this paradigm, appropriate for ID patients and developmental populations as well as adult healthy controls, provides an objective and quantitative assessment of visual analogical reasoning and cognitive inhibition, enabling testing for the effect of pharmacological or behavioural intervention in these specific populations.
NASA Astrophysics Data System (ADS)
Sweeney, K.; Major, J. J.
2016-12-01
Advances in structure-from-motion (SfM) photogrammetry and point cloud comparison have fueled a proliferation of studies using modern imagery to monitor geomorphic change. These techniques also have obvious applications for reconstructing historical landscapes from vertical aerial imagery, but known challenges include insufficient photo overlap, systematic "doming" induced by photo-spacing regularity, missing metadata, and lack of ground control. Aerial imagery of landscape change in the North Fork Toutle River (NFTR) following the 1980 eruption of Mount St. Helens is a prime dataset to refine methodologies. In particular, (1) 14-μm film scans are available for 1:9600 images at 4-month intervals from 1980 - 1986, (2) the large magnitude of landscape change swamps systematic error and noise, and (3) stable areas (primary deposit features, roads, etc.) provide targets for both ground control and matching to modern lidar. Using AgiSoft PhotoScan, we create digital surface models from the NFTR imagery and examine how common steps in SfM workflows affect results. Tests of scan quality show high-resolution, professional film scans are superior to office scans of paper prints, reducing spurious points related to scan infidelity and image damage. We confirm earlier findings that cropping and rotating images improves point matching and the final surface model produced by the SfM algorithm. We demonstrate how the iterative closest point algorithm, implemented in CloudCompare and using modern lidar as a reference dataset, can serve as an adequate substitute for absolute ground control. Elevation difference maps derived from our surface models of Mount St. Helens show patterns consistent with field observations, including channel avulsion and migration, though systematic errors remain. We suggest that subtracting an empirical function fit to the long-wavelength topographic signal may be one avenue for correcting systematic error in similar datasets.
Variational stereo imaging of oceanic waves with statistical constraints.
Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise
2013-11-01
An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies degrees of freedom. As far as safety questions are concerned knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Extending color primary set in spectral vector error diffusion by multilevel halftoning
NASA Astrophysics Data System (ADS)
Norberg, Ole; Nyström, Daniel
2013-02-01
Ever since its origin in the late 19th century, a color reproduction technology has relied on a trichromatic color reproduction approach. This has been a very successful method and also fundamental for the development of color reproduction devices. Trichromatic color reproduction is sufficient to approximate the range of colors perceived by the human visual system. However, tricromatic systems only have the ability to match colors when the viewing illumination for the reproduction matches that of the original. Furthermore, the advancement of digital printing technology has introduced printing systems with additional color channels. These additional color channels are used to extend the tonal range capabilities in light and dark regions and to increase color gamut. By an alternative approach the addition color channels can also be used to reproduce the spectral information of the original color. A reproduced spectral match will always correspond to original independent of lighting situation. On the other hand, spectral color reproductions also introduce a more complex color processing by spectral color transfer functions and spectral gamut mapping algorithms. In that perspective, spectral vector error diffusion (sVED) look like a tempting approach with a simple workflow where the inverse color transfer function and halftoning is performed simultaneously in one single operation. Essential for the sVED method are the available color primaries, created by mixing process colors. Increased numbers of as well as optimal spectral characteristics of color primaries are expected to significantly improve the color accuracy of the spectral reproduction. In this study, sVED in combination with multilevel halftoning has been applied on a ten channel inkjet system. The print resolution has been reduced and the underlying physical high resolution of the printer has been used to mix additional primaries. With ten ink channels and halfton cells built-up by 2x2 micro dots where each micro dot can be a combination of all ten inks the number of possible ink combinations gets huge. Therefore, the initial study has been focused on including lighter colors to the intrinsic primary set. Results from this study shows that by this approach the color reproduction accuracy increases significantly. The RMS spectral difference to target color for multilevel halftoning is less than 1/6 of the difference achieved by binary halftoning.
Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.
Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R
2016-07-12
In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. Those errors, including horizontal, vertical, longitudinal displacement, roll error in transverse plane, strength error of main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problem, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). According to real commissioning experiences in other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, andmore » chromaticity correction have been done in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.« less
Use of a control test to aid pH assessment of chemical eye injuries.
Connor, A J; Severn, P
2009-11-01
Chemical burns of the eye represent 7.0%-9.9% of all ocular trauma. Initial management of ocular chemical injuries is irrigation of the eye and conjunctival sac until neutralisation of the tear surface pH is achieved.We present a case of alkali injury in which the raised tear film pH seemed to be unresponsive to irrigation treatment. Suspicion was raised about the accuracy of the litmus paper used to test the tear film pH. The error was confirmed by use of a control litmus pH test of the examining doctor's eyes. Errors in litmus paper pH measurement can occur because of difficulty in matching the paper with scale colours and drying of the paper, which produces a darker colour. A small tear film sample can also create difficulty in colour matching, whereas too large a sample can wash away pigment from the litmus paper. Samples measured too quickly after irrigation can result in a falsely neutral pH measurement. Use of faulty or inappropriate materials can also result in errors. We advocate the use of control litmus pH test in all patients. This would highlight errors in pH measurements and aid in the detection of the end point of irrigation.
Physical Activity and Function in Older, Long-term Colorectal Cancer Survivors
Johnson, Brent L.; Trentham-Dietz, Amy; Koltyn, Kelli F.; Colbert, Lisa H.
2009-01-01
Objective Increasing age and cancer history are related to impaired physical function. Since physical activity has been shown to ameliorate age-related functional declines, we evaluated the association between physical activity and function in older, long-term colorectal cancer survivors. Methods In 2006–2007, mailed surveys were sent to colorectal cancer survivors, aged ≥65 years when diagnosed during 1995 – 2000, and identified through a state cancer registry. Information on physical activity, physical function and relevant covariates was obtained and matched to registry data. Analysis of covariance and linear regression were used to compare means and trends in physical function across levels of activity in the final analytic sample of 843 cases. Results A direct, dose-dependent association between physical activity and function was observed (ptrend <.001), with higher SF-36 physical function subscores in those reporting high vs. low activity levels (65.0 ± 1.7 vs. 42.7 ± 1.7 (mean ± standard error)). Walking, gardening, housework, and exercise activities were all independently related to better physical function. Moderate-vigorous intensity activity (ptrend <.001) was associated with function, but light activity (ptrend =0.39) was not. Conclusion Results from this cross-sectional study indicate significant associations between physical activity and physical function in older, long-term colorectal cancer survivors. PMID:19123055
Potential Audiological and MRI Markers of Tinnitus.
Gopal, Kamakshi V; Thomas, Binu P; Nandy, Rajesh; Mao, Deng; Lu, Hanzhang
2017-09-01
Subjective tinnitus, or ringing sensation in the ear, is a common disorder with no accepted objective diagnostic markers. The purpose of this study was to identify possible objective markers of tinnitus by combining audiological and imaging-based techniques. Case-control studies. Twenty adults drawn from our audiology clinic served as participants. The tinnitus group consisted of ten participants with chronic bilateral constant tinnitus, and the control group consisted of ten participants with no history of tinnitus. Each participant with tinnitus was closely matched with a control participant on the basis of age, gender, and hearing thresholds. Data acquisition focused on systematic administration and evaluation of various audiological tests, including auditory-evoked potentials (AEP) and otoacoustic emissions, and magnetic resonance imaging (MRI) tests. A total of 14 objective test measures (predictors) obtained from audiological and MRI tests were subjected to statistical analyses to identify the best predictors of tinnitus group membership. The least absolute shrinkage and selection operator technique for feature extraction, supplemented by the leave-one-out cross-validation technique, were used to extract the best predictors. This approach provided a conservative model that was highly regularized with its error within 1 standard error of the minimum. The model selected increased frontal cortex (FC) functional MRI activity to pure tones matching their respective tinnitus pitch, and augmented AEP wave N₁ amplitude growth in the tinnitus group as the top two predictors of tinnitus group membership. These findings suggest that the amplified responses to acoustic signals and hyperactivity in attention regions of the brain may be a result of overattention among individuals that experience chronic tinnitus. These results suggest that increased functional MRI activity in the FC to sounds and augmented N₁ amplitude growth may potentially be the objective diagnostic indicators of tinnitus. However, due to the small sample size and lack of subgroups within the tinnitus population in this study, more research is needed before generalizing these findings. American Academy of Audiology
Modularization of gradient-index optical design using wavefront matching enabled optimization.
Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H
2016-05-02
This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors studies of landscape change over time. ?? 2005 The Ohio State University.
Rodríguez-Bailón, María; Montoro-Membila, Nuria; Garcia-Morán, Tamara; Arnedo-Montoro, María Luisa; Funes Molina, María Jesús
2015-01-01
In the present study we explored cognitive and functional deficits in patients with multidomain mild cognitive impairment (MCI), patients with dementia, and healthy age-matched control participants using the Cognitive Scale for Basic and Instrumental Activities of Daily Living, a new preliminary informant-based assessment tool. This tool allowed us to evaluate four key cognitive abilities-task memory schema, error detection, problem solving, and task self-initiation-in a range of basic and instrumental activities of daily living (BADL and IADL, respectively). The first part of the present study was devoted to testing the psychometric adequateness of this new informant-based tool and its convergent validity with other global functioning and neuropsychological measures. The second part of the study was aimed at finding the patterns of everyday cognitive factors that best discriminate between the three groups. We found that patients with dementia exhibited impairment in all cognitive abilities in both basic and instrumental activities. By contrast, patients with MCI were found to have preserved task memory schema in both types of ADL; however, such patients exhibited deficits in error detection and task self-initiation but only in IADL. Finally, patients with MCI also showed a generalized problem solving deficit that affected even BADL. Studying various cognitive processes instantiated in specific ADL differing in complexity seems a promising strategy to further understand the specific relationships between cognition and function in these and other cognitively impaired populations.
Feedback and reward processing in high-functioning autism.
Larson, Michael J; South, Mikle; Krauskopf, Erin; Clawson, Ann; Crowley, Michael J
2011-05-15
Individuals with high-functioning autism often display deficits in social interactions and high-level cognitive functions. Such deficits may be influenced by poor ability to process feedback and rewards. The feedback-related negativity (FRN) is an event-related potential (ERP) that is more negative following losses than gains. We examined FRN amplitude in 25 individuals with Autism Spectrum Disorder (ASD) and 25 age- and IQ-matched typically developing control participants who completed a guessing task with monetary loss/gain feedback. Both groups demonstrated a robust FRN that was more negative to loss trials than gain trials; however, groups did not differ in FRN amplitude as a function of gain or loss trials. N1 and P300 amplitudes did not differentiate groups. FRN amplitude was positively correlated with age in individuals with ASD, but not measures of intelligence, anxiety, behavioral inhibition, or autism severity. Given previous findings of reduced-amplitude error-related negativity (ERN) in ASD, we propose that individuals with ASD may process external, concrete, feedback similar to typically developing individuals, but have difficulty with internal, more abstract, regulation of performance. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Comparison of Alternate and Original Items on the Montreal Cognitive Assessment.
Lebedeva, Elena; Huang, Mei; Koski, Lisa
2016-03-01
The Montreal Cognitive Assessment (MoCA) is a screening tool for mild cognitive impairment (MCI) in elderly individuals. We hypothesized that measurement error when using the new alternate MoCA versions to monitor change over time could be related to the use of items that are not of comparable difficulty to their corresponding originals of similar content. The objective of this study was to compare the difficulty of the alternate MoCA items to the original ones. Five selected items from alternate versions of the MoCA were included with items from the original MoCA administered adaptively to geriatric outpatients (N = 78). Rasch analysis was used to estimate the difficulty level of the items. None of the five items from the alternate versions matched the difficulty level of their corresponding original items. This study demonstrates the potential benefits of a Rasch analysis-based approach for selecting items during the process of development of parallel forms. The results suggest that better match of the items from different MoCA forms by their difficulty would result in higher sensitivity to changes in cognitive function over time.
NASA Technical Reports Server (NTRS)
Lancaster, J. E.
1973-01-01
Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical solution to the problem on N-bodies. The earlier first-order solutions, derived by the method of matched asymptotic expansions, have been extended to second order for the purpose of obtaining increased accuracy. The derivation of the second-order solution is summarized by showing the essential steps, some in functional form. The general asymptotic solution has been used as a basis for formulating a number of analytical two-point boundary value solutions. These include earth-to-moon, one- and two-impulse moon-to-earth, and interplanetary solutions. The results show that the accuracies of the asymptotic solutions range from an order of magnitude better than conic approximations to that of numerical integration itself. Also, since no iterations are required, the asymptotic boundary value solutions are obtained in a fraction of the time required for comparable numerically integrated solutions. The subject of minimizing the second-order error is discussed, and recommendations made for further work directed toward achieving a uniform accuracy in all applications.
Egorov, V N; Razumnikova, O M; Perfil'ev, A M; Stupak, V V
2015-01-01
To compare parameters of attention in healthy people and patients with neoplasms in different regions of the cerebral cortex and to evaluate quality of life (QoL) indices with regard to impairment of different attention systems. Twenty patients with oncological lesions of the brain (mean age 56.5±8.8 years) who did not undergo surgery were studied. Tumor localization was confirmed using contrast-enhanced computed tomography, the tumor type was histologically verified. A control group included 18 healthy people matched for age, sex and education level. To determine attention system functions, we developed a computed version of the Attention Network Test. Error rate and reaction time for correct responses to the target stimulus, displayed along with neutral, congruent and incongruent signals, were the indicators of the efficacy of selective processes. QoL indices were assessed using SF-36 health survey questionnaire. The readiness to respond to incoming stimuli was mostly impaired in patients with brain tumors. Efficacy of executive attention, assessed as the increase in the number of errors in selection of visual stimuli, was decreased while temporary parameters of the functions of this system were not changed in patients compared to controls. The SF-36 total score was stable in patients with marked reduction in scores on the Role and Emotional Functioning scales. The most severe health impairment measured on the SF-36 scales of role/social emotional functioning and viability was recorded in patients with the lesions of frontal cortical areas compared to temporal/parietal areas. The relationship between SF-36 Health self-rating and attention systems was found. This finding puts the question of the importance of attention characteristics and QoL for survival prognosis of patients with brain tumors.
Design of analytical failure detection using secondary observers
NASA Technical Reports Server (NTRS)
Sisar, M.
1982-01-01
The problem of designing analytical failure-detection systems (FDS) for sensors and actuators, using observers, is addressed. The use of observers in FDS is related to the examination of the n-dimensional observer error vector which carries the necessary information on possible failures. The problem is that in practical systems, in which only some of the components of the state vector are measured, one has access only to the m-dimensional observer-output error vector, with m or = to n. In order to cope with these cases, a secondary observer is synthesized to reconstruct the entire observer-error vector from the observer output error vector. This approach leads toward the design of highly sensitive and reliable FDS, with the possibility of obtaining a unique fingerprint for every possible failure. In order to keep the observer's (or Kalman filter) false-alarm rate under a certain specified value, it is necessary to have an acceptable matching between the observer (or Kalman filter) models and the system parameters. A previously developed adaptive observer algorithm is used to maintain the desired system-observer model matching, despite initial mismatching or system parameter variations. Conditions for convergence for the adaptive process are obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors, while accurate and fast parameter identification, in both deterministic and stochastic cases, is obtained.
Xu, Xin; Zhang, Qingsong; Muller, Richard P; Goddard, William A
2005-01-01
We derive here the form for the exact exchange energy density for a density that decays with Gaussian-type behavior at long range. This functional is intermediate between the B88 and the PW91 exchange functionals. Using this modified functional to match the form expected for Gaussian densities, we propose the X3LYP extended functional. We find that X3LYP significantly outperforms Becke three parameter Lee-Yang-Parr (B3LYP) for describing van der Waals and hydrogen bond interactions, while performing slightly better than B3LYP for predicting heats of formation, ionization potentials, electron affinities, proton affinities, and total atomic energies as validated with the extended G2 set of atoms and molecules. Thus X3LYP greatly enlarges the field of applications for density functional theory. In particular the success of X3LYP in describing the water dimer (with R(e) and D(e) within the error bars of the most accurate determinations) makes it an excellent candidate for predicting accurate ligand-protein and ligand-DNA interactions. (c) 2005 American Institute of Physics.
Multiconfiguration Pair-Density Functional Theory Is Free From Delocalization Error.
Bao, Junwei Lucas; Wang, Ying; He, Xiao; Gagliardi, Laura; Truhlar, Donald G
2017-11-16
Delocalization error has been singled out by Yang and co-workers as the dominant error in Kohn-Sham density functional theory (KS-DFT) with conventional approximate functionals. In this Letter, by computing the vertical first ionization energy for well separated He clusters, we show that multiconfiguration pair-density functional theory (MC-PDFT) is free from delocalization error. To put MC-PDFT in perspective, we also compare it with some Kohn-Sham density functionals, including both traditional and modern functionals. Whereas large delocalization errors are almost universal in KS-DFT (the only exception being the very recent corrected functionals of Yang and co-workers), delocalization error is removed by MC-PDFT, which bodes well for its future as a step forward from KS-DFT.
Analysis of key technologies in geomagnetic navigation
NASA Astrophysics Data System (ADS)
Zhang, Xiaoming; Zhao, Yan
2008-10-01
Because of the costly price and the error accumulation of high precise Inertial Navigation Systems (INS) and the vulnerability of Global Navigation Satellite Systems (GNSS), the geomagnetic navigation technology, a passive autonomous navigation method, is paid attention again. Geomagnetic field is a natural spatial physical field, and is a function of position and time in near earth space. The navigation technology based on geomagnetic field is researched in a wide range of commercial and military applications. This paper presents the main features and the state-of-the-art of Geomagnetic Navigation System (GMNS). Geomagnetic field models and reference maps are described. Obtaining, modeling and updating accurate Anomaly Magnetic Field information is an important step for high precision geomagnetic navigation. In addition, the errors of geomagnetic measurement using strapdown magnetometers are analyzed. The precise geomagnetic data is obtained by means of magnetometer calibration and vehicle magnetic field compensation. According to the measurement data and reference map or model of geomagnetic field, the vehicle's position and attitude can be obtained using matching algorithm or state-estimating method. The tendency of geomagnetic navigation in near future is introduced at the end of this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction.more » The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error was increased to 1.9 mm, when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.« less
Stochastic reservoir simulation for the modeling of uncertainty in coal seam degasification
Karacan, C. Özgen; Olea, Ricardo A.
2018-01-01
Coal seam degasification improves coal mine safety by reducing the gas content of coal seams and also by generating added value as an energy source. Coal seam reservoir simulation is one of the most effective ways to help with these two main objectives. As in all modeling and simulation studies, how the reservoir is defined and whether observed productions can be predicted are important considerations. Using geostatistical realizations as spatial maps of different coal reservoir properties is a more realistic approach than assuming uniform properties across the field. In fact, this approach can help with simultaneous history matching of multiple wellbores to enhance the confidence in spatial models of different coal properties that are pertinent to degasification. The problem that still remains is the uncertainty in geostatistical simulations originating from the partial sampling of the seam that does not properly reflect the stochastic nature of coal property realizations. Stochastic simulations and using individual realizations, rather than E-type, make evaluation of uncertainty possible. This work is an advancement over Karacan et al. (2014) in the sense of assessing uncertainty that stems from geostatistical maps. In this work, we batched 100 individual realizations of 10 coal properties that were randomly generated to create 100 bundles and used them in 100 separate coal seam reservoir simulations for simultaneous history matching. We then evaluated the history matching errors for each bundle and defined the single set of realizations that would minimize the error for all wells. We further compared the errors with those of E-type and the average realization of the best matches. Unlike in Karacan et al. (2014), which used E-type maps and average of quantile maps, using these 100 bundles created 100 different history match results from separate simulations, and distributions of results for in-place gas quantity, for example, from which uncertainty in coal property realizations could be evaluated. The study helped to determine the realization bundle that consisted of the spatial maps of coal properties, which resulted in minimum error. In addition, it was shown that both E-type and the average of realizations that gave the best match for invidual approximated the same properties resonably. Moreover, the determined realization bundle showed that the study field initially had 151.5 million m3 (cubic meter) of gas and 1.04 million m3 water in the coal, corresponding to Q90 of the entire range of probability for gas and close to Q75 for water. In 2013, in-place fluid amounts decreased to 138.9 million m3 and 0.997 million m3 for gas and water, respectively. PMID:29563647
Neural correlates of lower limbs proprioception: An fMRI study of foot position matching.
Iandolo, Riccardo; Bellini, Alessandro; Saiote, Catarina; Marre, Ilaria; Bommarito, Giulia; Oesingmann, Niels; Fleysher, Lazar; Mancardi, Giovanni Luigi; Casadio, Maura; Inglese, Matilde
2018-05-01
Little is known about the neural correlates of lower limbs position sense, despite the impact that proprioceptive deficits have on everyday life activities, such as posture and gait control. We used fMRI to investigate in 30 healthy right-handed and right-footed subjects the regional distribution of brain activity during position matching tasks performed with the right dominant and the left nondominant foot. Along with the brain activation, we assessed the performance during both ipsilateral and contralateral matching tasks. Subjects had lower errors when matching was performed by the left nondominant foot. The fMRI analysis suggested that the significant regions responsible for position sense are in the right parietal and frontal cortex, providing a first characterization of the neural correlates of foot position matching. © 2018 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Lee, HyeSun; Geisinger, Kurt F.
2016-01-01
The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…
Neural mechanisms of reinforcement learning in unmedicated patients with major depressive disorder.
Rothkirch, Marcus; Tonn, Jonas; Köhler, Stephan; Sterzer, Philipp
2017-04-01
According to current concepts, major depressive disorder is strongly related to dysfunctional neural processing of motivational information, entailing impairments in reinforcement learning. While computational modelling can reveal the precise nature of neural learning signals, it has not been used to study learning-related neural dysfunctions in unmedicated patients with major depressive disorder so far. We thus aimed at comparing the neural coding of reward and punishment prediction errors, representing indicators of neural learning-related processes, between unmedicated patients with major depressive disorder and healthy participants. To this end, a group of unmedicated patients with major depressive disorder (n = 28) and a group of age- and sex-matched healthy control participants (n = 30) completed an instrumental learning task involving monetary gains and losses during functional magnetic resonance imaging. The two groups did not differ in their learning performance. Patients and control participants showed the same level of prediction error-related activity in the ventral striatum and the anterior insula. In contrast, neural coding of reward prediction errors in the medial orbitofrontal cortex was reduced in patients. Moreover, neural reward prediction error signals in the medial orbitofrontal cortex and ventral striatum showed negative correlations with anhedonia severity. Using a standard instrumental learning paradigm we found no evidence for an overall impairment of reinforcement learning in medication-free patients with major depressive disorder. Importantly, however, the attenuated neural coding of reward in the medial orbitofrontal cortex and the relation between anhedonia and reduced reward prediction error-signalling in the medial orbitofrontal cortex and ventral striatum likely reflect an impairment in experiencing pleasure from rewarding events as a key mechanism of anhedonia in major depressive disorder. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The FIM-iHYCOM Model in SubX: Evaluation of Subseasonal Errors and Variability
NASA Astrophysics Data System (ADS)
Green, B.; Sun, S.; Benjamin, S.; Grell, G. A.; Bleck, R.
2017-12-01
NOAA/ESRL/GSD has produced both real-time and retrospective forecasts for the Subseasonal Experiment (SubX) using the FIM-iHYCOM model. FIM-iHYCOM couples the atmospheric Flow-following finite volume Icosahedral Model (FIM) to an icosahedral-grid version of the Hybrid Coordinate Ocean Model (HYCOM). This coupled model is unique in terms of its grid structure: in the horizontal, the icosahedral meshes are perfectly matched for FIM and iHYCOM, eliminating the need for a flux interpolator; in the vertical, both models use adaptive arbitrary Lagrangian-Eulerian hybrid coordinates. For SubX, FIM-iHYCOM initializes four time-lagged ensemble members around each Wednesday, which are integrated forward to provide 32-day forecasts. While it has already been shown that this model has similar predictive skill as NOAA's operational CFSv2 in terms of the RMM index, FIM-iHYCOM is still fairly new and thus its overall performance needs to be thoroughly evaluated. To that end, this study examines model errors as a function of forecast lead week (1-4) - i.e., model drift - for key variables including 2-m temperature, precipitation, and SST. Errors are evaluated against two reanalysis products: CFSR, from which FIM-iHYCOM initial conditions are derived, and the quasi-independent ERA-Interim. The week 4 error magnitudes are similar between FIM-iHYCOM and CFSv2, albeit with different spatial distributions. Also, intraseasonal variability as simulated in these two models will be compared with reanalyses. The impact of hindcast frequency (4 times per week, once per week, or once per day) on the model climatology is also examined to determine the implications for systematic error correction in FIM-iHYCOM.
Teaching identity matching of braille characters to beginning braille readers.
Toussaint, Karen A; Scheithauer, Mindy C; Tiger, Jeffrey H; Saunders, Kathryn J
2017-04-01
We taught three children with visual impairments to make tactile discriminations of the braille alphabet within a matching-to-sample format. That is, we presented participants with a braille character as a sample stimulus, and they selected the matching stimulus from a three-comparison array. In order to minimize participant errors, we initially arranged braille characters into training sets in which there was a maximum difference in the number of dots comprising the target and nontarget comparison stimuli. As participants mastered these discriminations, we increased the similarity between target and nontarget comparisons (i.e., an approximation of stimulus fading). All three participants' accuracy systematically increased following the introduction of this identity-matching procedure. © 2017 Society for the Experimental Analysis of Behavior.
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
Effect of acoustic similarity on short-term auditory memory in the monkey
Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo
2013-01-01
Recent evidence suggests that the monkey’s short-term memory in audition depends on a passively retained sensory trace as opposed to a trace reactivated from long-term memory for use in working memory. Reliance on a passive sensory trace could render memory particularly susceptible to confusion between sounds that are similar in some acoustic dimension. If so, then in delayed matching-to-sample, the monkey’s performance should be predicted by the similarity in the salient acoustic dimension between the sample and subsequent test stimulus, even at very short delays. To test this prediction and isolate the acoustic features relevant to short-term memory, we examined the pattern of errors made by two rhesus monkeys performing a serial, auditory delayed match-to-sample task with interstimulus intervals of 1 s. The analysis revealed that false-alarm errors did indeed result from similarity-based confusion between the sample and the subsequent nonmatch stimuli. Manipulation of the stimuli showed that removal of spectral cues was more disruptive to matching behavior than removal of temporal cues. In addition, the effect of acoustic similarity on false-alarm response was stronger at the first nonmatch stimulus than at the second one. This pattern of errors would be expected if the first nonmatch stimulus overwrote the sample’s trace, and suggests that the passively retained trace is not only vulnerable to similarity-based confusion but is also highly susceptible to overwriting. PMID:23376550
Effect of acoustic similarity on short-term auditory memory in the monkey.
Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo
2013-04-01
Recent evidence suggests that the monkey's short-term memory in audition depends on a passively retained sensory trace as opposed to a trace reactivated from long-term memory for use in working memory. Reliance on a passive sensory trace could render memory particularly susceptible to confusion between sounds that are similar in some acoustic dimension. If so, then in delayed matching-to-sample, the monkey's performance should be predicted by the similarity in the salient acoustic dimension between the sample and subsequent test stimulus, even at very short delays. To test this prediction and isolate the acoustic features relevant to short-term memory, we examined the pattern of errors made by two rhesus monkeys performing a serial, auditory delayed match-to-sample task with interstimulus intervals of 1 s. The analysis revealed that false-alarm errors did indeed result from similarity-based confusion between the sample and the subsequent nonmatch stimuli. Manipulation of the stimuli showed that removal of spectral cues was more disruptive to matching behavior than removal of temporal cues. In addition, the effect of acoustic similarity on false-alarm response was stronger at the first nonmatch stimulus than at the second one. This pattern of errors would be expected if the first nonmatch stimulus overwrote the sample's trace, and suggests that the passively retained trace is not only vulnerable to similarity-based confusion but is also highly susceptible to overwriting. Copyright © 2013 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavoie, Caroline; Higgins, Jane; Bissonnette, Jean-Pierre
2012-12-01
Purpose: To compare the relative accuracy of 2 image guided radiation therapy methods using carina vs spine as landmarks and then to identify which landmark is superior relative to tumor coverage. Methods and Materials: For 98 lung patients, 2596 daily image-guidance cone-beam computed tomography scans were analyzed. Tattoos were used for initial patient alignment; then, spine and carina registrations were performed independently. A separate analysis assessed the adequacy of gross tumor volume, internal target volume, and planning target volume coverage on cone-beam computed tomography using the initial, middle, and final fractions of radiation therapy. Coverage was recorded for primary tumormore » (T), nodes (N), and combined target (T+N). Three scenarios were compared: tattoos alignment, spine registration, and carina registration. Results: Spine and carina registrations identified setup errors {>=}5 mm in 35% and 46% of fractions, respectively. The mean vector difference between spine and carina matching had a magnitude of 3.3 mm. Spine and carina improved combined target coverage, compared with tattoos, in 50% and 34% (spine) to 54% and 46% (carina) of the first and final fractions, respectively. Carina matching showed greater combined target coverage in 17% and 23% of fractions for the first and final fractions, respectively; with spine matching, this was only observed in 4% (first) and 6% (final) of fractions. Carina matching provided superior nodes coverage at the end of radiation compared with spine matching (P=.0006), without compromising primary tumor coverage. Conclusion: Frequent patient setup errors occur in locally advanced lung cancer patients. Spine and carina registrations improved combined target coverage throughout the treatment course, but carina matching provided superior combined target coverage.« less
Practical relevance of pattern uniqueness in forensic science.
Jayaprakash, Paul T
2013-09-10
Uniqueness being unprovable, it has recently been argued that individualization in forensic science is irrelevant and, probability, as applied for DNA profiles, should be applied for all identifications. Critiques against uniqueness have omitted physical matching, a realistic and tangible individualization that supports uniqueness. Describing case examples illustrating pattern matches including physical matching, it is indicated that individualizations are practically relevant for forensic science as they establish facts on a definitive basis providing firm leads benefitting criminal investigation. As a tenet of forensic identification, uniqueness forms a fundamental paradigm relevant for individualization. Evidence on the indeterministic and stochastic causal pathways of characteristics in patterns available in the related fields of science sufficiently supports the proposition of uniqueness. Characteristics involved in physical matching and matching achieved in patterned evidence existing in the state of nature are not events amenable for counting; instead these are ensemble of visible units occupying the entire pattern area stretching the probability of re-occurrence of a verisimilitude pattern into infinity offering epistemic support to uniqueness. Observational methods are as respectable as instrumental or statistical methods since they are capable of generating results that are tangible and obviously valid as in physical matching. Applying the probabilistic interpretation used for DNA profiles to the other patterns would be unbefitting since these two are disparate, the causal pathways of the events, the loci, in the manipulated DNA profiles being determinable. While uniqueness enables individualizations, it does not vouch for eliminating errors. Instead of dismissing uniqueness and individualization, accepting errors as human or system failures and seeking remedial measures would benefit forensic science practice and criminal investigation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Spirality: A Noval Way to Measure Spiral Arm Pitch Angle
NASA Astrophysics Data System (ADS)
Shields, Douglas W.; Boe, Benjamin; Henderson, Casey L.; Hartley, Matthew; Davis, Benjamin L.; Pour Imani, Hamed; Kennefick, Daniel; Kennefick, Julia D.
2015-01-01
We present the MATLAB code Spirality, a novel method for measuring spiral arm pitch angles by fitting galaxy images to spiral templates of known pitch. For a given pitch angle template, the mean pixel value is found along each of typically 1000 spiral axes. The fitting function, which shows a local maximum at the best-fit pitch angle, is the variance of these means. Error bars are found by varying the inner radius of the measurement annulus and finding the standard deviation of the best-fit pitches. Computation time is typically on the order of 2 minutes per galaxy, assuming at least 8 GB of working memory. We tested the code using 128 synthetic spiral images of known pitch. These spirals varied in the number of spiral arms, pitch angle, degree of logarithmicity, radius, SNR, inclination angle, bar length, and bulge radius. A correct result is defined as a result that matches the true pitch within the error bars, with error bars no greater than ±7°. For the non-logarithmic spiral sample, the correct answer is similarly defined, with the mean pitch as function of radius in place of the true pitch. For all synthetic spirals, correct results were obtained so long as SNR > 0.25, the bar length was no more than 60% of the spiral's diameter (when the bar was included in the measurement), the input center of the spiral was no more than 6% of the spiral radius away from the true center, and the inclination angle was no more than 30°. The synthetic spirals were not deprojected prior to measurement. The code produced the correct result for all barred spirals when the measurement annulus was placed outside the bar. Additionally, we compared the code's results against 2DFFT results for 203 visually selected spiral galaxies in GOODS North and South. Among the entire sample, Spirality's error bars overlapped 2DFFT's error bars 64% of the time. For those galaxies in which Source code is available by email request from the primary author.
Correcting systematic bias and instrument measurement drift with mzRefinery
Gibbons, Bryson C.; Chambers, Matthew C.; Monroe, Matthew E.; ...
2015-08-04
Systematic bias in mass measurement adversely affects data quality and negates the advantages of high precision instruments. We introduce the mzRefinery tool into the ProteoWizard package for calibration of mass spectrometry data files. Using confident peptide spectrum matches, three different calibration methods are explored and the optimal transform function is chosen. After calibration, systematic bias is removed and the mass measurement errors are centered at zero ppm. Because it is part of the ProteoWizard package, mzRefinery can read and write a wide variety of file formats. In conclusion, we report on availability; the mzRefinery tool is part of msConvert, availablemore » with the ProteoWizard open source package at http://proteowizard.sourceforge.net/« less
Simulating a transmon implementation of the surface code, Part I
NASA Astrophysics Data System (ADS)
Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo
Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Subject-specific finite-element modeling of normal aortic valve biomechanics from 3D+t TEE images.
Labrosse, Michel R; Beller, Carsten J; Boodhwani, Munir; Hudson, Christopher; Sohmer, Benjamin
2015-02-01
In the past decades, developments in transesophageal echocardiography (TEE) have opened new horizons in reconstructive surgery of the aortic valve (AV), whereby corrections are made to normalize the geometry and function of the valve, and effectively treat leaks. To the best of our knowledge, we propose the first integrated framework to process subject-specific 3D+t TEE AV data, determine age-matched material properties for the aortic and leaflet tissues, build a finite element model of the unpressurized AV, and simulate the AV function throughout a cardiac cycle. For geometric reconstruction purposes, dedicated software was created to acquire the 3-D coordinates of 21 anatomical landmarks of the AV apparatus in a systematic fashion. Measurements from ten 3D+t TEE datasets of normal AVs were assessed for inter- and intra-observer variability. These tests demonstrated mean measurement errors well within the acceptable range. Simulation of a complete cardiac cycle was successful for all ten valves and validated the novel schemes introduced to evaluate age-matched material properties and iteratively scale the unpressurized dimensions of the valves such that, given the determined material properties, the dimensions measured in vivo closely matched those simulated in late diastole. The leaflet coaptation area, describing the quality of the sealing of the valve, was measured directly from the medical images and was also obtained from the simulations; both approaches correlated well. The mechanical stress values obtained from the simulations may be interpreted in a comparative sense whereby higher values are indicative of higher risk of tearing and/or development of calcification. Copyright © 2014 Elsevier B.V. All rights reserved.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression method in selecting the optimum isotherm was made to the experimental equilibrium data of basic red 9 sorption by activated carbon. The r(2) was used to select the best fit linear theoretical isotherm. In the case of non-linear regression method, six error functions namely coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS) were used to predict the parameters involved in the two and three parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For two parameter isotherm, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of three parameter isotherm, r(2) was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor to choose the optimum isotherm. In addition to the size of error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K(2) was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric errors correction method for CNC- controlled multi-axis systems (machine-tools, CMMs etc.). The Kalman’s concept of “Control and Observation” is used. A versatile multi-function laser interferometer is used as Observer in order to measure machine’s error functions. A systematic error map of machine’s workspace is produced based on error functions measurements. The error map results into error correction strategy. The article proposes a new method of error correction strategy forming. The method is based on error distribution within machine’s workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within maximal workspace zone. The results are confirmed by error correction of precision CNC machine-tools.
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Age-Related Changes in Bimanual Instrument Playing with Rhythmic Cueing
Kim, Soo Ji; Cho, Sung-Rae; Yoo, Ga Eul
2017-01-01
Deficits in bimanual coordination of older adults have been demonstrated to significantly limit their functioning in daily life. As a bimanual sensorimotor task, instrument playing has great potential for motor and cognitive training in advanced age. While the process of matching a person’s repetitive movements to auditory rhythmic cueing during instrument playing was documented to involve motor and attentional control, investigation into whether the level of cognitive functioning influences the ability to rhythmically coordinate movement to an external beat in older populations is relatively limited. Therefore, the current study aimed to examine how timing accuracy during bimanual instrument playing with rhythmic cueing differed depending on the degree of participants’ cognitive aging. Twenty one young adults, 20 healthy older adults, and 17 older adults with mild dementia participated in this study. Each participant tapped an electronic drum in time to the rhythmic cueing provided using both hands simultaneously and in alternation. During bimanual instrument playing with rhythmic cueing, mean and variability of synchronization errors were measured and compared across the groups and the tempo of cueing during each type of tapping task. Correlations of such timing parameters with cognitive measures were also analyzed. The results showed that the group factor resulted in significant differences in the synchronization errors-related parameters. During bimanual tapping tasks, cognitive decline resulted in differences in synchronization errors between younger adults and older adults with mild dimentia. Also, in terms of variability of synchronization errors, younger adults showed significant differences in maintaining timing performance from older adults with and without mild dementia, which may be attributed to decreased processing time for bimanual coordination due to aging. Significant correlations were observed between variability of synchronization errors and performance of cognitive tasks involving executive control and cognitive flexibility when asked for bimanual coordination in response to external timing cues at adjusted tempi. Also, significant correlations with cognitive measures were more prevalent in variability of synchronization errors during alternative tapping compared to simultaneous tapping. The current study supports that bimanual tapping may be predictive of cognitive processing of older adults. Also, tempo and type of movement required for instrument playing both involve cognitive and motor loads at different levels, and such variables could be important factors for determining the complexity of the task and the involved task requirements for interventions using instrument playing. PMID:29085309
Gudra, Tadeusz; Opieliński, Krzysztof J
2002-05-01
In different solutions of ultrasonic transducers radiating acoustic energy into the air there occurs the problem of the proper selection of the acoustic impedance of one or more matching layers. The goal of this work was a computer analysis of the influence of acoustic impedance on the transfer function of piezoceramic transducers equipped with matching layers. Cases of resonance and non-resonance matching impedance in relation to the transfer function and the energy transmission coefficient for solid state-air systems were analysed. With stable thickness of matching layers the required shape of the transfer function can be obtained through proper choice of acoustic impedance were built (e.g. maximal flat function). The proper choice of acoustic impedance requires an elaboration of precise methods of synthesis of matching systems. Using the known matching criteria (Chebyshev's, DeSilets', Souquet's), the transfer function characteristics of transducers equipped with one, two, and three matching layers as well as the optimisation methods of the energy transmission coefficient were presented. The influence of the backside load of the transducer on the shape of transfer function was also analysed. The calculation results of this function for different loads of the transducer backside without and with the different matching layers were presented. The proper load selection allows us to obtain the desired shape of the transfer function, which determines the pulse shape generated by the transducer.
Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S
2010-08-01
The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.
Zheng, Zane Z.; Munhall, Kevin G; Johnsrude, Ingrid S
2009-01-01
The fluency and reliability of speech production suggests a mechanism that links motor commands and sensory feedback. Here, we examine the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not, and examining the overlap with the network recruited during passive listening to speech sounds. We use real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word (‘Ted’) and either heard this clearly, or heard voice-gated masking noise. We compare this to when they listened to yoked stimuli (identical recordings of ‘Ted’ or noise) without speaking. Activity along the superior temporal sulcus (STS) and superior temporal gyrus (STG) bilaterally was significantly greater if the auditory stimulus was a) processed as the auditory concomitant of speaking and b) did not match the predicted outcome (noise). The network exhibiting this Feedback type by Production/Perception interaction includes an STG/MTG region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts, and that processes an error signal in speech-sensitive regions when this and the sensory data do not match. PMID:19642886
Moran-Santa Maria, M M; Baker, N L; McRae-Clark, A L; Prisciandaro, J J; Brady, K T
2016-05-01
Deficits in executive function have been associated with risk for relapse. Data from previous studies suggest that relapse may be triggered by stress and drug-paired cues and that there are significant sex differences in the magnitude of these responses. The aim of this study was to examine the impact of the pharmacological stressor and alpha-2 adrenergic receptor antagonist yohimbine and cocaine cues on executive function in cocaine-dependent men and women. In a double-blind placebo controlled cross-over study, cocaine-dependent men (n=12), cocaine-dependent women (n=27), control men (n=31) and control women (n=25) received either yohimbine or placebo prior to two cocaine cue exposure sessions. Participants performed the Connors' Continuous Performance Test II prior to medication/placebo administration and immediately after each cue exposure session Healthy controls had a significant increase in commission errors under the yohimbine condition [RR (95% CI)=1.1 (1.0-1.3), χ(2)1=2.0, p=0.050]. Cocaine-dependent individuals exhibited a significant decrease in omission errors under the yohimbine condition [RR (95% CI)=0.6 (0.4-0.8), χ(2)1=8.6, p=0.003]. Cocaine-dependent women had more omission errors as compared to cocaine-dependent men regardless of treatment [RR (95% CI)=7.2 (3.6-14.7), χ(2)1=30.1, p<0.001]. Cocaine-dependent women exhibited a slower hit reaction time as compared to cocaine-dependent men [Female 354 ± 13 vs. Male 415 ± 14; t89=2.6, p=0.012]. These data add to a growing literature demonstrating significant sex differences in behaviors associated with relapse in cocaine-dependent individuals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Fiber-Optic Linear Displacement Sensor Based On Matched Interference Filters
NASA Astrophysics Data System (ADS)
Fuhr, Peter L.; Feener, Heidi C.; Spillman, William B.
1990-02-01
A fiber optic linear displacement sensor has been developed in which a pair of matched interference filters are used to encode linear position on a broadband optical signal as relative intensity variations. As the filters are displaced, the optical beam illuminates varying amounts of each filter. Determination of the relative intensities at each filter pairs' passband is based on measurements acquired with matching filters and photodetectors. Source power variation induced errors are minimized by basing determination of linear position on signal Visibility. A theoretical prediction of the sensor's performance is developed and compared with experiments performed in the near IR spectral region using large core multimode optical fiber.
Marcos-Vidal, Luis; Martínez-García, Magdalena; Pretus, Clara; Garcia-Garcia, David; Martínez, Kenia; Janssen, Joost; Vilarroya, Oscar; Castellanos, Francisco X; Desco, Manuel; Sepulcre, Jorge; Carmona, Susanna
2018-06-01
Previous studies have associated Attention-Deficit/Hyperactivity Disorder (ADHD) with a maturational lag of brain functional networks. Functional connectivity of the human brain changes from primarily local to more distant connectivity patterns during typical development. Under the maturational lag hypothesis, we expect children with ADHD to exhibit increased local connectivity and decreased distant connectivity compared with neurotypically developing (ND) children. We applied a graph-theory method to compute local and distant connectivity levels and cross-sectionally compared them in a sample of 120 children with ADHD and 120 age-matched ND children (age range = 7-17 years). In addition, we measured if potential group differences in local and distant connectivity were stable across the age range considered. Finally, we assessed the clinical relevance of observed group differences by correlating the connectivity levels and ADHD symptoms severity separately for each group. Children with ADHD exhibited more local connectivity than age-matched ND children in multiple brain regions, mainly overlapping with default mode, fronto-parietal and ventral attentional functional networks (p < .05- threshold free-cluster enhancement-family-wise error). We detected an atypical developmental pattern of local connectivity in somatomotor regions, that is, decreases with age in ND children, and increases with age in children with ADHD. Furthermore, local connectivity within somatomotor areas correlated positively with clinical severity of ADHD symptoms, both in ADHD and ND children. Results suggest an immature functional state of multiple brain networks in children with ADHD. Whereas the ADHD diagnosis is associated with the integrity of the system comprising the fronto-parietal, default mode and ventral attentional networks, the severity of clinical symptoms is related to atypical functional connectivity within somatomotor areas. Additionally, our findings are in line with the view of ADHD as a disorder of deviated maturational trajectories, mainly affecting somatomotor areas, rather than delays that normalize with age. © 2018 Wiley Periodicals, Inc.
SU-E-J-15: A Patient-Centered Scheme to Mitigate Impacts of Treatment Setup Error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; Southern Medical University, Guangzhou; Tian, Z
2014-06-01
Purpose: Current Intensity Modulated Radiation Therapy (IMRT) is plan-centered. At each treatment fraction, we position the patient to match the setup in treatment plan. Inaccurate setup can compromise delivered dose distribution, and hence leading to suboptimal treatments. Moreover, current setup approach via couch shift under image guidance can correct translational errors, while rotational and deformation errors are hard to address. To overcome these problems, we propose in this abstract a patient-centered scheme to mitigate impacts of treatment setup errors. Methods: In the patient-centered scheme, we first position the patient on the couch approximately matching the planned-setup. Our Supercomputing Online Replanningmore » Environment (SCORE) is then employed to design an optimal treatment plan based on the daily patient geometry. It hence mitigates the impacts of treatment setup error and reduces the requirements on setup accuracy. We have conducted simulations studies in 10 head-and-neck (HN) patients to investigate the feasibility of this scheme. Rotational and deformation setup errors were simulated. Specifically, 1, 3, 5, 7 degrees of rotations were put on pitch, roll, and yaw directions; deformation errors were simulated by splitting neck movements into four basic types: rotation, lateral bending, flexion and extension. Setup variation ranges are based on observed numbers in previous studies. Dosimetric impacts of our scheme were evaluated on PTVs and OARs in comparison with original plan dose with original geometry and original plan recalculated dose with new setup geometries. Results: With conventional plan-centered approach, setup error could lead to significant PTV D99 decrease (−0.25∼+32.42%) and contralateral-parotid Dmean increase (−35.09∼+42.90%). The patientcentered approach is effective in mitigating such impacts to 0∼+0.20% and −0.03∼+5.01%, respectively. Computation time is <128 s. Conclusion: Patient-centered scheme is proposed to mitigate setup error impacts using replanning. Its superiority in terms of dosimetric impacts and feasibility has been shown through simulation studies on HN cases.« less
Rational approximations of f(R) cosmography through Pad'e polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z>1, we take into account the Pad&apose rational approximations which consist in performing expansions converging at high redshift domains. Particularly, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and we easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe evolution at redshift much higher than standard cosmographic approaches. We therefore match these outcomes with small redshift constraints got by framing the f(R) cosmology through Taylor series around 0zsimeq . This gives rise to a calibration procedure with small redshift that enables the definitions of polynomial approximations up to zsimeq 10. Last but not least, we show discrepancies with the standard cosmological model which go towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
White, Stuart F.; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R.; Grillon, Christian; Blair, Karina S.
2017-01-01
Objective Deficits in reinforcement-based decision-making have been reported in Generalized Anxiety Disorder. However, the pathophysiology of these deficits is largely unknown, extant studies have mainly examined youth and the integrity of core functional processes underpinning decision-making remain undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE: the difference between received and expected reinforcement) is disrupted in Generalized Anxiety Disorder. The current study addresses these issues in adults with the disorder. Methods Forty-six un-medicated individuals with Generalized Anxiety Disorder and 32 healthy controls group-matched on IQ, gender and age, completed a passive avoidance task while undergoing functional MRI. Results Behaviorally, individuals with Generalized Anxiety Disorder showed impaired reinforcement-based decision-making. Imaging results revealed that during feedback, individuals with Generalized Anxiety Disorder relative to healthy controls showed a reduced correlation between PE and activity within ventromedial prefrontal cortex, ventral striatum and other structures implicated in decision-making. In addition, individuals with Generalized Anxiety Disorder relative to healthy participants showed a reduced correlation between punishment, but not reward, PEs and activity within bilateral lentiform nucleus/putamen. Conclusions This is the first study to identify computational impairments during decision-making in Generalized Anxiety Disorder. PE signaling is significantly disrupted in individuals with the disorder and may underpin the decision-making deficits observed in patients with GAD. PMID:27631963
Detection of digital FSK using a phase-locked loop
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Simon, M. K.
1975-01-01
A theory is presented for the design of a digital FSK receiver which employs a phase-locked loop to set up the desired matched filter as the arriving signal frequency switches. The developed mathematical model makes it possible to establish the error probability performance of systems which employ a class of digital FM modulations. The noise mechanism which accounts for decision errors is modeled on the basis of the Meyr distribution and renewal Markov process theory.
NASA Technical Reports Server (NTRS)
Seasholtz, R. G.
1977-01-01
A laser Doppler velocimeter (LDV) built for use in the Lewis Research Center's turbine stator cascade facilities is described. The signal processing and self contained data processing are based on a computing counter. A procedure is given for mode matching the laser to the probe volume. An analysis is presented of biasing errors that were observed in turbulent flow when the mean flow was not normal to the fringes.
Feature instructions improve face-matching accuracy
Bindemann, Markus
2018-01-01
Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration for 3D curves (composed by 3D set of points). The method was evaluated in the task of registration of 3D superficial vessels of the brain where it was used to match vessel centerline points. It consists of a combination of the Coherent Point Drift (CPD) and the Thin-Plate Spline (TPS) semilandmarks. The CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessels centerlines to simulate brain bulging and sinking, using a TPS deformation where a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation(T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
Daigle, Daniel; Costerg, Agnès; Plisson, Anne; Ruberto, Noémia; Varin, Joëlle
2016-05-01
For children with dyslexia, learning to write constitutes a great challenge. There has been consensus that the explanation for these learners' delay is related to a phonological deficit. Results from studies designed to describe dyslexic children's spelling errors are not always as clear concerning the role of phonological processes as those found in reading studies. In irregular languages like French, spelling abilities involve other processes than phonological processes. The main goal of this study was to describe the relative contribution of these other processes in dyslexic children's spelling ability. In total, 32 francophone dyslexic children with a mean age of 11.4 years were compared with 24 reading-age matched controls (RA) and 24 chronological-age matched controls (CA). All had to write a text that was analysed at the graphemic level. All errors were classified as either phonological, morphological, visual-orthographic or lexical. Results indicated that dyslexic children's spelling ability lagged behind not only that of the CA group but also of the RA group. Because the majority of errors, in all groups, could not be explained by inefficiency of phonological processing, the importance of visual knowledge/processes will be discussed as a complementary explanation of dyslexic children's delay in writing. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Solar maximum mission fine pointing sun sensor dawn and dusk errors flight data and model analysis
NASA Technical Reports Server (NTRS)
Kulp, D. R.
1988-01-01
SMM flight system control errors occurring at spacecraft dawn and dusk are analyzed. The errors are associated with the fine pointing sun sensor (FPSS), which is a primary component of the SMM attitude control system. It is shown that the source of the FPSS dawn/dusk distortion is the incomplete masking of sunlight reflected off the earth by the optical baffle covering the FPSS sensor heads onboard the SMM during periods of orbit dawn and dusk. For the most part, the modeled behavior of the FPSS under dawn and dusk lighting conditions matches the observed behavior in the SMM flight data.
2014-01-01
Background The combination of single-switch access technology and scanning is the most promising means of augmentative and alternative communication for many children with severe physical disabilities. However, the physical impairment of the child and the technology’s limited ability to interpret the child’s intentions often lead to false positives and negatives (corresponding to accidental and missed selections, respectively) occurring at rates that frustrate the user and preclude functional communication. Multiple psychophysiological studies have associated cardiac deceleration and increased phasic electrodermal activity with self-realization of errors among able-bodied individuals. Thus, physiological measurements have potential utility at enhancing single-switch access, provided that such prototypical autonomic responses exist in persons with profound disabilities. Methods The present case series investigated the autonomic responses of three pediatric single-switch users with severe spastic quadriplegic cerebral palsy, in the context of a single-switch letter matching activity. Each participant exhibited distinct autonomic responses to activity engagement. Results Our analysis confirmed the presence of the autonomic response pattern of cardiac deceleration and increased phasic electrodermal activity following true positives, false positives and false negatives errors, but not subsequent to true negative outcomes. Conclusions These findings suggest that there may be merit in complementing single-switch input with autonomic measurements to improve augmentative and alternative communications for pediatric access technology users. PMID:24607065
Leung, Brian; Chau, Tom
2014-03-08
The combination of single-switch access technology and scanning is the most promising means of augmentative and alternative communication for many children with severe physical disabilities. However, the physical impairment of the child and the technology's limited ability to interpret the child's intentions often lead to false positives and negatives (corresponding to accidental and missed selections, respectively) occurring at rates that frustrate the user and preclude functional communication. Multiple psychophysiological studies have associated cardiac deceleration and increased phasic electrodermal activity with self-realization of errors among able-bodied individuals. Thus, physiological measurements have potential utility at enhancing single-switch access, provided that such prototypical autonomic responses exist in persons with profound disabilities. The present case series investigated the autonomic responses of three pediatric single-switch users with severe spastic quadriplegic cerebral palsy, in the context of a single-switch letter matching activity. Each participant exhibited distinct autonomic responses to activity engagement. Our analysis confirmed the presence of the autonomic response pattern of cardiac deceleration and increased phasic electrodermal activity following true positives, false positives and false negatives errors, but not subsequent to true negative outcomes. These findings suggest that there may be merit in complementing single-switch input with autonomic measurements to improve augmentative and alternative communications for pediatric access technology users.
Grimm, Oliver; Heinz, Andreas; Walter, Henrik; Kirsch, Peter; Erk, Susanne; Haddad, Leila; Plichta, Michael M; Romanczuk-Seiferth, Nina; Pöhland, Lydia; Mohnke, Sebastian; Mühleisen, Thomas W; Mattheisen, Manuel; Witt, Stephanie H; Schäfer, Axel; Cichon, Sven; Nöthen, Markus; Rietschel, Marcella; Tost, Heike; Meyer-Lindenberg, Andreas
2014-05-01
Attenuated ventral striatal response during reward anticipation is a core feature of schizophrenia that is seen in prodromal, drug-naive, and chronic schizophrenic patients. Schizophrenia is highly heritable, raising the possibility that this phenotype is related to the genetic risk for the disorder. To examine a large sample of healthy first-degree relatives of schizophrenic patients and compare their neural responses to reward anticipation with those of carefully matched controls without a family psychiatric history. To further support the utility of this phenotype, we studied its test-retest reliability, its potential brain structural contributions, and the effects of a protective missense variant in neuregulin 1 (NRG1) linked to schizophrenia by meta-analysis (ie, rs10503929). Examination of a well-established monetary reward anticipation paradigm during functional magnetic resonance imaging at a university hospital; voxel-based morphometry; test-retest reliability analysis of striatal activations in an independent sample of 25 healthy participants scanned twice with the same task; and imaging genetics analysis of the control group. A total of 54 healthy first-degree relatives of schizophrenic patients and 80 controls matched for demographic, psychological, clinical, and task performance characteristics were studied. Blood oxygen level-dependent response during reward anticipation, analysis of intraclass correlations of functional contrasts, and associations between striatal gray matter volume and NRG1 genotype. Compared with controls, healthy first-degree relatives showed a highly significant decrease in ventral striatal activation during reward anticipation (familywise error-corrected P < .03 for multiple comparisons across the whole brain). Supplemental analyses confirmed that the identified systems-level functional phenotype is reliable (with intraclass correlation coefficients of 0.59-0.73), independent of local gray matter volume (with no corresponding group differences and no correlation to function, and with all uncorrected P values >.05), and affected by the NRG1 genotype (higher striatal responses in controls with the protective rs10503929 C allele; familywise error-corrected P < .03 for ventral striatal response). Healthy first-degree relatives of schizophrenic patients show altered striatal activation during reward anticipation in a directionality and localization consistent with prior patient findings. This provides evidence for a functional neural system mechanism related to familial risk. The phenotype can be assessed reliably, is independent of alterations in striatal structure, and is influenced by a schizophrenia candidate gene variant in NRG1. These data encourage us to further investigate the genetic and molecular contributions to this phenotype.
Guardia-Olmos, Joan; Zarabozo-Hurtado, Daniel; Peró-Cebollero, Maribe; Gudayol-Farré, Esteban; Gómez-Velázquez, Fabiola R; González-Garrido, Andrés
2017-12-04
The study of orthographic errors in a transparent language such as Spanish is an important topic in relation to writing acquisition because in Spanish it is common to write pseudohomophones as valid words. The main objective of the present study was to explore the possible differences in activation patterns in brain areas while processing pseudohomophone orthographic errors between participants with high (High Spelling Skills (HSS)) and low (Low Spelling Skills (LSS)) spelling orthographic abilities. We hypothesize that (a) the detection of orthographic errors will activate bilateral inferior frontal gyri, and that (b) this effect will be greater in the HSS group. Two groups of 12 Mexican participants, each matched by age, were formed based on their results in a group of spelling-related ad hoc tests: HSS and LSS groups. During the fMRI session, two experimental tasks were applied involving correct and pseudohomophone substitution of Spanish words. First, a spelling recognition task and second a letter searching task. The LSS group showed, as expected, a lower number of correct responses (F(1, 21) = 52.72, p <.001, η2 = .715) and higher reaction times compared to the HSS group for the spelling task (F(1, 21) = 60.03, p <.001, η2 = .741). However, this pattern was reversed when the participants were asked to decide on the presence of a vowel in the words, regardless of spelling. The fMRI data showed an engagement of the right inferior frontal gyrus in HSS group during the spelling task. However, temporal, frontal, and subcortical brain regions of the LSS group were activated during the same task.
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perception (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function type and loss functions used.
Cortical brain development in nonpsychotic siblings of patients with childhood-onset schizophrenia.
Gogtay, Nitin; Greenstein, Deanna; Lenane, Marge; Clasen, Liv; Sharp, Wendy; Gochman, Pete; Butler, Philip; Evans, Alan; Rapoport, Judith
2007-07-01
Cortical gray matter (GM) loss is marked and progressive in childhood-onset schizophrenia (COS) during adolescence but becomes more circumscribed by early adulthood. Nonpsychotic siblings of COS probands could help evaluate whether the cortical GM abnormalities are familial/trait markers. To map cortical development in nonpsychotic siblings of COS probands. Using an automated measurement and prospectively acquired anatomical brain magnetic resonance images, we mapped cortical GM thickness in healthy full siblings (n = 52, 113 scans; age 8 through 28 years) of patients with COS, contrasting them with age-, sex-, and scan interval-matched healthy controls (n = 52, 108 scans). The false-discovery rate procedure was used to control for type I errors due to multiple comparisons. An ongoing COS study at the National Institute of Mental Health. Fifty-two healthy full siblings of patients with COS, aged 8 through 28 years, and 52 healthy controls. Longitudinal trajectories of cortical GM development in healthy siblings of patients with COS compared with matched healthy controls and exploratory measure of the relationship between developmental GM trajectories and the overall functioning as defined by the Global Assessment Scale (GAS) score. Younger, healthy siblings of patients with COS showed significant GM deficits in the left prefrontal and bilateral temporal cortices and smaller deficits in the right prefrontal and inferior parietal cortices compared with the controls. These cortical deficits in siblings disappeared by age 20 years and the process of deficit reduction correlated with overall functioning (GAS scores) at the last scan. Prefrontal and temporal GM loss in COS appears to be a familial/trait marker. Amelioration of regional GM deficits in healthy siblings was associated with higher global functioning (GAS scores), suggesting a relationship between brain plasticity and functional outcome for these nonpsychotic, nonspectrum siblings.
Line segment confidence region-based string matching method for map conflation
NASA Astrophysics Data System (ADS)
Huh, Yong; Yang, Sungchul; Ga, Chillo; Yu, Kiyun; Shi, Wenzhong
2013-04-01
In this paper, a method to detect corresponding point pairs between polygon object pairs with a string matching method based on a confidence region model of a line segment is proposed. The optimal point edit sequence to convert the contour of a target object into that of a reference object was found by the string matching method which minimizes its total error cost, and the corresponding point pairs were derived from the edit sequence. Because a significant amount of apparent positional discrepancies between corresponding objects are caused by spatial uncertainty and their confidence region models of line segments are therefore used in the above matching process, the proposed method obtained a high F-measure for finding matching pairs. We applied this method for built-up area polygon objects in a cadastral map and a topographical map. Regardless of their different mapping and representation rules and spatial uncertainties, the proposed method with a confidence level at 0.95 showed a matching result with an F-measure of 0.894.
Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, S.; Peter, M.
2017-05-01
In urban canyon where the GNSS signals are blocked by buildings, the accuracy of measured position significantly deteriorates. GIS databases have been frequently utilized to improve the accuracy of measured position using map matching approaches. In map matching, the measured position is projected to the road links (centerlines) in this approach and the lateral error of measured position is reduced. By the advancement in data acquision approaches, high definition maps which contain extra information, such as road lanes are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy in position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken from smartphone's camera and the ground truth is provided by using Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of measured GPS position. The error in measured GPS position with average and standard deviation of 11.323 and 11.418 meters is reduced to the error in estimated postion with average and standard deviation of 6.725 and 5.899 meters.
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose To simulate the impact of CBCT flat panel misalignment on the image quality, the calculated correction vectors in 3D image guided proton therapy and to determine if these calibration errors can be caught in our QA process. Methods The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. Themore » calibration files were then used to reconstruct the CBCT image of a pancreatic patient and CatPhan phantom. Correction vectors were calculated for the patient using the software’s auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector and image quality.Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality which can be identified through quantitative image quality metrics. Conclusion Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images combined with monitoring of CBCT image quality.« less
A mathematical model of medial consonant identification by cochlear implant users.
Svirsky, Mario A; Sagi, Elad; Meyer, Ted A; Kaiser, Adam R; Teoh, Su Wooi
2011-04-01
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feygelman, Vladimir; Department of Physics, University of Manitoba, Winnipeg, MB; Mandelzweig, Yuri
2015-01-15
Matching electron beams without secondary collimators (applicators) were used for treatment of extensive, recurrent chest-wall carcinoma. Due to the wide penumbra of such beams, the homogeneity of the dose distribution at and around the junction point is clinically acceptable and relatively insensitive to positional errors. Specifically, dose around the junction point is homogeneous to within ±4% as calculated from beam profiles, while the positional error of 1 cm leaves this number essentially unchanged. The experimental isodose distribution in an anthropomorphic phantom supports this conclusion. Two electron beams with wide penumbra were used to cover the desired treatment area with satisfactorymore » dose homogeneity. The technique is relatively simple yet clinically useful and can be considered a viable alternative for treatment of extensive chest-wall disease. The steps are suggested to make this technique more universal.« less
A mathematical model of medial consonant identification by cochlear implant users
Svirsky, Mario A.; Sagi, Elad; Meyer, Ted A.; Kaiser, Adam R.; Teoh, Su Wooi
2011-01-01
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects’ ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects’ consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech. PMID:21476674
The Higgs transverse momentum distribution at NNLL and its theoretical errors
Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun
2015-12-15
In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p ⊥ << m h we include the resummation of the large logs at next to next-to leading order and then match on to the α 2 s fixed order result near p ⊥~m h. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small p ⊥ regime and the fixed order regime. We give a detailed discussion of the scale dependence of the result including an analysis of the rapidity scale dependence. Our centralmore » value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. Lastly, this difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.« less
A comparison of 12 algorithms for matching on the propensity score.
Austin, Peter C
2014-03-15
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
A comparison of 12 algorithms for matching on the propensity score
Austin, Peter C
2014-01-01
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24123228
a New Paradigm for Matching - and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching regarding large differences in image scaling and rotation, as this is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3d-reconstruction applications. Various real world examples presented in this paper show that SIFT, as well as A-SIFT perform poorly or even fail in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations and rejection of many correct matches when applying the ratio-test afterwards. Therefore, a new feature matching method is provided that overcomes these problems and offers thousands of matches by a novel feature point detection strategy, applying a one-to-many matching scheme and substitute the ratio-test by adding geometric constraints to achieve geometric correct matches at repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as this is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrical correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of few pixels can be achieved.
Practical considerations for a second-order directional hearing aid microphone system
NASA Astrophysics Data System (ADS)
Thompson, Stephen C.
2003-04-01
First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid order directive system is proposed that uses first-order processing at low frequencies and second-order directive processing at higher frequencies. This hybrid system is suggested as an alternative that could provide improved directivity index in the frequency regions that are important to speech intelligibility.
Detection and Use of Load and Gage Output Repeats of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2017-01-01
Criteria are discussed that may be used for the detection of load and gage output repeats of wind tunnel strain-gage balance data. First, empirical thresholds are introduced that help determine if the loads or electrical outputs of a pair of balance calibration or check load data points match. A threshold of 0.01 percent of the load capacity is suggested for the identification of matching loads. Similarly, a threshold of 0.1 microV/V is recommended for the identification of matching electrical outputs. Two examples for the use of load and output repeats are discussed to illustrate benefits of the implementation of a repeat point detection algorithm in a balance data analysis software package. The first example uses the suggested load threshold to identify repeat data points that may be used to compute pure errors of the balance loads. This type of analysis may reveal hidden data quality issues that could potentially be avoided by making calibration process improvements. The second example uses the electrical output threshold for the identification of balance fouling. Data from the calibration of a six-component force balance is used to illustrate the calculation of the pure error of the balance loads.
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
Fair, Damien A.; Choi, Alexander H.; Dosenbach, Yannic B.L.; Coalson, Rebecca S.; Miezin, Francis M.; Petersen, Steven E.; Schlaggar, Bradley L.
2009-01-01
Children with congenital left hemisphere damage due to perinatal stroke are capable of acquiring relatively normal language functions despite experiencing a cortical insult that in adults often leads to devastating lifetime disabilities. Although this observed phenomenon accepted, its neurobiological mechanisms are not well characterized. In this paper we examined the functional neuroanatomy of lexical processing in 13 children/adolescents with perinatal left hemispheric damage. In contrast to many previous perinatal infarct fMRI studies, we use an event-related design, which allowed us to isolate trial related activity and examine correct and error trials separately. Using both group and single subject analysis techniques we attempt to address several methodological factors that may contribute to some discrepancies in the perinatal lesion literature. These methodological factors include making direct statistical comparisons, using common stereotactic space, using both single-subject and group analyses, and accounting for performance differences. Our group analysis, investigating correct trial related activity (separately from error trials), showed very few statistical differences in the non-involved right hemisphere between patients and performance matched controls. The single subject analysis revealed atypical regional activation patterns in several patients; however, the location of these regions identified in individual patients often varied across subjects. These results are consistent with the idea that alternative functional organization of trial-related activity after left hemisphere lesions is in large part unique to the individual. In addition, reported differences between results obtained with event-related designs and blocked designs may suggest diverging organizing principles for sustained and trial-related activity after early childhood brain injuries. PMID:19819000
Impact of correcting visual impairment and low vision in deaf-mute students in Pune, India.
Gogate, Parikshit; Bhusan, Shashi; Ray, Shantanu; Shinde, Amit
2016-12-01
The aim of this study was to evaluate visual acuity and vision function before and after providing spectacles and low vision devices (LVDs) in deaf-mute students. Schools for deaf-mute in West Maharashtra. Hearing-impaired children in all special schools in Pune district underwent detailed visual acuity testing (with teachers' help), refraction, external ocular examination, and fundoscopy. Students with refractive errors and low vision were provided with spectacles and LVD. The LV Prasad-Functional Vision Questionnaire consisting of twenty items was administered to each subject before and after providing spectacles, LVDs. Wilcoxon matched-pairs signed-ranks test. 252/929 (27.1%) students had a refractive error. 794 (85.5%) were profound deaf. Two-hundred and fifty students were dispensed spectacles and LVDs. Mean LogMAR visual acuity before introduction of spectacles and LVDs were 0.33 ± 0.36 which improved to 0.058 (P < 0.0001) after intervention. It was found that difference in functional vision pre- and post-intervention was statistically significant (P < 0.0001) for questions 1-19. The most commonly reported difficulties were for performing distance task like reading the bus destination (58.7%), making out the bus number (51.1%), copying from blackboard (47.7%), and seeing whether somebody is waving hand from across the road (45.5%). In response to question number 20, 57.4% of students felt that their vision was much worse than their friend's vision, which was reduced to 17.6% after dispensing spectacles and LVDs. Spectacle and LVD reduced visual impairment and improved vision function in deaf-mute students, augmenting their ability to negotiate in and out of school.
Fair, Damien A; Choi, Alexander H; Dosenbach, Yannic B L; Coalson, Rebecca S; Miezin, Francis M; Petersen, Steven E; Schlaggar, Bradley L
2010-08-01
Children with congenital left hemisphere damage due to perinatal stroke are capable of acquiring relatively normal language functions despite experiencing a cortical insult that in adults often leads to devastating lifetime disabilities. Although this observed phenomenon is accepted, its neurobiological mechanisms are not well characterized. In this paper we examined the functional neuroanatomy of lexical processing in 13 children/adolescents with perinatal left hemispheric damage. In contrast to many previous perinatal infarct fMRI studies, we used an event-related design, which allowed us to isolate trial-related activity and examine correct and error trials separately. Using both group and single subject analysis techniques we attempt to address several methodological factors that may contribute to some discrepancies in the perinatal lesion literature. These methodological factors include making direct statistical comparisons, using common stereotactic space, using both single subject and group analyses, and accounting for performance differences. Our group analysis, investigating correct trial-related activity (separately from error trials), showed very few statistical differences in the non-involved right hemisphere between patients and performance matched controls. The single subject analysis revealed atypical regional activation patterns in several patients; however, the location of these regions identified in individual patients often varied across subjects. These results are consistent with the idea that alternative functional organization of trial-related activity after left hemisphere lesions is in large part unique to the individual. In addition, reported differences between results obtained with event-related designs and blocked designs may suggest diverging organizing principles for sustained and trial-related activity after early childhood brain injuries. 2009 Elsevier Inc. All rights reserved.
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance in idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.
ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.
Hromadka, T.V.; ,
1985-01-01
Besides providing an exact solution for steady-state heat conduction processes (Laplace Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimal weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders, for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and the requirements for real-time decoding via an FPGA Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Balter, Peter; Morice, Rodolfo C.; Choi, Bum; Kudchadker, Rajat J.; Bucci, Kara; Chang, Joe Y.; Dong, Lei; Tucker, Susan; Vedam, Sastry; Briere, Tina; Starkschall, George
2008-01-01
This study aimed to validate and implement a methodology in which fiducials implanted in the periphery of lung tumors can be used to reduce uncertainties in tumor location. Alignment software that matches marker positions on two‐dimensional (2D) kilovoltage portal images to positions on three‐dimensional (3D) computed tomography data sets was validated using static and moving phantoms. This software also was used to reduce uncertainties in tumor location in a patient with fiducials implanted in the periphery of a lung tumor. Alignment of fiducial locations in orthogonal projection images with corresponding fiducial locations in 3D data sets can position both static and moving phantoms with an accuracy of 1 mm. In a patient, alignment based on fiducial locations reduced systematic errors in the left–right direction by 3 mm and random errors by 2 mm, and random errors in the superior–inferior direction by 3 mm as measured by anterior–posterior cine images. Software that matches fiducial markers on 2D and 3D images is effective for aligning both static and moving fiducials before treatment and can be implemented to reduce patient setup uncertainties. PACS number: 81.40.Wx
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted [Formula: see text] minimization for signal reconstruction which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal [Formula: see text] minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly low computational complexity compared to existing greedy recovery algorithms. It is even superior to [Formula: see text] minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP superior performance is illustrated with both noiseless and noisy samples.
A statistical investigation into the stability of iris recognition in diverse population sets
NASA Astrophysics Data System (ADS)
Howard, John J.; Etter, Delores M.
2014-05-01
Iris recognition is increasingly being deployed on population wide scales for important applications such as border security, social service administration, criminal identification and general population management. The error rates for this incredibly accurate form of biometric identification are established using well known, laboratory quality datasets. However, it is has long been acknowledged in biometric theory that not all individuals have the same likelihood of being correctly serviced by a biometric system. Typically, techniques for identifying clients that are likely to experience a false non-match or a false match error are carried out on a per-subject basis. This research makes the novel hypothesis that certain ethnical denominations are more or less likely to experience a biometric error. Through established statistical techniques, we demonstrate this hypothesis to be true and document the notable effect that the ethnicity of the client has on iris similarity scores. Understanding the expected impact of ethnical diversity on iris recognition accuracy is crucial to the future success of this technology as it is deployed in areas where the target population consists of clientele from a range of geographic backgrounds, such as border crossings and immigration check points.
The impact of acquisition angle differences on three-dimensional quantitative coronary angiography.
Tu, Shengxian; Holm, Niels R; Koning, Gerhard; Maeng, Michael; Reiber, Johan H C
2011-08-01
Three-dimensional (3D) quantitative coronary angiography (QCA) requires two angiographic views to restore vessel dimensions. This study investigated the impact of acquisition angle differences (AADs) of the two angiographic views on the assessed dimensions by 3D QCA. X-ray angiograms of an assembled brass phantom with different types of straight lesions were recorded at multiple angiographic projections. The projections were randomly matched as pairs and 3D QCA was performed in those pairs with AAD larger than 25°. The lesion length and diameter stenosis in three different lesions, a circular concentric severe lesion (A), a circular concentric moderate lesion (B), and a circular eccentric moderate lesion (C), were measured by 3D QCA. The acquisition protocol was repeated for a silicone bifurcation phantom, and the bifurcation angles and bifurcation core volume were measured by 3D QCA. The measurements were compared with the true dimensions if applicable and their correlation with AAD was studied. 50 matched pairs of angiographic views were analyzed for the brass phantom. The average value of AAD was 48.0 ± 14.1°. The percent diameter stenosis was slightly overestimated by 3D QCA for all lesions: A (error 1.2 ± 0.9%, P < 0.001); B (error 0.6 ± 0.5%, P < 0.001); C (error 1.1 ± 0.6%, P < 0.001). The correlation of the measurements with AAD was only significant for lesion A (R(2) = 0.151, P = 0.005). The lesion length was slightly overestimated by 3D QCA for lesion A (error 0.06 ± 0.18 mm, P = 0.026), but well assessed for lesion B (error -0.00 ± 0.16 mm, P = 0.950) and lesion C (error -0.01 ± 0.18 mm, P = 0.585). The correlation of the measurements with AAD was not significant for any lesion. Forty matched pairs of angiographic views were analyzed for the bifurcation phantom. The average value of AAD was 49.1 ± 15.4°. 3D QCA slightly overestimated the proximal angle (error 0.4 ± 1.1°, P = 0.046) and the distal angle (error 1.5 ± 1.3°, P < 0.001). The correlation with AAD was only significant for the distal angle (R(2) = 0.256, P = 0.001). The correlation of bifurcation core volume measurements with AAD was not significant (P = 0.750). Of the two aforementioned measurements with significant correlation with AAD, the errors tended to increase as AAD became larger. 3D QCA can be used to reliably assess vessel dimensions and bifurcation angles. Increasing the AAD of the two angiographic views does not increase accuracy and precision of 3D QCA for circular lesions or bifurcation dimensions. Copyright © 2011 Wiley-Liss, Inc.
Danielsson, Henrik; Rönnberg, Jerker; Leven, Anna; Andersson, Jan; Andersson, Karin; Lyxell, Björn
2006-06-01
Memory conjunction errors, that is, when a combination of two previously presented stimuli is erroneously recognized as previously having been seen, were investigated in a face recognition task with drawings and photographs in 23 individuals with learning disability, and 18 chronologically age-matched controls without learning disability. Compared to the controls, individuals with learning disability committed significantly more conjunction errors, feature errors (one old and one new component), but had lower correct recognition, when the results were adjusted for different guessing levels. A dual-processing approach gained more support than a binding approach. However, neither of the approaches could explain all of the results. The results of the learning disability group were only partly related to non-verbal intelligence.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, used for variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar stoical forecast error characteristics. This paper discusses and comparesmore » the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.« less
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.
Lightness of an object under two illumination levels.
Zdravković, Suncica; Economou, Elias; Gilchrist, Alan
2006-01-01
Anchoring theory (Gilchrist et al, 1999 Psychological Review 106 795-834) predicts a wide range of lightness errors, including failures of constancy in multi-illumination scenes and a long list of well-known lightness illusions seen under homogeneous illumination. Lightness values are computed both locally and globally and then averaged together. Local values are computed within a given region of homogeneous illumination. Thus, for an object that extends through two different illumination levels, anchoring theory produces two values, one for the patch in brighter illumination and one for the patch in dimmer illumination. Observers can give matches for these patches separately, but they can also give a single match for the whole object. Anchoring theory in its current form is unable to predict these object matches. We report eight experiments in which we studied the relationship between patch matches and object matches. The results show that the object match represents a compromise between the match for the patch in the field of highest illumination and the patch in the largest field of illumination. These two principles are parallel to the rules found for anchoring lightness: highest luminance rule and area rule.
Hansson, Lisbeth; Khamis, Harry J
2008-12-01
Simulated data sets are used to evaluate conditional and unconditional maximum likelihood estimation in an individual case-control design with continuous covariates when there are different rates of excluded cases and different levels of other design parameters. The effectiveness of the estimation procedures is measured by method bias, variance of the estimators, root mean square error (RMSE) for logistic regression and the percentage of explained variation. Conditional estimation leads to higher RMSE than unconditional estimation in the presence of missing observations, especially for 1:1 matching. The RMSE is higher for the smaller stratum size, especially for the 1:1 matching. The percentage of explained variation appears to be insensitive to missing data, but is generally higher for the conditional estimation than for the unconditional estimation. It is particularly good for the 1:2 matching design. For minimizing RMSE, a high matching ratio is recommended; in this case, conditional and unconditional logistic regression models yield comparable levels of effectiveness. For maximizing the percentage of explained variation, the 1:2 matching design with the conditional logistic regression model is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Viswanathan, A; Cormack, R
2015-06-15
Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutation of N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distancemore » matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥5, with standard deviation <2 mm for N ≥6, and worst-case scenario error <2 mm for N ≥11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45*1010 (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using EMT and only requiring user identification of catheter tips without catheter localization. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. Inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.« less
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Qi; Wang, Tianyu
2018-04-01
This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system with the method of double information matching (DIM). Compared with conventional PC/SS system, the new model inherits the advantage of high transmission speed, large information capacity and high security. Besides, the problem traditional system will face is the high bit error rate (BER) and since its data-sequence mapping algorithm. Hence the new model presented shows lower BER and higher efficiency by its optimization of mapping algorithm.
Capodieci, Agnese; Martinussen, Rhonda
2017-01-01
Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227
Capodieci, Agnese; Martinussen, Rhonda
2017-01-01
Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.
NASA Astrophysics Data System (ADS)
Moon, Jeong-Eon; Park, Young-Je; Ryu, Joo-Hyung; Choi, Jong-Kuk; Ahn, Jae-Hyun; Min, Jee-Eun; Son, Young-Baek; Lee, Sun-Ju; Han, Hee-Jeong; Ahn, Yu-Hwan
2012-09-01
This paper provides initial validation results for GOCI-derived water products using match-ups between the satellite and ship-borne in situ data for the period of 2010-2011, with a focus on remote-sensing reflectance ( R rs ). Match-up data were constructed through systematic quality control of both in situ and GOCI data, and a manual inspection of associated GOCI images to identify pixels contaminated by cloud, land and inter-slot radiometric discrepancy. Efforts were made to process and quality check the in situ R rs data. This selection process yielded 32 optimal match-ups for the R rs spectra, chlorophyll a concentration (Chl_ a) and colored dissolved organic matter (CDOM), and with 20 match-ups for suspended particulate matter concentration (SPM). Most of the match-ups are located close to shore and thus the validation should be interpreted limiting to near-shore coastal waters. The R rs match-ups showed the mean relative errors of 18-33% for the visible bands with the lowest 18-19% for the 490 nm and 555 nm bands and 33% for the 412 nm band. Correlation for the R rs match-ups was high in the 490-865 nm bands (R2=0.72-0.84) and lower in the 412 nm band (R2=0.43) and 443 nm band (R2=0.66). The match-ups for Chl_ a showed a low correlation (<0.41) although the mean absolute percentage error was 35% for the GOCI standard Chl_ a. The CDOM match-ups showed an even worse comparison with R2<0.2. These match-up comparison for Chl_ a and CDOM would imply the difficulty to estimate Chl_ a and CDOM in near-shore waters where the variability in SPM would dominate the variability in R rs . Clearly, the match-up statistics for SPM was better with R2=0.73 and 0.87 for two evaluated algorithms, although GOCI-derived SPM overestimated low concentration and underestimated high concentration. Based on this initial match-up analysis, we made several recommendations -1) to collect more offshore under-water measurements of the R rs data, 2) to include quality flags in level-2 products, 3) to introduce an ISRD correction in the GOCI processing chain, 4) to investigate other types of in-water algorithms such as semianalytical ones, and 5) to investigate vicarious calibration for GOCI data and to maintain accurate and consistent calibration of field radiometric instruments.
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Observation and analysis of high-speed human motion with frequent occlusion in a large area
NASA Astrophysics Data System (ADS)
Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng
2009-12-01
The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either unreliably documented to be used in training planning due to their limitations or unsuitable for studying high-speed motion in large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of motion camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global parallax based matching points filter are used, to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a two regions' relationship constrained joint color model and Markov chain Monte Carlo based joint particle filter are emphasized, by dividing the human body into two relative key regions. Several field tests are performed to assess measurement errors, including comparison to popular algorithms. With the help of the system presented, the system obtains position data on a 30 m × 60 m large rink with root-mean-square error better than 0.3975 m, velocity and acceleration data with absolute error better than 1.2579 m s-1 and 0.1494 m s-2, respectively.
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C.; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance with their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. According to the expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as cut-off mark defining whether we can proceed identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods. PMID:22359600
Image-Based Localization Aided Indoor Pedestrian Trajectory Estimation Using Smartphones
Zhou, Yan; Zheng, Xianwei; Chen, Ruizhi; Xiong, Hanjiang; Guo, Sheng
2018-01-01
Accurately determining pedestrian location in indoor environments using consumer smartphones is a significant step in the development of ubiquitous localization services. Many different map-matching methods have been combined with pedestrian dead reckoning (PDR) to achieve low-cost and bias-free pedestrian tracking. However, this works only in areas with dense map constraints and the error accumulates in open areas. In order to achieve reliable localization without map constraints, an improved image-based localization aided pedestrian trajectory estimation method is proposed in this paper. The image-based localization recovers the pose of the camera from the 2D-3D correspondences between the 2D image positions and the 3D points of the scene model, previously reconstructed by a structure-from-motion (SfM) pipeline. This enables us to determine the initial location and eliminate the accumulative error of PDR when an image is successfully registered. However, the image is not always registered since the traditional 2D-to-3D matching rejects more and more correct matches when the scene becomes large. We thus adopt a robust image registration strategy that recovers initially unregistered images by integrating 3D-to-2D search. In the process, the visibility and co-visibility information is adopted to improve the efficiency when searching for the correspondences from both sides. The performance of the proposed method was evaluated through several experiments and the results demonstrate that it can offer highly acceptable pedestrian localization results in long-term tracking, with an error of only 0.56 m, without the need for dedicated infrastructures. PMID:29342123
Diaphragm motion quantification in megavoltage cone-beam CT projection images.
Chen, Mingqing; Siochi, R Alfredo
2010-05-01
To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: The product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.
Austin, Peter C
2017-02-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright ?? 2008 IAHS Press.
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-12-09
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially thereby producing a serial result, including insuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads thereby producing a speculative result; and determining whether a speculative multithreading error exists including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
Verifying speculative multithreading in an application
Felton, Mitchell D
2014-11-18
Verifying speculative multithreading in an application executing in a computing system, including: executing one or more test instructions serially thereby producing a serial result, including insuring that all data dependencies among the test instructions are satisfied; executing the test instructions speculatively in a plurality of threads thereby producing a speculative result; and determining whether a speculative multithreading error exists including: comparing the serial result to the speculative result and, if the serial result does not match the speculative result, determining that a speculative multithreading error exists.
A human auditory tuning curves matched wavelet function.
Abolhassani, Mohammad D; Salimpour, Yousef
2008-01-01
This paper proposes a new quantitative approach to the problem of matching a wavelet function to a human auditory tuning curves. The auditory filter shapes were derived from the psychophysical measurements in normal-hearing listeners using the variant of the notched-noise method for brief signals in forward and simultaneous masking. These filters were used as templates for the designing a wavelet function that has the maximum matching to a tuning curve. The scaling function was calculated from the matched wavelet function and by using these functions, low pass and high pass filters were derived for the implementation of a filter bank. Therefore, new wavelet families were derived.
A Robust and Affordable Table Indexing Approach for Multi-isocenter Dosimetrically Matched Fields.
Yu, Amy; Fahimian, Benjamin; Million, Lynn; Hsu, Annie
2017-05-23
Purpose Radiotherapy treatment planning of extended volume typically necessitates the utilization of multiple field isocenters and abutting dosimetrically matched fields in order to enable coverage beyond the field size limits. A common example includes total lymphoid irradiation (TLI) treatments, which are conventionally planned using dosimetric matching of the mantle, para-aortic/spleen, and pelvic fields. Due to the large irradiated volume and system limitations, such as field size and couch extension, a combination of couch shifts and sliding of patients are necessary to be correctly executed for accurate delivery of the plan. However, shifting of patients presents a substantial safety issue and has been shown to be prone to errors ranging from minor deviations to geometrical misses warranting a medical event. To address this complex setup and mitigate the safety issues relating to delivery, a practical technique for couch indexing of TLI treatments has been developed and evaluated through a retrospective analysis of couch position. Methods The indexing technique is based on the modification of the commonly available slide board to enable indexing of the patient position. Modifications include notching to enable coupling with indexing bars, and the addition of a headrest used to fixate the head of the patient relative to the slide board. For the clinical setup, a Varian Exact Couch TM (Varian Medical Systems, Inc, Palo Alto, CA) was utilized. Two groups of patients were treated: 20 patients with table indexing and 10 patients without. The standard deviations (SDs) of the couch positions in longitudinal, lateral, and vertical directions through the entire treatment cycle for each patient were calculated and differences in both groups were analyzed with Student's t-test. Results The longitudinal direction showed the largest improvement. In the non-indexed group, the positioning SD ranged from 2.0 to 7.9 cm. With the indexing device, the positioning SD was reduced to a range of 0.4 to 1.3 cm (p < 0.05 with 95% confidence level). The lateral positioning was slightly improved (p < 0.05 with 95% confidence level), while no improvement was observed in the vertical direction. Conclusions The conventional matched field TLI treatment is error-prone to geometrical setup error. The feasibility of full indexing TLI treatments was validated and shown to result in a significant reduction of positioning and shifting errors.
Black Hole and Galaxy Coevolution from Continuity Equation and Abundance Matching
NASA Astrophysics Data System (ADS)
Aversa, R.; Lapi, A.; de Zotti, G.; Shankar, F.; Danese, L.
2015-09-01
We investigate the coevolution of galaxies and hosted supermassive black holes (BHs) throughout the history of the universe by a statistical approach based on the continuity equation and the abundance matching technique. Specifically, we present analytical solutions of the continuity equation without source terms to reconstruct the supermassive BH mass function from the active galactic nucleus (AGN) luminosity functions. Such an approach includes physically motivated AGN light curves tested on independent data sets, which describe the evolution of the Eddington ratio and radiative efficiency from slim- to thin-disk conditions. We nicely reproduce the local estimates of the BH mass function, the AGN duty cycle as a function of mass and redshift, along with the Eddington ratio function and the fraction of galaxies with given stellar mass hosting an AGN with given Eddington ratio. We exploit the same approach to reconstruct the observed stellar mass function at different redshift from the ultraviolet and far-IR luminosity functions associated with star formation in galaxies. These results imply that the build-up of stars and BHs in galaxies occurs via in situ processes, with dry mergers playing a marginal role at least for stellar masses ≲ 3× {10}11 {M}⊙ and BH masses ≲ {10}9 {M}⊙ , where the statistical data are more secure and less biased by systematic errors. In addition, we develop an improved abundance matching technique to link the stellar and BH content of galaxies to the gravitationally dominant dark matter (DM) component. The resulting relationships constitute a testbed for galaxy evolution models, highlighting the complementary role of stellar and AGN feedback in the star formation process. In addition, they may be operationally implemented in numerical simulations to populate DM halos or to gauge subgrid physics. Moreover, they may be exploited to investigate the galaxy/AGN clustering as a function of redshift, mass, and/or luminosity. In fact, the clustering properties of BHs and galaxies are found to be in full agreement with current observations, thus further validating our results from the continuity equation. Finally, our analysis highlights that (i) the fraction of AGNs observed in the slim-disk regime, where most of the BH mass is accreted, increases with redshift; and (ii) already at z≳ 6 a substantial amount of dust must have formed over timescales ≲ {10}8 yr in strongly star-forming galaxies, making these sources well within the reach of ALMA surveys in (sub)millimeter bands.
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Gómez, Pablo; Schützenberger, Anne; Kniesburges, Stefan; Bohr, Christopher; Döllinger, Michael
2018-06-01
This study presents a framework for a direct comparison of experimental vocal fold dynamics data to a numerical two-mass-model (2MM) by solving the corresponding inverse problem of which parameters lead to similar model behavior. The introduced 2MM features improvements such as a variable stiffness and a modified collision force. A set of physiologically sensible degrees of freedom is presented, and three optimization algorithms are compared on synthetic vocal fold trajectories. Finally, a total of 288 high-speed video recordings of six excised porcine larynges were optimized to validate the proposed framework. Particular focus lay on the subglottal pressure, as the experimental subglottal pressure is directly comparable to the model subglottal pressure. Fundamental frequency, amplitude and objective function values were also investigated. The employed 2MM is able to replicate the behavior of the porcine vocal folds very well. The model trajectories' fundamental frequency matches the one of the experimental trajectories in [Formula: see text] of the recordings. The relative error of the model trajectory amplitudes is on average [Formula: see text]. The experiments feature a mean subglottal pressure of 10.16 (SD [Formula: see text]) [Formula: see text]; in the model, it was on average 7.61 (SD [Formula: see text]) [Formula: see text]. A tendency of the model to underestimate the subglottal pressure is found, but the model is capable of inferring trends in the subglottal pressure. The average absolute error between the subglottal pressure in the model and the experiment is 2.90 (SD [Formula: see text]) [Formula: see text] or [Formula: see text]. A detailed analysis of the factors affecting the accuracy in matching the subglottal pressure is presented.
SU-F-T-313: Clinical Results of a New Customer Acceptance Test for Elekta VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusk, B; Fontenot, J
Purpose: To report the results of a customer acceptance test (CAT) for VMAT treatments for two matched Elekta linear accelerators. Methods: The CAT tests were performed on two clinically matched Elekta linear accelerators equipped with a 160-leaf MLC. Functional tests included performance checks of the control system during dynamic movements of the diaphragms, MLC, and gantry. Dosimetric tests included MLC picket fence tests at static and variable dose rates and a diaphragm alignment test, all performed using the on-board EPID. Additionally, beam symmetry during arc delivery was measured at the four cardinal angles for high and low dose rate modesmore » using a 2D detector array. Results of the dosimetric tests were analyzed using the VMAT CAT analysis tool. Results: Linear accelerator 1 (LN1) met all stated CAT tolerances. Linear accelerator 2 (LN2) passed the geometric, beam symmetry, and MLC position error tests but failed the relative dose average test for the diaphragm abutment and all three picket fence fields. Though peak doses in the abutment regions were consistent, the average dose was below the stated tolerance corresponding to a leaf junction that was too narrow. Despite this, no significant differences in patient specific VMAT quality assurance measured were observed between the accelerators and both passed monthly MLC quality assurance performed with the Hancock test. Conclusion: Results from the CAT showed LN2 with relative dose averages in the abutment regions of the diaphragm and MLC tests outside the tolerances resulting from differences in leaf gap distances. Tolerances of the dose average tests from the CAT may be small enough to detect MLC errors which do not significantly affect patient QA or the routine MLC tests.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Jee, K; Sharp, G
Purpose: Proton radiography, which images the patients with the same type of particles that they are to be treated with, is a promising approach for image guidance and range uncertainties reduction. This study aimed to realize quality proton radiography by measuring dose rate functions (DRF) in time domain using a single flat panel and retrieve water equivalent path length (WEPL) from them. Methods: An amorphous silicon flat panel (PaxScan™ 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed behind phantoms to measure DRFs from a proton beam modulated by the modulator wheel. To retrieve WEPL and RSP, calibration modelsmore » based on the intensity of DRFs only, root mean square (RMS) of DRFs only and the intensity weighted RMS were tested. The quality of obtained WEPL images (in terms of spatial resolution and level of details) and the accuracy of WEPL were compared. Results: RSPs for most of the Gammex phantom inserts were retrieved within ± 1% errors by calibration models based on the RMS and intensity weighted RMS. The mean percentage error for all inserts was reduced from 1.08% to 0.75% by matching intensity in the calibration model. In specific cases such as the insert with a titanium rod, the calibration model based on RMS only fails while the that based on intensity weighted RMS is still valid. The quality of retrieved WEPL images were significantly improved for calibration models including intensity matching. Conclusion: For the first time, a flat panel, which is readily available in the beamline for image guidance, was tested to acquire quality proton radiography with WEPL accurately retrieved from it. This technique is promising to be applied for image-guided proton therapy as well as patient specific RSP determination to reduce uncertainties of beam ranges.« less
Matching weights to simultaneously compare three treatment groups: Comparison to three-way matching
Yoshida, Kazuki; Hernández-Díaz, Sonia; Solomon, Daniel H.; Jackson, John W.; Gagne, Joshua J.; Glynn, Robert J.; Franklin, Jessica M.
2017-01-01
BACKGROUND Propensity score matching is a commonly used tool. However, its use in settings with more than two treatment groups has been less frequent. We examined the performance of a recently developed propensity score weighting method in the three treatment group setting. METHODS The matching weight method is an extension of inverse probability of treatment weighting (IPTW) that reweights both exposed and unexposed groups to emulate a propensity score matched population. Matching weights can generalize to multiple treatment groups. The performance of matching weights in the three-group setting was compared via simulation to three-way 1:1:1 propensity score matching and IPTW. We also applied these methods to an empirical example that compared the safety of three analgesics. RESULTS Matching weights had similar bias, but better mean squared error (MSE) compared to three-way matching in all scenarios. The benefits were more pronounced in scenarios with a rare outcome, unequally sized treatment groups, or poor covariate overlap. IPTW’s performance was highly dependent on covariate overlap. In the empirical example, matching weights achieved the best balance for 24 out of 35 covariates. Hazard ratios were numerically similar to matching. However, the confidence intervals were narrower for matching weights. CONCLUSIONS Matching weights demonstrated improved performance over three-way matching in terms of MSE, particularly in simulation scenarios where finding matched subjects was difficult. Given its natural extension to settings with even more than three groups, we recommend matching weights for comparing outcomes across multiple treatment groups, particularly in settings with rare outcomes or unequal exposure distributions. PMID:28151746
Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching
2016-01-01
Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face the major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority in power generation industries utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which the serial bias compensation or robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypotheses testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arisen from uncertainties in performance simulation.
Programmed Cell Death and Caspase Functions During Neural Development.
Yamaguchi, Yoshifumi; Miura, Masayuki
2015-01-01
Programmed cell death (PCD) is a fundamental component of nervous system development. PCD serves as the mechanism for quantitative matching of the number of projecting neurons and their target cells through direct competition for neurotrophic factors in the vertebrate peripheral nervous system. In addition, PCD plays roles in regulating neural cell numbers, canceling developmental errors or noise, and tissue remodeling processes. These findings are mainly derived from genetic studies that prevent cells from dying by apoptosis, which is a major form of PCD and is executed by activation of evolutionarily conserved cysteine protease caspases. Recent studies suggest that caspase activation can be coordinated in time and space at multiple levels, which might underlie nonapoptotic roles of caspases in neural development in addition to apoptotic roles. © 2015 Elsevier Inc. All rights reserved.
Cognitive correlates of financial abilities in mild cognitive impairment.
Okonkwo, Ozioma C; Wadley, Virginia G; Griffith, H Randall; Ball, Karlene; Marson, Daniel C
2006-11-01
To investigate the cognitive correlates of financial abilities in mild cognitive impairment (MCI). Controlled, matched-sample, cross-sectional analysis regressing five cognitive composites on financial performance measures. University medical and research centers. Forty-three persons with MCI and 43 normal controls. The Financial Capacity Instrument (FCI) and a comprehensive neurocognitive battery. Patients with MCI performed significantly worse than controls on cognitive domains of executive function, memory, and language and on FCI domains of financial conceptual knowledge, bank statement management, and bill payment. Patients with MCI also needed significantly more time to complete a multistep financial task and were significantly more likely than controls to make errors on this task. Stepwise regression models revealed that, within the MCI group, attention and executive function were significant correlates of FCI performance. Although impaired memory is the cardinal deficit in MCI, the neurocognitive basis of lower functional performance in MCI appears to be emergent declines in abilities to selectively attend, self-monitor, and temporally integrate information. Compromised performance on cognitive measures of attention and executive function may constitute clinical markers of lower financial abilities and should be evaluated for its relationship to functional ability in general. These cognitive domains may be appropriate targets of future intervention studies aimed at preservation of functional independence in people with MCI.
Detection of content adaptive LSB matching: a game theory approach
NASA Astrophysics Data System (ADS)
Denemark, Tomáš; Fridrich, Jessica
2014-02-01
This paper is an attempt to analyze the interaction between Alice and Warden in Steganography using the Game Theory. We focus on the modern steganographic embedding paradigm based on minimizing an additive distortion function. The strategies of both players comprise of the probabilistic selection channel. The Warden is granted the knowledge of the payload and the embedding costs, and detects embedding using the likelihood ratio. In particular, the Warden is ignorant about the embedding probabilities chosen by Alice. When adopting a simple multivariate Gaussian model for the cover, the payoff function in the form of the Warden's detection error can be numerically evaluated for a mutually independent embedding operation. We demonstrate on the example of a two-pixel cover that the Nash equilibrium is different from the traditional Alice's strategy that minimizes the KL divergence between cover and stego objects under an omnipotent Warden. Practical implications of this case study include computing the loss per pixel of Warden's ability to detect embedding due to her ignorance about the selection channel.
Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio
2008-07-19
The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. For pelvic and hip angles, and knee flexion-extension the inter-operator error was equal to the inter-trial error-the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors, which were 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.
NASA Astrophysics Data System (ADS)
Shchinnikov, P. A.; Safronov, A. V.
2014-12-01
General principles of a procedure for matching energy balances of thermal power plants (TPPs), whose use enhances the accuracy of information-measuring systems (IMSs) during calculations of performance characteristics (PCs), are stated. To do this, there is the possibility for changing values of measured and calculated variables within intervals determined by measurement errors and regulations. An example of matching energy balances of the thermal power plants with a T-180 turbine is made. The proposed procedure allows one to reduce the divergence of balance equations by 3-4 times. It is shown also that the equipment operation mode affects the profit deficiency. Dependences for the divergence of energy balances on the deviation of input parameters and calculated data for the fuel economy before and after matching energy balances are represented.
THE ATMOSPHERIC MODEL EVALUATION (AMET): METEOROLOGY MODULE
An Atmospheric Model Evaluation Tool (AMET), composed of meteorological and air quality components, is being developed to examine the error and uncertainty in the model simulations. AMET matches observations with the corresponding model-estimated values in space and time, and the...
Cultural Variability in Crew Discourse
NASA Technical Reports Server (NTRS)
Fischer, Ute
1999-01-01
Four studies were conducted to determine features of effective crew communication in response to errors during flight. Study One examined whether US captains and first officers use different communication strategies to correct errors and problems on the flight deck, and whether their communications are affected by the two situation variables, level of risk and degree of face-threat involved in challenging an error. Study Two was the cross-cultural extension of Study One and involved pilots from three European countries. Study Three compared communication strategies of female and male air carrier pilots who were matched in terms of years and type of aircraft experience. The final study assessed the effectiveness of the communication strategies observed in Study One.
Hird, Megan A; Vesely, Kristin A; Fischer, Corinne E; Graham, Simon J; Naglie, Gary; Schweizer, Tom A
2017-01-01
The areas of driving impairment characteristic of mild cognitive impairment (MCI) remain unclear. This study compared the simulated driving performance of 24 individuals with MCI, including amnestic single-domain (sd-MCI, n = 11) and amnestic multiple-domain MCI (md-MCI, n = 13), and 20 age-matched controls. Individuals with MCI committed over twice as many driving errors (20.0 versus 9.9), demonstrated difficulty with lane maintenance, and committed more errors during left turns with traffic compared to healthy controls. Specifically, individuals with md-MCI demonstrated greater driving difficulty compared to healthy controls, relative to those with sd-MCI. Differentiating between different subtypes of MCI may be important when evaluating driving safety.
Kamomae, Takeshi; Monzen, Hajime; Nakayama, Shinichi; Mizote, Rika; Oonishi, Yuuichi; Kaneshige, Soichiro; Sakamoto, Takashi
2015-01-01
Movement of the target object during cone-beam computed tomography (CBCT) leads to motion blurring artifacts. The accuracy of manual image matching in image-guided radiotherapy depends on the image quality. We aimed to assess the accuracy of target position localization using free-breathing CBCT during stereotactic lung radiotherapy. The Vero4DRT linear accelerator device was used for the examinations. Reference point discrepancies between the MV X-ray beam and the CBCT system were calculated using a phantom device with a centrally mounted steel ball. The precision of manual image matching between the CBCT and the averaged intensity (AI) images restructured from four-dimensional CT (4DCT) was estimated with a respiratory motion phantom, as determined in evaluations by five independent operators. Reference point discrepancies between the MV X-ray beam and the CBCT image-guidance systems, categorized as left-right (LR), anterior-posterior (AP), and superior-inferior (SI), were 0.33 ± 0.09, 0.16 ± 0.07, and 0.05 ± 0.04 mm, respectively. The LR, AP, and SI values for residual errors from manual image matching were -0.03 ± 0.22, 0.07 ± 0.25, and -0.79 ± 0.68 mm, respectively. The accuracy of target position localization using the Vero4DRT system in our center was 1.07 ± 1.23 mm (2 SD). This study experimentally demonstrated the sufficient level of geometric accuracy using the free-breathing CBCT and the image-guidance system mounted on the Vero4DRT. However, the inter-observer variation and systematic localization error of image matching substantially affected the overall geometric accuracy. Therefore, when using the free-breathing CBCT images, careful consideration of image matching is especially important.
Comparison of Alternate and Original Items on the Montreal Cognitive Assessment
Lebedeva, Elena; Huang, Mei; Koski, Lisa
2016-01-01
Background The Montreal Cognitive Assessment (MoCA) is a screening tool for mild cognitive impairment (MCI) in elderly individuals. We hypothesized that measurement error when using the new alternate MoCA versions to monitor change over time could be related to the use of items that are not of comparable difficulty to their corresponding originals of similar content. The objective of this study was to compare the difficulty of the alternate MoCA items to the original ones. Methods Five selected items from alternate versions of the MoCA were included with items from the original MoCA administered adaptively to geriatric outpatients (N = 78). Rasch analysis was used to estimate the difficulty level of the items. Results None of the five items from the alternate versions matched the difficulty level of their corresponding original items. Conclusions This study demonstrates the potential benefits of a Rasch analysis-based approach for selecting items during the process of development of parallel forms. The results suggest that better match of the items from different MoCA forms by their difficulty would result in higher sensitivity to changes in cognitive function over time. PMID:27076861
Acoustic features of objects matched by an echolocating bottlenose dolphin.
Delong, Caroline M; Au, Whitlow W L; Lemonds, David W; Harley, Heidi E; Roitblat, Herbert L
2006-03-01
The focus of this study was to investigate how dolphins use acoustic features in returning echolocation signals to discriminate among objects. An echolocating dolphin performed a match-to-sample task with objects that varied in size, shape, material, and texture. After the task was completed, the features of the object echoes were measured (e.g., target strength, peak frequency). The dolphin's error patterns were examined in conjunction with the between-object variation in acoustic features to identify the acoustic features that the dolphin used to discriminate among the objects. The present study explored two hypotheses regarding the way dolphins use acoustic information in echoes: (1) use of a single feature, or (2) use of a linear combination of multiple features. The results suggested that dolphins do not use a single feature across all object sets or a linear combination of six echo features. Five features appeared to be important to the dolphin on four or more sets: the echo spectrum shape, the pattern of changes in target strength and number of highlights as a function of object orientation, and peak and center frequency. These data suggest that dolphins use multiple features and integrate information across echoes from a range of object orientations.
Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation
NASA Astrophysics Data System (ADS)
Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.
2018-05-01
Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been paid to developing robust filtering algorithms for the airborne laser scanning (ALS) data. With the point cloud quality from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images. Filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring higher Type II error (i.e. non-ground points actually labelled as ground points) but much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland dense matching generates points higher than the true terrain surface, which will result in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
Taxamatch, an Algorithm for Near (‘Fuzzy’) Matching of Scientific Names in Taxonomic Databases
Rees, Tony
2014-01-01
Misspellings of organism scientific names create barriers to optimal storage and organization of biological data, reconciliation of data stored under different spelling variants of the same name, and appropriate responses from user queries to taxonomic data systems. This study presents an analysis of the nature of the problem from first principles, reviews some available algorithmic approaches, and describes Taxamatch, an improved name matching solution for this information domain. Taxamatch employs a custom Modified Damerau-Levenshtein Distance algorithm in tandem with a phonetic algorithm, together with a rule-based approach incorporating a suite of heuristic filters, to produce improved levels of recall, precision and execution time over the existing dynamic programming algorithms n-grams (as bigrams and trigrams) and standard edit distance. Although entirely phonetic methods are faster than Taxamatch, they are inferior in the area of recall since many real-world errors are non-phonetic in nature. Excellent performance of Taxamatch (as recall, precision and execution time) is demonstrated against a reference database of over 465,000 genus names and 1.6 million species names, as well as against a range of error types as present at both genus and species levels in three sets of sample data for species and four for genera alone. An ancillary authority matching component is included which can be used both for misspelled names and for otherwise matching names where the associated cited authorities are not identical. PMID:25247892
NASA Astrophysics Data System (ADS)
Kornilin, Dmitriy V.; Kudryavtsev, Ilya A.; McMillan, Alison J.; Osanlou, Ardeshir; Ratcliffe, Ian
2017-06-01
Modern hydraulic systems should be monitored on the regular basis. One of the most effective ways to address this task is utilizing in-line automatic particle counters (APC) built inside of the system. The measurement of particle concentration in hydraulic liquid by APC is crucial because increasing numbers of particles should mean functional problems. Existing automatic particle counters have significant limitation for the precise measurement of relatively low concentration of particle in aerospace systems or they are unable to measure higher concentration in industrial ones. Both issues can be addressed by implementation of the CMOS image sensor instead of single photodiode used in the most of APC. CMOS image sensor helps to overcome the problem of the errors in volume measurement caused by inequality of particle speed inside of tube. Correction is based on the determination of the particle position and parabolic velocity distribution profile. Proposed algorithms are also suitable for reducing the errors related to the particles matches in measurement volume. The results of simulation show that the accuracy increased up to 90 per cent and the resolution improved ten times more compared to the single photodiode sensor.
Secular variation of activity in comets 2P/Encke and 9P/Tempel 1
NASA Technical Reports Server (NTRS)
Haken, Michael; AHearn, Michael F.; Feldman, Paul D.; Budzien, Scott A.
1995-01-01
We compare production rates of H20 derived from International Ultraviolet Explorer (IUE) spectra from multiple apparitions of 2 comets, 2P/Encke and 9P/Tempel 1, whose orbits are in near-resonance with that of the Earth. Since model-induced errors are primarily a function of observing geometry, the close geometrical matches afforded by the resonance condition results in the cancellation of such errors when taking ratios of production rates. Giving careful attention to the variation of model parameters with solar activity, we find marginal evidence of change in 2P/Encke: a 1-sigma pre-perihelion decrease averaging 4%/revolution over 4 apparitions from 1980-1994, and a 1-sigma post-perihelion increase of 16%/revolution for 2 successive apparitions in 1984 and 1987. We find for 9P/Tempel 1, however, a 7-sigma decrease of 29%/revolution over 3 apparitions from 1983-1994, even after correcting for a tracking problem which made the fluxes systematically low. We speculate on a possible association of the character of long-term brightness variations with physical properties of the nucleus, and discuss implications for future research.
Tele-Autonomous control involving contact. Final Report Thesis; [object localization
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented together with the methods of extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited only to the line features. Featured points (point to point matching) and featured unit direction vectors (vector to vector matching) can also be used as the inputs of the algorithm, and there is no upper limit on the number of the features inputed. The algorithm will allow the use of redundant features to find a better solution. The algorithm uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of using this representation is that the method solves for the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors and thus has a better performance on the estimation, both in accuracy and speed, than that of other similar algorithms. The difficulties when the operator is controlling a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays on the signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques such as predictor display and time desynchronization to help to overcome these difficulties are then discussed.
Effects of Age-Related Macular Degeneration on Driving Performance
Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Kwan, Anthony S.; Owsley, Cynthia
2018-01-01
Purpose To explore differences in driving performance of older adults with age-related macular degeneration (AMD) and age-matched controls, and to identify the visual determinants of driving performance in this population. Methods Participants included 33 older drivers with AMD (mean age [M] = 76.6 ± 6.1 years; better eye Age-Related Eye Disease Study grades: early [61%] and intermediate [39%]) and 50 age-matched controls (M = 74.6 ± 5.0 years). Visual tests included visual acuity, contrast sensitivity, visual fields, and motion sensitivity. On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist (masked to drivers' visual status). Outcome measures included driving safety ratings (scale of 1–10, where higher values represented safer driving), types of driving behavior errors, locations at which errors were made, and number of critical errors (CE) requiring an instructor intervention. Results Drivers with AMD were rated as less safe than controls (4.8 vs. 6.2; P = 0.012); safety ratings were associated with AMD severity (early: 5.5 versus intermediate: 3.7), even after adjusting for age. Drivers with AMD had higher CE rates than controls (1.42 vs. 0.36, respectively; rate ratio 3.05, 95% confidence interval 1.47–6.36, P = 0.003) and exhibited more observation, lane keeping, and gap selection errors and made more errors at traffic light–controlled intersections (P < 0.05). Only motion sensitivity was significantly associated with driving safety in the AMD drivers (P = 0.005). Conclusions Drivers with early and intermediate AMD can exhibit impairments in their driving performance, particularly during complex driving situations; motion sensitivity was most strongly associated with driving performance. These findings have important implications for assessing the driving ability of older drivers with visual impairment. PMID:29340641
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for roboticallyassisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which were reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degree of orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates feasibility of micro-camera 3D guidance of a robotic surgical tool.
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOA's by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Lee, Hong-Tao
1989-01-01
A new approach for determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb piece-wise linear functions of transmission errors that are caused by the gear misalignment and reduce gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for simulation of meshing, bearing contact and determination of transmission errors for misaligned gear has been developed.
Stereo matching using census cost over cross window and segmentation-based disparity refinement
NASA Astrophysics Data System (ADS)
Li, Qingwu; Ni, Jinyan; Ma, Yunpeng; Xu, Jinxin
2018-03-01
Stereo matching is a vital requirement for many applications, such as three-dimensional (3-D) reconstruction, robot navigation, object detection, and industrial measurement. To improve the practicability of stereo matching, a method using census cost over cross window and segmentation-based disparity refinement is proposed. First, a cross window is obtained using distance difference and intensity similarity in binocular images. Census cost over the cross window and color cost are combined as the matching cost, which is aggregated by the guided filter. Then, winner-takes-all strategy is used to calculate the initial disparities. Second, a graph-based segmentation method is combined with color and edge information to achieve moderate under-segmentation. The segmented regions are classified into reliable regions and unreliable regions by consistency checking. Finally, the two regions are optimized by plane fitting and propagation, respectively, to match the ambiguous pixels. The experimental results are on Middlebury Stereo Datasets, which show that the proposed method has good performance in occluded and discontinuous regions, and it obtains smoother disparity maps with a lower average matching error rate compared with other algorithms.
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error reporting promoting program(ERPP) to systematically reduce the incidence rate of nursing errors in operating room. A non-equivalent control group non-synchronized design was used. Twenty-six operating room nurses who were in one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. Mann-Whitney U Test was used to analyze the differences pre and post incidence rates of nursing errors between the two groups. The incidence rate of nursing errors decreased significantly in the experimental group compared to the pre-test score from 28.4% to 15.7%. The incidence rate by domains, it decreased significantly in the 3 domains-"compliance of aseptic technique", "management of document", "environmental management" in the experimental group while it decreased in the control group which was applied ordinary error-reporting method. Error-reporting system can make possible to hold the errors in common and to learn from them. ERPP was effective to reduce the errors of recognition-related nursing activities. For the wake of more effective error-prevention, we will be better to apply effort of risk management along the whole health care system with this program.
Learning by observation: insights from Williams syndrome.
Foti, Francesca; Menghini, Deny; Mandolesi, Laura; Federico, Francesca; Vicari, Stefano; Petrosini, Laura
2013-01-01
Observing another person performing a complex action accelerates the observer's acquisition of the same action and limits the time-consuming process of learning by trial and error. Observational learning makes an interesting and potentially important topic in the developmental domain, especially when disorders are considered. The implications of studies aimed at clarifying whether and how this form of learning is spared by pathology are manifold. We focused on a specific population with learning and intellectual disabilities, the individuals with Williams syndrome. The performance of twenty-eight individuals with Williams syndrome was compared with that of mental age- and gender-matched thirty-two typically developing children on tasks of learning of a visuo-motor sequence by observation or by trial and error. Regardless of the learning modality, acquiring the correct sequence involved three main phases: a detection phase, in which participants discovered the correct sequence and learned how to perform the task; an exercise phase, in which they reproduced the sequence until performance was error-free; an automatization phase, in which by repeating the error-free sequence they became accurate and speedy. Participants with Williams syndrome beneficiated of observational training (in which they observed an actor detecting the visuo-motor sequence) in the detection phase, while they performed worse than typically developing children in the exercise and automatization phases. Thus, by exploiting competencies learned by observation, individuals with Williams syndrome detected the visuo-motor sequence, putting into action the appropriate procedural strategies. Conversely, their impaired performances in the exercise phases appeared linked to impaired spatial working memory, while their deficits in automatization phases to deficits in processes increasing efficiency and speed of the response. Overall, observational experience was advantageous for acquiring competencies, since it primed subjects' interest in the actions to be performed and functioned as a catalyst for executed action.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, S; Currier, B; Hodgdon, A
Purpose: The design of a new Portable Faraday Cup (PFC) used to calibrate proton accelerators was evaluated for energies between 50 and 220 MeV. Monte Carlo simulations performed in Geant4–10.0 were used to evaluate experimental results and reduce the relative detector error for this vacuum-less and low mass system, and invalidate current MCNP releases. Methods: The detector construction consisted of a copper conductor coated with an insulator and grounded with silver. Monte Carlo calculations in Geant4 were used to determine the net charge per proton input (gain) as a function of insulator thickness and beam energy. Kapton was chosen asmore » the insulating material and was designed to capture backscattered electrons. Charge displacement from/into Kapton was assumed to follow a linear proportionality to the origin/terminus depth toward the outer ground layer. Kapton thicknesses ranged from 0 to 200 microns, proton energies were set to match empirical studies ranging from 70 to 250 MeV. Each setup was averaged over 1 million events using the FTFP-BERT 2.0 physics list. Results: With increasing proton energy, the gain of Cu+KA gradually converges to the limit of pure copper, with relative error between 1.52% and 0.72%. The Ag layer created a more diverging behavior, accelerating the flux of negative charge into the device and increasing relative error when compared to pure copper from 1.21% to 1.63%. Conclusion: Gain vs. beam energy signatures were acquired for each device. Further analysis reveals proportionality between insulator thickness and measured gain, albeit an inverse proportionality between beam energy and in-flux of electrons. Increased silver grounding layer thickness also decreases gain, though the relative error expands with beam energy, contrary to the Kapton layer.« less
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects due to nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion is illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ as figure of merits performance in the mean-square error and probability of error senses. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that probability of error improvement is more significant for modulation schemes, constant amplitude and multilevel, which require higher signal to noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Jacquemin, Bénédicte; Lepeule, Johanna; Boudier, Anne; Arnould, Caroline; Benmerad, Meriem; Chappaz, Claire; Ferran, Joane; Kauffmann, Francine; Morelli, Xavier; Pin, Isabelle; Pison, Christophe; Rios, Isabelle; Temam, Sofia; Künzli, Nino; Slama, Rémy
2013-01-01
Background: Errors in address geocodes may affect estimates of the effects of air pollution on health. Objective: We investigated the impact of four geocoding techniques on the association between urban air pollution estimated with a fine-scale (10 m × 10 m) dispersion model and lung function in adults. Methods: We measured forced expiratory volume in 1 sec (FEV1) and forced vital capacity (FVC) in 354 adult residents of Grenoble, France, who were participants in two well-characterized studies, the Epidemiological Study on the Genetics and Environment on Asthma (EGEA) and the European Community Respiratory Health Survey (ECRHS). Home addresses were geocoded using individual building matching as the reference approach and three spatial interpolation approaches. We used a dispersion model to estimate mean PM10 and nitrogen dioxide concentrations at each participant’s address during the 12 months preceding their lung function measurements. Associations between exposures and lung function parameters were adjusted for individual confounders and same-day exposure to air pollutants. The geocoding techniques were compared with regard to geographical distances between coordinates, exposure estimates, and associations between the estimated exposures and health effects. Results: Median distances between coordinates estimated using the building matching and the three interpolation techniques were 26.4, 27.9, and 35.6 m. Compared with exposure estimates based on building matching, PM10 concentrations based on the three interpolation techniques tended to be overestimated. When building matching was used to estimate exposures, a one-interquartile range increase in PM10 (3.0 μg/m3) was associated with a 3.72-point decrease in FVC% predicted (95% CI: –0.56, –6.88) and a 3.86-point decrease in FEV1% predicted (95% CI: –0.14, –3.24). The magnitude of associations decreased when other geocoding approaches were used [e.g., for FVC% predicted –2.81 (95% CI: –0.26, –5.35) using NavTEQ, or 2.08 (95% CI –4.63, 0.47, p = 0.11) using Google Maps]. Conclusions: Our findings suggest that the choice of geocoding technique may influence estimated health effects when air pollution exposures are estimated using a fine-scale exposure model. Citation: Jacquemin B, Lepeule J, Boudier A, Arnould C, Benmerad M, Chappaz C, Ferran J, Kauffmann F, Morelli X, Pin I, Pison C, Rios I, Temam S, Künzli N, Slama R, Siroux V. 2013. Impact of geocoding methods on associations between long-term exposure to urban air pollution and lung function. Environ Health Perspect 121:1054–1060; http://dx.doi.org/10.1289/ehp.1206016 PMID:23823697
a Discussion about Effective Ways of Basic Resident Register on GIS
NASA Astrophysics Data System (ADS)
Oku, Naoya; Nonaka, Yasuaki; Ito, Yutaka
2016-06-01
In Japan, each municipality keeps a database of every resident's name, address, gender and date of birth called the Basic Resident Register. If the address information in the register is converted into coordinates by geocoding, it can be plotted as point data on a map. This would enable prompt evacuation from disaster, analysis of distribution of residents, integrating statistics and so on. Further, it can be used for not only analysis of the current situation but also future planning. However, the geographic information system (GIS) incorporating the Basic Resident Register is not widely used in Japan because of the following problems: - Geocoding In order to plot address point data, it is necessary to match the Basic Resident Register and the address dictionary by using the address as a key. The information in the Basic Resident Register does not always match the actual addresses. As the register is based on applications made by residents, the information is prone to errors, such as incorrect Kanji characters. - Security policy on personal information In the register, the address of a resident is linked with his/her name and date of birth. If the information in the Basic Resident Register were to be leaked, it could be used for malicious purposes. This paper proposes solutions to the above problems. The suitable solutions for the problems depend on the purpose of use, thus it is important that the purpose should be defined and a suitable way of the application for each purpose should be chosen. In this paper, we mainly focus on the specific purpose of use: to analyse the distribution of the residents. We provide two solutions to improve the matching rate in geocoding. First, regarding errors in Kanji characters, a correction list of possible errors should be compiled in advance. Second, some sort of analyses such as distribution of residents may not require exactly correct position for the address point. Therefore we set the matching level in order: prefecture, city, town, city-block, house-code, house, and decided to accept up to city-block level for the matching. Moreover, in terms of security policy on personal information, some part of information may not be needed for the distribution analysis. For example, the personal information like resident's name should be excluded from the attribute of address point in order to secure the safety operation of the system.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored in regards to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second occurs from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation system often drifts over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long term drift and also improve its robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the Field-Of-View of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce the failure situations. Secondly, a global landmark matching technique is used to recognize the previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman Filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real world experiments on a human worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error averagely) without the use of GPS information.
LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments
Tang, Jian; Chen, Yuwei; Niu, Xiaoji; Wang, Li; Chen, Liang; Liu, Jingbin; Shi, Chuang; Hyyppä, Juha
2015-01-01
A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies—INS and LiDAR SLAM—into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform—NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment. PMID:26184206
LiDAR Scan Matching Aided Inertial Navigation System in GNSS-Denied Environments.
Tang, Jian; Chen, Yuwei; Niu, Xiaoji; Wang, Li; Chen, Liang; Liu, Jingbin; Shi, Chuang; Hyyppä, Juha
2015-07-10
A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies-INS and LiDAR SLAM-into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform-NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment.
Todd, Gabrielle; Pearson-Dennett, Verity; Flavel, Stanley C.; Haberfield, Miranda; Edwards, Hannah; White, Jason M.
2016-01-01
Little is known about the long-lasting effect of use of illicit stimulant drugs on learning of new motor skills. We hypothesised that abstinent individuals with a history of primarily methamphetamine and ecstasy use would exhibit normal learning of a visuomotor tracking task compared to controls. The study involved three groups: abstinent stimulant users (n = 21; 27 ± 6 yrs) and two gender-matched control groups comprising nondrug users (n = 16; 22 ± 4 yrs) and cannabis users (n = 16; 23 ± 5 yrs). Motor learning was assessed with a three-minute visuomotor tracking task. Subjects were instructed to follow a moving target on a computer screen with movement of the index finger. Metacarpophalangeal joint angle and first dorsal interosseous electromyographic activity were recorded. Pattern matching was assessed by cross-correlation of the joint angle and target traces. Distance from the target (tracking error) was also calculated. Motor learning was evident in the visuomotor task. Pattern matching improved over time (cross-correlation coefficient) and tracking error decreased. However, task performance did not differ between the groups. The results suggest that learning of a new fine visuomotor skill is unchanged in individuals with a history of illicit stimulant use. PMID:26819778
Todd, Gabrielle; Pearson-Dennett, Verity; Flavel, Stanley C; Haberfield, Miranda; Edwards, Hannah; White, Jason M
2016-01-01
Little is known about the long-lasting effect of use of illicit stimulant drugs on learning of new motor skills. We hypothesised that abstinent individuals with a history of primarily methamphetamine and ecstasy use would exhibit normal learning of a visuomotor tracking task compared to controls. The study involved three groups: abstinent stimulant users (n = 21; 27 ± 6 yrs) and two gender-matched control groups comprising nondrug users (n = 16; 22 ± 4 yrs) and cannabis users (n = 16; 23 ± 5 yrs). Motor learning was assessed with a three-minute visuomotor tracking task. Subjects were instructed to follow a moving target on a computer screen with movement of the index finger. Metacarpophalangeal joint angle and first dorsal interosseous electromyographic activity were recorded. Pattern matching was assessed by cross-correlation of the joint angle and target traces. Distance from the target (tracking error) was also calculated. Motor learning was evident in the visuomotor task. Pattern matching improved over time (cross-correlation coefficient) and tracking error decreased. However, task performance did not differ between the groups. The results suggest that learning of a new fine visuomotor skill is unchanged in individuals with a history of illicit stimulant use.
A Novel Way to Relate Ontology Classes
Choksi, Ami T.; Jinwala, Devesh C.
2015-01-01
The existing ontologies in the semantic web typically have anonymous union and intersection classes. The anonymous classes are limited in scope and may not be part of the whole inference process. The tools, namely, the pellet, the jena, and the protégé, interpret collection classes as (a) equivalent/subclasses of union class and (b) superclasses of intersection class. As a result, there is a possibility that the tools will produce error prone inference results for relations, namely, sub-, union, intersection, equivalent relations, and those dependent on these relations, namely, complement. To verify whether a class is complement of other involves utilization of sub- and equivalent relations. Motivated by the same, we (i) refine the test data set of the conference ontology by adding named, union, and intersection classes and (ii) propose a match algorithm to (a) calculate corrected subclasses list, (b) correctly relate intersection and union classes with their collection classes, and (c) match union, intersection, sub-, complement, and equivalent classes in a proper sequence, to avoid error prone match results. We compare the results of our algorithms with those of a candidate reasoner, namely, the pellet reasoner. To the best of our knowledge, ours is a unique attempt in establishing a novel way to relate ontology classes. PMID:25984560
Kärgel, Christian; Massau, Claudia; Weiß, Simone; Walter, Martin; Borchardt, Viola; Krueger, Tillmann H C; Tenbergen, Gilian; Kneer, Jonas; Wittfoth, Matthias; Pohl, Alexander; Gerwinn, Hannah; Ponseti, Jorge; Amelung, Till; Beier, Klaus M; Mohnke, Sebastian; Walter, Henrik; Schiffer, Boris
2017-02-01
Neurobehavioral models of pedophilia and child sexual offending suggest a pattern of temporal and in particular prefrontal disturbances leading to inappropriate behavioral control and subsequently an increased propensity to sexually offend against children. However, clear empirical evidence for such mechanisms is still missing. Using a go/nogo paradigm in combination with functional magnetic resonance imaging (fMRI) we compared behavioral performance and neural response patterns among three groups of men matched for age and IQ: pedophiles with (N = 40) and without (N = 37) a history of hands-on sexual offences against children as well as healthy non-offending controls (N = 40). As compared to offending pedophiles, non-offending pedophiles exhibited superior inhibitory control as reflected by significantly lower rate of commission errors. Group-by-condition interaction analysis also revealed inhibition-related activation in the left posterior cingulate and the left superior frontal cortex that distinguished between offending and non-offending pedophiles, while no significant differences were found between pedophiles and healthy controls. Both areas showing distinct activation pattern among pedophiles play a critical role in linking neural networks that relate to effective cognitive functioning. Data therefore suggest that heightened inhibition-related recruitment of these areas as well as decreased amount of commission errors is related to better inhibitory control in pedophiles who successfully avoid committing hands-on sexual offences against children. Hum Brain Mapp 38:1092-1104, 2017. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Comparison of Uninjured and Concussed Adolecent Athletes on the Concussion Balance Test (COBALT).
Massingale, Shelly; Alexander, Amy; Erickson, Steven; McQueary, Elizabeth; Gerkin, Richard; Kisana, Haroon; Silvestri, Briana; Schodrof, Sarah; Nalepa, Bryce; Pardini, Jamie
2018-06-01
Dizziness and balance problems are common symptoms following sports-related concussion (SRC). Most sports require high-level balance skills that integrate the sensory inputs used for balance. Thus, a comprehensive assessment of postural control following SRC is recommended as an integral part of evaluation and management of the injury. The purpose of this exploratory study was to examine performance differences between uninjured and concussed athletes on the Concussion Balance Test (COBALT), as well as complete preliminary analyses of criterion-related validity and reliability of COBALT. COBALT is an 8 condition test developed for both preseason and postinjury assessment using force plate technology to measure sway velocity under dynamic postural conditions that challenge the vestibular system. Retrospective COBALT data obtained through chart review for 132 uninjured athletes and 106 concussed age-matched athletes were compared. All uninjured athletes were able to complete the assessment, compared with only 55% of concussed athletes. Concussed athletes committed significantly more errors than uninjured athletes. Sway velocity for concussed athletes was higher (worse) than that for uninjured athletes on 2 conditions in COBALT. By examining an athlete's ability to complete the protocol, error rate, and sway velocity on COBALT postinjury, the clinician can identify balance function impairment, which may help the medical team develop a more targeted treatment plan, and provide objective input regarding recovery of balance function following SRC.Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A204).
The Impact of Financial Reward Contingencies on Cognitive Function Profiles in Adult ADHD
Marx, Ivo; Höpcke, Cornelia; Berger, Christoph; Wandschneider, Roland; Herpertz, Sabine C.
2013-01-01
Objectives Although it is well established that cognitive performance in children with attention-deficit/hyperactivity disorder (ADHD) is affected by reward and that key deficits associated with the disorder may thereby be attenuated or even compensated, this phenomenon in adults with ADHD has thus far not been addressed. Therefore, the aim of the present study was to examine the motivating effect of financial reward on task performance in adults with ADHD by focusing on the domains of executive functioning, attention, time perception, and delay aversion. Methods We examined male and female adults aged 18–40 years with ADHD (n = 38) along with a matched control group (n = 40) using six well-established experimental paradigms. Results Impaired performance in the ADHD group was observed for stop-signal omission errors, n-back accuracy, reaction time variability in the continuous performance task, and time reproduction accuracy, and reward normalized time reproduction accuracy. Furthermore, when rewarded, subjects with ADHD exhibited longer reaction times and fewer false positives in the continuous performance task, which suggests the use of strategies to prevent impulsivity errors. Conclusions Taken together, our results support the existence of both cognitive and motivational mechanisms for the disorder, which is in line with current models of ADHD. Furthermore, our data suggest cognitive strategies of “stopping and thinking” as a possible underlying mechanism for task improvement that seems to be mediated by reward, which highlights the importance of the interaction between motivation and cognition in adult ADHD. PMID:23840573
Striatal dysfunction during reversal learning in unmedicated schizophrenia patients☆
Schlagenhauf, Florian; Huys, Quentin J.M.; Deserno, Lorenz; Rapp, Michael A.; Beck, Anne; Heinze, Hans-Joachim; Dolan, Ray; Heinz, Andreas
2014-01-01
Subjects with schizophrenia are impaired at reinforcement-driven reversal learning from as early as their first episode. The neurobiological basis of this deficit is unknown. We obtained behavioral and fMRI data in 24 unmedicated, primarily first episode, schizophrenia patients and 24 age-, IQ- and gender-matched healthy controls during a reversal learning task. We supplemented our fMRI analysis, focusing on learning from prediction errors, with detailed computational modeling to probe task solving strategy including an ability to deploy an internal goal directed model of the task. Patients displayed reduced functional activation in the ventral striatum (VS) elicited by prediction errors. However, modeling task performance revealed that a subgroup did not adjust their behavior according to an accurate internal model of the task structure, and these were also the more severely psychotic patients. In patients who could adapt their behavior, as well as in controls, task solving was best described by cognitive strategies according to a Hidden Markov Model. When we compared patients and controls who acted according to this strategy, patients still displayed a significant reduction in VS activation elicited by informative errors that precede salient changes of behavior (reversals). Thus, our study shows that VS dysfunction in schizophrenia patients during reward-related reversal learning remains a core deficit even when controlling for task solving strategies. This result highlights VS dysfunction is tightly linked to a reward-related reversal learning deficit in early, unmedicated schizophrenia patients. PMID:24291614
NASA Technical Reports Server (NTRS)
Sun, W.; Loeb, N. G.; Videen, G.; Fu, Q.
2004-01-01
Natural particles such as ice crystals in cirrus clouds generally are not pristine but have additional micro-roughness on their surfaces. A two-dimensional finite-difference time-domain (FDTD) program with a perfectly matched layer absorbing boundary condition is developed to calculate the effect of surface roughness on light scattering by long ice columns. When we use a spatial cell size of 1/120 incident wavelength for ice circular cylinders with size parameters of 6 and 24 at wavelengths of 0.55 and 10.8 mum, respectively, the errors in the FDTD results in the extinction, scattering, and absorption efficiencies are smaller than similar to 0.5%. The errors in the FDTD results in the asymmetry factor are smaller than similar to 0.05%. The errors in the FDTD results in the phase-matrix elements are smaller than similar to 5%. By adding a pseudorandom change as great as 10% of the radius of a cylinder, we calculate the scattering properties of randomly oriented rough-surfaced ice columns. We conclude that, although the effect of small surface roughness on light scattering is negligible, the scattering phase-matrix elements change significantly for particles with large surface roughness. The roughness on the particle surface can make the conventional phase function smooth. The most significant effect of the surface roughness is the decay of polarization of the scattered light.
Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens
2016-01-01
Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044
An all-joint-control master device for single-port laparoscopic surgery robots.
Shim, Seongbo; Kang, Taehun; Ji, Daekeun; Choi, Hyunseok; Joung, Sanghyun; Hong, Jaesung
2016-08-01
Robots for single-port laparoscopic surgery (SPLS) typically have all of their joints located inside abdomen during surgery, whereas with the da Vinci system, only the tip part of the robot arm is inserted and manipulated. A typical master device that controls only the tip with six degrees of freedom (DOFs) is not suitable for use with SPLS robots because of safety concerns. We designed an ergonomic six-DOF master device that can control all of the joints of an SPLS robot. We matched each joint of the master, the slave, and the human arm to decouple all-joint motions of the slave robot. Counterbalance masses were used to reduce operator fatigue. Mapping factors were determined based on kinematic analysis and were used to achieve all-joint control with minimal error at the tip of the slave robot. The proposed master device has two noteworthy features: efficient joint matching to the human arm to decouple each joint motion of the slave robot and accurate mapping factors, which can minimize the trajectory error of the tips between the master and the slave. We confirmed that the operator can manipulate the slave robot intuitively with the master device and that both tips have similar trajectories with minimal error.
A multibiometric face recognition fusion framework with template protection
NASA Astrophysics Data System (ADS)
Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.
2010-04-01
In this work we present a multibiometric face recognition framework based on combining information from 2D with 3D facial features. The 3D biometrics channel is protected by a privacy enhancing technology, which uses error correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system at the same time enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection, match and in some cases exceed the performance of corresponding unprotected equivalents, in addition to offering the additional privacy protection.
Modification Site Localization in Peptides.
Chalkley, Robert J
2016-01-01
There are a large number of search engines designed to take mass spectrometry fragmentation spectra and match them to peptides from proteins in a database. These peptides could be unmodified, but they could also bear modifications that were added biologically or during sample preparation. As a measure of reliability for the peptide identification, software normally calculates how likely a given quality of match could have been achieved at random, most commonly through the use of target-decoy database searching (Elias and Gygi, Nat Methods 4(3): 207-214, 2007). Matching the correct peptide but with the wrong modification localization is not a random match, so results with this error will normally still be assessed as reliable identifications by the search engine. Hence, an extra step is required to determine site localization reliability, and the software approaches to measure this are the subject of this part of the chapter.
A dual-adaptive support-based stereo matching algorithm
NASA Astrophysics Data System (ADS)
Zhang, Yin; Zhang, Yun
2017-07-01
Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (viz., DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and, then, integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated in the Middlebury benchmark and by comparing with the Cross method. The results show that the average error for the DAS method 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, with fewer parameters and suitable for parallel computing.
Haegerstrom-Portnoy, G; Schneck, M E; Verdon, W A; Hewlett, S E
1996-07-01
Visual acuity, refractive error, and binocular status were determined in 43 autosomal recessive (AR) and 15 X-linked (XL) congenital achromats. The achromats were classified by color matching and spectral sensitivity data. Large interindividual variation in refractive error and visual acuity was present within each achromat group (complete AR, incomplete AR, and XL). However, the number of individuals with significant interocular acuity differences is very small. Most XLs are myopic; ARs show a wide range of refractive error from high myopia to high hyperopia. Acuity of the AR and XL groups was very similar. With-the-rule astigmatism of large amount is very common in achromats, particularly ARs. There is a close association between strabismus and interocular acuity differences in the ARs, with the fixating eye having better than average acuity. The large overlap of acuity and refractive error of XL and AR achromats suggests that these measures are less useful for differential diagnosis than generally indicated by the clinical literature.
Error analysis of numerical gravitational waveforms from coalescing binary black holes
NASA Astrophysics Data System (ADS)
Fong, Heather; Chu, Tony; Kumar, Prayush; Pfeiffer, Harald; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela; SXS Collaboration
2016-03-01
The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) has finished a successful first observation run and will commence its second run this summer. Detection of compact object binaries utilizes matched-filtering, which requires a vast collection of highly accurate gravitational waveforms. This talk will present a set of about 100 new aligned-spin binary black hole simulations. I will discuss their properties, including a detailed error analysis, which demonstrates that the numerical waveforms are sufficiently accurate for gravitational wave detection purposes, as well as for parameter estimation purposes.
A Comparison of Three PML Treatments for CAA (and CFD)
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2008-01-01
In this paper we compare three Perfectly Matched Layer (PML) treatments by means of a series of numerical experiments, using common numerical algorithms, computational grids, and code implementations. These comparisons are with the Linearized Euler Equations, for base uniform base flow. We see that there are two very good PML candidates, and that can both control the introduced error. Furthermore, we also show that corners can be handled with essentially no increase in the introduced error, and that with a good PML, the outer boundary is the most significant source of err
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, A; Contee, C; White, B
Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps)more » using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.« less
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Due to the characteristics of high precision for a long duration, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Nowadays, the core technology, the rotating scheme, has been studied by numerous researchers. It is well known that as one of the key technologies, the rotating angular rate seriously influences the effectiveness of the error modulating. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends on not only the sensor error, but also the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. One optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS.
Phase History Decomposition for efficient Scatterer Classification in SAR Imagery
2011-09-15
frequency. Professor Rick Martin provided key advice on frequency parameter estimation and the relationship between likelihood ratio testing and the least...132 6.1.1 Imaging Error Due to Interpolation . . . . . . . . . . . . . . . . . . . . . . . . 135 6.2 Subwindow Design and Weighting... test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 MF matched filter
Experimental validation of wireless communication with chaos.
Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S; Grebogi, Celso
2016-08-01
The constraints of a wireless physical media, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the match filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a match filter.
Experimental validation of wireless communication with chaos
NASA Astrophysics Data System (ADS)
Ren, Hai-Peng; Bai, Chao; Liu, Jian; Baptista, Murilo S.; Grebogi, Celso
2016-08-01
The constraints of a wireless physical media, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and an integration logic together with the match filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a match filter.
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two visual modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual or motor-interference task combined with a MI or VI-reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
Experimental validation of wireless communication with chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Hai-Peng; Bai, Chao; Liu, Jian
The constraints of a wireless physical media, such as multi-path propagation and complex ambient noises, prevent information from being communicated at low bit error rate. Surprisingly, it has only recently been shown that, from a theoretical perspective, chaotic signals are optimal for communication. It maximises the receiver signal-to-noise performance, consequently minimizing the bit error rate. This work demonstrates numerically and experimentally that chaotic systems can in fact be used to create a reliable and efficient wireless communication system. Toward this goal, we propose an impulsive control method to generate chaotic wave signals that encode arbitrary binary information signals and anmore » integration logic together with the match filter capable of decreasing the noise effect over a wireless channel. The experimental validation is conducted by inputting the signals generated by an electronic transmitting circuit to an electronic circuit that emulates a wireless channel, where the signals travel along three different paths. The output signal is decoded by an electronic receiver, after passing through a match filter.« less
Crossett, Andrew; Kent, Brian P.; Klei, Lambertus; Ringquist, Steven; Trucco, Massimo; Roeder, Kathryn; Devlin, Bernie
2015-01-01
We propose a method to analyze family-based samples together with unrelated cases and controls. The method builds on the idea of matched case–control analysis using conditional logistic regression (CLR). For each trio within the family, a case (the proband) and matched pseudo-controls are constructed, based upon the transmitted and untransmitted alleles. Unrelated controls, matched by genetic ancestry, supplement the sample of pseudo-controls; likewise unrelated cases are also paired with genetically matched controls. Within each matched stratum, the case genotype is contrasted with control pseudo-control genotypes via CLR, using a method we call matched-CLR (mCLR). Eigenanalysis of numerous SNP genotypes provides a tool for mapping genetic ancestry. The result of such an analysis can be thought of as a multidimensional map, or eigenmap, in which the relative genetic similarities and differences amongst individuals is encoded in the map. Once constructed, new individuals can be projected onto the ancestry map based on their genotypes. Successful differentiation of individuals of distinct ancestry depends on having a diverse, yet representative sample from which to construct the ancestry map. Once samples are well-matched, mCLR yields comparable power to competing methods while ensuring excellent control over Type I error. PMID:20862653
Gurari, Netta; Drogos, Justin M.; Dewald, Julius P.A.
2017-01-01
Objective Previous studies determined, using between arms position matching assessments, that at least one-half of individuals with stroke have an impaired position sense. We investigated whether individuals with chronic stroke who have impairments mirroring arm positions also have impairments identifying the location of each arm in space. Methods Participants with chronic hemiparetic stroke and age-matched participants without neurological impairments (controls) performed a between forearms position matching task based on a clinical assessment and a single forearm position matching task, using passive and active movements, based on a robotic assessment. Results 12 out of our 14 participants with stroke who had clinically determined between forearms position matching impairments had greater errors than the controls in both their paretic and non-paretic arm when matching positions during passive movements; yet stroke participants performed comparable to the controls during active movements. Conclusions Many individuals with chronic stroke may have impairments matching positions in both their paretic and non-paretic arm if their arm is moved for them, yet not within either arm if these individuals control their own movements. Significance The neural mechanisms governing arm location perception in the stroke population may differ depending on whether arm movements are made passively versus actively. PMID:27866116
The effect of amblyopia on fine motor skills in children.
Webber, Ann L; Wood, Joanne M; Gole, Glen A; Brown, Brian
2008-02-01
In an investigation of the functional impact of amblyopia in children, the fine motor skills of amblyopes and age-matched control subjects were compared. The influence of visual factors that might predict any decrement in fine motor skills was also explored. Vision and fine motor skills were tested in a group of children (n = 82; mean age, 8.2 +/- 1.7 [SD] years) with amblyopia of different causes (infantile esotropia, n = 17; acquired strabismus, n = 28; anisometropia, n = 15; mixed, n = 13; and deprivation n = 9), and age-matched control children (n = 37; age 8.3 +/- 1.3 years). Visual motor control (VMC) and upper limb speed and dexterity (ULSD) items of the Bruininks-Oseretsky Test of Motor Proficiency were assessed, and logMAR visual acuity (VA) and Randot stereopsis were measured. Multiple regression models were used to identify the visual determinants of fine motor skills performance. Amblyopes performed significantly poorer than control subjects on 9 of 16 fine motor skills subitems and for the overall age-standardized scores for both VMC and ULSD items (P < 0.05). The effects were most evident on timed tasks. The etiology of amblyopia and level of binocular function significantly affected fine motor skill performance on both items; however, when examined in a multiple regression model that took into account the intercorrelation between visual characteristics, poorer fine motor skills performance was associated with strabismus (F(1,75) = 5.428; P = 0.022), but not with the level of binocular function, refractive error, or visual acuity in either eye. Fine motor skills were reduced in children with amblyopia, particularly those with strabismus, compared with control subjects. The deficits in motor performance were greatest on manual dexterity tasks requiring speed and accuracy.
Nakasa, Tomoyuki; Fukuhara, Kohei; Adachi, Nobuo; Ochi, Mitsuo
2008-05-01
Functional instability is defined as a repeated ankle inversion sprain and a giving way sensation. Previous studies have described the damage of sensori-motor control in ankle sprain as being a possible cause of functional instability. The aim of this study was to evaluate the inversion angle replication errors in patients with functional instability after ankle sprain. The difference between the index angle and replication angle was measured in 12 subjects with functional instability, with the aim of evaluating the replication error. As a control group, the replication errors of 17 healthy volunteers were investigated. The side-to-side differences of the replication errors were compared between both the groups, and the relationship between the side-to-side differences of the replication errors and the mechanical instability were statistically analyzed in the unstable group. The side-to-side difference of the replication errors was 1.0 +/- 0.7 degrees in the unstable group and 0.2 +/- 0.7 degrees in the control group. There was a statistically significant difference between both the groups. The side-to-side differences of the replication errors in the unstable group did not statistically correlate to the anterior talar translation and talar tilt. The patients with functional instability had the deficit of joint position sense in comparison with healthy volunteers. The replication error did not correlate to the mechanical instability. The patients with functional instability should be treated appropriately in spite of having less mechanical instability.
Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L
2011-01-01
To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines), did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% medication lines) were recorded. The main cause also was inappropriate management and forgetfulness (24%). The simultaneous recording of incidences due to complaints and new medication coincided in 33.3%. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy to identify Unit Dose Distribution System errors at initial, intermediate and final stages of the process, improving the involvement of the Pharmacy Department and ward nurses. Copyright © 2009 SEFH. Published by Elsevier Espana. All rights reserved.
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
Evaluation of subset matching methods and forms of covariate balance.
de Los Angeles Resa, María; Zubizarreta, José R
2016-11-30
This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Gordy, R. S.
1972-01-01
An improved broadband impedance matching technique was developed. The technique is capable of resolving points in the waveguide which generate reflected energy. A version of the comparison reflectometer was developed and fabricated to determine the mean amplitude of the reflection coefficient excited at points in the guide as a function of distance, and the complex reflection coefficient of a specific discontinuity in the guide as a function of frequency. An impedance matching computer program was developed which is capable of impedance matching the characteristics of each disturbance independent of other reflections in the guide. The characteristics of four standard matching elements were compiled, and their associated curves of reflection coefficient and shunt susceptance as a function of frequency are presented. It is concluded that an economical, fast, and reliable impedance matching technique has been established which can provide broadband impedance matches.
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
Finger muscle attachments for an OpenSim upper-extremity model.
Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L
2015-01-01
We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.
Finger Muscle Attachments for an OpenSim Upper-Extremity Model
Lee, Jong Hwa; Asakawa, Deanna S.; Dennerlein, Jack T.; Jindrich, Devin L.
2015-01-01
We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements. PMID:25853869
The function of the left anterior temporal pole: evidence from acute stroke and infarct volume
Tsapkini, Kyrana; Frangakis, Constantine E.
2011-01-01
The role of the anterior temporal lobes in cognition and language has been much debated in the literature over the last few years. Most prevailing theories argue for an important role of the anterior temporal lobe as a semantic hub or a place for the representation of unique entities such as proper names of peoples and places. Lately, a few studies have investigated the role of the most anterior part of the left anterior temporal lobe, the left temporal pole in particular, and argued that the left anterior temporal pole is the area responsible for mapping meaning on to sound through evidence from tasks such as object naming. However, another recent study indicates that bilateral anterior temporal damage is required to cause a clinically significant semantic impairment. In the present study, we tested these hypotheses by evaluating patients with acute stroke before reorganization of structure–function relationships. We compared a group of 20 patients with acute stroke with anterior temporal pole damage to a group of 28 without anterior temporal pole damage matched for infarct volume. We calculated the average percent error in auditory comprehension and naming tasks as a function of infarct volume using a non-parametric regression method. We found that infarct volume was the only predictive variable in the production of semantic errors in both auditory comprehension and object naming tasks. This finding favours the hypothesis that left unilateral anterior temporal pole lesions, even acutely, are unlikely to cause significant deficits in mapping meaning to sound by themselves, although they contribute to networks underlying both naming and comprehension of objects. Therefore, the anterior temporal lobe may be a semantic hub for object meaning, but its role must be represented bilaterally and perhaps redundantly. PMID:21685458
Subthalamic nucleus deep brain stimulation improves somatosensory function in Parkinson's disease.
Aman, Joshua E; Abosch, Aviva; Bebler, Maggie; Lu, Chia-Hao; Konczak, Jürgen
2014-02-01
An established treatment for the motor symptoms of Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Mounting evidence suggests that PD is also associated with somatosensory deficits, yet the effect of STN-DBS on somatosensory processing is largely unknown. This study investigated whether STN-DBS affects somatosensory processing, specifically the processing of tactile and proprioceptive cues, by systematically examining the accuracy of haptic perception of object size. (Haptic perception refers to one's ability to extract object features such as shape and size by active touch.) Without vision, 13 PD patients with implanted STN-DBS and 13 healthy controls haptically explored the heights of 2 successively presented 3-dimensional (3D) blocks using a precision grip. Participants verbally indicated which block was taller and then used their nonprobing hand to motorically match the perceived size of the comparison block. Patients were tested during ON and OFF stimulation, following a 12-hour medication washout period. First, when compared to controls, the PD group's haptic discrimination threshold during OFF stimulation was elevated by 192% and mean hand aperture error was increased by 105%. Second, DBS lowered the haptic discrimination threshold by 26% and aperture error decreased by 20%. Third, during DBS ON, probing with the motorically more affected hand decreased haptic precision compared to probing with the less affected hand. This study offers the first evidence that STN-DBS improves haptic precision, further indicating that somatosensory function is improved by STN-DBS. We conclude that DBS-related improvements are not explained by improvements in motor function alone, but rather by enhanced somatosensory processing. © 2013 Movement Disorder Society.
Maeda, Yoshikazu; Sato, Yoshitaka; Shibata, Satoshi; Bou, Sayuri; Yamamoto, Kazutaka; Tamamura, Hiroyasu; Fuwa, Nobukazu; Takamatsu, Shigeyuki; Sasaki, Makoto; Tameshige, Yuji; Kume, Kyo; Minami, Hiroki; Saga, Yusuke; Saito, Makoto
2018-05-01
We quantified interfractional movements of the prostate, seminal vesicles (SVs), and rectum during computed tomography (CT) image-guided proton therapy for prostate cancer and studied the range variation in opposed lateral proton beams. We analyzed 375 sets of daily CT images acquired throughout the proton therapy treatment of ten patients. We analyzed daily movements of the prostate, SVs, and rectum by simulating three image-matching strategies: bone matching, prostate center (PC) matching, and prostate-rectum boundary (PRB) matching. In the PC matching, translational movements of the prostate center were corrected after bone matching. In the PRB matching, we performed PC matching and correction along the anterior-posterior direction to match the boundary between the prostate and the rectum's anterior region. In each strategy, we evaluated systematic errors (Σ) and random errors (σ) by measuring the daily movements of certain points on each anatomic structure. The average positional deviations in millimeter of each point were determined by the Van Herk formula of 2.5Σ + 0.7σ. Using these positional deviations, we created planning target volumes of the prostate and SVs and analyzed the daily variation in the water equivalent length (WEL) from the skin surface to the target along the lateral beam directions using the density converted from the daily CT number. Based on this analysis, we designed prostate cancer treatment planning and evaluated the dose volume histograms (DVHs) for these strategies. The SVs' daily movements showed large variations over the superior-inferior direction, as did the rectum's anterior region. The average positional deviations of the prostate in the anterior, posterior, superior, inferior, and lateral sides (mm) in bone matching, PC matching, and PRB matching were (8.9, 9.8, 7.5, 3.6, 1.6), (5.6, 6.1, 3.5, 4.5, 1.9), and (8.6, 3.2, 3.5, 4.5, 1.9) (mm), respectively. Moreover, the ones of the SV tip were similarly (22.5, 15.5, 11.0, 7.6, 6.0), (11.8, 8.4, 7.8, 5.2, 6.3), and (9.9, 7.5, 7.8, 5.2, 6.3). PRB matching showed the smallest positional deviations at all portions except for the anterior portion of the prostate and was able to markedly reduce the positional deviations at the posterior portion. The averaged WEL variations at the distal and proximal sides of planning target volumes were estimated 7-9 mm and 4-6 mm, respectively, and showed the increasing of a few millimeters in PC and PRB matching compared to bone matching. In the treatment planning simulation, the DVH values of the rectum in PRB matching were reduced compared to those obtained with other matching strategies. The positional deviations for the prostate on the posterior side and the SVs were smaller by PRB matching than the other strategies and effectively reduced the rectal dose. 3D dose calculations indicate that PRB matching with CT image guidance may do a better job relative to other positioning methods to effectively reduce the rectal complications. The WEL variation was quite large, and the appropriate margin (approx. 10 mm) must be adapted to the proton range in an initial planning to maintain the coverage of target volumes throughout entire treatment. © 2018 American Association of Physicists in Medicine.
Searching for intermediate-mass black holes via optical variability
NASA Astrophysics Data System (ADS)
Adler-Levine, Ryan; Moran, Edward C.; Kay, Laura
2018-01-01
A handful of nearby dwarf galaxies with intermediate-mass black holes (IMBHs) in their nuclei display significant optical variability on short timescales. To investigate whether dwarf galaxy AGNs as a class exhibit similar variability, we have monitored a sample of low-mass galaxies that possess spectroscopically confirmed type 1 AGNs. However, because of the variations in seeing, focus, and guiding errors that occur in images taken at different epochs, analyses based on aperture photometry are ineffective. We have thus developed a new method for matching point-spread functions in images that permits use of image subtraction photometry techniques. Applying this method to our photometric data, we have confirmed that several galaxies with IMBHs are indeed variable, which suggests that variability can be used to search for IMBHs in low-mass galaxies whose emission-line properties are ambiguous.
Optical antenna gain. I - Transmitting antennas
NASA Technical Reports Server (NTRS)
Klein, B. J.; Degnan, J. J.
1974-01-01
The gain of centrally obscured optical transmitting antennas is analyzed in detail. The calculations, resulting in near- and far-field antenna gain patterns, assume a circular antenna illuminated by a laser operating in the TEM-00 mode. A simple polynomial equation is derived for matching the incident source distribution to a general antenna configuration for maximum on-axis gain. An interpretation of the resultant gain curves allows a number of auxiliary design curves to be drawn that display the losses in antenna gain due to pointing errors and the cone angle of the beam in the far field as a function of antenna aperture size and its central obscuration. The results are presented in a series of graphs that allow the rapid and accurate evaluation of the antenna gain which may then be substituted into the conventional range equation.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
NASA Astrophysics Data System (ADS)
Wang, F.; Ren, X.; Liu, J.; Li, C.
2012-12-01
An accurate topographic map is a requisite for nearly every phase of research on lunar surface, as well as an essential tool for spacecraft mission planning and operating. Automatic image matching is a key component in this process that could ensure both quality and efficiency in the production of digital topographic map for the whole lunar coverage. It also provides the basis for lunar photographic surveying block adjustment. Image matching is relatively easy when encountered with good image texture conditions. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angle, few or homogeneous texture and low image contrasts, it becomes a difficult and challenging job. Thus, we require a robust algorithm that is capable of dealing with light effect and image deformation to fulfill this task. In order to obtain a comprehensive review of currently dominated feature point extraction operators and test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec, SIFT, to images from Chang'E-2 spacecraft. We found that SITF (Scale Invariant Feature Transform) is a scale invariant interest point detector that can provide robustness against errors caused by image distortions from scale, orientation or illumination condition changes. Meanwhile, its capability in detecting blob-like interest points satisfies the image characteristics of Chang'E-2. However, the uneven distributed and low accurate matching results cannot meet the practical requirements in lunar photogrammetry. In contrast, some high-precision corner detectors, such as Harris, Forstner, Moravec, are limited in their sensitivities to geometric rotation. Therefore, this paper proposed a least square matching algorithm that combines the advantages of both local feature detector and corner detector. We experiment this novel method in several sites. The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability can reach 98%, which proves its robustness. This method had been successfully applied to over 700 scenes of lunar images that cover the entire moon, in finding corresponding pixels in a pair of images from adjacent tracks and aiding the automatic lunar image mosaicing. The completion of the 7 meter resolution lunar map shows the promise of this least square matching algorithm in applications with a large quantity of images to be processed.